CN113124635A - Refrigerator with a door - Google Patents

Refrigerator with a door

Info

Publication number
CN113124635A
CN113124635A
Authority
CN
China
Prior art keywords
food material
target
image
shelf
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911416104.XA
Other languages
Chinese (zh)
Other versions
CN113124635B (en)
Inventor
赵启东
李正义
曲磊
刘明
高桢
孙菁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Co Ltd filed Critical Hisense Co Ltd
Priority to CN201911416104.XA priority Critical patent/CN113124635B/en
Publication of CN113124635A publication Critical patent/CN113124635A/en
Application granted granted Critical
Publication of CN113124635B publication Critical patent/CN113124635B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F25: REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION OR SOLIDIFICATION OF GASES
    • F25D: REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
    • F25D 29/00: Arrangement or mounting of control or safety devices
    • F25D 29/003: Arrangement or mounting of control or safety devices for movable devices
    • F25D 29/005: Mounting of control devices
    • F25D 2500/00: Problems to be solved
    • F25D 2500/06: Stock management

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Thermal Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Cold Air Circulating Systems And Constructional Details In Refrigerators (AREA)

Abstract

The application discloses a refrigerator, belonging to the field of electronic technology. The refrigerator includes: a cabinet including a storage compartment; a door for shielding an opening of the storage compartment; a camera device for capturing images at the opening; shelves for dividing the storage compartment into a plurality of storage layers; weight sensors, each for detecting the weight of a corresponding shelf and the objects it carries; and a controller configured to: determine a movement trajectory of a hand according to multiple frames of images including the hand captured by the camera device; determine, according to the movement trajectory, a target storage area through which the hand passes among a plurality of storage areas, where each storage area includes at least two storage layers and the storage layers of the plurality of storage areas are different; and determine, according to the weight detected by the weight sensor corresponding to at least one shelf located between the storage layers of the target storage area, the target storage layer in which the stored food material has changed. The application solves the problem of low accuracy in determining the storage position of food material and is applicable to food material storage.

Description

Refrigerator with a door
Technical Field
The present application relates to the field of electronic technology, and in particular, to a refrigerator.
Background
With the development of electronic technology, household appliances such as refrigerators are expected to be increasingly intelligent and convenient to use. For example, a refrigerator is now expected to automatically determine from which compartment a user stores or takes food material.
In the related art, a refrigerator includes a storage compartment, a camera, and a controller. When a user accesses food material in the storage compartment, the camera captures images of the food material, and the controller determines the position where the user accesses the food material in the refrigerator according to the position of the food material in the images.
However, since the field of view of the camera cannot cover the entire storage compartment, the accuracy of determining the access position in the related art is often low.
Disclosure of Invention
The application provides a refrigerator that can solve the problem of low accuracy in determining the position where a user stores or takes food material. The refrigerator includes:
a cabinet including a storage compartment having an opening;
a door movably connected to the cabinet, for shielding the opening;
a camera device for capturing images at the opening;
a shelf located in the storage compartment, for dividing the storage compartment into a plurality of storage layers;
a weight sensor for detecting the weight of the corresponding shelf and the objects it carries;
a controller configured to:
determine a movement trajectory of a hand according to multiple frames of images including the hand captured by the camera device;
determine, according to the movement trajectory, a target storage area through which the hand passes among a plurality of storage areas, where each storage area includes at least two storage layers and the storage layers of the plurality of storage areas are different;
determine, according to the weight detected by the weight sensor corresponding to at least one shelf located between the storage layers of the target storage area, the target storage layer in which the stored food material has changed.
The technical solution provided by the application brings at least the following beneficial effects:
In the refrigerator provided by the application, the controller can determine, according to the weight detected by the weight sensor corresponding to at least one shelf located between the storage layers of the target storage area, the target storage layer in which the stored food material has changed, and thereby determine that the storage position of the food material accessed by the user is the target storage layer. Because storing or taking food material in a storage layer inevitably changes the weight detected by the weight sensor corresponding to that layer's shelf, the situation in which the storage position cannot be determined because the food material is occluded during access is avoided, improving the accuracy of determining the storage position of the food material.
Drawings
Fig. 1 is a schematic structural diagram of a refrigerator provided in an embodiment of the present application;
fig. 2 is a flowchart of a method for determining a storage location of a food material according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of another refrigerator provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a sensing module according to an embodiment of the present disclosure;
fig. 5 is a flowchart of a method for determining a storage location of another food material according to an embodiment of the present application;
fig. 6 is a flowchart of a food material identification method according to an embodiment of the present application;
FIG. 7 is a diagram of a key image provided by an embodiment of the present application;
fig. 8 is a flowchart of a method for determining target training data according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.
With the development of electronic technology, the requirements on household appliances keep rising. Intelligent management of food material is one of the trends in refrigerator development, and determining the storage position of food material in the refrigerator is a key link in realizing such management. At present, the storage position of accessed food material is determined by recognizing images of the storage compartment captured by a camera. However, the camera has a blind area: when a user accesses food material within the blind area, the camera cannot capture images of the food material, and the storage position cannot be determined. The accuracy of determining storage positions by recognizing camera images alone is therefore low, and the approach has significant limitations. The embodiment of the present application provides a refrigerator that can improve the accuracy of determining the storage position of food material.
Fig. 1 is a schematic structural diagram of a refrigerator provided in an embodiment of the present application. As shown in fig. 1, the refrigerator 10 may include: a cabinet 101, a door 102, a camera device 103, shelves 104, weight sensors, and a controller (not shown in fig. 1). The cabinet 101 includes a storage compartment having an opening (the storage compartment, the opening, and the weight sensors are not labeled in fig. 1); the door 102 is movably connected to the cabinet 101 and shields the opening of the storage compartment; the camera device 103 captures images at the opening. The shelves 104 are located in the storage compartment and divide it into a plurality of storage layers C; each weight sensor detects the weight of the corresponding shelf 104 and the objects carried on it. Fig. 1 illustrates an example with three shelves 104, which divide the storage compartment into four storage layers C. Optionally, the number of shelves 104 may also be two, four, or more, which is not limited in the embodiment of the present application. Optionally, the storage compartments in the embodiments of the present application may include a refrigerating compartment and a freezing compartment, and the door may include at least one door corresponding to each; fig. 1 illustrates only the structure of the refrigerating compartment, and the description applies to the freezing compartment as well.
Optionally, the controller may be located anywhere inside the cabinet 101, enclosed by it; anywhere outside the cabinet 101; anywhere inside the door 102, enclosed by it; anywhere outside the door 102; or independent of both the cabinet 101 and the door 102. The position of the controller is not limited in the embodiment of the present application. The controller may be communicatively connected with the camera device 103 to acquire the images it captures, and with the weight sensors to acquire the weights they detect.
The controller may be used to perform the method of determining the storage location of food material as shown in fig. 2. As shown in fig. 2, the method may include:
step 201, determining a moving track of a hand according to a multi-frame image including the hand acquired by the camera equipment.
Step 202, determining a target storage area through which a hand passes in the plurality of storage areas according to the moving track, wherein the storage areas comprise at least two storage layers, and the storage layers in the plurality of storage areas are different.
And step 203, determining the target storage layer with the changed food materials stored in the target storage area according to the weight detected by the weight sensor corresponding to at least one shelf positioned between the storage layers of the target storage area.
In summary, in the refrigerator provided in the embodiment of the present application, the controller can determine, according to the weight detected by the weight sensor corresponding to at least one shelf located between the storage layers of the target storage area, the target storage layer in which the stored food material has changed, and thereby determine that the storage position of the food material accessed by the user is the target storage layer. Because storing or taking food material in a storage layer inevitably changes the weight detected by the weight sensor corresponding to that layer's shelf, the situation in which the storage position cannot be determined because the food material is occluded during access is avoided, improving the accuracy of determining the storage position of the food material.
Optionally, with continued reference to fig. 1, in an embodiment of the present disclosure, the weight sensor corresponding to each shelf 104 may include a plurality of sub-weight sensors 105 located on the bottom surface of the shelf 104 (the bottom surface is opposite the shelf's carrying surface), and the weight detected by the weight sensor corresponding to the shelf may be either the average or the sum of the weights detected by the sub-weight sensors 105. Alternatively, each shelf may include only one sub-weight sensor, which is not limited in this embodiment.
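For illustration only, here is a minimal Python sketch (not part of the patent text) of the two combination rules described above; the function name, argument layout, and units are assumptions.

```python
# Illustrative sketch only: combining the readings of a shelf's sub-weight
# sensors into one shelf weight, as a sum or an average.
def shelf_weight(sub_readings, mode="sum"):
    """sub_readings: list of weights reported by one shelf's sub-weight sensors."""
    total = sum(sub_readings)
    return total if mode == "sum" else total / len(sub_readings)

# e.g. four corner sub-sensors under one shelf (values in grams, made up)
print(shelf_weight([812.0, 790.5, 805.2, 798.3]))            # sum
print(shelf_weight([812.0, 790.5, 805.2, 798.3], "average")) # average
```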
Optionally, each shelf may also be located on a corresponding carrying structure, and a weight sensor corresponding to the shelf may be located between the shelf and the corresponding carrying structure to detect the weight of the shelf and the object carried by the shelf.
For example, with continued reference to fig. 1, the refrigerator 10 may further include at least one set of bosses T on the side walls of the storage compartment, where each set of bosses T may be a load bearing structure. The at least one set of bosses T corresponds to the at least one shelf 104 one-to-one, and for any set of bosses T and its corresponding shelf 104, the set of bosses T is used to carry the shelf 104, and the weight sensor corresponding to the shelf 104 is located between the shelf 104 and the bosses T. Alternatively, each set of bosses may include at least two bosses supporting opposite ends of the shelf. For example, the weight sensor may include a plurality of sub-weight sensors located at an edge region of the bottom surface of the shelf. For example, the weight sensor includes two sub-weight sensors, which may be respectively located at edge positions of opposite ends in the bottom surface of the shelf. For another example, the weight sensor includes four sub weight sensors, and the four sub weight sensors may be respectively located at four corners of the bottom surface of the shelf.
Alternatively, the load bearing structure may be a plate-like structure that may be secured in the storage compartment (e.g., by being supported by bosses on the side walls of the storage compartment), the rack may be located on a corresponding plate-like structure, and the rack's corresponding weight sensor may be located between the rack and the corresponding plate-like structure. At this time, the sub-weight sensor included in the weight sensor corresponding to the shelf may be located at a middle region of the bottom surface of the shelf.
It should be noted that fig. 1 only shows one set of bosses T for carrying the uppermost shelf 104 in the storage compartment, and that other sets of bosses for carrying other shelves 104 may be the same as the illustrated set of bosses.
It should be noted that fig. 1 illustrates the camera device 103 as located at the top of the cabinet 101; alternatively, the camera device 103 may be located elsewhere, such as the top of the door 102 or the top of the storage compartment, which is not limited in this embodiment of the present application. The "top" of a structure described in the embodiments of the present application is the end of the structure away from the ground when the refrigerator is placed on the ground for normal use. As shown in fig. 1, the field of view of the camera device 103 may be a conical region (e.g., the region between the two broken lines in fig. 1) with the camera device 103 as its vertex.
The refrigerator provided by the embodiment of the application can further comprise a memory, a door switch detector, a display screen, a loudspeaker and a microphone, wherein the memory, the door switch detector, the display screen, the loudspeaker and the microphone can be in communication connection with the controller.
Optionally, the memory may be provided in the cabinet or the door, or may be independent of both. The memory can store images captured by the camera device, recognition results obtained by the controller from food material images, and other information to be stored. The door switch detector may detect whether the door is in an open or closed state. For example, it may be provided at a position on the cabinet capable of contacting the door: the door is determined to be open when the detector does not contact the door, and closed when it does. The display screen may be provided on the surface of the door away from the cabinet, and the controller may control it to display the recognition result of the food material image or other information. The display screen may also be a touch display screen, through which the user can interact with the refrigerator; for example, touch operations on the screen can trigger the controller to generate and execute corresponding instructions.
Optionally, the speaker and microphone may be provided on the cabinet or on the door. For example, the speaker and the microphone may be provided at the same position as the camera device: the speaker, microphone, and camera device can together form a sensing module arranged at the top of the cabinet, or the sensing module may be arranged at the top of the storage compartment. The controller can control the speaker to play voice information prompting the user to perform corresponding operations, and can control the microphone to collect sound from the environment of the refrigerator in order to generate corresponding instructions.
For example, fig. 3 is a schematic structural diagram of another refrigerator provided in an embodiment of the present application, and the refrigerator shown in fig. 3 may be a left side view of the refrigerator, and fig. 3 shows a state in which a door 102 in the refrigerator is closed. As shown in fig. 3, the refrigerator 10 further includes a driving part 106 and a sensing module 100 located at the top of the cabinet 101, and the sensing module 100 may be connected to the driving part 106. The drive component 106 is also communicatively coupled to the controller. Fig. 4 is a schematic structural diagram of a sensing module according to an embodiment of the present disclosure, and fig. 4 is a bottom view of the sensing module 100. As shown in fig. 4, the sensing module 100 includes: an image pickup apparatus 103, a speaker 107, and a microphone 108, the image pickup apparatus 103 including a depth camera 1031 and a color camera 1032. As shown in fig. 3, the field of view of the image pickup apparatus 103 is a conical region (region between two broken lines in fig. 3) having the image pickup apparatus 103 as a vertex. As shown in fig. 4, the microphone 108 may be a linear 4-microphone array, and the speaker 107 is located at the side of the sensing module, so that the distance between the microphone 108 and the speaker 107 can be increased.
Alternatively, the controller may control the driving part 106 to move the sensing module 100 according to the state (open state or closed state) of the door 102 or a voice command issued by the user. For example, when the controller determines that the door 102 is in the open state, the driving part 106 may be controlled to push the sensing module 100 out in a direction approaching the door 102, and when the sensing module is pushed out to a predetermined position, the driving part 106 may be controlled to stop the pushing-out action. And then triggers the image pickup device 103 in the sensing module 100 to work. When the controller determines that the door 102 is in the closed state, the image pickup apparatus 103 may be controlled to stop operating, and the driving part 106 may be controlled to retract the sensing module 100 in a direction away from the door 102, and when the sensing module is retracted to the home position, the driving part 106 may be controlled to stop the retracting action.
Optionally, the camera device in the embodiment of the present application may include at least one of a depth camera and a color camera. The color camera may be an ordinary or wide-angle color camera; the depth camera may be a binocular camera, a structured-light camera, or a time-of-flight (TOF) camera. The image captured by the depth camera is a depth image (also called a range image), and the image captured by the color camera is a color image. The pixel value of a pixel in the depth image is the distance (also called depth) from the corresponding point in the scene to the depth camera, and the pixel value of a pixel in the color image is a gray-scale value. Optionally, the depth camera and the color camera may capture images at the same frequency, and the minimum time interval between capturing a depth image and capturing a color image may be less than a duration threshold. The depth images captured by the depth camera and the color images captured by the color camera can then be placed in one-to-one correspondence, the color image corresponding to each depth image being the one whose capture time is closest to that depth image's capture time. This ensures that the scene in a depth image differs little from the scene in its corresponding color image. For example, the depth camera and the color camera may each capture an image at the same time.
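As an illustrative sketch of the nearest-capture-time pairing described above (the data layout is an assumption, not the patent's):

```python
# Hedged sketch: pair each depth frame with the color frame whose capture
# time is nearest.
def pair_frames(depth_frames, color_frames):
    """Both arguments are lists of (timestamp_seconds, image) tuples."""
    pairs = []
    for t_depth, depth_img in depth_frames:
        # nearest color frame by capture time
        _, color_img = min(color_frames, key=lambda f: abs(f[0] - t_depth))
        pairs.append((depth_img, color_img))
    return pairs

depth = [(0.00, "D0"), (0.10, "D1")]
color = [(0.01, "C0"), (0.09, "C1"), (0.20, "C2")]
print(pair_frames(depth, color))  # [('D0', 'C0'), ('D1', 'C1')]
```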
The pickup mode of the microphone can be selected according to the pickup distance: near-field or far-field. In near-field pickup, a microphone can generally collect sound within three meters; in far-field pickup, within a range of three to five meters. For near-field pickup, the microphone may include a native recording module, or a single microphone with an echo-cancellation module. For far-field pickup, the microphone may include the modules used for near-field pickup, and may also include multi-microphone modules such as a linear 4-microphone array or an annular 6-microphone array.
The controller may include a central processing unit (CPU), a graphics processing unit (GPU), or a combination of both. The controller may further include a hardware chip, which may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. It should be noted that the controller in the embodiment of the present application may be disposed in the refrigerator or in another device, as long as it can be communicatively connected with, and control, each component of the refrigerator.
The memory is connected to the controller through a bus or other means, and stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the controller to implement the method provided by the embodiment of the present application. The memory may be a volatile memory, a non-volatile memory, or a combination thereof. The volatile memory may be a random-access memory (RAM), such as static random-access memory (SRAM) or dynamic random-access memory (DRAM). The non-volatile memory may be a read-only memory (ROM), such as programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM). The non-volatile memory may also be flash memory, magnetic memory such as magnetic tape, a floppy disk, or a hard disk, or an optical disc.
Fig. 5 is a flowchart of another method for determining a food material storage location according to an embodiment of the present application, and the method can be applied to a controller. As shown in fig. 5, the method may include:
step 501, controlling the camera device to collect images at the opening of the storage room.
The field of view of the camera device must at least cover the opening of the storage compartment. Food material inevitably passes through the opening when the user accesses it, so the camera device is guaranteed to capture images of the access process.
Optionally, when the controller detects through the door switch detector that the door is open, it may control the driving part to push out the sensing module and then control the camera device to start capturing images, for example at a fixed number of frames per second. When the controller determines that the door has changed from open to closed, it may control the camera device to stop capturing. Thus, as the door goes from closed to open and back to closed, the camera device completes one acquisition cycle.
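A control-flow sketch of one such acquisition cycle may help; door_is_open, drive, and camera are hypothetical interfaces, not APIs from the patent.

```python
import time

# Illustrative sketch of one acquisition cycle keyed to the door switch
# detector; all interface names here are assumptions.
def acquisition_cycle(door_is_open, drive, camera, frames_per_second=10):
    drive.push_out()                  # extend the sensing module toward the door
    camera.start()
    while door_is_open():             # door open: keep capturing
        camera.capture_frame()        # a fixed number of frames per second
        time.sleep(1.0 / frames_per_second)
    camera.stop()                     # door closed again: cycle is complete
    drive.retract()
```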
It should be noted that, in the embodiment of the present application, the image capturing apparatus captures an image under the control of the controller as an example for explanation. Optionally, the image capturing apparatus may also perform image acquisition without being controlled by the controller, which is not limited in this embodiment of the application.
Optionally, in this embodiment of the present application, the camera device includes a depth camera and a color camera that may capture images simultaneously, so the images captured at the same capture time include a depth image and its corresponding color image. The images captured at the opening of the storage compartment thus comprise a group of depth images (possibly multiple frames) and a group of color images in one-to-one correspondence with them.
Step 502, acquiring the multiple frames of images including a hand region captured by the camera device.
It should be noted that in the embodiment of the present application, the multiple frames including a hand region may be determined within a group of depth images captured by the camera device; that is, each frame described here as including a hand region is a depth image. Optionally, the controller may acquire each depth image in real time, obtaining each frame as soon as the camera device captures it, and then perform hand detection on that frame to determine whether it includes a hand region.
Optionally, the controller may train a hand recognition model through machine learning or a deep neural network, and then perform hand detection on depth images using the trained model. For example, the controller may express features of the hand region based on the three-dimensional point cloud of the hand region in the depth image, such as determining a hand feature vector from the depth values of the points in the hand region. Hand feature vectors extracted from multiple frames of depth images are then used to train the hand recognition model, which in turn recognizes hand regions in images. The three-dimensional point cloud of the hand region is the set of points in the region expressed in three-dimensional coordinates.
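As a rough illustration of extracting a feature vector from the depth values of a detected hand region, here is a hedged Python sketch; the specific features are assumptions, not the patent's model.

```python
import numpy as np

# Sketch: build a feature vector from the depth values of a hand region.
def hand_features(depth_image, hand_mask):
    """depth_image: HxW depth array; hand_mask: boolean HxW array of the region."""
    depths = depth_image[hand_mask].astype(float)
    ys, xs = np.nonzero(hand_mask)
    return np.array([
        depths.mean(), depths.std(), depths.min(), depths.max(),  # depth statistics
        xs.mean(), ys.mean(),                                     # centroid in the image plane
        float(hand_mask.sum()),                                   # region area in pixels
    ])
```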
For example, the controller performs hand detection on all depth images captured by the camera device in one acquisition cycle and thereby determines the multiple frames that include a hand region. Optionally, among the depth images captured in one acquisition cycle, all frames between the first and last frames determined to include the hand may be treated as images including the hand.
Step 503, determining the movement trajectory of the hand according to the multiple frames of images.
For example, after determining each frame that includes the hand region, the controller may determine the centroid position of the hand region in each frame and then determine the movement trajectory of the hand by tracking that centroid, for example with an adaptive higher-order predictive tracking model. The controller may cluster the pixels of the hand region to determine the centroid. The controller can also determine a trajectory vector of the hand from each pair of adjacent frames including the hand, obtaining a plurality of trajectory vectors from the multiple frames, and combine these trajectory vectors into a continuous movement trajectory of the hand.
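A short sketch of the centroid-and-trajectory-vector computation described above (the clustering step is simplified to a plain mean over the mask, an assumption):

```python
import numpy as np

# Sketch: one centroid per frame's hand region, one trajectory vector per
# pair of consecutive frames, combined into an overall displacement.
def centroid(hand_mask):
    ys, xs = np.nonzero(hand_mask)
    return np.array([xs.mean(), ys.mean()])

def movement_trajectory(hand_masks):
    cs = [centroid(m) for m in hand_masks]
    vectors = [b - a for a, b in zip(cs, cs[1:])]   # per-frame trajectory vectors
    overall = np.sum(vectors, axis=0)               # combined displacement
    return cs, vectors, overall
```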
Step 504, determining the target storage area through which the hand passes according to the movement trajectory of the hand.
The storage compartment may include a plurality of storage areas, each including at least two storage layers, with the storage layers of the different areas being different. Illustratively, the storage compartment in the refrigerator shown in fig. 1 is divided into four storage layers C by three shelves 104; assume the four layers are called, from top to bottom, the first, second, third, and fourth storage layers. The four storage layers may belong to two storage areas, referred to respectively as a high confidence area and a low confidence area: the low confidence area may include the first and second storage layers, and the high confidence area the third and fourth storage layers.
The controller can determine m depth value ranges according to the height of the refrigerator, where m > 1 and the depth value of a position in the scene is its distance to the camera device in the refrigerator. Illustratively, the depth value ranges are [Q0, Q1), [Q1, Q2), …, [Qm−1, Qm], where Q0 = 0 and Qm may be greater than or equal to the height of the refrigerator. The ranges may be of equal or unequal length. Optionally, the depth value ranges may correspond one-to-one to the storage areas of the storage compartment, the range corresponding to each storage area containing the distance from any position in that area to the camera device.
For example, the controller may determine an average depth value of a hand region in a target image according to a movement trajectory of the hand, where the target image is an image of the multi-frame image when the hand moves to a target position in the movement trajectory, and the target position is a position farthest from the starting point in the movement trajectory. Furthermore, the controller may determine the storage area corresponding to the depth value range in which the average depth value is located as the target storage area through which the hand passes; that is, the controller may determine that the hand passes through the target storage area when the average depth value is within the depth value range corresponding to the target storage area. In this way, the controller may determine that the user accesses food material from the target storage area, i.e., that the storage location of the accessed food material is located within the target storage area.
It should be noted that the average depth value of the hand region in the target image is also the average pixel value of each pixel point in the hand region in the target image. After determining the target image, the controller may obtain a pixel value of each pixel point in the hand region of the target image, and determine an average depth value of the hand region according to the pixel values of the pixel points in the hand region.
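For illustration, a sketch of mapping the average hand depth to a storage area via such ranges; all numbers are invented:

```python
# Sketch of step 504: map the average hand depth to a storage area via the
# depth-value ranges [Q0, Q1), ..., [Qm-1, Qm]. Values are hypothetical.
AREA_RANGES = [
    ("low-confidence area (near the camera)", 0.0, 0.9),
    ("high-confidence area (far from the camera)", 0.9, 1.8),
]

def target_storage_area(avg_hand_depth_m):
    for name, lo, hi in AREA_RANGES:
        if lo <= avg_hand_depth_m < hi:
            return name
    return None

print(target_storage_area(1.25))  # high-confidence area (far from the camera)
```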
Step 505, determining whether the target storage area is the storage area close to the camera device among the plurality of storage areas. If not, step 506 is executed; if so, step 507 is executed.
When the camera device captures images, a blind area often exists (for example, the part of the storage compartment outside the conical region between the two broken lines in fig. 1), and at least part of the storage area close to the camera device lies in this blind area; when a user accesses food material there, the accuracy of determining the access position from the captured images is low. By contrast, the camera device can capture relatively complete images of the storage area far from it, so when the user accesses food material in that area, the access position can be determined from the captured images with higher accuracy. Determining whether the target storage area is the one close to the camera device therefore indicates how accurately the access position can be determined from the captured images.
For example, with the camera device located at the top of the cabinet or of the storage compartment, the storage area close to the camera device may be the low confidence area described above, and the storage area farther from it the high confidence area.
Step 506, determining the target storage layer through which the hand passes according to the movement trajectory.
When the controller determines that the target storage area is not the storage area close to the camera device, it concludes that the access position can be determined with high accuracy from the captured images, and can therefore determine the position of the accessed food material directly from them.
For example, the depth value range corresponding to a storage area may include a plurality of sub-ranges in one-to-one correspondence with the storage layers of that area, the sub-range of each layer containing the distance from any position in the layer to the camera device. The controller can determine the target image from the movement trajectory of the hand, determine the sub-range in which the average depth value of the hand region in the target image lies, and determine the storage layer corresponding to that sub-range as the target storage layer through which the hand passes, i.e., as the storage position of the accessed food material.
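A companion sketch for step 506, resolving the storage layer from hypothetical sub-ranges when the target area is far from the camera device:

```python
# Sketch of step 506: when the target area is far from the camera, resolve
# the layer directly from the sub-depth-value ranges. Numbers are invented.
SUB_RANGES = {
    "high-confidence area (far from the camera)": [
        ("third storage layer", 0.9, 1.35),
        ("fourth storage layer", 1.35, 1.8),
    ],
}

def layer_from_depth(area, avg_hand_depth_m):
    for layer, lo, hi in SUB_RANGES.get(area, []):
        if lo <= avg_hand_depth_m < hi:
            return layer
    return None  # area near the camera: fall through to the weight sensors
```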
Step 507, determining, according to the weight detected by the weight sensor corresponding to at least one shelf located between the storage layers of the target storage area, the target storage layer in which the stored food material has changed.
When the controller determines that the target storage area is the storage area close to the camera device, it concludes that the accuracy of determining the access position from the captured images alone is low, and therefore combines the weights detected by the weight sensors to ensure accuracy. Optionally, in the embodiment of the present application, weight sensors may be provided only on the bottom surfaces of the shelves in the storage area close to the camera device; here the shelves of a storage area are the shelves between that area's storage layers. Optionally, a weight sensor may instead be provided on the bottom surface of every shelf in the storage compartment, which is not limited in this application.
For example, when it is determined that the images captured by the camera device include a hand, the controller may acquire the weights detected by the weight sensors corresponding to the shelves in the storage compartment, taking as a sensor's effective weight a reading that remains unchanged for a set period of time. The controller can then determine, from the effective weights, the storage layer in which the stored food material has changed, i.e., the storage position of the accessed food material.
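As an illustrative reading of the "effective weight" rule (the stability criterion and hold count are assumptions):

```python
# Sketch: accept a reading only once the sensor has reported the same value
# for a set period.
def effective_weight(samples, hold_count=5):
    """samples: chronological readings of one weight sensor."""
    run, last = 0, None
    for w in samples:
        run = run + 1 if w == last else 1
        last = w
        if run >= hold_count:
            return w          # stable for the set period: effective weight
    return None               # sensor never settled
```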
It should be noted that there are many ways to determine the target storage layer according to the weight detected by the weight sensor, and the embodiments of the present application are explained by taking the following two ways as examples.
In the first mode, the controller may determine the target storage layer based only on the weights detected by the weight sensors.
Optionally, when Gi′ − Gi ≠ 0, the controller can determine the storage layer in the target storage area close to the carrying surface of the i-th shelf as the target storage layer. Here i ≥ 1 and the i-th shelf is any shelf in the target storage area; Gi denotes the weight detected by the weight sensor corresponding to the i-th shelf before the hand passes through the target storage area, and Gi′ the weight detected after. Both Gi and Gi′ are effective weights. Gi′ − Gi ≠ 0 means that the weight detected by the weight sensor corresponding to the i-th shelf changed after the hand passed through the target storage area.
For example, when the controller determines that the images captured by the camera device include a hand, it may begin acquiring the effective weights detected by the weight sensors corresponding to the shelves in the storage area close to the camera device; the weight then detected for the i-th shelf is Gi. The controller may continue to acquire the effective weights, and when no hand appears in the captured images for a preset time, determine the weight then detected for the i-th shelf as Gi′.
Because a weight sensor detects the weight of its shelf and the objects the shelf carries, when the effective weight detected by a sensor changes, the controller can conclude that the objects carried by that shelf have changed, and hence that food material was accessed in the storage layer above the shelf. The controller can therefore determine the storage layer in the target storage area close to the shelf's carrying surface as the target storage layer; that is, when Gi′ − Gi ≠ 0, the target storage layer is the storage layer close to the carrying surface of the i-th shelf.
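A minimal sketch of this first mode; Gi and Gi′ are the effective weights defined above:

```python
# First-mode sketch: any nonzero change Gi' - Gi in the effective weight of
# the i-th shelf marks the layer resting on that shelf's carrying surface.
def target_layer_mode1(g_before, g_after, layer_above_shelf_i):
    """g_before = Gi, g_after = Gi' (both effective weights)."""
    if g_after - g_before != 0:
        return layer_above_shelf_i   # access occurred in the layer above shelf i
    return None                      # no change detected at this shelf
```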
In the second mode, the controller may further determine the access state of the food material in the target storage area from the multiple frames of images including the hand captured by the camera device. The access state is either a food material storing state or a food material taking-out state. The target storage layer is then determined according to both the weight detected by the weight sensor corresponding to at least one shelf between the storage layers of the target storage area and the access state. The storing state means that food material was stored in the target storage area; the taking-out state means that food material was taken out of it.
The controller can determine the color images corresponding to the multiple depth images including the hand, recognize those color images, and determine how the hand state changes. The hand state is either holding food material or not holding food material. When the controller determines from the color images that the hand changed from not holding to holding food material, the access state is the taking-out state; when the hand changed from holding to not holding, the access state is the storing state.
In one case, when Gi′ − Gi ≠ 0, the controller may determine whether a first condition or a second condition is satisfied, and then determine the target storage layer. The first condition is: the access state is the food material storing state and Gi′ − Gi > 0, or the access state is the food material taking-out state and Gi′ − Gi < 0. The second condition is: the access state is the food material storing state and Gi′ − Gi < 0, or the access state is the food material taking-out state and Gi′ − Gi > 0. Gi′ − Gi > 0 means that the weight detected by the weight sensor corresponding to the i-th shelf increased after the access; Gi′ − Gi < 0 means that it decreased.
When the first condition is satisfied, the controller may determine the storage layer in the target storage area that is adjacent to the i-th shelf and close to its carrying surface (which may also be called the storage layer above the i-th shelf) as the target storage layer. When the second condition is satisfied, the controller may determine the storage layer in the target storage area that is adjacent to the i-th shelf and away from its carrying surface (the storage layer below the i-th shelf) as the target storage layer.
In another case, there is only one shelf between the storage layers of the target storage area, i.e., the area includes only two storage layers. When the weight detected by that shelf's weight sensor does not change after the food material is accessed, the controller may determine the target storage layer according to the access state of the food material.
For example, let that one shelf be the i-th shelf, with its detected weight unchanged after the access, i.e., Gi′ − Gi = 0. In this case, when the access state is the food material storing state, the controller may determine that the target storage layer is the storage layer in the target storage area away from the carrying surface of the i-th shelf; when the access state is the food material taking-out state, the controller may determine that the target storage layer is the storage layer close to the carrying surface of the i-th shelf.
Assume the target storage area comprises two storage layers, the first and second storage layers described above, with the first shelf between them. Table 1 below gives the correspondence between the change in the weight detected by the weight sensor, the access state, and the target storage layer according to the embodiment of the present application; only six correspondences are shown.
As shown in table 1 below, in the first correspondence the controller determines that food material was stored in the target storage area and that the weight detected by the weight sensor corresponding to the first shelf increased after the access; the controller may then determine that the food material was stored in the first storage layer.
In the second correspondence, food material was stored and the detected weight decreased; the controller may determine that the food material was stored in the second storage layer. The scenario is that the second storage layer already holds a large amount of food material, and storing more pushes the contents up against the first shelf, reducing the detected weight.
In the third correspondence, food material was stored and the detected weight did not change; the controller may determine that the food material in the first storage layer is unchanged and that the food material was stored in the second storage layer.
In the fourth correspondence, food material was taken out and the detected weight increased; the controller may determine that the food material was taken out of the second storage layer. Taking food material out of the first storage layer would inevitably decrease the weight detected by the sensor of the shelf beneath that layer, so when the detected weight increases during a taking-out access, the food material must have been taken from the second storage layer. The scenario is that the overfilled second storage layer pressed its contents up against the first shelf before the access, making the detected weight smaller than the actual weight of the shelf and its load; once food material is removed from the second storage layer, that upward push disappears and the detected weight increases.
In the fifth correspondence, food material was taken out and the detected weight decreased; the controller may determine that the food material was taken out of the first storage layer.
In the sixth correspondence, food material was taken out and the detected weight did not change; the controller may determine that the food material was taken out of the first storage layer.
TABLE 1

No.   Access state of food material    Weight change   Target storage layer
1     Food material storing state      Increased       First storage layer
2     Food material storing state      Decreased       Second storage layer
3     Food material storing state      Unchanged       Second storage layer
4     Food material taking-out state   Increased       Second storage layer
5     Food material taking-out state   Decreased       First storage layer
6     Food material taking-out state   Unchanged       First storage layer
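Table 1 can be read as a lookup; the sketch below encodes it directly (the state and change labels are shorthand, not the patent's identifiers):

```python
# Table 1 rendered as a lookup, for the two-layer target area described
# above (the first shelf between the first and second storage layers).
TABLE_1 = {
    ("storing",    "increased"): "first storage layer",
    ("storing",    "decreased"): "second storage layer",
    ("storing",    "unchanged"): "second storage layer",
    ("taking out", "increased"): "second storage layer",
    ("taking out", "decreased"): "first storage layer",
    ("taking out", "unchanged"): "first storage layer",  # per the embodiment text
}

def target_layer(access_state, g_before, g_after):
    diff = g_after - g_before
    change = "increased" if diff > 0 else "decreased" if diff < 0 else "unchanged"
    return TABLE_1[(access_state, change)]

print(target_layer("storing", 800.0, 1050.0))  # first storage layer
```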
It should be noted that the embodiment above provides weight sensors only on the bottom surfaces of the shelves in the storage area close to the camera device, and uses the detected weights to determine the target storage layer when the hand passes through that area. Optionally, if weight sensors are also provided on the shelves of the storage area far from the camera device, the detected weights can likewise be combined to determine the storage layer when the hand passes through that area.
Optionally, in the embodiment of the application, after the controller determines the access state of the food material and the target storage layer, it may display storage information on a display provided on the refrigerator. For example, when the access state is the storing state and the target storage layer is the first storage layer, the display screen may show "food material stored in the first storage layer".
Optionally, the controller may further identify the type of the accessed food material from the images captured by the camera device and display it. For example, if the identified type is apple, the display screen may show "apple stored in the first storage layer".
It should be noted that in the related art, a fixed depth value range is set for each storage layer, and the storage position of accessed food material is determined by checking which layer's range the detected food material depth falls in. In many current refrigerators, however, the position of the uppermost shelf (the one closest to the top of the refrigerator) can be adjusted up and down; the size of the uppermost storage layer then changes, and the actual depth range of that layer no longer matches the preset range. The accuracy of determining the storage position of food material in the related art is therefore low, and the approach adapts poorly.
In the embodiment of the application, the target storage area is determined by detecting the depth of the hand region, and the target storage layer is then determined from the weight detected by the weight sensor corresponding to the shelf. Even if the position of a shelf is changed, its weight sensor still correctly detects the weight of the shelf and the objects it carries (i.e., the objects in the storage layer above it), so the storage layer of the accessed food material can still be determined accurately. The method for determining the storage position of food material in the embodiment of the application therefore adapts well.
In summary, in the refrigerator provided in the embodiment of the present application, the controller may determine the target storage layer in which the stored food material changes according to the weight detected by the weight sensor corresponding to the at least one shelf located between the storage layers of the target storage area, and determine that the storage position of the food material accessed by the user is the target storage layer. Because accessing food material in a storage layer inevitably changes the weight detected by the weight sensor corresponding to the shelf of that layer, the situation in which the storage position cannot be determined because the food material is blocked from view during access is avoided, improving the accuracy of determining the storage position of the food material.
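To make the shelf-weight logic concrete, the following minimal Python sketch infers the changed storage layer from per-shelf weight readings. It is an illustration only: the function names, data layout, and noise threshold are assumptions of this sketch, not details from the embodiment.

```python
# Sketch: infer which storage layer changed, and the access state,
# from the weights reported by the shelves' weight sensors.
WEIGHT_NOISE_G = 5.0  # assumed noise floor in grams, not from the patent

def find_target_storage_layer(before_g, after_g):
    """before_g / after_g: weight (shelf plus carried objects) reported by
    each shelf's weight sensor before and after the hand passes, ordered
    from the bottom shelf up. Returns (layer_index, state) or None."""
    for layer, (b, a) in enumerate(zip(before_g, after_g)):
        delta = a - b
        if abs(delta) >= WEIGHT_NOISE_G:
            state = "stocking" if delta > 0 else "taking out"
            return layer, state
    return None  # no shelf changed beyond sensor noise

# The second shelf gains 150 g: food material stocked in storage layer 1.
print(find_target_storage_layer([1200.0, 800.0], [1200.0, 950.0]))
```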
Optionally, in the embodiment of the present application, the controller may further determine other information of the food material in the refrigerator, such as the food material type, shelf life, and volume of the food material.
In an alternative embodiment of the present application, the controller may determine the food material type of the food material by performing the method shown in fig. 6:
Step 601, determining at least one frame of auxiliary image among n frames of target images that include hand regions and are acquired by the image pickup apparatus.
In the embodiment of the present application, an image including a hand region acquired by an image capturing apparatus is referred to as a target image, and n > 1. Please refer to step 501 and step 502 for determining the target image. Optionally, the controller may perform filtering on the determined n frames of target images to obtain at least one frame of auxiliary image. Alternatively, the controller may determine the auxiliary image by:
Step s11, determining the first frame of target image in the n frames of target images as a frame of auxiliary image.
For example, when the controller determines that a frame of depth image is a target image including a hand region for the first time during hand detection of a depth image acquired by the image capturing apparatus, the controller may directly determine the frame of target image as a frame of auxiliary image.
Step s12, when |h_i - h_{i-1}| ≥ ε, determining the i-th frame target image in the n frames of target images as a frame of auxiliary image.
The i-th frame target image is any frame of target image other than the first frame in the n frames of target images. h_i represents the average depth value of the hand region in the i-th frame target image, h_{i-1} represents the average depth value of the hand region in the (i-1)-th frame target image in the n frames of target images, ε represents a depth threshold, and 1 < i ≤ n.
When the controller determines each frame of target image, the controller may obtain the pixel value of each pixel point in the hand region of the target image, and determine the average depth value of the hand region (that is, the average pixel value of the pixel points in the hand region) according to those pixel values. In the present embodiment, the "average depth value of the hand region" is referred to as the "hand depth". The controller can record the hand depth of each frame of target image, determine the change amplitude Δh = |h_i - h_{i-1}| of the hand depth in each frame of target image, and determine whether that change amplitude is greater than or equal to the depth threshold ε. When the change amplitude of the hand depth in a certain frame of target image is greater than or equal to the depth threshold ε, the controller determines that frame of target image as a frame of auxiliary image. The controller may identify the type of the food material in the images captured by the image pickup apparatus according to the determined auxiliary images.
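As a small illustration of the hand-depth computation, the sketch below averages the depth values over a hand region; representing the hand region as a boolean mask is an assumption of the sketch, not the patent's data structure.

```python
import numpy as np

def hand_depth(depth_image, hand_mask):
    """Average depth value of the hand region: the mean pixel value of the
    depth image over the pixels where hand_mask is True."""
    return float(depth_image[hand_mask].mean())

depth = np.full((4, 4), 1500, dtype=np.uint16)   # toy depth frame
mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True
print(hand_depth(depth, mask))  # 1500.0
```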
It should be noted that, in the embodiment of the present application, the auxiliary images are determined by the change amplitude of the hand depth; that is, target images in which the hand moves substantially are determined as auxiliary images, which avoids wasting computing resources on identifying many highly similar frames.
Step s13, when |h_i - h_{i-1}| < ε and the number of frames between the i-th frame target image and the previous frame of auxiliary image in the depth images acquired by the image pickup apparatus is greater than or equal to a frame number threshold, determining the i-th frame target image as a frame of auxiliary image.
When the change amplitude of the hand depth in a certain frame of target image is smaller than the depth threshold, the controller can also determine the number of frames of depth images between that frame of target image and the previously determined frame of auxiliary image in the depth images acquired by the image pickup apparatus, and determine whether that number is greater than or equal to the frame number threshold Δk. When the number of frames is greater than or equal to the frame number threshold, that frame of target image is determined as a frame of auxiliary image. That is, when the change amplitude of the hand depth remains below the depth threshold, the controller determines one frame of auxiliary image every Δk frames. Therefore, the situation in which no auxiliary image can be determined because the user's hand always moves only slightly is avoided, ensuring that images for food material identification can be obtained.
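The three rules s11 to s13 can be combined into one selection pass, as in the hedged Python sketch below; the threshold values eps and dk are placeholders, and frame_indices (each target image's position in the camera's depth stream) is an assumed representation.

```python
def select_auxiliary_frames(frame_indices, hand_depths, eps=30.0, dk=10):
    """hand_depths[i]: hand depth of the i-th target image; frame_indices[i]:
    its position in the depth-image stream. Returns indices of auxiliary images."""
    if not hand_depths:
        return []
    aux = [0]  # rule s11: the first target image is an auxiliary image
    for i in range(1, len(hand_depths)):
        if abs(hand_depths[i] - hand_depths[i - 1]) >= eps:     # rule s12
            aux.append(i)
        elif frame_indices[i] - frame_indices[aux[-1]] >= dk:   # rule s13
            aux.append(i)
    return aux
```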
Optionally, in this embodiment of the application, after the at least one frame of auxiliary image is determined, the color images corresponding to the at least one frame of auxiliary image may also be directly input into the recognition model as food material images to obtain at least one recognition result, and the target recognition result of the food material images may be determined according to the at least one recognition result so as to obtain the type of the food material in the food material images.
Step 602, at least one frame of key image is determined in the at least one frame of auxiliary image.
In an embodiment of the application, to reduce the number of images used for food material identification and thus the calculation time, the controller may further screen key images from the determined at least one frame of auxiliary image for subsequent image processing and identification.
Alternatively, the controller may determine the key image by:
Step s21, determining, from a plurality of hand areas corresponding to a plurality of depth value ranges, a target hand area corresponding to the depth value range of the k-th frame auxiliary image.
The k-th frame auxiliary image is any frame in the at least one frame of auxiliary image, and k ≥ 1. The average pixel value of the hand region of the k-th frame auxiliary image lies within the depth value range of the k-th frame auxiliary image. Among the plurality of hand areas corresponding to the plurality of depth value ranges, the depth values in a depth value range are negatively correlated with the corresponding hand area.
Optionally, the controller may determine m depth value ranges according to the height of the refrigerator; refer to step 504 for an introduction to the depth value ranges. The depth value ranges in the embodiment of the present application may correspond one-to-one to hand areas, and the depth values in each depth value range are negatively correlated with the corresponding hand area. The hand area corresponding to a depth value range is the area of the user's hand in the image acquired by the image pickup apparatus when the average depth of the user's hand is within that depth value range. For example, the hand area in an image may be represented by the total number of pixel points included in the hand region of the image. When the user's hand is close to the image pickup apparatus, the hand region in the acquired image is necessarily large, so the corresponding hand area is large; when the user's hand is far from the image pickup apparatus, the hand region in the acquired image is necessarily small, so the corresponding hand area is small.
Optionally, the controller may cluster the areas of the hand regions in multiple frames of images acquired by the image pickup apparatus while the user's hand is located at different depths, to obtain the hand areas corresponding to different depth intervals and hence the hand areas corresponding to the different depth value ranges.
In this embodiment, when the controller determines a frame of the auxiliary image, the controller may determine, according to the average pixel value of the hand region in the auxiliary image, a depth value range in which the average pixel value of the hand region is located, and further determine a target hand area corresponding to the depth value range.
Step s22, when the ratio of the area of the hand region in the k-th frame auxiliary image to the target hand area is greater than a ratio threshold, determining the k-th frame auxiliary image as a frame of key image.
In this embodiment, when the controller determines a frame of auxiliary image, the controller may determine the area of the hand region in the auxiliary image, that is, the number of pixels included in the hand region. The controller may then determine the scale factor corresponding to that frame of auxiliary image, that is, the ratio of the area of its hand region to the target hand area corresponding to that frame. The controller may further detect whether the scale factor is greater than the ratio threshold, and determine the frame of auxiliary image as a frame of key image when it is.
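Steps s21 and s22 amount to a table lookup followed by a ratio test. A sketch under assumed data shapes (the depth ranges, their target hand areas, and the ratio threshold are all placeholders):

```python
def select_key_images(aux_images, depth_ranges, target_areas, ratio_thr=0.6):
    """aux_images: list of (hand_depth, hand_area_px) per auxiliary image.
    depth_ranges: list of (low, high); target_areas: expected hand area in
    pixels for each range (deeper range -> smaller area). Returns key indices."""
    keys = []
    for k, (depth, area) in enumerate(aux_images):
        for (low, high), target in zip(depth_ranges, target_areas):
            if low <= depth < high:                  # step s21: find the range
                if area / target > ratio_thr:        # step s22: ratio test
                    keys.append(k)
                break
    return keys

ranges, areas = [(0, 1000), (1000, 2000)], [9000.0, 4000.0]
print(select_key_images([(1500, 3000.0), (800, 2000.0)], ranges, areas))
# -> [0]: 3000/4000 = 0.75 > 0.6, while 2000/9000 fails the test
```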
Optionally, in the embodiment of the application, after the at least one frame of key image is determined in the auxiliary image, the at least one frame of key image may also be directly input as a food material image into the recognition model to obtain at least one recognition result. And determining a target identification result of the food material image according to the at least one identification result so as to obtain the type of the food material in the food material image.
Step 603, determining at least one frame of food material image according to the determined at least one frame of key image.
In the embodiment of the application, in order to further improve the identification accuracy, a partial Region of the key image where the food material is taken by the hand may be used as a Region of Interest (ROI), and then the Region of Interest in the color image corresponding to the key image is captured according to the ROI to obtain the food material image, so as to perform subsequent food material identification on the food material image. Optionally, the region of interest in the color image corresponding to the key image is at the same position as the region of interest in the key image of the frame. The food material image described in the embodiment of the present application represents an image including a food material. When the controller determines the region of interest in the key image, the controller may consider that the region of interest contains the food material. Furthermore, the food material image obtained by the controller intercepting the region of interest in the color image is also an image containing food materials.
Alternatively, the controller may determine the food material image by:
Step s31, acquiring, from expansion coefficients corresponding to the plurality of depth value ranges, the expansion coefficient corresponding to the depth value range of each frame of key image.
It should be noted that, in step s21, the m depth value ranges determined according to the m depth intervals may also correspond to the m expansion coefficients one by one, and the depth values in the respective depth value ranges may be negatively correlated to the corresponding expansion coefficients.
Step s32, determining the food material region in each frame of key image according to the expansion coefficient corresponding to the depth value range of that key image and the hand region in that key image, where the food material region in each frame of key image includes the hand region of that key image.
Optionally, both the hand region and the food material region in the key image in the embodiment of the application may be rectangular, and the hand region and the food material region share the same center, length direction and width direction. Optionally, the length of the food material region is w' = w × p and its width is h' = h × p, where w represents the length of the hand region, h represents the width of the hand region, and p represents the expansion coefficient corresponding to the depth value range of the key image.
Illustratively, fig. 7 is a schematic diagram of a key image provided in an embodiment of the present application. As shown in fig. 7, the hand region in the key image is a region in the key image located in a rectangular frame a, and the food material region is a region in the key image located in a rectangular frame B.
Step s33, intercepting the food material region in the color image corresponding to each frame of key image as the frame of food material image corresponding to that key image.
For example, the controller may intercept, in the color image corresponding to the key image shown in fig. 7, the region at the same position as the region in rectangular frame B, as the frame of food material image corresponding to that key image. The length and width of the food material image satisfy the same relation as the food material region in the key image, and details are not repeated herein.
When the user accesses a food material, the area of the food material may be larger than that of the hand, or the food material may protrude beyond the user's hand. In the embodiment of the application, the hand region in the key image is enlarged to determine the food material region, and the food material region in the corresponding color image is then intercepted as the food material image, ensuring that the food material image includes more food material features and avoiding the low identification accuracy that results from intercepting only the hand region as the food material image.
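Steps s31 to s33 can be pictured as expanding the hand's bounding box about its center by the expansion coefficient and cropping the color image. A sketch, assuming a (center, length, width) box convention and NumPy image arrays:

```python
import numpy as np

def crop_food_material_image(color_image, hand_box, p):
    """hand_box = (cx, cy, w, h): hand-region center, length and width.
    Crops the region of length w*p and width h*p about the same center,
    clamped to the image bounds, as the food material image."""
    cx, cy, w, h = hand_box
    half_w, half_h = w * p / 2, h * p / 2
    H, W = color_image.shape[:2]
    x0, x1 = max(0, int(cx - half_w)), min(W, int(cx + half_w))
    y0, y1 = max(0, int(cy - half_h)), min(H, int(cy + half_h))
    return color_image[y0:y1, x0:x1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(crop_food_material_image(frame, (320, 240, 80, 60), p=1.5).shape)
# (90, 120, 3): the 80 x 60 hand box expanded by 1.5 in each direction
```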
Step 604, respectively inputting the at least one frame of food material image into the identification model to obtain the identification result of each frame of food material image output by the identification model.
In the embodiment of the application, the controller can directly input the food material image into the recognition model after determining each frame of food material image, so as to obtain the recognition result of the frame of food material image output by the recognition model. Optionally, after determining all food material images in one acquisition period of the image pickup device, the controller may further input each frame of food material image into the identification model, so as to obtain an identification result of each frame of food material image output by the identification model.
The identification model in the embodiment of the application can be used for outputting the food material type based on the input food material image. Optionally, the recognition model may also be used to output a sub-confidence of the food material type based on the input food material image. For example, the recognition result output by the recognition model for a certain frame of food material image may include: the food material type is apple, with a sub-confidence of 60%.
In the embodiment of the application, among the recognition results of the at least one frame of food material image output by the recognition model, there are x recognition results that include a food material type and its sub-confidence. Some food material types among the x recognition results may be the same, so the x recognition results may include y distinct food material types, with 0 ≤ y ≤ x. The ideal identification situation is that the food material types in the x recognition results are all the same. It should be noted that x may be less than or equal to the number of frames of food material images input into the recognition model: when the recognition model can identify a food material type for every input frame, x equals the number of input frames; when the recognition model cannot identify a food material type for some input frames, x is smaller than the number of input frames. Optionally, when the recognition model cannot recognize the type of the food material in a food material image, the recognition result it outputs may include information indicating that the type was not recognized.
It should be noted that, when the recognition model recognizes a frame of food material image, a plurality of food material types and corresponding sub-confidence levels thereof may be output, and at this time, the recognition model may output the maximum sub-confidence level and the corresponding food material type as a recognition result.
Optionally, in this embodiment of the application, the recognition model may be any one of network structures such as a deep neural network, a convolutional neural network, a deep belief network, a deep stack neural network, a deep fusion network, a deep recurrent neural network, a deep bayesian neural network, a deep generation network, and deep reinforcement learning, or a derivative model thereof.
Step 605, determining a target recognition result according to the recognition result of at least one frame of food material image.
The controller in the embodiment of the application can further calculate according to the recognition result of at least one frame of food material image output by the recognition model, and further determine a target recognition result, namely a final recognition result obtained by recognizing the food material accessed by the user.
Optionally, when the controller determines that x = 0, that is, none of the recognition results output by the recognition model for the at least one input frame of food material image includes a food material type, the controller may directly determine that the target recognition result is that the food material type in the food material images is not recognized; the target recognition result then does not include any food material type.
Optionally, when the controller determines that x > 0, that is, a recognition result including a food material type exists among the at least one recognition result output in step 604, then for each of the y food material types included in the x recognition results, the controller may determine the comprehensive confidence of the food material type according to a target formula. The target formula may be

Q = (Q_1 + Q_2 + … + Q_s) × s / x

where s represents the number of recognition results, among the x recognition results, that include the food material type, and Q_a (1 ≤ a ≤ s) represents the sub-confidence in the a-th of those recognition results. Further, the controller may determine the food material type with the highest comprehensive confidence among the y food material types as the target recognition result.
Illustratively, the controller determines five frames of food material images; the food material type in the recognition results of three frames is A, with a sub-confidence of 0.5 in each recognition result, and the food material type in the recognition results of the other two frames is B, with a sub-confidence of 0.8 in each recognition result. According to the target formula, the comprehensive confidence of food material type A is Q_A = (0.5 + 0.5 + 0.5) × 3/5 = 0.90, and the comprehensive confidence of food material type B is Q_B = (0.8 + 0.8) × 2/5 = 0.64. Since Q_A > Q_B, the controller may generate a target recognition result including food material type A.
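The aggregation in step 605 can be reproduced directly from the target formula. A sketch (the tuple representation of recognition results is an assumption of this sketch):

```python
from collections import defaultdict

def target_recognition_result(results):
    """results: one (food_type, sub_confidence) pair per recognized frame,
    so x = len(results). Implements Q = (sum of sub-confidences) * s / x
    and returns the type with the highest comprehensive confidence."""
    x = len(results)
    if x == 0:
        return None  # no food material type recognized
    totals, counts = defaultdict(float), defaultdict(int)
    for food_type, q in results:
        totals[food_type] += q
        counts[food_type] += 1
    best = max(totals, key=lambda t: totals[t] * counts[t] / x)
    return best, totals[best] * counts[best] / x

# The worked example above: three results for A at 0.5, two for B at 0.8.
print(target_recognition_result([("A", 0.5)] * 3 + [("B", 0.8)] * 2))
# -> ('A', 0.9)
```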
Optionally, the food material type in the target recognition result may also be the food material type in the recognition result with the highest sub-confidence among the at least one recognition result.
For example, the controller determines five frames of food material images; the food material type in the recognition results of three frames is A, with a sub-confidence of 0.5 in each; the food material type in the recognition result of one frame is B, with a sub-confidence of 0.8; and the food material type in the recognition result of one frame is C, with a sub-confidence of 0.7. The controller can thus determine that food material type B has the highest sub-confidence among the five recognition results, and generate a target recognition result including food material type B.
Step 606, detecting whether the target identification result comprises any food material type. When the target identification result includes any food material type, execute step 607; when the target identification result does not include any food material type, step 610 is executed.
The controller may determine whether the target recognition result includes any food material type after determining the target recognition result. When the target recognition result includes any food material type, the controller may execute the following steps 607 to 609 to further determine whether the food material type included in the target recognition result is correct. When the target recognition result does not include any food material type, the controller may directly determine that the target recognition result is incorrect, and then directly perform step 610.
Step 607, controlling the speaker to play the first prompt voice for confirming whether any food material type is wrong. Step 608 is performed.
In the embodiment of the application, the prompt voice for confirming whether any food material type is wrong is referred to as the first prompt voice. For example, when the target recognition result determined by the controller includes any food material type, the controller may send a voice playing instruction to the speaker, where the voice playing instruction may carry the specific content of the first prompt voice, so that the speaker plays, according to the instruction, the first prompt voice for confirming whether the food material type is wrong.
Optionally, the first prompt voice may also be used to prompt the user to input the target food material type (i.e. the correct food material type) when any food material type is wrong. For example, the first prompt voice may prompt the user to input the target food material type in a voice form, for example, the first prompt voice may be "please confirm whether the recognition result is wrong, and if so, please say the correct food material type". It should be noted that the content of the first prompting voice in the embodiment of the present application is only an exemplary description, and the content of the first prompting voice in practical application may be changed arbitrarily, which is not limited in the embodiment of the present application.
Step 608, controlling the microphone to collect a first response voice within a target time length after the speaker plays the first prompt voice. Step 609 is performed.
In the embodiment of the application, the response voice collected by the microphone within the target time length after the speaker plays the prompt voice for confirming whether any food material type is wrong is referred to as the first response voice. For example, the target time length may be 10 seconds, 20 seconds, or another duration. The first response voice may indicate whether the food material type is wrong, and may also indicate the target food material type. When the user hears the prompt voice played by the speaker, the user may speak the first response voice, for example, "the recognition result is wrong, the correct type is apple", or "no error". It should be noted that the content of the first response voice in the embodiment of the present application is only an exemplary description and may be changed arbitrarily in practical application, which is not limited in the embodiment of the present application.
Step 609, determining whether any food material type is wrong according to the first response voice. When it is determined according to the first response voice that the food material type is wrong, step 612 is executed; when it is determined according to the first response voice that the food material type is not wrong, step 615 is executed.
The controller may extract information from a first response voice collected by the microphone to determine whether the first response voice indicates that the food material type in the target recognition result is wrong, and when it is determined that the food material type is wrong, the controller may determine the target food material type (i.e., the correct food material type) according to the first response voice.
It should be noted that, in the embodiment of the application, the controller determines the type of the target food material by collecting the response voice through the microphone. Optionally, the controller may also receive the target food material type input by the user through the touch display screen. In that case, the prompt voice in step 607 may also prompt the user to manually input the target food material type through the touch display screen on the refrigerator; for example, the prompt voice may be "please confirm whether the recognition result is wrong, and if so, please input the correct food material type on the touch display screen".
Step 610, controlling the speaker to play a second prompt voice prompting the user to speak the target food material type. Step 611 is performed.
In the embodiment of the present application, the prompt voice prompting the type of the target food material to be spoken is referred to as a second prompt voice. When the controller determines that the target recognition result does not include any food material type in step 606, the controller may directly determine that the target recognition result is incorrect, and may further control the speaker to play a second prompt voice prompting the input of the target food material type, so as to instruct the user to input the target food material type. In the embodiment of the application, the example that the second prompt voice prompts the user to input the type of the target food material in a voice form is taken as an example for explanation, optionally, the second prompt voice may also prompt the user to manually input the type of the target food material on the touch display screen.
Step 611, controlling the microphone to collect a second response voice within the target time length after the speaker plays the second prompt voice. Step 612 is performed.
In the embodiment of the application, the response voice collected by the microphone within the target time length after the speaker plays the prompt voice prompting the user to speak the target food material type is referred to as the second response voice. For example, after hearing the second prompt voice, the user may speak the target food material type as prompted, and the controller may then extract the target food material type from the second response voice collected by the microphone. Optionally, the user may also manually input the target food material type on the touch display screen, and the controller may then obtain the target food material type from the information input by the user. It should be noted that the target time length here may be the same as or different from that in step 608, which is not limited in this embodiment of the application.
It should be noted that, in the embodiment of the present application, after the step 611, the target food material type of the food material may be determined, and at this time, the controller may end the operation. Optionally, the controller may further continue to perform the following steps to train the recognition model to update the recognition model.
Step 612, generating a retraining instruction. Step 613 is performed.
When the controller determines that the target identification result is wrong and the type of the target food material is obtained, the controller can generate a retraining instruction to instruct the controller to train the identification model. The retraining instruction may carry the type of the target food material, and the retraining instruction may indicate that the target recognition result does not include the type of the target food material, that is, the target recognition result is incorrect.
Step 613, determining target training data according to the retraining instruction. Step 614 is performed.
When the controller determines that the target recognition result obtained by recognizing the food materials accessed by the user is wrong, the recognition accuracy of the food material image by the recognition model can be considered to be low, and then the recognition model can be trained continuously to optimize the recognition model.
It should be noted that, in the embodiment of the present application, different training methods may be adopted to continue training the recognition model according to the different recognition results it outputs. For example, whether to retrain or incrementally train the recognition model may be determined according to the recognition result output by the recognition model. Retraining a model means training it with both the newly added training data and the model's historical training data to obtain a new model; incrementally training a model means adjusting the parameters related to the newly added training data according to that data so as to update the model. Optionally, retraining does not change the set of recognition results the model can output and only improves recognition accuracy, while incremental training may add to the recognition results the trained model can output.
It should be noted that the training data on which retraining is based differs from the training data on which incremental training is based, so the controller, after generating the retraining instruction, determines the target training data required for training the recognition model. For example, the controller may determine the target training data by the method shown in fig. 8, which may include:
step 6131, determine whether the target recognition result includes any food material type. When the target identification result does not include any food material type, step 6132 is performed; when the target identification result includes any food material type, step 6134 is performed.
In step 6131, the target identification result does not include any food material type, i.e. the controller determines that the type of the food material accessed by the user is not identified. The target identification result includes any food material type, that is, the controller has identified the type of the food material accessed by the user, but the identified type of the food material is wrong.
Step 6132, determining a first similarity between the feature vector of the target food material image and the feature vector of any food material type, and a second similarity between the feature vector of the target food material image and the feature vector of the target food material type. Step 6133 is performed.
The target food material image is a food material image whose recognition result does not include the target food material type, that is, a food material image corresponding to a wrong recognition result. It should be noted that the feature vector of a certain food material type may be the feature vector obtained by clustering the feature vectors of multiple frames of food material images corresponding to that food material type, where a food material image corresponding to a food material type is an image containing a food material of that type. For example, the feature vector of a food material image can be obtained by extracting image features of the food material image, such as statistical features, texture features, structural features and color features.
After the controller generates the retraining instruction, the characteristic vector of the target food material type indicated by the retraining instruction can be obtained. For the recognition result including the food material type but not the food material type as the target food material type, the controller may obtain the feature vector of the food material type included in the recognition result. Furthermore, the controller may calculate a first similarity between the feature vector of the food material type and the feature vector of the target food material image corresponding to the recognition result, and a second similarity between the feature vector of the target food material type and the feature vector of the target food material image.
For example, the controller may calculate the first similarity and the second similarity by the following similarity calculation formula. The similarity calculation formula may be
S(L_j, L_B) = 1 / (d(L_j, L_B) + σ)

where L_j represents the feature vector of the j-th frame of target food material image and L_B represents the feature vector of food material type B; the j-th frame of target food material image is any frame of target food material image, and food material type B is any food material type; σ represents a smoothing coefficient, which may take a fixed value such as 0.05; and d(L_j, L_B) represents the distance between the feature vectors L_j and L_B. It follows from the similarity calculation formula that the larger the distance between L_j and L_B, the smaller the similarity, indicating a smaller probability that the type of the food material in the j-th frame of target food material image is food material type B.
Optionally, in the embodiments of the present application, the distance between L_j and L_B may be represented by, but is not limited to, any of the following distances: Euclidean Distance, Manhattan Distance, Chebyshev Distance, Minkowski Distance, Mahalanobis Distance, Cosine Similarity, Hamming Distance, Pearson Correlation Coefficient, and KL divergence (Kullback-Leibler divergence).
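Under the reconstructed form above, the similarity computation is a one-liner over any chosen distance. A sketch using the Euclidean distance; the concrete 1/(d + σ) form is, as noted, an assumption consistent with the stated monotonic property, not necessarily the patent's exact formula:

```python
import math

def similarity(l_j, l_b, sigma=0.05):
    """Similarity between feature vectors l_j and l_b: 1 / (d + sigma),
    where d is the Euclidean distance and sigma smooths away division
    by zero. Larger distance -> smaller similarity."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(l_j, l_b)))
    return 1.0 / (d + sigma)

print(similarity([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 20.0
print(similarity([1.0, 0.0], [0.0, 1.0]))  # distant vectors -> ~0.68
```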
Step 6133, determine whether the first similarity is greater than the second similarity. When the first similarity is greater than the second similarity, step 6134 is performed; when the first similarity is less than or equal to the second similarity, step 6135 is performed.
If the first similarity is greater than the second similarity, the type of the food material in the target food material image is more likely to be the food material type in the recognition result of the target food material image (i.e., the wrong food material type). If the first similarity is smaller than the second similarity, the type of the food material in the target food material image is more likely to be the target food material type (i.e., the correct food material type). It should be noted that the embodiment of the present application is described by taking the case where step 6135 is performed when the first similarity equals the second similarity; alternatively, step 6134 may be performed in that case, which is not limited in the embodiment of the present application.
Step 6134, determining that the target training data includes: the target food material image, the target food material type and the historical training data of the recognition model.
When the controller determines that the target training data includes the target food material image, the target food material type and the historical training data of the recognition model, it can be determined that the recognition model needs to be retrained.
Step 6135, determining that the target training data includes: the target food material image and the target food material type.
When the controller determines that the target training data includes the target food material image and the target food material type, it can be determined that incremental training needs to be performed on the recognition model.
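The fig. 8 decision flow reduces to a small branch. A sketch; the dict layout and argument names are illustrative stand-ins, not the patent's data structures:

```python
def determine_target_training_data(recognized_type, first_sim, second_sim,
                                   food_image, target_type, history):
    """recognized_type: the (wrong) type in the target recognition result,
    or None if no type was recognized; first_sim / second_sim: the two
    similarities of step 6132; history: the model's historical training data."""
    if recognized_type is not None or first_sim > second_sim:
        # steps 6131/6133 -> 6134: retrain on new data plus history
        return {"mode": "retrain",
                "data": [(food_image, target_type)] + list(history)}
    # step 6135: incremental training on the new data only
    return {"mode": "incremental", "data": [(food_image, target_type)]}
```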
Step 614, training the recognition model with the target training data so as to update the recognition model.
The controller may determine, based on the target training data determined in step 613, whether to retrain or incrementally train the recognition model, and update the recognition model after training is complete. When the controller subsequently needs to identify food materials, food material images can be input into the updated recognition model for identification.
Step 615, storing the target recognition result.
When the target recognition result determined by the controller is correct, the controller can store the target recognition result in the memory so as to realize intelligent management of food in the refrigerator. For example, the controller may recommend a recipe or the like to the user based on the type of food material stored in the memory.
In the embodiment of the application, key images are selected from the images collected by the image pickup apparatus while the user accesses the food material, and the food material images intercepted from the key images are used for food material identification. Dynamic identification of the food material is thus achieved, which avoids the situation in which part of the food material cannot be identified because it is blocked when only images of the storage chamber are collected, and also avoids the situation in which food materials accessed in the depths of the refrigerator, outside the field of view of the image pickup apparatus, go unrecorded. In addition, the number of food material images obtained in the embodiment of the application is small and the food material features in them are obvious; since few food material images are input into the identification model, the computation time of identification is reduced and the real-time performance of food material identification is improved. Moreover, among the recognition results of multiple frames of food material images, the comprehensive confidence of each food material type is determined according to the sub-confidence in each recognition result, and the target recognition result is then determined, which overcomes the low confidence of recognizing a single frame and improves the accuracy of food material identification. Finally, different model training methods are selected for model optimization according to the different recognition results, so that the performance of the recognition model is continuously improved.
In another optional embodiment of the present application, the refrigerator may further include a code scanning module. The controller can be in communication connection with the code scanning module and is used for acquiring the graphic code collected by the code scanning module, and the controller can then determine the food material type of the food material according to that graphic code. Illustratively, the graphic code may include a bar code or a two-dimensional code. Illustratively, the code scanning module may be located on the surface of the door close to the opening, or on the surface of the door far from the opening. Alternatively, the code scanning module may be located at any other position on the cabinet or the door, which is not limited in this embodiment of the application.
Optionally, the controller may further perform the following steps to determine the food material type of the food material:
step 901, determining the access state of the food material and the identification state of the food material according to the multi-frame target images.
It should be noted that, after the controller determines multiple frames of target images in the above steps, the controller may execute step 901. Thereafter, the controller may perform step 601. The access state comprises a food material storing state or a food material taking state, and the identification state comprises a graphic code carrying state or a graphic code not carrying state.
For example, the controller may identify the plurality of frames of target images to determine whether an image including a graphic code exists in the plurality of frames of target images. When the image comprising the graphic code exists in the multi-frame target image, the controller can determine that the identification state of the food material is the state carrying the graphic code; when the image including the graphic code does not exist in the multi-frame target image, the controller may determine that the identification state of the food material is a state without carrying the graphic code.
The controller can determine a multi-frame color image corresponding to the multi-frame depth image according to the multi-frame depth image including the hand, further recognize the multi-frame color image, and determine the change condition of the hand state. The hand state comprises a state of not taking food materials or a state of taking food materials. When the controller determines that the hand state is changed from the food material not taken state to the food material taken state according to the multi-frame color images, the access state of the food materials can be determined to be the food material taken state; when the controller determines that the hand state is changed from the food material taking state to the food material non-taking state according to the multi-frame color images, the access state of the food material can be determined to be the food material storing state.
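The access-state rule can be expressed as a scan for the first hand-state transition. A sketch with an assumed "empty" / "holding" encoding of the hand state per color frame:

```python
def access_state(hand_states):
    """hand_states: chronological hand states derived from the color images.
    'empty' -> 'holding' means the user grasped food material in the chamber
    (taking out); 'holding' -> 'empty' means it was put down (stocking)."""
    for prev, curr in zip(hand_states, hand_states[1:]):
        if prev == "empty" and curr == "holding":
            return "food material taking-out state"
        if prev == "holding" and curr == "empty":
            return "food material stocking state"
    return None  # no transition observed

print(access_state(["empty", "empty", "holding", "holding"]))
# -> 'food material taking-out state'
```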
Step 902, detecting whether the code scanning module collects a graphic code within the target time length after the target recognition result is determined. When the code scanning module collects a graphic code within the target time length after the target recognition result is determined, step 903 is executed; when it does not, step 906 is executed.
It should be noted that, when the controller determines in step 901 that the identification status of the food material is in the state of carrying the graphic code, and determines in step 606 that the target identification result does not include any food material type, the controller executes step 902.
Optionally, when the target recognition result does not include a food material type, that is, the controller has not determined the food material type through image recognition, the controller may control the speaker to play an indication voice instructing the user to move the target food material so as to place its graphic code within the field of view of the code scanning module. On hearing the voice, the user can place the graphic code on the food material within the field of view of the code scanning module, so that the module can collect it. Optionally, the code scanning module may start collecting graphic codes when the door is determined to be in the open state.
Step 903, analyzing the graphic code collected by the code scanning module within the target time length to obtain an analysis result. Step 904 is performed.
It should be noted that any graphic code may correspond to a character string, and a graphic code on an object is generally used to indicate related information of the object. The user (such as a manufacturer of the object) can set the related information of the object indicated by the graphic code by himself, and then the related information indicated by the graphic code is input into the graphic code information base, so that other users can search the related information of the object indicated by the graphic code in the graphic code information base according to the graphic code. It should be noted that the graphic code information base may be stored in a memory in the refrigerator, or may be stored on the internet, or other devices connected to the controller.
The graphic code information base may include information corresponding to a plurality of character strings. When the controller parses a graphic code, the controller may first determine the target character string corresponding to the graphic code, and then search the graphic code information base for the information corresponding to the target character string. If information corresponding to the target character string exists in the graphic code information base, that information may be determined as the analysis result of the graphic code, and the analysis result includes the related information of the object (i.e., the target object) indicated by the graphic code. If no information corresponding to the target character string exists in the graphic code information base, the analysis result can be determined to be that the graphic code does not indicate information of any object; in this case the analysis result does not include related information of an object, and the controller fails to analyze the graphic code.
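Viewed as data, the graphic code information base is a map from a graphic code's character string to the related information of the object it indicates. A sketch with a plain dict standing in for the base (the keys and example entry are illustrative):

```python
def parse_graphic_code(code_string, code_info_base):
    """Returns the analysis result for the graphic code's character string,
    or a failure marker when the base holds no entry for it."""
    info = code_info_base.get(code_string)
    if info is None:
        return {"parsed": False}  # analysis fails: string not in the base
    return {"parsed": True, **info}

base = {"6901234567892": {"food material type": "milk", "shelf life": "7 days"}}
print(parse_graphic_code("6901234567892", base))  # parsed, type 'milk'
print(parse_graphic_code("0000000000000", base))  # {'parsed': False}
```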
Step 904, determining whether the parsing result includes the food material type. When the analysis result includes the food material type, execute step 905; when the parsing result does not include the food material type, step 906 is executed.
Illustratively, the information related to the food material may comprise one or more of a type of food material, a shelf life, a production date, a manufacturer, a volume, a weight, a price, and the like. The controller may analyze the analysis result of the graphic code to determine whether the analysis result includes the food material type, and then determine whether the food material type of the food material can be determined according to the analysis result.
Step 905, determining the food material type of the food material according to the analysis result.
When the analysis result of the graphic code includes the food material type, the controller may determine the food material type in the analysis result as the food material type of the food material. Therefore, the method can be regarded as that the controller analyzes the graphic code on the food material to obtain the food material type of the food material.
Step 906, determining whether the access state of the food material is the food material taking-out state. When the access state is the food material taking-out state, step 907 is executed; when the access state is not the food material taking-out state, step 610 is executed.
Step 907, determining the similarity between each frame of food material image and each frame of reference image in at least one frame of reference image to obtain at least one reference similarity. Step 908 is performed.
The reference images are images of the food materials that are stored in the storage chamber and whose identification state is the state of carrying a graphic code; that is, a reference image is an image of a food material carrying a graphic code, and the food material type of the food material in a reference image is known.
In step 906 the controller determines that the access state of the food material is the food material taking-out state, that is, determines that an access condition is satisfied, where the access condition includes: the identification state of the food material is the state of carrying a graphic code, the target recognition result does not include any food material type, no graphic code is collected by the code scanning module within the target time length after the controller determines the target recognition result, and the access state of the food material is the food material taking-out state. The controller thus determines that the user has taken a target food material carrying a graphic code out of the storage chamber, that the food material type was not identified by image recognition, and that the user did not trigger the code scanning module to scan the graphic code of the food material. The controller can then determine at least one frame of reference image of the food materials of known type stored in the storage chamber, and determine the similarity between the food material image containing the food material and each frame of reference image to obtain at least one reference similarity; determining the similarity between two frames of images means determining the similarity of their feature vectors. The similarity between the food material image and a reference image can represent the probability that the food material type of the food material is the food material type of the food material in the reference image.
For example, in step 603 the controller determines five frames of food material images containing the food material, three food materials carrying graphic codes are stored in the storage chamber, and the controller can acquire three frames of reference images for each of the three food materials, i.e., nine frames of reference images in total. The controller determines the similarity between each frame of food material image and each frame of reference image, so 5 × 9 = 45 reference similarities can be determined.
Alternatively, the controller may determine the similarity between each frame of food material image and each frame of reference image by any one of a Structural Similarity Index (SSIM) method, a cosine similarity determination method, and a histogram-based similarity determination method.
For example, the memory in the refrigerator in the embodiment of the present application may store data related to food materials with graphic codes stored in the storage chamber. Optionally, the related data may include the food material type of the food material, the character string corresponding to the graphic code, the volume or weight, and the corresponding reference image. For example, the memory may store the data related to the food materials shown in table 2 below. It should be noted that the reference images in table 2 are only used to illustrate that the related data of the food material includes corresponding images, and specific image contents of the images are not illustrated.
TABLE 2 (rendered as an image in the original publication; for each food material carrying a graphic code it lists the food material type, the character string corresponding to the graphic code, the volume or weight, and the corresponding reference image)
Step 908, determining whether the at least one reference similarity includes a similarity greater than a similarity threshold. When the at least one reference similarity includes a similarity greater than the similarity threshold, step 909 is executed; when every reference similarity is less than or equal to the similarity threshold, step 610 is executed.
When the similarity between a certain frame of food material image and a certain frame of reference image is greater than a similarity threshold value, it can be considered that the food material type of the food material in the food material image is possibly the same as the food material type of the food material in the frame of reference image, the frame of reference image has a reference value for determining the food material type of the food material, and then the food material type of the food material can be further determined based on the frame of reference image. When the similarity between a certain frame of food material image and a certain frame of reference image is smaller than or equal to the similarity threshold, it can be considered that the food material type of the food material in the food material image is unlikely to be the same as the food material type of the food material in the reference image, and the frame of reference image does not have a reference value for determining the food material type of the food material.
Step 909, determining the food material type of the food material in the reference image corresponding to the maximum similarity among the at least one reference similarity as the food material type of the food material.
The controller may determine, in all the reference images, a reference image in which the type of the food material included in the reference image is most likely to be the same as the type of the food material, and further determine the type of the food material in the reference image as the type of the food material. The reference image is the image with the maximum reference similarity with any frame of food material image in all the reference images.
It should be noted that, because a refrigerator's storage chamber typically does not hold many food materials carrying graphic codes, the number of reference images can be small, which speeds up finding the maximum among the at least one reference similarity and thus ensures the speed of determining the food material type. It should also be noted that, in the embodiment of the present application, when the access state of the food material is the food material taking-out state, the food material type is determined by determining the similarity between the food material images and the reference images. Therefore, the graphic code carried by the target food material does not need to be collected when the food material is taken out; codes need be scanned neither when storing nor when taking out food materials, which can improve the user experience.
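Steps 907 to 909 amount to a best-match search over the few reference images. A sketch reusing the similarity function sketched earlier; the feature-vector representation and threshold are assumptions of the sketch:

```python
def match_reference(food_vecs, reference_items, sim, threshold):
    """food_vecs: feature vectors of the food material images;
    reference_items: (food_type, feature_vector) pairs for the food
    materials stored with graphic codes. Returns the food type of the
    best match whose similarity exceeds threshold, else None (step 610)."""
    best_type, best_sim = None, threshold
    for f in food_vecs:
        for food_type, r in reference_items:
            s = sim(f, r)
            if s > best_sim:
                best_type, best_sim = food_type, s
    return best_type
```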
It should be noted that, in the embodiment of the present application, the controller may perform step 610 when determining that a third condition is satisfied. The third condition comprises: the analysis result of the graphic code acquired by the code scanning module does not include any food material type; or every reference similarity determined by the controller is less than or equal to the similarity threshold; or the identification state of the food material is the state of carrying a graphic code, the target identification result does not include any food material type, no graphic code is acquired by the code scanning module within the target duration, and the access state of the food material is the food material storing state.
In other words, when the controller cannot determine the food material type by analyzing the graphic code acquired by the code scanning module, cannot determine it by matching against reference images when the food material is taken out, and cannot determine it when the food material is stored, the controller may control the loudspeaker to play a prompt voice so that the food material type can be determined through user input.
Optionally, when the analysis result of the graphic code acquired by the code scanning module does not include any food material type and the controller determines the food material type through user input, the controller may upload the correspondence between the acquired graphic code and the food material type to the graphic code information base, so that the type can be determined directly from the graphic code the next time it is acquired. For example, the controller may store the correspondence between the character string corresponding to the graphic code and the food material type in the graphic code information base. This keeps the graphic code information base continuously updated, enriches the graphic code information it includes, and reduces the limitation of determining food material types by graphic code analysis.
Optionally, in the embodiment of the application, the user can also attach a graphic code to a food material and enter the information indicated by that graphic code into the graphic code information base, so that food materials that originally carry no graphic code can also be identified by graphic code analysis, improving the refrigerator's management of stored food material information. The graphic code attached by the user can also be reused, for example by arranging it on the food material with a clip or a vessel bearing the graphic code.
It should be noted that, in the related art, the food material type is determined only by image recognition, so the accuracy is low, the type often cannot be determined at all, and the approach is therefore highly limited. Likewise, when the type is determined only by analyzing the graphic code carried by the food material, the graphic code analysis is prone to fail. In the embodiment of the application, the food material type can be determined by combining image recognition, graphic code analysis and voice input, which ensures that the type is determined effectively, improves the accuracy of the determination, reduces its limitation, and facilitates intelligent management of food material information by the refrigerator. In addition, since codes do not need to be scanned both when storing and when taking out food materials, the process of determining the food material type is simplified and the user experience is improved.
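To make the combined strategy concrete, the following is a hedged sketch of the fallback order described above. The subsystem functions (recognize, parse_graphic_code, ask_user_by_voice) are hypothetical stand-ins for the recognition model, the code scanning module and the speaker/microphone interaction; their names and placeholder behaviours are assumptions, not the patent's interfaces.

from typing import List, Optional

def recognize(food_images: List) -> Optional[str]:
    return None  # placeholder: the recognition model found no type

def parse_graphic_code(code: str) -> Optional[str]:
    # placeholder graphic code information base (made-up code string)
    return {"6901028": "milk"}.get(code)

def ask_user_by_voice(prompt: str) -> str:
    print(prompt)     # placeholder: play the prompt voice through the speaker
    return "chives"   # placeholder: response voice collected by the microphone

def determine_food_type(food_images: List, code: Optional[str]) -> str:
    food_type = recognize(food_images)           # 1) image recognition
    if food_type is None and code is not None:
        food_type = parse_graphic_code(code)     # 2) graphic code analysis
    if food_type is None:                        # 3) step 610: voice input
        food_type = ask_user_by_voice("Please say the type of the food material.")
    return food_type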
In yet another optional embodiment of the present application, the refrigerator may further obtain the shelf life and the volume of the food material, so as to manage the food material information. Optionally, the controller may further perform the following steps to determine the shelf life and volume of the food material:
Step b21, when the first food material is stored into the storage chamber at a first time, determining a target food material condition satisfied by the first food material according to multiple frames of first images that are acquired by the camera and include the first food material, where the target food material condition includes a condition on the food material type and a condition on the food material state.
Optionally, any food material condition may include a condition on the food material type only, or a condition on the food material type together with a condition on the food material state; the food material state may be a processed state or an unprocessed state. For example, apple and banana are food material types. If a food material condition includes the type condition "the food material type is apple" and the state condition "the food material state is an unprocessed state", then a food material satisfying this condition is an unprocessed apple.
In the following embodiments, the target food material condition is described as including both a condition on the food material type and a condition on the food material state. In the embodiment of the present application, no food material satisfying the target food material condition is stored in the storage chamber before the first time, so the controller is not yet managing the food material information corresponding to the target food material condition, i.e., the information of food materials satisfying that condition.
It should be noted that the multiple frames of first images described in this embodiment of the application may be images acquired by the image pickup apparatus within one acquisition cycle. The controller may perform image recognition on these frames to determine the food material type and food material state (processed or unprocessed) of the first food material, obtain the type condition and state condition it satisfies, and thereby determine the target food material condition. When the controller cannot determine the food material type or food material state by image recognition alone, it can play a prompt voice through the loudspeaker so that the type or state is determined through user input. For example, if the controller determines that the food material type of the first food material is apple and its food material state is an unprocessed state, the target food material condition is that the food material type is apple and the food material state is an unprocessed state.
Optionally, the identification model in the embodiment of the present application may be further configured to output the food material state according to the input food material image. It should be noted that the manner of determining the target food material condition satisfied by the first food material may refer to the manner of determining the food material type described above, and details are not repeated here.
Alternatively, when the controller determines only the food material type of the first food material from the recognition result output by the identification model, without determining its food material state, the controller may determine the food material state as follows. The controller determines a third similarity between the first food material and a first reference food material, and a fourth similarity between the first food material and a second reference food material, and determines the food material state of the reference food material corresponding to the larger of the third and fourth similarities as the food material state of the first food material. The first reference food material and the second reference food material both have the food material type of the first food material; the food material state of the first reference food material is a processed state, and that of the second reference food material is an unprocessed state.
That is, when the controller has determined only the food material type of the first food material and not its food material state, it can determine which of the first reference food material and the second reference food material the first food material more closely resembles, according to the similarity between the first food material and each reference food material, and take the corresponding food material state.
The similarity between two food materials can be represented by the similarity between images containing the two food materials. The food material state of the first reference food material is a processed state, and the food material state of the second reference food material is an unprocessed state. If the first food material is more similar to the first reference food material, determining that the food material state of the first food material is a processed state; if the first food material is more similar to the second reference food material, the food material status of the first food material can be determined to be an unprocessed status.
For example, the controller may determine similarity between each of the at least one frame of food material image and each of the at least one frame of first reference image, to obtain at least one first reference similarity, and then determine a maximum similarity among the at least one first reference similarity as a third similarity. Wherein the first reference image comprises a first reference food material. The controller may determine a similarity between each frame of food material image in the at least one frame of food material image and each frame of second reference image in the at least one frame of second reference image to obtain at least one second reference similarity, and further determine a maximum similarity among the at least one second reference similarity as a fourth similarity. Wherein the second reference image comprises a second reference food material. The food material corresponding to the greater similarity of the third similarity and the fourth similarity is the food material more similar to the first food material, and the controller can determine the food material state corresponding to the greater similarity as the food material state of the first food material.
Alternatively, the controller may determine the similarity between each frame of food material image and each frame of reference image by any one of a Structural Similarity Index (SSIM) method, a cosine similarity determination method, and a histogram-based similarity determination method.
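As a minimal, self-contained sketch of two of the measures named above, the following computes a cosine similarity and a histogram-based similarity on numpy image arrays and uses them to pick the food material state via the third and fourth similarities; a real SSIM would come from a library such as scikit-image and is not shown. All names here are illustrative assumptions, not the patent's.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def histogram_similarity(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    ha, _ = np.histogram(a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0, 255), density=True)
    return cosine_similarity(ha, hb)

def food_state(food_imgs, processed_refs, unprocessed_refs,
               sim=histogram_similarity) -> str:
    # third similarity: best match against references in the processed state
    third = max(sim(f, r) for f in food_imgs for r in processed_refs)
    # fourth similarity: best match against references in the unprocessed state
    fourth = max(sim(f, r) for f in food_imgs for r in unprocessed_refs)
    return "processed" if third >= fourth else "unprocessed"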
In an embodiment of the application, the controller may further determine the first time at which the first food material is stored into the storage chamber. For example, the controller may determine the first time in either of the following ways:
In the first mode, the controller may take any time between the moment the image pickup apparatus starts acquiring the first images and the moment it stops as the first time at which the first food material is stored into the storage chamber. That is, any time within the acquisition cycle in which the first images are acquired may be determined as the first time.
For example, the first time may be the time at which the controller determines that the door changes from the open state to the closed state, or the time at which it changes from the closed state to the open state. Since storing a food material usually takes little time, treating any moment of the storing process as the storage time introduces only a small error. Moreover, the first time is used to determine the shelf life of the food material, and the time a user needs to access food is negligible compared with that shelf life, so determining any time within the acquisition cycle of the first images as the first time does not affect the shelf-life determination.
In the second mode, the controller may determine the time at which the user's hand extends into the storage chamber as the first time at which the first food material is stored into the storage chamber.
For example, the controller may determine the movement trajectory of the user's hand from the multiple frames of first images, and then determine the image in which the hand reaches a target position, the target position being the position in the trajectory farthest from its starting point. The controller may consider the target position to lie inside the storage chamber, so that the hand extends into the chamber when it reaches that position, and may take the time at which the image pickup apparatus captured that image as the first time. For the manner of determining the movement trajectory of the hand, reference may be made to the description of step 503, which is not repeated here.
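The following is a sketch of this second mode, under the assumption that the trajectory is available as a list of (timestamp, (x, y)) hand positions extracted from the first images; the first time is then the capture time of the frame in which the hand is farthest from the trajectory's starting point.

import math
from typing import List, Tuple

def first_time_from_trajectory(track: List[Tuple[float, Tuple[float, float]]]) -> float:
    _, start = track[0]                       # starting point of the trajectory
    def dist(p):                              # distance from the starting point
        return math.hypot(p[0] - start[0], p[1] - start[1])
    # the frame where the hand reaches the target position (farthest from start)
    t_target, _ = max(track, key=lambda item: dist(item[1]))
    return t_target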
It should be noted that the controller may determine the access state of the first food material according to the multiple frames of first images; for this, reference may be made to the description of determining the access state above, and details are not repeated here. In the embodiment of the present application, the access state of the first food material determined by the controller is the food material storing state.
Step b22, determining the food material information corresponding to the target food material condition, where the food material information includes the shelf life and the volume.
After the controller determines that the target food material condition is met by the first food material in step b21, the controller may search for food material information corresponding to the target food material condition in a food material information base, where the food material information base may include a corresponding relationship between various target food material conditions (i.e., a combination of a food material type and a food material state) and the food material information. For example, the food material information library may include the corresponding relationship shown in table 3 below. As shown in table 3 below, the food material information corresponding to the target food material condition may include, in addition to the shelf life and the volume: the food material storage method meeting the target food material condition and the reference image comprising the food material meeting the target food material condition. Optionally, the food material information corresponding to the target food material condition may further include the weight of the food material or other information, which is not limited in the embodiment of the present application.
TABLE 3
[Table 3 is rendered as an image in the source; its contents are not reproduced here.]
It should be noted that the food material information corresponding to the target food material condition may include at least one of the shelf life and the volume; in the embodiment of the present application it includes both. Optionally, the food material information may include only the shelf life or only the volume.
Assume the first food material is unprocessed chives, that is, its food material type is chives and its food material state is an unprocessed state; the target food material condition is then that the food material type is chives and the food material state is an unprocessed state. In step b22, the controller may determine, according to the correspondence shown in table 3 above, that the shelf life corresponding to the target food material condition is 1 to 3 days and the volume is 2000 cubic centimeters. Note that table 3 gives the shelf life as a duration range by way of example; optionally, the shelf life may be a fixed duration, or the controller may take any duration within the range as the shelf life corresponding to the target food material condition.
It should be noted that, when the controller determines the food material information corresponding to the target food material condition, the food material information may be recorded, so as to manage the information related to the food materials in the refrigerator. For example, the controller may record the food material information in a certain record table stored in the memory. Optionally, the record table may record information such as a storage location of the food material in the storage chamber, an access state of the food material, and an access time, in addition to the volume and shelf life of the food material.
For example, the controller may obtain a record table as shown in table 4 below after recording the food material information corresponding to the target food material condition satisfied by the first food material in the record table.
TABLE 4
[Table 4 is rendered as an image in the source; its contents are not reproduced here.]
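For illustration, a record table like table 4 could be kept as a mapping keyed by the target food material condition, i.e. a (food material type, food material state) pair; the field names below are assumptions, not the patent's, and the sketch is reused by the update examples after tables 4 and 5.

from dataclasses import dataclass

@dataclass
class FoodRecord:
    shelf_life_days: float   # remaining shelf life
    volume_cm3: float        # estimated volume in cubic centimeters
    location: str            # storage location within the storage chamber
    access_state: str        # "stored" or "taken out"
    access_time: float       # time of the most recent access, in days

record_table = {
    ("chives", "unprocessed"): FoodRecord(
        shelf_life_days=3.0, volume_cm3=2000.0,
        location="second storage layer", access_state="stored", access_time=0.0),
}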
Step b23, determining, according to multiple frames of third images acquired by the camera, the food material condition satisfied by a third food material taken out of the storage chamber at a third time after the first time.
It should be noted that each time the controller determines that a food material is taken out of, or stored into, the storage chamber, it can identify that food material to determine its food material type and food material state, and thereby the food material condition it satisfies.
For example, it is assumed that after the first food material is stored in the storage chamber at the first time, the controller determines that the user takes out the third food material from the storage chamber, that is, the access state of the third food material is the food material taking-out state, and the controller may determine that the time when the user takes out the third food material is the third time. Further, the controller can also determine a food material type and a food material status of the third food material to determine the food material condition satisfied by the third food material.
It should be noted that the manner in which the controller determines the access state of the third food material is the same as that used for the first food material, and the manner in which it determines the food material condition satisfied by the third food material is the same as that used for determining the target food material condition satisfied by the first food material; details are not repeated here.
In the embodiment of the present application, the third food material is also taken as a food material satisfying the target food material condition, that is, the food material condition satisfied by the third food material is the target food material condition.
Step b24, when the third food material meets the target food material condition, updating the food material information corresponding to the target food material condition to obtain the first food material information.
The controller can determine whether the food material condition satisfied by the third food material is the target food material condition, so as to determine whether the third food material is the same kind of food material as the first food material and whether the update condition is satisfied. The update condition may be that a food material satisfying the target food material condition is stored into or taken out of the storage chamber, i.e., a food material of the same kind as the first food material is accessed. When the food material condition satisfied by the third food material is the target food material condition, the controller may determine that the third food material is the same kind as the first food material, conclude that the stored food materials satisfying the target food material condition have changed, and update the food material information corresponding to the target food material condition.
Illustratively, the updated shelf life corresponding to the target food material condition is T = T0 − (T3 − T1), where T0 is the shelf life corresponding to the target food material condition before the update, T1 is the first time, and T3 is the third time. When a food material satisfying the target food material condition is taken out of the storage chamber, the volume of such food materials in the chamber decreases, so the updated volume is V = V0 − ΔV2, where V0 is the volume before the update and ΔV2 is the volume of the third food material (i.e., the taken-out food material satisfying the target food material condition). The first food material information (i.e., the updated food material information corresponding to the target food material condition) can thus include T and V. Since the specific volume of a food material is generally hard to determine, ΔV2 can be set to a fixed value, giving a rough estimate of the remaining volume and thus fuzzy management of the food material volume. Optionally, the specific volume of the third food material may instead be measured, and the accurate ΔV2 determined from the measurement.
Suppose the third food material is taken out one day after the first food material was stored, and both are unprocessed chives, i.e., both satisfy the target food material condition that the food material type is chives and the food material state is an unprocessed state; suppose also that the volume ΔV2 of the third food material is a fixed value of 1000 cubic centimeters. The controller may then update the food material information corresponding to the target food material condition, for example updating the record table shown in table 4 above into the record table shown in table 5 below. As shown in table 5, the updated volume may be 1000 cubic centimeters and the updated shelf life 2 days.
TABLE 5
[Table 5 is rendered as an image in the source; its contents are not reproduced here.]
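Continuing the record-table sketch above, the step b24 update could look as follows, with times measured in days; the worked chive example reproduces the table 5 figures (2 days of shelf life and 1000 cubic centimeters remaining).

def update_on_takeout(record: FoodRecord, t1: float, t3: float,
                      taken_volume_cm3: float = 1000.0) -> FoodRecord:
    record.shelf_life_days -= (t3 - t1)     # T = T0 - (T3 - T1)
    record.volume_cm3 -= taken_volume_cm3   # V = V0 - dV2
    record.access_state = "taken out"
    record.access_time = t3
    return record

# Worked example: unprocessed chives stored at day 0 with a 3-day shelf life
# and 2000 cm^3, taken out one day later with dV2 fixed at 1000 cm^3.
rec = update_on_takeout(record_table[("chives", "unprocessed")], t1=0.0, t3=1.0)
assert rec.shelf_life_days == 2.0 and rec.volume_cm3 == 1000.0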
Step b25, determining, according to multiple frames of second images acquired by the camera, the food material condition satisfied by a second food material stored into the storage chamber at a second time after the third time.
For example, it is assumed that after the third food material is taken out of the storage chamber at the third time, the controller determines that the user stores the second food material into the storage chamber, that is, the access state of the second food material is the food material storing state, and the controller may determine that the time when the user stores the second food material is the second time. Further, the controller can also determine the food material type and food material status of the second food material to determine the food material condition satisfied by the second food material.
It should be noted that the manner in which the controller determines the access state of the second food material is the same as that used for the first and third food materials, and the manner in which it determines the food material condition satisfied by the second food material is the same as that used for the first and third food materials; details are not repeated in the embodiments of the present application.
Step b26, determining whether the food material condition satisfied by the second food material is the same as that satisfied by the third food material. When the two conditions are different, perform step b27; when they are the same, perform step b28.
The controller can determine whether the food material condition satisfied by the second food material is the same as the food material condition satisfied by the third food material to determine whether the second food material is the same as the third food material. Further, the controller may determine whether the food material stored in the storage chamber satisfying the target food material condition is changed again.
Step b27, determining the food material information corresponding to the food material condition satisfied by the second food material.
When the controller determines that the food material condition satisfied by the second food material differs from that satisfied by the third food material, it can conclude that the second food material is different from the third food material, determine the food material information corresponding to the condition satisfied by the second food material, and record that information. From this record the controller can establish that a food material satisfying that condition (e.g., the second food material) is stored in the storage chamber, and manage its food material information accordingly.
Step b28, determining whether the time difference between the second time and the third time is less than a duration threshold. When the time difference is less than the duration threshold, perform step b29; when it is greater than or equal to the duration threshold, perform step b27.
When the food material condition satisfied by the second food material is the same as that satisfied by the third food material (i.e., both are the target food material condition), the controller may determine that the second food material is the same kind as the third food material, and conclude that the stored food materials satisfying the target food material condition have changed again; the second food material is then the food material satisfying the target food material condition stored at the second time. The controller may further determine the specific change, for example by computing the time difference between the second time and the third time and comparing it with the duration threshold.
When the time difference between the second time and the third time is greater than or equal to the duration threshold, the controller may determine that a second food material of the same kind as the third food material was stored a long time after the third food material was taken out, and may therefore treat the second food material as a newly purchased food material. In that case the food material information corresponding to the condition satisfied by the second food material may be determined anew and recorded. Optionally, since the condition satisfied by the second food material is the target food material condition, the controller may at this point replace the recorded food material information (i.e., the first food material information) with the food material information corresponding to the target food material condition in the food material information base.
Step b29, determining whether the second food material is at least part of the third food material. When it is, perform step b210; when it is not, perform step b27.
When the time difference between the second time and the third time is less than the duration threshold, the controller may determine that a second food material of the same kind as the third food material was stored shortly after the third food material was taken out. A user who takes food out of the storage chamber and does not finish it usually returns the remainder to the refrigerator within a short time, so in this case the controller may determine that the second food material is likely the remainder of the taken-out third food material.
For example, the controller may control the speaker to play a prompt voice asking whether the second food material is at least part of the third food material, acquire the response voice collected by the microphone within the target duration after the prompt is played, and determine from the response whether the second food material is at least part of the third food material.
Step b210, updating the food material information corresponding to the target food material condition to obtain second food material information.
When the controller determines that the second food material is at least part of the third food material, the controller may determine that the food material satisfying the target food material condition stored in the storage chamber is changed again, and may update the food material information corresponding to the target food material condition.
Illustratively, the updated shelf life corresponding to the target food material condition is T = T0 − (T2 − T3), where T0 is the shelf life corresponding to the target food material condition before the update, T2 is the second time, and T3 is the third time. When a food material satisfying the target food material condition is stored into the storage chamber, the volume of such food materials in the chamber increases, so the updated volume is V = V0 + ΔV1, where V0 is the volume before the update and ΔV1 is the volume of the second food material (i.e., the stored food material satisfying the target food material condition); ΔV1 may be a fixed value or the specific volume of the second food material. The second food material information (i.e., the updated food material information corresponding to the target food material condition) can thus include T and V. Note that here the pre-update shelf life T0 is the shelf life in the first food material information, i.e., the T updated in step b24, and the pre-update volume V0 is the volume in the first food material information, i.e., the V updated in step b24. Optionally, when ΔV1 is a fixed value set together with ΔV2, ΔV1 may or may not be equal to ΔV2.
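Tying steps b26 through b210 together, and continuing the record-table sketch above, the decision flow on a store event could look as follows; confirm_by_voice and lookup_info_base are hypothetical stand-ins for the speaker/microphone confirmation of step b29 and for searching the food material information base of table 3.

def confirm_by_voice(prompt: str) -> bool:
    print(prompt)  # placeholder: play the prompt and parse the response voice
    return True

def lookup_info_base(cond) -> FoodRecord:
    # placeholder: search the food material information base (table 3)
    return FoodRecord(shelf_life_days=3.0, volume_cm3=2000.0,
                      location="unknown", access_state="stored", access_time=0.0)

def on_food_stored(cond_2nd, cond_3rd, t2: float, t3: float,
                   threshold_days: float, stored_volume_cm3: float = 1000.0):
    if cond_2nd != cond_3rd:                  # b26 -> b27: a different food material
        record_table[cond_2nd] = lookup_info_base(cond_2nd)
    elif (t2 - t3) >= threshold_days:         # b28 -> b27: treated as newly purchased
        record_table[cond_2nd] = lookup_info_base(cond_2nd)
    elif confirm_by_voice("Is this the remainder of the food you took out?"):
        rec = record_table[cond_2nd]          # b29 -> b210: update in place
        rec.shelf_life_days -= (t2 - t3)      # T = T0 - (T2 - T3)
        rec.volume_cm3 += stored_volume_cm3   # V = V0 + dV1
        rec.access_state, rec.access_time = "stored", t2
    else:                                     # b29 -> b27
        record_table[cond_2nd] = lookup_info_base(cond_2nd)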
Optionally, in the embodiment of the present application, the controller may further determine in real time whether the shelf life corresponding to the target food material condition is less than a time threshold, and when it is, control the speaker to play a third prompt voice indicating the shelf life corresponding to the target food material condition. On hearing the third prompt voice, the user knows that the food materials satisfying the target food material condition are about to spoil and can deal with them in time.
Optionally, in this embodiment of the application, when the third food material is taken out, the controller may control the speaker to play a prompt voice asking whether any food material satisfying the same food material condition as the third food material remains in the storage chamber, and confirm the answer from the response voice collected by the microphone. When the controller determines that no such food material remains, it may delete the food material information corresponding to the target food material condition: once no food material satisfying the target food material condition is stored in the chamber, the controller no longer needs to manage its food material information.
In the embodiment of the present application, the food material condition is described, by way of example, as including both a condition on the food material type and a condition on the food material state. Optionally, the food material condition may include only the food material type; in that case the controller simply omits the step of determining the food material state.
It should be noted that the embodiment of the present application takes as an example a target food material condition consisting only of a type condition and a state condition, with the controller recording one piece of food material information per combination of food material type and food material state. For example, the stored record table may contain only one correspondence between processed apples and food material information. Optionally, the target food material condition may also include conditions on other information of the food material, such as a condition on the storage location; the controller then records one piece of food material information per combination of food material type, food material state and storage location. The stored record table may, for instance, contain one correspondence for processed apples stored on the first layer of the refrigerating compartment and another for processed apples stored on the second layer, the two storage-location conditions belonging to two distinct target food material conditions. Conditions on yet other information can be handled analogously to the storage-location case, and are not detailed in the embodiment of the present application.
It should be noted that, in the embodiment of the present application, the terms first food material, second food material and third food material are only used to distinguish food materials in different access scenes, and their names may be interchanged arbitrarily; likewise the first time, second time and third time are only used to distinguish different times. In the above embodiment the second food material is stored at the second time; optionally, the time at which a food material satisfying the target food material condition is taken out may also be referred to as the second time.
It should be understood that the terms "first", "second", "third" and the like in the description, claims and drawings of the present application are used to distinguish similar objects and do not necessarily describe a particular sequence or chronological order. Data so termed are interchangeable where appropriate, so that the embodiments of the application can be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to execute the method provided by the embodiments of the present application.

The above description is only exemplary of the present application and should not be taken as limiting; any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A refrigerator, characterized in that the refrigerator comprises:
a cabinet including a storage compartment having an opening;
the door is movably connected with the shell and used for shielding the opening;
the camera equipment is used for acquiring an image at the opening;
a shelf in the storage compartment for dividing the storage compartment into a plurality of storage layers;
the weight sensor is used for detecting the weight of the corresponding shelf and the object borne by the shelf;
a controller to:
determining a moving track of the hand according to a multi-frame image including the hand acquired by the camera equipment;
determining a target storage area which is passed by the hand in a plurality of storage areas according to the moving track, wherein the storage areas comprise at least two storage layers, and the storage layers in the plurality of storage areas are different;
determining a target storage layer in which the food material stored in the target storage area is changed according to the weight detected by the weight sensor corresponding to at least one shelf located between the respective storage layers of the target storage area.
2. The refrigerator of claim 1, wherein the controller is further configured to:
when G isi’-GiWhen the signal is not equal to 0, the signal is transmitted,determining a storage layer close to the bearing surface of the ith shelf in the target storage area as the target storage layer;
wherein i is not less than 1, GiRepresenting the weight detected by the weight sensor corresponding to the ith shelf before the hand passes the target storage area, Gi' denotes a weight detected by the weight sensor corresponding to the ith shelf after the hand passes through the target storage area.
3. The refrigerator of claim 1, wherein the controller is further configured to:
determining the access state of the food materials in the target storage area according to the multi-frame images, wherein the access state comprises a food material storing state or a food material taking state;
the determining of the target storage layer in which the food material stored in the target storage area is changed according to the weight detected by the weight sensor corresponding to at least one shelf located between the respective storage layers of the target storage area includes:
and determining the target storage layer according to the weight detected by the weight sensor corresponding to the at least one shelf and the access state.
4. The refrigerator of claim 3, wherein the controller is further configured to:
when a first condition is met, determining a storage layer adjacent to the ith shelf and close to the bearing surface of the ith shelf in the target storage area as the target storage layer;
when a second condition is met, determining a storage layer in the target storage area adjacent to the ith shelf and far away from the bearing surface of the ith shelf as the target storage layer;
wherein the first condition comprises: the access state is the food material storing state, and Gi’-GiIs greater than 0, or the access state is the food material taking-out state, and Gi’-Gi<0;
The secondThe conditions include: the access state is the food material storing state, and Gi’-GiLess than 0; or the access state is the food material taking-out state, and Gi’-Gi>0;
Wherein i is not less than 1, GiRepresenting the weight detected by the weight sensor corresponding to the ith shelf before the hand passes the target storage area, Gi' denotes a weight detected by the weight sensor corresponding to the ith shelf after the hand passes through the target storage area.
5. The refrigerator of claim 4, wherein the number of shelves of the at least one shelf is 1, and wherein the controller is further configured to:
when Gi' - Gi = 0 and the access state is the food material storing state, determining a storage layer far away from the bearing surface of the ith shelf in the target storage area as the target storage layer;
when Gi' - Gi = 0 and the access state is the food material taking-out state, determining a storage layer close to the bearing surface of the ith shelf in the target storage area as the target storage layer.
6. The refrigerator according to any one of claims 1 to 5, wherein the controller is further configured to:
and when the target storage area is a storage area close to the image pickup device in the plurality of storage areas, determining the target storage layer according to the weight detected by the weight sensor corresponding to the at least one shelf.
7. The refrigerator of claim 6, wherein the controller is further configured to:
determining the target storage layer through which the hand passes according to the movement locus when the target storage area is not a storage area near the image pickup apparatus among the plurality of storage areas.
8. The refrigerator of any one of claims 1 to 5, wherein the imaging device comprises a depth camera, the controller further configured to:
determining an average depth value of a hand area in a target image according to the moving track, wherein the target image is an image of the multi-frame image when the hand moves to a target position in the moving track, and the target position is a position farthest from a starting point in the moving track;
when the average depth value is within a depth value range corresponding to the target storage area, determining that the hand passes through the target storage area, wherein the depth value range corresponding to the target storage area comprises: a distance of any position in the target storage area from the image pickup apparatus.
9. The refrigerator according to any one of claims 1 to 5, wherein the weight sensor corresponding to the shelf comprises a plurality of sub-weight sensors positioned on the bottom surface of the shelf, the bottom surface being opposite to the bearing surface of the shelf, and the weight detected by the weight sensor corresponding to the shelf is: an average value of the weights detected by the plurality of sub-weight sensors, or a sum of the weights detected by the plurality of sub-weight sensors.
10. The refrigerator according to any one of claims 1 to 5, wherein the refrigerator further comprises: at least one set of bosses on a side wall of the storage compartment;
the at least one group of bosses and the at least one shelf are in one-to-one correspondence, and for any group of bosses and the corresponding shelf, any group of bosses is used for bearing the shelf, and the weight sensors corresponding to the shelf are positioned between the shelf and any group of bosses.
11. The refrigerator of claim 1, wherein the controller is further configured to:
inputting at least one frame of food material image containing food materials in the image acquired by the camera into an identification model to obtain an identification result of the food material image output by the identification model, wherein the identification model is used for outputting food material types based on the input food material image;
determining a target identification result according to the identification result of the at least one frame of food material image;
acquiring a retraining instruction, wherein the retraining instruction carries a target food material type and is used for indicating that the target identification result does not comprise the target food material type;
training the recognition model with target training data based on the retraining instruction to update the recognition model, the target training data comprising: a target food material image and the target food material type in the at least one frame of food material image, wherein the identification result of the target food material image does not include the target food material type.
12. The refrigerator of claim 1, wherein the controller is further configured to:
determining at least one frame of food material image containing food materials according to the multiple frames of images;
inputting the at least one frame of food material image into an identification model to obtain an identification result of the food material image output by the identification model, wherein the identification model is used for outputting a food material type based on the input food material image;
and determining a target identification result according to the identification result of the at least one frame of food material image.
CN201911416104.XA 2019-12-31 2019-12-31 Refrigerator with a door Active CN113124635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911416104.XA CN113124635B (en) 2019-12-31 2019-12-31 Refrigerator with a door

Publications (2)

Publication Number Publication Date
CN113124635A true CN113124635A (en) 2021-07-16
CN113124635B CN113124635B (en) 2022-05-17

Family

ID=76769562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911416104.XA Active CN113124635B (en) 2019-12-31 2019-12-31 Refrigerator with a door

Country Status (1)

Country Link
CN (1) CN113124635B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002303479A (en) * 2001-03-30 2002-10-18 Toshiba Corp Refrigerator
CN103940190A (en) * 2014-04-03 2014-07-23 合肥美的电冰箱有限公司 Refrigerator with food management system and food management method
CN105043005A (en) * 2015-06-16 2015-11-11 合肥华凌股份有限公司 System and method for managing refrigerator food information
CN105466140A (en) * 2016-01-08 2016-04-06 广东华兰海电测科技股份有限公司 Refrigerator capable of realizing storage management of foods
CN107024073A (en) * 2017-04-26 2017-08-08 中国石油大学(华东) Multi-sensor intelligent controlling method for refrigerator and intelligent refrigerator based on deep learning
CN107798277A (en) * 2016-09-05 2018-03-13 合肥美的智能科技有限公司 Food materials identifying system and method, food materials model training method, refrigerator and server
CN108038509A (en) * 2017-12-21 2018-05-15 合肥美的智能科技有限公司 Image-recognizing method and device based on image recognition model
CN109631456A (en) * 2017-10-06 2019-04-16 东芝生活电器株式会社 Refrigerator
US20190311319A1 (en) * 2018-04-08 2019-10-10 Thomas James Cote Comprehensive no-touch inventory management system
CN110455027A (en) * 2019-07-16 2019-11-15 海信集团有限公司 A kind of image collecting device and its refrigerator, control method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115493343A (en) * 2021-06-18 2022-12-20 海信集团控股股份有限公司 Refrigerator and food material management method and device thereof
CN115493343B (en) * 2021-06-18 2023-10-27 海信集团控股股份有限公司 Refrigerator and food material management method and device thereof
WO2023065899A1 (en) * 2021-10-18 2023-04-27 海信冰箱有限公司 Refrigerator and method for positioning food materials

Also Published As

Publication number Publication date
CN113124635B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN109785337B (en) In-column mammal counting method based on example segmentation algorithm
CN111476302B (en) fast-RCNN target object detection method based on deep reinforcement learning
CN111444880B (en) Food material identification method and refrigerator
CN112101430B (en) Anchor frame generation method for image target detection processing and lightweight target detection method
CN113124635B (en) Refrigerator with a door
CN109141395B (en) Sweeper positioning method and device based on visual loopback calibration gyroscope
CN110674789B (en) Food material management method and refrigerator
CN107784663A (en) Correlation filtering tracking and device based on depth information
CN107341442A (en) Motion control method, device, computer equipment and service robot
US11308348B2 (en) Methods and systems for processing image data
EP3035235A1 (en) Method for setting a tridimensional shape detection classifier and method for tridimensional shape detection using said shape detection classifier
CN106052294A (en) Refrigerator and method for judging change of objects in object storage area of refrigerator
CN110716792B (en) Target detector and construction method and application thereof
CN113124636B (en) Refrigerator
CN113124633B (en) Refrigerator with a door
CN113947770B (en) Method for identifying object placed in different areas of intelligent cabinet
CN113124634B (en) Refrigerator with a door
Kaur et al. Performance evaluation of object detection algorithm using Ant Colony Optimization based Image Segmentation
CN111160190B (en) Vehicle-mounted pedestrian detection-oriented classification auxiliary kernel correlation filtering tracking method
CN109583513B (en) Method, system and device for detecting similar frame and readable storage medium
CN111488831A (en) Food association identification method and refrigerator
CN111967403A (en) Video moving area determining method and device and electronic equipment
KR20210132998A (en) Apparatus and method tracking object in image fames based on neural network
US20200151461A1 (en) Resident activity recognition system and method thereof
Maurice et al. Late fusion of bayesian and convolutional models for action recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant