CN113124633A - Refrigerator with a door - Google Patents
- Publication number
- CN113124633A (application number CN201911416054.5A)
- Authority
- CN
- China
- Prior art keywords
- food material
- target
- image
- controller
- frame
- Prior art date
- Legal status: Granted
Classifications
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D29/00—Arrangement or mounting of control or safety devices
- F25D29/003—Arrangement or mounting of control or safety devices for movable devices
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D29/00—Arrangement or mounting of control or safety devices
- F25D29/005—Mounting of control devices
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D2500/00—Problems to be solved
- F25D2500/06—Stock management
Landscapes
- Engineering & Computer Science (AREA)
- Chemical & Material Sciences (AREA)
- Combustion & Propulsion (AREA)
- Physics & Mathematics (AREA)
- Mechanical Engineering (AREA)
- Thermal Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Cold Air Circulating Systems And Constructional Details In Refrigerators (AREA)
Abstract
The application discloses a refrigerator, which belongs to the field of electronic technology. The refrigerator includes: a cabinet including a storage compartment; a door for shielding an opening of the storage compartment; a camera device for acquiring images at the opening; a code scanning module for collecting graphic codes; and a controller configured to: input at least one frame of food material image including a target food material, among the multiple frames of images collected by the camera device, into a recognition model to obtain a recognition result of the food material image output by the recognition model, the recognition model being used for outputting a food material type based on the input image; determine a target recognition result according to the recognition result of the at least one frame of food material image; when the target recognition result does not include a food material type and the code scanning module collects a graphic code within a target duration after the target recognition result is determined, analyze the graphic code to obtain an analysis result; and determine the food material type of the target food material according to the analysis result. The application solves the problem of a refrigerator performing poorly at determining the food material type of stored food materials. The application is used for storing food materials.
Description
Technical Field
The application relates to the technical field of electronics, in particular to a refrigerator.
Background
With the development of electronic technology, the demands on the intelligence and convenience of household appliances (such as refrigerators) are increasing. For example, a refrigerator is now expected to be able to determine the food material type of the food materials it stores.
In the related art, a refrigerator includes a storage chamber, a camera, and a controller. When a user accesses food materials in a storage chamber of the refrigerator, the camera can capture images of the food materials. Further, the controller can identify the image to obtain the type of the food material.
However, in the related art, the type of a food material sometimes cannot be recognized from an image of the food material at all, and thus the refrigerator is less effective in determining the type of the stored food materials.
Disclosure of Invention
The application provides a refrigerator, which can solve the problem of poor performance in determining the food material type of stored food materials. The refrigerator includes:
a cabinet including a storage compartment having an opening;
a door movably connected with the cabinet and used for shielding the opening;
the camera equipment is used for acquiring an image at the opening;
the code scanning module is used for collecting graphic codes;
a controller to:
inputting at least one frame of food material image including a target food material in the multi-frame image acquired by the camera equipment into an identification model to obtain an identification result of the food material image output by the identification model, wherein the identification model is used for outputting a food material type based on the input image;
determining a target identification result according to the identification result of the at least one frame of food material image;
when the target identification result does not comprise the food material type and the code scanning module collects the graphic code within the target duration after the target identification result is determined, analyzing the graphic code to obtain an analysis result;
and determining the food material type of the target food material according to the analysis result.
The technical solution provided by the application brings at least the following beneficial effects:
in the refrigerator provided by the application, the controller can determine a target identification result according to the identification result of at least one frame of food material image after identifying the at least one frame of food material image, and determine the food material type of the target food material according to an analysis result obtained by analyzing the graphic code collected by the code scanning module within the target time length when the target identification result does not include the food material type. Therefore, the situation that the food material type of the target food material cannot be determined when the food material type cannot be identified through the image of the target food material can be avoided, and the determination effect of the food material type of the target food material can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic structural diagram of a refrigerator provided in an embodiment of the present application;
fig. 2 is a flowchart of a food material identification method according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of another refrigerator provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a sensing module provided in an embodiment of the present application;
fig. 5 is a flowchart of another food material identification method according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
With the development of electronic technology, the requirements on various household appliances are increasing. Intelligent management of the food materials in a refrigerator is one of the trends in refrigerator development, and accurate identification of food material types is an important precondition for such intelligent management. At present, automatic identification of food material types in a refrigerator relies on visual recognition: after the refrigerator door is closed, or while a user accesses food materials, a camera device captures an image of the food material and inputs it into a recognition model to identify the food material type. However, the type of a food material sometimes cannot be identified from an image alone, so the refrigerator determines the type of stored food materials poorly, and its intelligent management of food materials suffers accordingly. The embodiment of the application provides a refrigerator that can improve the determination of the food material type of stored food materials.
Fig. 1 is a schematic structural diagram of a refrigerator provided in an embodiment of the present application. As shown in fig. 1, the refrigerator 10 may include: a cabinet 101, a door 102, an image capturing device 103, a code scanning module 104, and a controller (not shown in fig. 1). The cabinet 101 includes a storage chamber having an opening K; the door 102 is movably connected with the cabinet 101 and is used for shielding the opening K; and the image capturing device 103 is used for capturing images at the opening K. The controller may be located anywhere inside or outside the cabinet 101, anywhere inside or outside the door 102, or independent of both; the embodiment of the present application does not limit its position. The controller may be communicatively connected with the image capturing device 103 to acquire the images it captures, and with the code scanning module 104 to acquire the graphic codes it collects, where a graphic code may be a barcode or a two-dimensional code.
The controller may be used to perform the food material identification method shown in fig. 2. As shown in fig. 2, the method may include:
Step 201, inputting at least one frame of food material image including the target food material, among the multi-frame images acquired by the camera device, into the recognition model to obtain the recognition result of the food material image output by the recognition model, the recognition model being used for outputting a food material type based on the input image.
Step 202, determining a target recognition result according to the recognition result of the at least one frame of food material image.
Step 203, when the target recognition result does not include a food material type and the code scanning module acquires a graphic code within the target duration after the target recognition result is determined, analyzing the graphic code to obtain an analysis result.
Step 204, determining the food material type of the target food material according to the analysis result.
In summary, in the refrigerator provided in the embodiment of the present application, the controller may determine the target recognition result according to the recognition result of the at least one frame of food material image after recognizing the at least one frame of food material image, and determine the food material type of the target food material according to an analysis result obtained by analyzing the graphic code collected by the code scanning module within the target time duration when the target recognition result does not include the food material type. Therefore, the situation that the food material type of the target food material cannot be determined when the food material type cannot be identified through the image of the target food material can be avoided, and the determination effect of the food material type of the target food material can be improved.
It should be noted that fig. 1 shows the image capturing device 103 at the top of the cabinet 101; alternatively, it may be located elsewhere, such as the top of the door 102 or the top of the storage chamber, which is not limited in the embodiment of the present application. The "top" of a structure described in the embodiments of the present application is the end of the structure away from the ground when the refrigerator is placed on the ground in normal use. Fig. 1 shows the code scanning module 104 on the surface of the door 102 close to the opening K; alternatively, it may be located on the surface of the door 102 away from the opening K, or at any other position on the cabinet 101 or the door 102, which is not limited in this embodiment.
The refrigerator provided by the embodiment of the application may further comprise a memory, a door switch detector, a display screen, a speaker, and a microphone, each of which may be communicatively connected with the controller.
Alternatively, the memory may be provided in the cabinet or the door, or may be independent of both. The memory can store images collected by the camera device, the recognition results the controller obtains from food material images, and other information that needs to be stored. The door switch detector may be used to detect whether the door is open or closed. For example, it may be provided at a position on the cabinet that can contact the door: the door is determined to be open when the detector does not contact the door, and closed when it does. The display screen may be provided on the surface of the door away from the cabinet, and the controller may control it to display the recognition result of a food material image or other information. The display screen may also be a touch display screen, in which case a user can interact with the refrigerator through it; for example, touching the screen can trigger the controller to generate and execute a corresponding instruction.
Alternatively, the speaker and microphone may be provided on the cabinet or on the door. For example, the speaker and the microphone may be provided at the same position as the image capturing device: the speaker, the microphone, and the image capturing device may jointly constitute a sensing module disposed at the top of the cabinet. The controller can control the speaker to emit voice information prompting the user to perform a corresponding operation, and can control the microphone to collect sound information from the refrigerator's environment so as to generate corresponding instructions.
For example, fig. 3 is a schematic structural diagram of another refrigerator provided in an embodiment of the present application; it may be read as a left side view of the refrigerator, with the door 102 closed. As shown in fig. 3, the refrigerator 10 further includes a driving component 105 and a sensing module 100 on the top of the cabinet 101, and the sensing module 100 may be connected to the driving component 105. The driving component 105 is also communicatively connected with the controller. Fig. 4 is a schematic structural diagram of a sensing module provided in an embodiment of the present application, drawn as a bottom view of the sensing module 100. As shown in fig. 4, the sensing module 100 includes: an image capturing device 103, a speaker 106, and a microphone 107, the image capturing device 103 including a depth camera 1031 and a color camera 1032. As shown in fig. 3, the field of view of the image capturing device 103 is a conical region (the region between the two dashed lines in fig. 3) with the image capturing device 103 at its vertex. As shown in fig. 4, the microphone 107 may be a linear four-microphone array, and the speaker 106 is located at the side of the sensing module, which increases the distance between the microphone 107 and the speaker 106.
Alternatively, the controller may control the driving part 105 to move the sensing module 100 according to the state (open state or closed state) of the door 102 or a voice command issued by the user. For example, when the controller determines that the door 102 is in the open state, the driving part 105 may be controlled to push the sensing module 100 out in a direction approaching the door 102, and when the sensing module is pushed out to a predetermined position, the driving part 105 may be controlled to stop the pushing-out action. And then triggers the image pickup device 103 in the sensing module 100 to work. When the controller determines that the door 102 is in the closed state, the image pickup apparatus 103 may be controlled to stop operating, and the driving part 105 may be controlled to retract the sensing module 100 in a direction away from the door 102, and when the sensing module is retracted to the home position, the driving part 105 may be controlled to stop the retracting action.
Optionally, the image capturing device in the embodiment of the present application may include at least one of a depth camera and a color camera. The color camera can be an ordinary color camera or a wide-angle color camera; the depth camera may be a binocular camera, a structured light camera, or a time-of-flight (TOF) camera. The image captured by the depth camera is a depth image (also called a range image), and the image captured by the color camera is a color image. The value of a pixel in the depth image is the distance (also called depth) from the corresponding point in the scene, within the depth camera's field of view, to the depth camera; the value of a pixel in the color image is a gray value. Optionally, the depth camera and the color camera may capture images at the same frequency, with the minimum time interval between capturing a depth image and the corresponding color image kept below a duration threshold. The multiple frames of depth images and the multiple frames of color images can then correspond one to one: the color image corresponding to each frame of depth image is the color image whose capture time is closest to that of the depth image. This keeps the scene difference between a depth image and its corresponding color image small. For example, the depth camera and the color camera may each capture an image at the same moment.
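This nearest-timestamp pairing can be shown as a minimal sketch; the Frame type, its field names, and the 50 ms gap value are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class Frame:
    timestamp: float  # capture time in seconds
    data: Any         # image payload, e.g. a numpy array

def pair_depth_with_color(depth: List[Frame], color: List[Frame],
                          max_gap: float = 0.05) -> List[Tuple[Frame, Frame]]:
    """For each depth frame, pick the color frame whose capture time is
    nearest; pairs whose timestamps differ by more than max_gap seconds
    (the 'duration threshold') are dropped."""
    if not color:
        return []
    pairs = []
    for d in depth:
        nearest = min(color, key=lambda c: abs(c.timestamp - d.timestamp))
        if abs(nearest.timestamp - d.timestamp) <= max_gap:
            pairs.append((d, nearest))
    return pairs
```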
The pickup mode of the microphone can be selected according to the pickup distance: near-field pickup or far-field pickup. In near-field pickup a microphone can generally collect sound information within three meters; in far-field pickup, within a range of three to five meters. For near-field pickup, the microphone may include a native recording module, or a single microphone together with a cancellation (e.g., echo cancellation) module. For far-field pickup, the microphone may include the modules used for near-field pickup, and may also include a multi-microphone module such as a linear four-microphone module or an annular six-microphone module.
The controller may include a central processing unit (CPU), a graphics processing unit (GPU), or a combination of the two. The controller may further include a hardware chip, which may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. It should be noted that the controller in the embodiment of the present application may be disposed in the refrigerator or in another device; it only needs to be communicatively connectable with, and able to control, each component of the refrigerator.
The memory is connected to the controller through a bus or by other means, and stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the controller to implement the method provided by the embodiment of the present application. The memory may be a volatile memory, a non-volatile memory, or a combination of the two. The volatile memory may be a random-access memory (RAM), such as a static random-access memory (SRAM) or a dynamic random-access memory (DRAM). The non-volatile memory may be a read-only memory (ROM), such as a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), or an electrically erasable programmable read-only memory (EEPROM). The non-volatile memory may also be a flash memory; a magnetic memory such as a magnetic tape, a floppy disk, or a hard disk; or an optical disc.
Fig. 5 is a flowchart of another food material identification method provided in the embodiment of the present application, where the method can be applied to a controller. As shown in fig. 5, the method may include:
step 501, controlling the camera device to collect images at the opening of the storage room.
The field of view of the camera device needs to cover at least the opening of the storage chamber. Food materials inevitably pass through the opening when a user accesses them in the refrigerator, so the camera device is guaranteed to capture images during the access process.
Alternatively, when the controller detects through the door switch detector that the door is open, it may control the driving component to push out the sensing module and then control the image capturing device to start capturing images, for example continuously capturing a certain number of images per second. When the controller determines that the door has changed from open to closed, it may control the image capturing device to stop capturing. Thus, as the door goes from closed to open and back to closed, the image capturing device completes the image capture of one capture cycle.
It should be noted that, in the embodiment of the present application, the image capturing apparatus captures an image under the control of the controller as an example for explanation. Optionally, the image capturing apparatus may also perform image acquisition without being controlled by the controller, which is not limited in this embodiment of the application.
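As a sketch of this capture cycle under controller control, with hypothetical drive and camera objects standing in for the driving component 105 and image capturing device 103:

```python
def on_door_state_changed(door_open: bool, drive, camera) -> None:
    """Hypothetical wiring of the door switch detector to the driving
    component and camera, following the capture cycle described above."""
    if door_open:
        drive.push_out()        # push the sensing module toward the door,
                                # stopping at the predetermined position
        camera.start_capture()  # e.g. a fixed number of frames per second
    else:
        camera.stop_capture()   # one capture cycle ends
        drive.retract()         # retract the sensing module to its home position
```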
Optionally, in this embodiment of the present application, the image capturing apparatus includes a depth camera and a color camera, and the depth camera and the color camera may capture images simultaneously, so that the images captured by the image capturing apparatus at the same capture time include a depth image and a color image corresponding to the depth image. The image collected by the camera device at the opening of the storage room comprises a group of depth images and a group of color images which are in one-to-one correspondence with each frame of depth image in the group of depth images, and the group of depth images comprises a plurality of frames of depth images.
Step 502, determining the access state of the target food material and the identification state of the target food material according to the multi-frame images acquired by the camera.
The access state includes a food material storing state or a food material taking-out state, and the identification state includes a state carrying a graphic code or a state not carrying a graphic code.
For example, the controller may recognize the plurality of frames of images to determine whether an image including a graphic code exists in the plurality of frames of images. When the image comprising the graphic code exists in the multi-frame image, the controller can determine that the identification state of the target food material is the state carrying the graphic code; when the image including the graphic code does not exist in the multi-frame image, the controller may determine that the identification state of the target food material is a state without carrying the graphic code.
The controller can determine a multi-frame color image corresponding to the multi-frame depth image according to the multi-frame depth image including the hand, further recognize the multi-frame color image, and determine the change condition of the hand state. The hand state comprises a state of not taking food materials or a state of taking food materials. When the controller determines that the hand state is changed from the food material not taken state to the food material taken state according to the multi-frame color images, the access state of the food materials can be determined to be the food material taken state; when the controller determines that the hand state is changed from the food material taking state to the food material non-taking state according to the multi-frame color images, the access state of the food material can be determined to be the food material storing state.
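The transition rule can be sketched as a scan over per-frame hand states; the "empty"/"holding" labels are hypothetical names for the not-taking and taking states:

```python
def infer_access_state(hand_states):
    """Scan chronological per-frame hand states ('empty' / 'holding') for
    the first transition: empty -> holding means the target food material
    was taken out of the storage chamber, holding -> empty means it was
    stored."""
    for prev, curr in zip(hand_states, hand_states[1:]):
        if prev == "empty" and curr == "holding":
            return "taking_out"
        if prev == "holding" and curr == "empty":
            return "storing"
    return "unknown"

# e.g. a user reaching in empty-handed and withdrawing a food material:
assert infer_access_state(["empty", "empty", "holding", "holding"]) == "taking_out"
```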
Step 503, determining at least one frame of food material image including the target food material among the multi-frame images acquired by the camera device.
For example, the controller may perform image recognition on the multiple frames of images collected by the camera device and thereby determine that at least one frame among them is a food material image including the target food material. The specific determination manner is described in detail later (steps 5031 to 5034) and is not repeated here.
Step 504, respectively inputting the at least one frame of food material image into the recognition model to obtain the recognition result of each frame of food material image output by the recognition model.
The controller may directly input the food material image into the recognition model after each frame of food material image is determined, so as to obtain a recognition result of the frame of food material image output by the recognition model. Optionally, after determining all food material images in one acquisition period of the image pickup device, the controller may further input each frame of food material image into the identification model, so as to obtain an identification result of each frame of food material image output by the identification model.
The identification model in the embodiment of the application can be used for outputting the food material type based on the input food material image. Optionally, the recognition model may also be used to: and outputting the confidence coefficient of the food material type based on the input food material image. For example, the recognition result of the recognition model for recognizing and outputting a certain frame of food material image may include: the food material type is apple, and the confidence coefficient is 60%. It should be noted that in the recognition result of at least one frame of food material image output by the recognition model, each recognition result may include a food material type and a confidence thereof, or a partial recognition result may not include a food material type, and the partial recognition result indicates that the recognition model does not recognize the food material type of the target food material in the food material image. Alternatively, when the recognition model cannot recognize the type of the food material in the food material image, the recognition result output by the recognition model may include information indicating that the type of the food material is not recognized.
It should be noted that, when the recognition model recognizes a frame of food material image, a plurality of food material types and confidence levels corresponding to the food material types may be output, and at this time, the recognition model may output the maximum confidence level and the food material type corresponding to the confidence level as a recognition result.
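A sketch of this top-1 reduction (the function name and result layout are illustrative, not the patent's interface):

```python
def top1_result(class_confidences):
    """Collapse a model's per-type confidences into one recognition
    result: the food material type with the maximum confidence, or a
    result carrying no type when nothing was recognized."""
    if not class_confidences:
        return {"food_type": None, "confidence": None}
    food_type = max(class_confidences, key=class_confidences.get)
    return {"food_type": food_type, "confidence": class_confidences[food_type]}

print(top1_result({"apple": 0.60, "pear": 0.25, "tomato": 0.15}))
# -> {'food_type': 'apple', 'confidence': 0.6}
```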
Optionally, in this embodiment of the application, the recognition model may be any one of network structures such as a deep neural network, a convolutional neural network, a deep belief network, a deep stack neural network, a deep fusion network, a deep recurrent neural network, a deep bayesian neural network, a deep generation network, and deep reinforcement learning, or a derivative model thereof.
Step 505, determining a target recognition result according to the recognition result of the at least one frame of food material image.
The controller in the embodiment of the application can further calculate or judge according to the recognition result of at least one frame of food material image output by the recognition model, and further determine a target recognition result, namely a final recognition result obtained by recognizing the food material accessed by the user.
When the controller determines that none of the recognition results of the at least one frame of food material image includes a food material type, the controller may directly determine that the target recognition result indicates that no food material type was recognized from the food material images; in this case the target recognition result does not include any food material type.
When the controller determines that there is at least one recognition result including the type of food material among the recognition results of the at least one frame of food material image in step 504, the controller may generate a target recognition result including the type of food material. For example, the controller may determine the food material type included in the target recognition result in the following two determination manners.
In a first determination manner, the food material type in the target recognition result may be the food material type of whichever recognition result, among the at least one recognition result, has the highest confidence.
For example, the controller determines five frames of food material images including the target food material: the food material type in the recognition results of three frames is A, with a confidence of 0.5 in each; the food material type in the recognition result of one frame is B, with a confidence of 0.8; and the food material type in the recognition result of one frame is C, with a confidence of 0.7. The controller can determine that food material type B has the highest confidence among the five recognition results and accordingly generate a target recognition result including food material type B.
In a second determination manner, since several of the at least one recognition result may name the same food material type, the controller may determine a comprehensive confidence for each food material type from its confidence in each recognition result, and then generate the target recognition result according to the comprehensive confidences.
For example, suppose the at least one recognition result comprises x recognition results covering y food material types, 1 ≤ y ≤ x. For each of the y food material types, the controller may determine the comprehensive confidence of the food material type according to the target formula

Q = (s / x) × (Q_1 + Q_2 + … + Q_s),

where s represents the number of recognition results, among the x recognition results, that include the food material type, and Q_a represents the confidence in the a-th of those recognition results, 1 ≤ a ≤ s. Further, the controller may generate a target recognition result including the food material type with the highest comprehensive confidence among the y food material types.
For example, the controller determines five frames of food material images including the target food material: the food material type in the recognition results of three frames is A, each with a confidence of 0.5, and the food material type in the recognition results of two frames is B, each with a confidence of 0.8. According to the target formula, the comprehensive confidence of food material type A is Q_A = (0.5 + 0.5 + 0.5) × 3/5 = 0.90, and that of food material type B is Q_B = (0.8 + 0.8) × 2/5 = 0.64. Since Q_A > Q_B, the controller may generate a target recognition result including food material type A.
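A sketch of the second determination manner built around the target formula; the list-of-pairs input layout is an assumption:

```python
def comprehensive_confidences(results):
    """For each food material type, apply the target formula
    Q = (s / x) * (Q_1 + ... + Q_s), where x is the total number of
    recognition results and s is how many of them name the type.
    results: list of (food_type, confidence) pairs, one per food
    material image."""
    x = len(results)
    totals, counts = {}, {}
    for food_type, conf in results:
        totals[food_type] = totals.get(food_type, 0.0) + conf
        counts[food_type] = counts.get(food_type, 0) + 1
    return {t: totals[t] * counts[t] / x for t in totals}

# The worked example above: three results of type A at 0.5 and two of
# type B at 0.8 give Q_A = 0.90 and Q_B = 0.64, so A is chosen.
scores = comprehensive_confidences(
    [("A", 0.5), ("A", 0.5), ("A", 0.5), ("B", 0.8), ("B", 0.8)])
best = max(scores, key=scores.get)
assert best == "A" and round(scores["A"], 2) == 0.90 and round(scores["B"], 2) == 0.64
```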
Optionally, when the target recognition result does not include a food material type, that is, when the controller has not determined the food material type of the target food material through image recognition, the controller may control the speaker to play an indication voice instructing the user to move the target food material so that its graphic code falls within the field of view of the code scanning module. On hearing the indication voice, the user can place the graphic code on the target food material within the field of view of the code scanning module so that the module can collect it. Optionally, the code scanning module may start collecting graphic codes when the door is determined to be open.
Optionally, when the controller determines that the identification state of the target food material is the state carrying the graphic code, the controller performs step 506. The embodiment of the present application takes the identification state of the target food material as the state of carrying the graphic code as an example for explanation.
Step 507, analyzing the graphic code acquired by the code scanning module within the target duration to obtain an analysis result. Then step 508 is performed.
It should be noted that any graphic code may correspond to a character string, and a graphic code on an object is generally used to indicate related information of the object. The user (such as a manufacturer of the object) can set the related information of the object indicated by the graphic code by himself, and then the related information indicated by the graphic code is input into the graphic code information base, so that other users can search the related information of the object indicated by the graphic code in the graphic code information base according to the graphic code. It should be noted that the graphic code information base may be stored in a memory in the refrigerator, or may be stored on the internet, or other devices connected to the controller.
The graphic code information base may include information corresponding to a plurality of character strings. When the controller analyzes a graphic code, it may first determine the target character string corresponding to the graphic code and then search the graphic code information base for information corresponding to that string. If such information exists in the base, it may be determined as the analysis result of the graphic code, and the analysis result includes the related information of the object (i.e., the target object) indicated by the graphic code. If no such information exists, the analysis result can be determined to be that the graphic code indicates no object information; in this case the analysis result includes no related information of the indicated object, and the analysis of the graphic code fails.
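A minimal sketch of the lookup, modeling the graphic code information base as an in-memory dict; a real base could live in the refrigerator's memory, on the internet, or on another connected device, and the entry layout here is hypothetical:

```python
def analyze_graphic_code(target_string, info_base):
    """Search the graphic code information base for the character string
    corresponding to the graphic code. Returns the related information of
    the indicated object, or None when the lookup (and thus the analysis)
    fails."""
    return info_base.get(target_string)

# an entry as a manufacturer might register it
info_base = {"6901234567890": {"food_type": "apple", "shelf_life_days": 30,
                               "manufacturer": "ExampleCo"}}
analysis = analyze_graphic_code("6901234567890", info_base)
food_type = analysis.get("food_type") if analysis else None
```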
Illustratively, the information related to the food material may comprise one or more of a type of food material, a shelf life, a production date, a manufacturer, a volume, a weight, a price, and the like. The controller may analyze the analysis result of the graphic code to determine whether the analysis result includes the food material type, and then determine whether the food material type of the target food material can be determined according to the analysis result.
Step 509, determining the food material type of the target food material according to the analysis result.
When the analysis result of the graphic code includes the food material type, the controller may determine the food material type in the analysis result as the food material type of the target food material. Therefore, the method can be regarded as that the controller analyzes the graphic code on the target food material to obtain the food material type of the target food material.
Step 510, determining whether the access state of the target food material is the food material taking-out state. When the access state of the target food material is the food material taking-out state, executing step 511; when the access status of the target food material is not the food material taking-out status, step 514 is executed.
Please refer to the description of step 502 for the manner of determining the access status, which is not described herein again in this embodiment.
A reference image is an image of a food material that is stored in the storage chamber and marked as being in the state carrying a graphic code, i.e. an image of a food material carrying a graphic code. The food material type of the food material in each reference image is known.
The controller determining in step 510 that the access state of the target food material is the food material taking-out state means that a first condition is satisfied, where the first condition includes: the identification state of the target food material is the state carrying the graphic code, the target recognition result does not include a food material type, no graphic code is collected by the code scanning module within the target duration after the controller determines the target recognition result, and the access state of the target food material is the food material taking-out state. That is, the controller determines that the user has taken a target food material carrying a graphic code out of the storage chamber, the food material type was not identified through image recognition, and the user did not trigger the code scanning module to scan the graphic code. The controller can then determine at least one frame of reference image of food materials of known type stored in the storage chamber, and determine the similarity between each food material image including the target food material and each frame of reference image, obtaining at least one reference similarity. Determining the similarity between two frames of images means determining the similarity of their feature vectors. The similarity between a food material image and a reference image represents the probability that the food material type of the target food material is the food material type of the food material in the reference image.
For example, in step 503, the controller determines five frames of food material images including the target food material, three food materials carrying graphic codes are stored in the storage chamber, and the controller can acquire three frames of reference images of each of the three food materials, so the controller can acquire nine frames of reference images. And the controller determines the similarity between each frame of food material image and each frame of reference image, so that 5 × 9-45 reference similarities can be determined.
Alternatively, the controller may determine the similarity between each frame of food material image and each frame of reference image by any one of a Structural Similarity Index (SSIM) method, a cosine similarity (cosin) determination method, and a histogram-based similarity determination method.
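As a sketch of the matching step using the cosine variant on feature vectors (the 0.8 threshold, the data layout, and the assumption that feature vectors are already extracted are illustrative):

```python
import numpy as np

def best_reference_type(food_vecs, ref_entries, threshold=0.8):
    """Compute every reference similarity (each food material image
    against each reference image) as the cosine similarity of feature
    vectors; return the food material type of the best-matching reference
    image, or None when no similarity exceeds the threshold."""
    best_type, best_sim = None, threshold
    for fv in food_vecs:
        for ref_type, rv in ref_entries:
            sim = float(np.dot(fv, rv) /
                        (np.linalg.norm(fv) * np.linalg.norm(rv)))
            if sim > best_sim:
                best_type, best_sim = ref_type, sim
    return best_type
```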
For example, the memory in the refrigerator in the embodiment of the present application may store data related to food materials with graphic codes stored in the storage chamber. Optionally, the related data may include the food material type of the food material, the character string corresponding to the graphic code, the volume or weight, and the corresponding reference image. For example, the memory may store the data related to the food materials shown in table 1 below. It should be noted that the reference images in table 1 are only used to illustrate that the related data of the food material includes corresponding images, and specific image contents of the images are not illustrated.
Table 1 (contents not reproduced: for each food material carrying a graphic code stored in the storage chamber, the table lists the food material type, the character string corresponding to the graphic code, the volume or weight, and the corresponding reference image)
When the similarity between a frame of food material image and a frame of reference image is greater than a similarity threshold value, it can be considered that the food material type of the target food material in the food material image is possibly the same as the food material type of the food material in the frame of reference image, the frame of reference image has a reference value for determining the food material type of the target food material, and then the food material type of the target food material can be further determined based on the frame of reference image. When the similarity between a frame of food material image and a frame of reference image is smaller than or equal to the similarity threshold, it can be considered that the food material type of the target food material in the food material image is unlikely to be the same as the food material type of the food material in the reference image, and the frame of reference image does not have a reference value for determining the food material type of the target food material.
The controller may determine, in all the reference images, a reference image in which the food material type of the included food material is most likely to be the same as the food material type of the target food material, and further determine the food material type of the food material in the reference image as the food material type of the target food material. The reference image is the image with the maximum reference similarity with any frame of food material image in all the reference images.
It should be noted that, because a refrigerator's storage chamber usually does not hold many food materials carrying graphic codes, the number of reference images can be small, which speeds up finding the maximum among the at least one reference similarity and thus keeps the determination of the target food material's type fast. In addition, in the embodiment of the present application, when the access state of the target food material is the food material taking-out state, the food material type is determined from the similarity between food material images and reference images. The user therefore does not need to scan a code when taking out food material (that is, the graphic code carried by the target food material need not be collected again), so codes need not be scanned both when storing and when taking out, which improves the user experience.
Step 514, controlling the speaker to play a prompt voice prompting the user to input the food material type of the target food material. Then step 515 is performed.
The controller may perform step 514 upon determining that a second condition is satisfied, where the second condition includes: the analysis result of the graphic code acquired by the code scanning module does not include any food material type; or each reference similarity determined by the controller is smaller than the similarity threshold; or the identification state of the target food material is the state carrying the graphic code, the target recognition result does not include any food material type, no graphic code is collected by the code scanning module within the target duration, and the access state of the target food material is the food material storing state.
That is, when the controller cannot determine the food material type of the target food material by analyzing the graphic code collected by the code scanning module, cannot determine it by matching against reference images when the target food material is taken out, or cannot determine it when the target food material is stored, the controller can control the speaker to play the prompt voice so that the food material type is obtained through user input.
For example, the prompt voice may prompt the user to state the food material type of the target food material; it may be, for instance, "What food material did you access?". It should be noted that the content of the prompt voice in the embodiment of the present application is only an exemplary description; the content in practical applications may vary arbitrarily, which is not limited in the embodiment of the present application.
Step 515, acquiring the response voice collected by the microphone within the first duration after the speaker plays the prompt voice. Then step 516 is performed.
Illustratively, the first duration may be 10 seconds, 20 seconds, or other duration. The response voice may indicate the food material type of the target food material. When the user hears the prompt voice played by the speaker, the user may speak a response voice, for example, the response voice is "apple". It should be noted that the content of the response voice in the embodiment of the present application is only an exemplary description, and the content of the response voice in practical application may be arbitrarily changed, which is not limited in the embodiment of the present application.
Step 516, determining the food material type of the target food material based on the response voice.
The controller can extract information of the response voice collected by the microphone to determine that the response voice indicates the food material type of the target food material. It should be noted that, in the embodiment of the application, the controller determines the food material type of the target food material in a form of collecting the response voice through the microphone. Optionally, the controller may also receive the food material type input by the user through the touch display screen. At this time, the prompt voice in step 514 may also prompt the user to manually input the food material type through the touch display screen on the refrigerator.
Optionally, when the analysis result of the graphic code collected by the code scanning module does not include any food material type and the controller determines the food material type of the target food material through user input, the controller may upload the correspondence between the collected graphic code and the food material type of the target food material to the graphic code information base, so that the food material type can be determined from the graphic code when it is collected again later. For example, the controller can store the correspondence between the character string of the graphic code and the food material type of the target food material in the graphic code information base. This keeps the information base continuously updated, enriches the graphic code information it contains, and reduces the limitations of determining food material types through graphic code analysis.
Optionally, in the embodiment of the application, the user can also set the graphic code for the food material by himself, and input the information of the food material indicated by the graphic code into the graphic code information base, so that the food material which does not carry the graphic code originally can be identified by means of graphic code analysis, and the management effect of the refrigerator on the information of the stored food material is improved. Alternatively, the graphic code pasted by the user can be reused, such as arranging the graphic code on the food material through a clip or a vessel with the graphic code.
It should be noted that in the related art the food material type of an accessed food material is determined only by image recognition, which has low accuracy and frequently fails to produce a type at all, so that approach is highly limited. Likewise, when the type is determined only by analyzing the graphic code carried by the food material, the analysis is prone to failure. In the embodiment of the application, the food material type of the target food material can be determined by combining image recognition, graphic code analysis, and voice input, which ensures that the type is determined effectively, improves accuracy, reduces limitations, and facilitates the refrigerator's intelligent management of food material information. In addition, no code needs to be scanned when storing or taking out food materials, which simplifies the process of determining food material types and improves the user experience of the refrigerator.
In summary, in the refrigerator provided in the embodiment of the present application, the controller may determine the target recognition result according to the recognition result of the at least one frame of food material image after recognizing the at least one frame of food material image, and determine the food material type of the target food material according to an analysis result obtained by analyzing the graphic code collected by the code scanning module within the target time duration when the target recognition result does not include the food material type. Therefore, the situation that the food material type of the target food material cannot be determined when the food material type cannot be identified through the image of the target food material can be avoided, and the determination effect of the food material type of the target food material can be improved.
Referring to the manner of determining at least one frame of food material image in step 503, step 503 may include the following steps 5031 to 5034:
Step 5031, acquiring n frames of target images including a hand region among the multi-frame images acquired by the camera device.
Here n ≥ 1. It should be noted that in the embodiment of the present application, the n frames of target images including a hand region may be determined in the set of depth images acquired by the image capturing device, i.e. each target image is a depth image. Alternatively, the controller may acquire each frame of depth image in real time, that is, obtain each frame as soon as the image capturing device captures it. The controller may then perform hand detection on the frame to determine whether it is a target image including a hand region. Optionally, the controller may train a classifier through machine learning or a deep neural network and use the trained classifier for hand detection on the depth images.
In an example, the controller performs hand detection on all depth images acquired by the camera device in one capture cycle and thereby determines the n frames of target images including a hand region. Optionally, among the depth images acquired in one capture cycle, every depth image between the first and last frames of target image determined by the controller may also be taken as a target image.
Optionally, in this embodiment of the application, after the at least one frame of target image is determined, a color image corresponding to the at least one frame of target image may also be directly used as the food material image.
Step 5032, determining at least one auxiliary image in the n frames of target images.
Optionally, the controller may perform filtering on the determined n frames of target images to obtain at least one frame of auxiliary image. Alternatively, the controller may determine the auxiliary image by:
step s11, determine the first frame of target image in the n frames of target images as a frame of auxiliary image.
For example, when the controller determines that a frame of depth image is a target image including a hand region for the first time during hand detection of a depth image acquired by the image capturing apparatus, the controller may directly determine the frame of target image as a frame of auxiliary image.
Step s12, when |h_i − h_{i−1}| ≥ ε, determine the i-th frame of target image among the n frames of target images as a frame of auxiliary image.
The i-th frame of target image is any frame of target image other than the first among the n frames. h_i represents the average depth value of the hand region in the i-th frame of target image, h_{i−1} represents the average depth value of the hand region in the (i−1)-th frame, ε represents a depth threshold, and 1 < i ≤ n.
When the controller determines each frame of target image, it may obtain the pixel value of each pixel in the hand region and determine the average depth value of the hand region (that is, the average pixel value over the hand region). In this embodiment, the average depth value of the hand region is referred to as the hand depth. The controller can record the hand depth of each frame of target image, determine the change amplitude Δh = |h_i − h_{i−1}| of the hand depth for each frame, and determine whether it is greater than or equal to the depth threshold ε. When the change amplitude of the hand depth in a certain frame of target image is greater than or equal to ε, the controller determines that frame as a frame of auxiliary image. The controller may identify the food material type in the images captured by the camera device based on the determined auxiliary images.
It should be noted that, in the embodiment of the present application, the auxiliary images are determined by the change amplitude of the hand depth; that is, only target images in which the hand has moved appreciably are determined as auxiliary images, so as to avoid wasting computing resources on identifying many highly similar frames.
Step s13, when |h_i − h_{i−1}| < ε and, among the depth images acquired by the image capturing apparatus, the number of frames between the i-th frame target image and the previous frame of auxiliary image is greater than or equal to a frame-number threshold, determine the i-th frame target image as a frame of auxiliary image.
When the change amplitude of the hand depth in a certain frame of target image is smaller than the depth threshold, the controller can also determine the number of depth image frames between that target image and the previously determined auxiliary image among the depth images acquired by the image capturing equipment, and determine whether that number is greater than or equal to the frame-number threshold Δk. When the number of frames is greater than or equal to the frame-number threshold, the target image is determined as a frame of auxiliary image. That is, when the change amplitude of the hand depth stays below the depth threshold, the controller determines one frame of auxiliary image every Δk frames. This avoids the situation where no auxiliary image can be determined because the user's hand always moves only slightly, and guarantees that images for food material identification can be obtained.
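A sketch of steps s11 to s13 under assumed data structures (per-frame hand masks, and the position of each target frame in the raw depth stream) might be:

    import numpy as np

    def hand_depth(frame: np.ndarray, hand_mask: np.ndarray) -> float:
        """Average depth value (mean pixel value) of the hand region."""
        return float(frame[hand_mask].mean())

    def select_auxiliary(targets, masks, stream_pos, epsilon=30.0, delta_k=15):
        """targets/masks: depth images and boolean hand masks of the n target
        frames; stream_pos[i]: index of target i in the full depth stream.
        Assumes at least one target frame."""
        aux = [0]                                  # s11: first target frame
        prev_depth = hand_depth(targets[0], masks[0])
        last_aux_pos = stream_pos[0]
        for i in range(1, len(targets)):
            h_i = hand_depth(targets[i], masks[i])
            if abs(h_i - prev_depth) >= epsilon:   # s12: |h_i - h_{i-1}| >= eps
                aux.append(i)
                last_aux_pos = stream_pos[i]
            elif stream_pos[i] - last_aux_pos >= delta_k:  # s13: fallback rule
                aux.append(i)
                last_aux_pos = stream_pos[i]
            prev_depth = h_i
        return aux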
Optionally, in this embodiment of the application, after the at least one frame of auxiliary image is determined, a color image corresponding to the at least one frame of auxiliary image may also be directly used as the food material image.
Step 5033, determining at least one key image in the at least one auxiliary image.
In the embodiment of the application, to reduce the number of images used for food material identification and thereby the computation time, the controller may further screen key images from the determined at least one frame of auxiliary image for subsequent image processing and identification.
Alternatively, the controller may determine the key image by:
Step s21, determine a target hand area corresponding to the depth value range of the k-th frame auxiliary image from the hand areas corresponding to a plurality of depth value ranges.
Here the k-th frame auxiliary image is any one of the at least one frame of auxiliary image, and k ≥ 1. The average pixel value of the hand region of the k-th frame auxiliary image lies within the depth value range of the k-th frame auxiliary image, and among the hand areas corresponding to the plurality of depth value ranges, the depth values in each depth value range are negatively correlated with the corresponding hand area.
Optionally, the controller may determine m depth intervals according to the height of the refrigerator, where m > 1, and the depth of each position in the scene where the refrigerator is located is the distance from that position to the image pickup device in the refrigerator. Illustratively, the m depth intervals are [Q0, Q1), [Q1, Q2), …, [Q_{m−1}, Q_m], where Q0 = 0 and Q_m may be greater than or equal to the height of the refrigerator. Optionally, the lengths of the depth intervals may all be equal; for example, each may be 10 centimeters, 20 centimeters, or another value. Optionally, the plurality of depth value ranges in the embodiment of the present application may include m depth value ranges in one-to-one correspondence with the m depth intervals, where each depth value range may be the same as the corresponding depth interval.
The depth value ranges in the embodiment of the present application may also correspond one-to-one to the hand areas, and the depth values in each depth value range are negatively correlated with the corresponding hand area. The hand area corresponding to each depth value range is the area of the user's hand in the image acquired by the image pickup device when the average depth of the hand is within that range. For example, the hand area in an image may be represented by the total number of pixel points included in the hand region. When the user's hand is close to the image capturing equipment, the hand region in the captured image is necessarily large, so the corresponding hand area is large; when the hand is far from the image capturing equipment, the hand region is necessarily small, so the corresponding hand area is small.
Optionally, when the hand of the user is located at different depths, the controller may cluster areas of hand regions in the multi-frame image collected by the camera device to obtain hand areas corresponding to different depth intervals, and then obtain hand areas corresponding to different depth value ranges.
In this embodiment, when the controller determines a frame of the auxiliary image, the controller may determine, according to the average pixel value of the hand region in the auxiliary image, a depth value range in which the average pixel value of the hand region is located, and further determine a target hand area corresponding to the depth value range.
Step s22, when the ratio of the area of the hand region in the k-th frame auxiliary image to the target hand area is greater than a ratio threshold, determine the k-th frame auxiliary image as a frame of key image.
In this embodiment, when the controller determines a frame of the auxiliary image, the controller may determine an area of a hand region in the auxiliary image, that is, determine the number of pixels included in the hand region. The controller may then determine a scaling factor corresponding to the frame of auxiliary image, that is, a ratio of an area of a hand region in the frame of auxiliary image to a target hand area corresponding to the frame of auxiliary image. The controller may further detect whether the scale factor is greater than a ratio threshold, and determine the frame of auxiliary image as a frame of key image when the scale factor is greater than the ratio threshold.
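Steps s21 and s22 could be sketched as follows, assuming the depth value ranges and their reference hand areas have already been obtained (for example by the clustering described above); all names are illustrative assumptions:

    def select_key_images(aux_frames, aux_masks, depth_ranges, range_hand_areas,
                          ratio_threshold=0.6):
        """depth_ranges: list of (low, high); range_hand_areas[j]: reference
        hand area (pixel count) for depth_ranges[j], decreasing with depth."""
        keys = []
        for frame, mask in zip(aux_frames, aux_masks):
            avg_depth = float(frame[mask].mean())
            for (low, high), target_area in zip(depth_ranges, range_hand_areas):
                if low <= avg_depth < high:                   # s21: match range
                    area = int(mask.sum())                    # hand pixel count
                    if area / target_area > ratio_threshold:  # s22: ratio test
                        keys.append((frame, mask))
                    break
        return keys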
Optionally, in the embodiment of the application, after the at least one frame of key image is determined in the auxiliary image, the at least one frame of key image may also be directly input as a food material image into the recognition model to obtain at least one recognition result. And determining a target identification result of the food material image according to the at least one identification result so as to obtain the type of the food material in the food material image.
Step 5034, determining at least one frame of food material image according to the determined at least one frame of key image.
In the embodiment of the application, in order to further improve the identification accuracy, a partial Region of the key image where the food material is taken by the hand may be used as a Region of Interest (ROI), and then the Region of Interest in the color image corresponding to the key image is captured according to the ROI to obtain the food material image, so as to perform subsequent food material identification on the food material image. Optionally, the region of interest in the color image corresponding to the key image is at the same position as the region of interest in the key image of the frame.
Alternatively, the controller may determine the food material image by:
Step s31, acquire the expansion coefficient corresponding to the depth value range of each frame of key image from the expansion coefficients corresponding to a plurality of depth value ranges.
It should be noted that, in step s21, the m depth value ranges determined according to the m depth intervals may also correspond to the m expansion coefficients one by one, and the depth values in the respective depth value ranges may be negatively correlated to the corresponding expansion coefficients.
Step s32, determining the food material area in each frame of key image according to the expansion coefficient corresponding to the depth value range of each frame of key image and the hand area in each frame of key image, where the food material area in each frame of key image includes the hand area in each frame of key image.
Optionally, both the hand region and the food material region in the key image in the embodiment of the application may be rectangular, with the same center, length direction, and width direction. Optionally, the length of the food material region is w′ = w × e_p and its width is h′ = h × e_p, where w denotes the length of the hand region, h denotes the width of the hand region, and e_p denotes the expansion coefficient corresponding to the depth value range of the key image.
Step s33, intercept the food material region in the color image corresponding to each frame of key image as the frame of food material image corresponding to that key image.
The length and width of each food material image satisfy the same relation as the food material region in the corresponding key image, and details are not repeated herein.
When the user accesses the food material, the area of the food material may be larger than the area of the hand, or the food material may extend beyond the user's hand. In the embodiment of the application, the hand region in the key image is enlarged to determine the food material region, and the food material region in the corresponding color image is then intercepted as the food material image; this ensures that the food material image includes more food material features, and avoids the low identification accuracy that would result from intercepting only the hand region as the food material image.
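A sketch of steps s31 to s33, assuming the hand region is given as a centered bounding box and e_p has been looked up for the key image's depth value range (both assumptions for illustration):

    import numpy as np

    def crop_food_region(color_img: np.ndarray, hand_box, e_p: float) -> np.ndarray:
        """hand_box = (cx, cy, w, h): center, length, width of the hand region.
        The food material region shares the center and directions of the hand
        region, with w' = w * e_p and h' = h * e_p (step s32)."""
        cx, cy, w, h = hand_box
        w2, h2 = w * e_p, h * e_p
        x0 = max(int(cx - w2 / 2), 0)
        y0 = max(int(cy - h2 / 2), 0)
        x1 = min(int(cx + w2 / 2), color_img.shape[1])
        y1 = min(int(cy + h2 / 2), color_img.shape[0])
        return color_img[y0:y1, x0:x1]            # s33: food material image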
It should be noted that the stored reference images in the embodiment of the present application are determined in the same manner as the food material images, and details are not repeated. In addition, since only the food material images determined from the multiple frames of images acquired by the camera equipment are input into the identification model, the number of food material images is small and the food material features in them are obvious, which reduces the time consumed by the identification computation and improves the real-time performance of food material identification. Moreover, when the controller is located in another device independent of the refrigerator, the amount of image data transmitted to the controller is reduced, further improving the real-time performance of food material identification.
Optionally, after the controller identifies the food material through the identification model, if the target identification result obtained by identification is wrong, the controller may train the identification model again by using the target training data to update the identification model, so as to improve the accuracy of the identification model in identifying the food material image. For example, after the controller generates the target recognition result, the controller may determine whether the target recognition result is correct by acquiring a confirmation voice of the user. When the controller determines that the target recognition result is wrong, the recognition accuracy of the food material image by the recognition model is low, and then the retraining instruction can be received, and the recognition model is trained continuously to optimize the recognition model. The retraining instruction may carry the correct food material type and food material status.
It should be noted that, in the embodiment of the present application, different training methods may be adopted to continuously train the recognition model according to different recognition results output by the recognition model. For example, the recognition model may be determined to be retrained or incrementally trained according to the recognition result output by the recognition model. It should be noted that the retraining of the model is to train the model by using the newly added training data and the historical training data of the model to obtain a new model. And performing incremental training on the model, namely, adjusting parameters related to the newly added training data in the model according to the newly added training data to update the model. Optionally, retraining the model does not change the recognition result that the model can recognize, and only improves the accuracy of the model recognition, while performing incremental training on the model may increase the recognition result that the trained model can output.
Optionally, in the embodiment of the present application, the controller may further determine other information of the food material in the refrigerator, such as the shelf life, the volume, the storage location, and other food material information of the food material.
In another optional embodiment of the present application, the refrigerator may further comprise shelves and weight sensors. The controller may determine the storage location of the food material by combining the image captured by the camera with the weight detected by the weight sensors.
In an embodiment of the application, a shelf is located in the storage compartment for dividing the storage compartment into a plurality of storage layers; each weight sensor is used for detecting the weight of the corresponding shelf and the object carried on the corresponding shelf. It should be noted that fig. 1 illustrates an example in which three shelves (not shown in fig. 1) are provided in the storage chamber, and the three shelves can divide the storage chamber into 4 storage layers. Optionally, the number of the shelves may also be two or four or even more, which is not limited in the embodiments of the present application. Alternatively, the storage compartments in the embodiments of the present application may include a refrigerating compartment and a freezing compartment, and the door may include at least one door corresponding to the refrigerating compartment and at least one door corresponding to the freezing compartment, and fig. 1 illustrates only the structure in the refrigerating compartment, and reference may be made to the description of the refrigerating compartment for the freezing compartment.
Optionally, the weight sensor corresponding to each shelf in the embodiment of the present application may include a plurality of sub-weight sensors located on a bottom surface of the shelf, the bottom surface of the shelf is opposite to the bearing surface of the shelf, and the weight detected by the weight sensor corresponding to the shelf may be: an average value of the weights detected by the plurality of sub-weight sensors, or a sum of the weights detected by the plurality of sub-weight sensors. Alternatively, each shelf may also include only one sub-weight sensor, which is not limited in this embodiment.
Optionally, each shelf may also be located on a corresponding carrying structure, and a weight sensor corresponding to the shelf may be located between the shelf and the corresponding carrying structure to detect the weight of the shelf and the object carried by the shelf. Illustratively, the refrigerator may further include at least one set of bosses on a sidewall of the storage compartment, and each set of bosses may be one load bearing structure. The at least one group of bosses are in one-to-one correspondence with the at least one shelf, and for any group of bosses and the corresponding shelf, the group of bosses are used for bearing the shelf, and the weight sensors corresponding to the shelf are positioned between the shelf and the bosses. Alternatively, each set of bosses may include at least two bosses supporting opposite ends of the shelf. For example, the weight sensor may include a plurality of sub-weight sensors located at an edge region of the bottom surface of the shelf. For example, the weight sensor includes two sub-weight sensors, which may be respectively located at edge positions of opposite ends in the bottom surface of the shelf. For another example, the weight sensor includes four sub weight sensors, and the four sub weight sensors may be respectively located at four corners of the bottom surface of the shelf.
Alternatively, the load bearing structure may be a plate-like structure that may be secured in the storage compartment (e.g., by being supported by bosses on the side walls of the storage compartment), the rack may be located on a corresponding plate-like structure, and the rack's corresponding weight sensor may be located between the rack and the corresponding plate-like structure. At this time, the sub-weight sensor included in the weight sensor corresponding to the shelf may be located at a middle region of the bottom surface of the shelf.
The controller may further perform the following steps to determine the storage location of the food material:
and b11, determining the moving track of the hand according to the multi-frame target images.
It should be noted that the controller may execute step b11 after determining the multiple frames of target images in step 5031 above. Thereafter, the controller may perform step b12.
For example, after each frame of target image including the hand region is determined, the controller may further determine a centroid position of the hand region in each frame of target image, and then determine a movement trajectory of the hand by tracking the centroid position, such as tracking the centroid position of the hand region by an adaptive higher-order predictive tracking model. For example, the controller may cluster pixel points of the hand region in the image to determine a centroid position of the hand. The controller can also determine a track vector of the hand according to two adjacent frames of target images, and further obtain a plurality of track vectors according to the multiple frames of target images. The controller may determine a continuous movement trajectory of the hand based on the trajectory vector obtained by combining the plurality of trajectory vectors.
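A minimal sketch of step b11's centroid tracking (the mask representation is an assumption; the adaptive higher-order predictive tracking model mentioned above is not reproduced here):

    import numpy as np

    def hand_centroid(mask: np.ndarray):
        """Mean coordinate of the hand-mask pixels (a simple stand-in for the
        clustering mentioned above)."""
        ys, xs = np.nonzero(mask)
        return float(xs.mean()), float(ys.mean())

    def trajectory(masks):
        """Per-frame centroids and the trajectory vectors between them."""
        pts = [hand_centroid(m) for m in masks]
        vecs = [(x1 - x0, y1 - y0)
                for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
        return pts, vecs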
Step b12, determine the target storage area passed by the hand according to the moving track of the hand. Step b13 is then performed.
The storage compartment may include a plurality of storage areas, each of which may include at least two storage layers, and the storage layers of different storage areas do not overlap. Illustratively, the storage compartment in the refrigerator shown in fig. 1 is divided into four storage layers by three shelves; assume the four storage layers are referred to, from top to bottom, as the first storage layer, the second storage layer, the third storage layer, and the fourth storage layer. The four storage layers may belong to two storage areas, referred to respectively as a high confidence area and a low confidence area, where the low confidence area may include the first and second storage layers and the high confidence area may include the third and fourth storage layers.
The controller may determine the m depth value ranges according to a height of the refrigerator. Please refer to step 5034 for the introduction of the m depth value ranges, which is not described in the embodiment of the present application. Alternatively, the plurality of depth value ranges may correspond one-to-one to a plurality of storage areas included in the storage room, and the depth value range corresponding to each storage area includes a distance from any one position in the storage area to the image pickup apparatus.
For example, the controller may determine an average depth value of a hand region in a target image according to a movement trajectory of the hand, where the target image is an image of the multi-frame image when the hand moves to a target position in the movement trajectory, and the target position is a position farthest from the starting point in the movement trajectory. Furthermore, the controller may determine the storage area corresponding to the depth value range in which the average depth value is located as the target storage area through which the hand passes; that is, the controller may determine that the hand passes through the target storage area when the average depth value is within the depth value range corresponding to the target storage area. In this way, the controller may determine that the user accesses food material from the target storage area, i.e., that the storage location of the accessed food material is located within the target storage area.
Step b13, determine whether the target storage area is the storage area close to the image pickup apparatus among the plurality of storage areas. When the target storage area is not a storage area close to the image pickup apparatus, perform step b14; when the target storage area is a storage area close to the image pickup apparatus, perform step b15.
When the camera device collects images, a blind area often exists, and at least part of the storage area close to the camera device lies in this blind area; when a user accesses food materials in the storage area close to the camera device, the accuracy of determining the access position from the collected images is therefore low. By contrast, the camera device can collect relatively complete images of the storage area far from it, so when the user accesses food materials in that area, the accuracy of determining the access position from the collected images is higher. Determining whether the target storage area is the one close to the image capturing apparatus therefore indicates how accurately the access position can be determined from the captured images.
For example, the image pickup apparatus is located at the top of the cabinet or the top of the storage room, the storage area near the image pickup apparatus may be the low confidence area, and the storage area farther from the image pickup apparatus may be the high confidence area.
Step b14, determine the target storage layer passed by the hand according to the moving track.
When the controller determines that the target storage area is not a storage area close to the camera equipment, the controller concludes that the position of the accessed food material can be determined with high accuracy from the images acquired by the camera equipment; the controller can therefore determine the position of the accessed food material directly from those images.
For example, the depth value range corresponding to the storage area may include a plurality of sub-depth value ranges, the plurality of sub-depth value ranges may correspond to the storage layers in the storage area in a one-to-one manner, and the sub-depth value range corresponding to each storage layer includes a distance from any position in the storage layer to the image capturing apparatus. The controller can determine a target image according to the moving track of the hand, and then determine a sub-depth value range in which the average depth value of the hand area in the target image is located; furthermore, the controller may determine the storage layer corresponding to the sub-depth range as a target storage layer through which the hand passes, that is, determine the storage position of the stored food material as the target storage layer.
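Steps b12 and b14 can be sketched as a two-level range lookup, assuming per-area and per-layer depth range tables (names and structures are assumptions):

    def locate(avg_depth, area_ranges, layer_ranges):
        """area_ranges: {area: (low, high)} per storage area;
        layer_ranges: {area: {layer: (low, high)}} sub-ranges per layer.
        avg_depth: average hand depth at the deepest trajectory point."""
        for area, (lo, hi) in area_ranges.items():
            if lo <= avg_depth < hi:              # b12: target storage area
                for layer, (llo, lhi) in layer_ranges.get(area, {}).items():
                    if llo <= avg_depth < lhi:    # b14: target storage layer
                        return area, layer
                return area, None
        return None, None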
Step b15, determine the target storage layer in which the stored food material has changed in the target storage area, according to the weight detected by the weight sensor corresponding to at least one shelf located between the storage layers of the target storage area.
When the controller determines that the target storage area is a storage area close to the camera equipment, the controller concludes that the accuracy of determining the access position from the images acquired by the camera equipment is low; the controller can therefore combine the weight detected by the weight sensors to ensure the accuracy of determining the position of the accessed food material. Optionally, in the embodiment of the present application, weight sensors may be provided only on the bottom surfaces of the shelves in the storage area close to the image pickup apparatus. It should be noted that the shelves in the storage area close to the image pickup apparatus refer to the shelves between the storage layers of that area. Optionally, a weight sensor may instead be disposed on the bottom surface of every shelf in the storage chamber, which is not limited in this application.
For example, when the controller determines that the images captured by the image capturing apparatus include a hand, that is, when target images are captured, the controller may acquire the weight detected by the weight sensor corresponding to each shelf in the storage room, and treat a reading that remains unchanged over a set period of time as the effective weight detected by that weight sensor. The controller can then determine, from the effective weight detected by each weight sensor, the storage layer in which the stored food material has changed, that is, the storage position of the accessed food material.
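One possible reading of this "effective weight" rule, sketched with an assumed polling interface (read_sensor, the window lengths, and the tolerance are all assumptions):

    import time

    def effective_weight(read_sensor, settle_s=1.0, poll_s=0.1, tol=1.0):
        """Return a reading only after it has stayed within tol for settle_s
        seconds, filtering transients while the hand disturbs the shelf."""
        last = read_sensor()
        stable_since = time.monotonic()
        while True:
            time.sleep(poll_s)
            now = read_sensor()
            if abs(now - last) > tol:             # still changing: restart window
                last = now
                stable_since = time.monotonic()
            elif time.monotonic() - stable_since >= settle_s:
                return last                       # unchanged for the whole window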
It should be noted that there are many ways to determine the target storage layer according to the weight detected by the weight sensor, and the embodiments of the present application are explained by taking the following two ways as examples.
In a first manner, the controller may determine the target storage layer based only on the weight detected by the weight sensor.
Optionally, when G_i′ − G_i ≠ 0, the controller may determine the storage layer in the target storage area close to the carrying surface of the i-th shelf as the target storage layer. Here i ≥ 1, and the i-th shelf is any shelf in the target storage area; G_i denotes the weight detected by the weight sensor corresponding to the i-th shelf before the hand passes through the target storage area, and G_i′ denotes the weight detected by that weight sensor after the hand passes through the target storage area. Both G_i and G_i′ are effective weights detected by the weight sensor. G_i′ − G_i ≠ 0 means that the weight detected by the weight sensor corresponding to the i-th shelf changes after the hand passes through the target storage area.
For example, when the controller determines that the image captured by the image capturing device includes a hand, the controller may start to acquire the effective weight detected by the weight sensor corresponding to each shelf in the storage area close to the image capturing device; the weight detected by the weight sensor corresponding to the i-th shelf at this point is G_i. The controller may then continue to acquire the effective weights detected by the weight sensors, and when no hand is included in the images acquired by the camera equipment within a preset time period, the controller may determine the weight then detected by the weight sensor corresponding to the i-th shelf as G_i′.
Because each weight sensor detects the weight of its corresponding shelf and the objects it carries, when the effective weight detected by a weight sensor changes, the controller can conclude that the objects carried by that shelf have changed, and can therefore directly determine that food material has been accessed in the storage layer above that shelf. Thus, the controller can determine the storage layer in the target storage area close to the carrying surface of that shelf as the target storage layer. Hence, when G_i′ − G_i ≠ 0, the controller can determine the storage layer in the target storage area close to the carrying surface of the i-th shelf as the target storage layer.
In the second mode, the controller may further determine the access state of the food material in the target storage area according to the multiple frames of images including the hand captured by the image capturing apparatus. The access state comprises a food material storing state or a food material taking-out state. The target storage layer is then determined according to the weight detected by the weight sensor corresponding to at least one shelf between the storage layers of the target storage area and the access state of the food material in the target storage area. The controller determining that the access state of the food material in the target storage area is the food material storing state means that food material is stored into the target storage area; the controller determining that the access state is the food material taking-out state means that food material is taken out of the target storage area.
The controller can determine a multi-frame color image corresponding to the multi-frame depth image according to the multi-frame depth image including the hand, further recognize the multi-frame color image, and determine the change condition of the hand state. The hand state comprises a state of not taking food materials or a state of taking food materials. When the controller determines that the hand state is changed from the food material not taken state to the food material taken state according to the multi-frame color images, the access state of the food materials can be determined to be the food material taken state; when the controller determines that the hand state is changed from the food material taking state to the food material non-taking state according to the multi-frame color images, the access state of the food material can be determined to be the food material storing state.
In one case, when G_i′ − G_i ≠ 0, the controller may determine whether a first condition or a second condition is satisfied, and then determine the target storage layer. The first condition includes: the access state is the food material storing state and G_i′ − G_i > 0, or the access state is the food material taking-out state and G_i′ − G_i < 0. The second condition includes: the access state is the food material storing state and G_i′ − G_i < 0, or the access state is the food material taking-out state and G_i′ − G_i > 0. G_i′ − G_i > 0 means that the weight detected by the weight sensor corresponding to the i-th shelf increases after the food material is accessed; G_i′ − G_i < 0 means that it decreases.
When the first condition is met, the controller can determine a storage layer in the target storage area adjacent to the ith shelf and proximate to the carrying surface of the ith shelf (which can also be referred to as a storage layer above the ith shelf) as the target storage layer. When the second condition is met, the controller can determine a storage layer in the target storage area that is adjacent to the ith shelf and away from the carrying surface of the ith shelf (which can also be referred to as a storage layer under the ith shelf) as the target storage layer.
In another case, there is only one shelf between the storage layers of the target storage area, that is, the target storage area comprises only two storage layers. When the weight detected by the weight sensor corresponding to that shelf is unchanged after the food material is accessed, the controller may determine the target storage layer in the target storage area according to the access state of the food material.
For example, suppose the single shelf in the target storage area is the i-th shelf, and the weight detected by its weight sensor is unchanged after the food material is accessed, that is, G_i′ − G_i = 0. In that case, when the access state is the food material storing state, the controller may determine that the target storage layer is the storage layer in the target storage area far from the carrying surface of the i-th shelf; when the access state is the food material taking-out state, the controller may determine that the target storage layer is the storage layer in the target storage area close to the carrying surface of the i-th shelf.
It is assumed that the target storage area comprises two storage layers, namely the first storage layer and the second storage layer described above, and that the shelf between the two storage layers is the first shelf. Table 2 below lists the correspondence between the change in weight detected by the weight sensor, the access state, and the target storage layer; only six of the correspondences are shown.
As shown in table 2 below, in the first corresponding relationship, the controller determines that there is food material stored in the target storage area, and the weight detected by the weight sensor corresponding to the first shelf is increased after the food material is stored in the target storage area, at this time, the controller may determine that the food material is stored in the first storage layer.
In the second corresponding relationship, the controller determines that food material is stored into the target storage area, and the weight detected by the weight sensor corresponding to the first shelf decreases after the access; the controller may then determine that the food material is stored in the second storage layer. The controller can further infer the scene: the second storage layer already holds a large amount of food material, and the newly stored food material pushes up against the first shelf, reducing the detected weight.
In the third corresponding relation, the controller determines that food materials are stored in the target storage area, the weight detected by the weight sensor corresponding to the first shelf is not changed after the food materials are stored and taken, and at the moment, the controller can determine that the food materials in the first storage layer are not changed and the food materials are stored in the second storage layer.
In the fourth corresponding relationship, the controller determines that food material is taken out of the target storage area, and the weight detected by the weight sensor corresponding to the first shelf increases after the access; the controller may then determine that the food material is taken out of the second storage layer. Taking food material out of the first storage layer would inevitably decrease the weight detected by the weight sensor of the shelf below it, so when the weight increases during a take-out, the controller can determine that the food material is taken out of the second storage layer. The controller can further infer the scene: the second storage layer held so much food material that, before the take-out, the food material pressed up against the first shelf, making the detected weight smaller than the actual weight of the shelf and the objects it carries. When the food material is taken out of the second storage layer, that upward push disappears, so the detected weight increases.
In a fifth corresponding relationship, the controller determines that the food material is taken out of the target storage area, and the weight detected by the weight sensor corresponding to the first shelf is reduced after the food material is stored and taken out, at this time, the controller may determine that the food material is taken out of the first storage layer.
In a sixth corresponding relationship, the controller determines that the food material is taken out of the target storage area, and the weight detected by the weight sensor corresponding to the first shelf is not changed after the food material is stored and taken out, at this time, the controller may determine that the food material is taken out of the first storage layer.
TABLE 2
Serial number | Access state of food material | Change in weight | Target storage layer
1 | Food material storing state | Increased | First storage layer
2 | Food material storing state | Decreased | Second storage layer
3 | Food material storing state | Unchanged | Second storage layer
4 | Food material taking-out state | Increased | Second storage layer
5 | Food material taking-out state | Decreased | First storage layer
6 | Food material taking-out state | Unchanged | First storage layer
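The Table 2 logic for a two-layer target storage area can be sketched as follows (the state names and the dead band eps are assumptions):

    def target_layer(access_state, delta_g, eps=1.0):
        """access_state: 'store' or 'take_out'; delta_g = G_i' - G_i for the
        shelf between the two layers; eps: dead band around zero."""
        if delta_g > eps:                         # weight increased (rows 1, 4)
            return "first" if access_state == "store" else "second"
        if delta_g < -eps:                        # weight decreased (rows 2, 5)
            return "second" if access_state == "store" else "first"
        # weight unchanged (rows 3, 6): stored food rests in the lower layer,
        # taken-out food came from the layer above the shelf
        return "second" if access_state == "store" else "first"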
It should be noted that the embodiment of the present application is described with weight sensors disposed only on the bottom surfaces of the shelves in the storage area close to the image pickup apparatus, so the target storage layer is determined from the detected weights only when the hand is determined to pass through that storage area. Optionally, when weight sensors are also arranged on the bottom surfaces of the shelves in the storage area far from the image pickup device, the storage layer of the accessed food material can likewise be determined in combination with the detected weights when the hand is determined to pass through that storage area.
Optionally, in the embodiment of the application, after the controller determines the access state of the food material and the target storage layer of the accessed food material, the storage information of the food material may be displayed on a display provided on the refrigerator. For example, when the controller determines that the access state of the food material is the food material storing state and that the target storage layer is the first storage layer, the display may show the storage information "food material stored in the first storage layer".
Optionally, the controller may further identify the type of the accessed food material according to the images acquired by the image pickup device, and display the identified type through the display. For example, when the controller recognizes that the food material type is apple, the display may show the storage information "apple stored in the first storage layer".
It should be noted that, in the related art, a fixed depth value range is set for each storage layer, and when the depth of the food material during access falls within the depth value range corresponding to a certain storage layer, the storage position of the food material is determined to be that storage layer. However, in many refrigerators the position of the uppermost shelf of the storage room (i.e., the shelf closest to the top of the refrigerator) can be adjusted up and down; the size of the uppermost storage layer then changes, and the actual depth range of that layer differs from the preset depth value range. The accuracy of determining the storage location of the food material using the related art is therefore low, and that manner of determining the storage location is not adaptable.
In the embodiment of the application, the target storage area is determined by detecting the depth of the hand region, and the target storage layer of the accessed food material is then determined according to the weight detected by the weight sensor corresponding to the shelf. Even if the position of the shelf changes, the weight sensor corresponding to the shelf can still normally detect the weight of the shelf and the objects it carries (namely, the objects in the storage layer above the shelf). Therefore, when the position of the shelf is changed, the storage layer of the accessed food material can still be accurately determined from the detected weight, so the method for determining the storage position of the food material in the embodiment of the application has good adaptability.
In yet another optional embodiment of the present application, the refrigerator may further obtain the shelf life and the volume of the food material, so as to manage the food material information. Optionally, the controller may further perform the following steps to determine the shelf life and volume of the food material:
Step b21, when the first food material is stored into the storage chamber at the first time, determine the target food material condition satisfied by the first food material according to multiple frames of first images, acquired by the camera, that include the first food material, where the target food material condition includes a condition on the food material type and a condition on the food material state.
Optionally, any food material condition may include: a condition on the food material type, or a condition on the food material type together with a condition on the food material state; the food material state may comprise a processed state or an unprocessed state. For example, apple and banana may be food material types. If a certain food material condition includes the food material type condition "the food material type is apple" and the food material state condition "the food material state is an unprocessed state", then a food material satisfying that condition is an unprocessed apple.
In the following embodiments, the target food material conditions are described as examples, including conditions of food material types and conditions of food material states. In the embodiment of the present application, the food materials satisfying the target food material condition are not stored in the storage chamber before the first time, and the controller does not manage the food material information corresponding to the target food material condition. The food material information corresponding to the target food material condition may be information of a food material satisfying the target food material condition.
It should be noted that the multiple frames of first images described in this embodiment of the application may be images acquired by the image capturing apparatus in one acquisition cycle. The controller may perform image recognition based on the multiple frames of first images, determine the food material type and food material state of the first food material, obtain the food material type condition and food material state condition satisfied by the first food material, and thereby determine the target food material condition it satisfies. The food material state comprises a processed state or an unprocessed state. When the controller cannot determine the food material type or the food material state of the first food material by image recognition alone, the controller may play, through a loudspeaker, a prompt voice asking the user for input, so as to determine the food material type or food material state from the user's input. For example, if the controller determines that the food material type of the first food material is apple and the food material state is an unprocessed state, the controller may determine that the target food material condition includes the food material type condition that the food material type is apple and the food material state condition that the food material state is an unprocessed state.
Optionally, the identification model in the embodiment of the present application may be further configured to output the food material state of the food material according to the input food material image. It should be noted that, for the manner of determining the target food material condition satisfied by the first food material, reference may be made to the manner of determining the food material type described above; details are not repeated herein.
Alternatively, when the controller can determine only the food material type of the first food material from the recognition result output by the recognition model, but not its food material state, the controller may determine the food material state of the first food material in the following manner: the controller determines a first similarity between the first food material and a first reference food material and a second similarity between the first food material and a second reference food material, and determines the food material state corresponding to the greater of the first similarity and the second similarity as the food material state of the first food material. The food material types of the first reference food material and the second reference food material are both the food material type of the first food material; the food material state of the first reference food material is a processed state, and the food material state of the second reference food material is an unprocessed state.
That is, when the controller has determined only the food material type of the first food material, it may use the similarity between the first food material and each reference food material to decide which of the first reference food material and the second reference food material the first food material more closely resembles, and thereby determine the food material state of the first food material.
The similarity between two food materials can be represented by the similarity between images containing the two food materials. The food material state of the first reference food material is a processed state, and the food material state of the second reference food material is an unprocessed state. If the first food material is more similar to the first reference food material, determining that the food material state of the first food material is a processed state; if the first food material is more similar to the second reference food material, the food material status of the first food material can be determined to be an unprocessed status.
For example, the controller may determine similarity between each frame of food material image in the at least one frame of food material image and each frame of first reference image in the at least one frame of first reference image, to obtain at least one first reference similarity, and then determine a maximum similarity among the at least one first reference similarity as the first similarity. Wherein the first reference image comprises a first reference food material. The controller can determine the similarity between each frame of food material image in the at least one frame of food material image and each frame of second reference image in the at least one frame of second reference image to obtain at least one second reference similarity, and further determine the maximum similarity in the at least one second reference similarity as the second similarity. Wherein the second reference image comprises a second reference food material. The food material corresponding to the greater similarity of the first similarity and the second similarity is the food material more similar to the first food material, and the controller can determine the food material state corresponding to the greater similarity as the food material state of the first food material.
Alternatively, the controller may determine the similarity between each frame of food material image and each frame of reference image by any one of a Structural Similarity Index (SSIM) method, a cosine similarity (cosin) determination method, and a histogram-based similarity determination method.
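A sketch of the state decision using one of the methods named above, histogram-based similarity (the reference-image sets, the bin count, and the state labels are assumptions):

    import numpy as np

    def hist_similarity(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
        """Histogram intersection of two grayscale images, in [0, 1]."""
        ha, _ = np.histogram(a, bins=bins, range=(0, 256))
        hb, _ = np.histogram(b, bins=bins, range=(0, 256))
        ha = ha / max(ha.sum(), 1)
        hb = hb / max(hb.sum(), 1)
        return float(np.minimum(ha, hb).sum())

    def food_state(food_imgs, processed_refs, unprocessed_refs) -> str:
        """First/second similarity = maximum over all image/reference pairs."""
        s1 = max(hist_similarity(f, r) for f in food_imgs for r in processed_refs)
        s2 = max(hist_similarity(f, r) for f in food_imgs for r in unprocessed_refs)
        return "processed" if s1 >= s2 else "unprocessed"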
In an embodiment of the application, the controller may further determine a first moment at which the first food material is stored in the storage chamber. For example, the controller may determine the first time by:
in the first mode, the controller may determine any time between the start of the capturing of the first image and the stop of the capturing of the first image by the image capturing apparatus as a first time when the first food material is stored in the storage chamber. That is, the controller may determine any one time in the acquisition cycle in which the first image is acquired as the first time.
For example, the first time may be the time at which the controller determines that the door changes from the open state to the closed state, or the time at which the controller determines that the door changes from the closed state to the open state. Since the food material storing process generally takes little time, the error from taking any moment of that process as the storing time of the first food material is small. In addition, the first time is used for determining the shelf life of the food material, and the time a user needs to access the food material is negligible compared with the shelf life; determining any time in the acquisition cycle of the first images as the first time therefore does not affect the determination of the shelf life.
In the second mode, the controller may determine the time when the user's hand extends into the storage chamber as the first time when the first food material is stored into the storage chamber.
For example, the controller may determine a movement trajectory of the hand of the user according to the first images of the plurality of frames, and then determine an image when the hand moves to a target position according to the movement trajectory, where the target position is a position farthest from the starting point in the movement trajectory. The controller may consider the target position to be located in the storage compartment, and may determine that the hand extends into the storage compartment when the hand is moved to the target position. Further, the controller may determine a timing at which the image capturing apparatus captures the image as the first timing. It should be noted that, reference may be made to the description of step b11 for determining the moving trajectory of the hand, and details of the embodiment of the present application are not described again.
It should be noted that the controller may determine the access state of the first food material according to the plurality of frames of the first image. For the introduction of the access state of the first food material, reference may be made to the above description of determining the access state, and details are not described in this embodiment of the application. It should be noted that, in the embodiment of the present application, the controller determines the access state of the first food material as the stored food material state.
Step b22, determining food material information corresponding to the target food material conditions, wherein the food material information comprises shelf life and volume.
After the controller determines that the target food material condition is met by the first food material in step b21, the controller may search for food material information corresponding to the target food material condition in a food material information base, where the food material information base may include a corresponding relationship between various target food material conditions (i.e., a combination of a food material type and a food material state) and the food material information. For example, the food material information library may include the corresponding relationship shown in table 3 below. As shown in table 3 below, the food material information corresponding to the target food material condition may include, in addition to the shelf life and the volume: the food material storage method meeting the target food material condition and the reference image comprising the food material meeting the target food material condition. Optionally, the food material information corresponding to the target food material condition may further include the weight of the food material or other information, which is not limited in the embodiment of the present application.
TABLE 3
It should be noted that the food material information corresponding to the target food material condition may include at least one of a shelf-life duration and a volume, and in the embodiment of the present application, the food material information includes both the shelf-life duration and the volume. Optionally, the food material information may also comprise only shelf life or volume.
Assuming that the first food material is unprocessed chives, that is, the food material type of the first food material is chives, and the food material state of the first food material is an unprocessed state, the target food material condition includes the food material type of the chives and the food material state is an unprocessed state. In step b22, the controller may determine that the shelf life corresponding to the target food material condition is 1 to 3 days and the volume is 2000 cubic centimeters according to the correspondence shown in table 3 above. It should be noted that, in table 3 above, the shelf-life time corresponding to the target food material condition is taken as an example of a time length range, optionally, the shelf-life time may also be a fixed time length, or the controller may determine any time length within the shelf-life time range corresponding to the target food material condition as the shelf-life time corresponding to the target food material condition.
It should be noted that, when the controller determines the food material information corresponding to the target food material condition, the food material information may be recorded, so as to manage the information related to the food materials in the refrigerator. For example, the controller may record the food material information in a certain record table stored in the memory. Optionally, the record table may record information such as a storage location of the food material in the storage chamber, an access state of the food material, and an access time, in addition to the volume and shelf life of the food material.
For example, the controller may obtain a record table as shown in table 4 below after recording the food material information corresponding to the target food material condition satisfied by the first food material in the record table.
TABLE 4
Step b23, determine, according to multiple frames of third images acquired by the camera at a third time after the first time, the food material condition satisfied by the third food material taken out of the storage chamber.
It should be noted that, each time the controller determines that a food material has been taken out of or stored in the storage chamber, it can identify that food material to determine its type and state, and thus the food material condition it satisfies.
For example, it is assumed that, after the first food material is stored in the storage chamber at the first time, the controller determines that the user takes a third food material out of the storage chamber, that is, the access state of the third food material is the food material taking-out state, and the controller may determine the time at which the user takes out the third food material as the third time. Further, the controller can also determine the food material type and food material state of the third food material, so as to determine the food material condition satisfied by the third food material.
It should be noted that the manner in which the controller determines the access state of the third food material is the same as the manner in which it determines the access state of the first food material, and the manner in which the controller determines the food material condition satisfied by the third food material is the same as the manner in which it determines the target food material condition satisfied by the first food material; details are not repeated in the embodiments of the present application.
In the embodiments of the present application, the description takes the third food material as a food material satisfying the target food material condition, that is, the food material condition satisfied by the third food material is taken to be the target food material condition.
Step b24, when the third food material meets the target food material condition, updating the food material information corresponding to the target food material condition to obtain the first food material information.
The controller can determine whether the food material condition satisfied by the third food material is the target food material condition, so as to determine whether the third food material is the same kind of food material as the first food material and, further, whether the update condition is satisfied. The update condition may include: a food material satisfying the target food material condition is stored in or taken out of the storage chamber, i.e., a food material of the same kind as the first food material is stored or taken out. When the food material condition satisfied by the third food material is the target food material condition, the controller may determine that the third food material is the same kind as the first food material, and therefore that the stored food material satisfying the target food material condition has changed, so the food material information corresponding to the target food material condition may be updated.
Illustratively, the updated shelf life corresponding to the target food material condition is T = T0 - (T3 - T1), where T0 represents the shelf life corresponding to the target food material condition before the update, T1 represents the first time, and T3 represents the third time. When a food material satisfying the target food material condition is taken out of the storage chamber, the volume of such food material in the storage chamber decreases, so the updated volume corresponding to the target food material condition is V = V0 - ΔV2, where V0 represents the volume before the update and ΔV2 represents the volume of the third food material (i.e., the taken-out food material satisfying the target food material condition). The first food material information (i.e., the updated food material information corresponding to the target food material condition) may therefore include T and V. Optionally, since the exact volume of a food material is generally difficult to determine, ΔV2 may be set to a fixed value, so that the volume of food material satisfying the target food material condition in the storage chamber is estimated roughly, achieving fuzzy management of the food material volume. Alternatively, the specific volume of the third food material may be measured and an accurate ΔV2 determined from the measurement.
Suppose the third food material is taken out one day after the first food material is stored, both the first and third food materials are unprocessed chives (i.e., both satisfy the target food material condition that the food material type is chives and the food material state is the unprocessed state), and the volume ΔV2 of the third food material is a set fixed value, such as 1000 cubic centimeters. The controller may then update the food material information corresponding to the target food material condition, for example updating the record table shown in table 4 above to obtain the record table shown in table 5 below. As shown in table 5, the updated volume may be 1000 cubic centimeters and the updated shelf life may be 2 days.
TABLE 5
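The arithmetic of step b24 can be sketched as follows, assuming times are measured in days; the helper name `update_on_takeout` is hypothetical:

```python
def update_on_takeout(t0_days: float, v0_cc: float,
                      t1: float, t3: float, delta_v2_cc: float) -> tuple[float, float]:
    """Step b24: when a food material satisfying the target food material
    condition, stored at time t1, is taken out at time t3, the shelf life
    becomes T = T0 - (T3 - T1) and the volume becomes V = V0 - dV2."""
    t = t0_days - (t3 - t1)
    v = v0_cc - delta_v2_cc
    return t, v

# Worked example from the text: chives stored at t1 = 0 with a 3-day shelf
# life and 2000 cc; taken out one day later (t3 = 1) with a fixed dV2 of
# 1000 cc, leaving a 2-day shelf life and 1000 cc, as in table 5.
t, v = update_on_takeout(t0_days=3, v0_cc=2000, t1=0, t3=1, delta_v2_cc=1000)
assert (t, v) == (2, 1000)
```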
Step b25, determining the food material condition satisfied by a second food material stored in the storage chamber, according to a plurality of frames of second images acquired by the camera at a second time after the third time.
For example, it is assumed that, after the third food material is taken out of the storage chamber at the third time, the controller determines that the user stores a second food material into the storage chamber, that is, the access state of the second food material is the food material storing state, and the controller may determine the time at which the user stores the second food material as the second time. Further, the controller can also determine the food material type and food material state of the second food material, so as to determine the food material condition satisfied by the second food material.
It should be noted that the manner in which the controller determines the access state of the second food material is the same as the manner in which it determines the access states of the first and third food materials, and the manner in which it determines the food material condition satisfied by the second food material is the same as the manner in which it determines the food material conditions satisfied by the first and third food materials; details are not repeated in the embodiments of the present application.
Step b26, determining whether the food material condition satisfied by the second food material is the same as the food material condition satisfied by the third food material. When the two conditions differ, perform step b27; when they are the same, perform step b28.
The controller can determine whether the food material condition satisfied by the second food material is the same as the food material condition satisfied by the third food material to determine whether the second food material is the same as the third food material. Further, the controller may determine whether the food material stored in the storage chamber satisfying the target food material condition is changed again.
Step b27, determining the food material information corresponding to the food material condition satisfied by the second food material.
If the controller determines that the food material condition satisfied by the second food material differs from that satisfied by the third food material, it can conclude that the second food material is not the same kind as the third food material; it may then determine the food material information corresponding to the condition satisfied by the second food material and record it. From this record the controller can determine that a food material satisfying that condition (e.g., the second food material) is stored in the storage chamber, and manage its food material information accordingly.
Step b28, determining whether the time difference between the second time and the third time is less than a duration threshold. When the time difference is less than the duration threshold, perform step b29; when it is greater than or equal to the duration threshold, perform step b27.
When the food material condition satisfied by the second food material is the same as that satisfied by the third food material (i.e., both are the target food material condition), the controller may determine that the second food material is the same kind as the third food material, and therefore that the stored food material satisfying the target food material condition has changed again; the second food material is then the food material satisfying the target food material condition that is stored at the second time. Further, the controller may determine the specific nature of the change, for example by computing the time difference between the second time and the third time and comparing it against the duration threshold.
When the time difference between the second time and the third time is greater than or equal to the duration threshold, the controller may determine that a second food material identical to the third food material was put in a relatively long time after the third food material was taken out, and may therefore regard the second food material as a newly purchased food material. In this case, the food material information corresponding to the condition satisfied by the second food material may be determined anew and recorded. Optionally, since the food material condition satisfied by the second food material is the target food material condition, the controller may instead update the recorded food material information corresponding to the target food material condition (i.e., the first food material information) back to the food material information stored for that condition in the food material information base.
Step b29, determining whether the second food material is at least part of the third food material. When the second food material is at least part of the third food material, perform step b210; when it is not, perform step b27.
When the time difference between the second time and the third time is less than the duration threshold, the controller may determine that a second food material identical to the third food material was put in shortly after the third food material was taken out. When a user takes a food material out of the storage chamber and does not finish it, the remainder is usually returned to the refrigerator within a short time. Therefore, when the time difference is less than the duration threshold, the controller may determine that the second food material is likely to be the remainder of the taken-out third food material.
For example, the controller may control the speaker to play a prompt voice asking the user to confirm whether the second food material is at least part of the third food material, i.e., whether it is the remainder of the taken-out third food material. The controller then acquires the response voice collected by the microphone within a target duration after the speaker plays the prompt voice, and determines from the response voice whether the second food material is at least part of the third food material.
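Illustratively, steps b26 through b29 amount to the following decision sketch; the function name, the (type, state) tuples, and the boolean standing in for the user's voice confirmation are assumptions of this illustration:

```python
def classify_put_back(second_condition: tuple[str, str],
                      third_condition: tuple[str, str],
                      t2: float, t3: float,
                      duration_threshold: float,
                      user_confirms_remainder: bool) -> str:
    """Sketch of the decision flow of steps b26-b29. Conditions are
    (food material type, food material state) pairs; times share one unit.
    Returns the step the controller would perform next."""
    if second_condition != third_condition:
        return "b27: record new food material information"     # different food material
    if t2 - t3 >= duration_threshold:
        return "b27: treat as newly purchased, re-record"      # long gap since take-out
    if user_confirms_remainder:
        return "b210: update info for the returned remainder"  # remainder put back
    return "b27: record new food material information"

# Example: unprocessed chives taken out at t3 = 10.0 and put back at
# t2 = 10.5 with a threshold of 2.0; the user confirms it is the remainder.
next_step = classify_put_back(("chives", "unprocessed"),
                              ("chives", "unprocessed"),
                              t2=10.5, t3=10.0,
                              duration_threshold=2.0,
                              user_confirms_remainder=True)
```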
Step b210, updating the food material information corresponding to the target food material condition to obtain second food material information.
When the controller determines that the second food material is at least part of the third food material, the controller may determine that the food material satisfying the target food material condition stored in the storage chamber is changed again, and may update the food material information corresponding to the target food material condition.
Illustratively, the updated shelf life corresponding to the target food material condition is T = T0 - (T2 - T3), where T0 represents the shelf life corresponding to the target food material condition before the update, T2 represents the second time, and T3 represents the third time. When a food material satisfying the target food material condition is stored in the storage chamber, the volume of such food material in the storage chamber increases, so the updated volume corresponding to the target food material condition is V = V0 + ΔV1, where V0 represents the volume before the update and ΔV1 represents the volume of the second food material (i.e., the stored food material satisfying the target food material condition). Here, ΔV1 may be a fixed value or the measured volume of the second food material. The second food material information (i.e., the updated food material information corresponding to the target food material condition) may therefore include T and V. It should be noted that, at this point, the pre-update shelf life T0 is the shelf life in the first food material information, i.e., the T updated in step b24, and the pre-update volume V0 is the volume in the first food material information, i.e., the V updated in step b24. Optionally, when ΔV1 and ΔV2 are fixed values set together, ΔV1 may be equal to ΔV2, or ΔV1 may differ from ΔV2.
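A corresponding sketch of the step b210 update, under the same assumptions about units and the hypothetical helper name `update_on_putback`:

```python
def update_on_putback(t0_days: float, v0_cc: float,
                      t3: float, t2: float, delta_v1_cc: float) -> tuple[float, float]:
    """Step b210: when the remainder satisfying the target food material
    condition, taken out at time t3, is put back at time t2, the shelf life
    becomes T = T0 - (T2 - T3) and the volume becomes V = V0 + dV1."""
    t = t0_days - (t2 - t3)
    v = v0_cc + delta_v1_cc
    return t, v

# Continuing the chives example: after step b24 the record holds T0 = 2 days
# and V0 = 1000 cc; the remainder is returned half a day after being taken
# out with a fixed dV1 = 1000 cc (fuzzy volume management), giving 1.5 days
# and 2000 cc.
t, v = update_on_putback(t0_days=2, v0_cc=1000, t3=1.0, t2=1.5, delta_v1_cc=1000)
assert (t, v) == (1.5, 2000)
```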
Optionally, in the embodiments of the present application, the controller may further determine in real time whether the shelf life corresponding to the target food material condition is less than a time limit value, and when it is, control the speaker to play a third prompt voice indicating the shelf life corresponding to the target food material condition. On hearing the third prompt voice, the user learns that the food materials satisfying the target food material condition are about to spoil and can process them in time.
Optionally, in the embodiments of the present application, when the third food material is taken out, the controller may control the speaker to play a prompt voice asking whether any food material remaining in the storage chamber satisfies the same food material condition as the third food material. The controller can then confirm from the response voice collected by the microphone whether such remaining food material exists. When the controller determines that none remains, it may delete the food material information corresponding to the target food material condition: when no food material satisfying the target food material condition is stored in the storage chamber, the controller no longer needs to manage food material information for that condition.
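These two optional behaviors, the spoilage prompt and the deletion of no-longer-needed records, could be sketched as follows; the function names, the record layout, and the boolean standing in for the microphone response are assumptions of this sketch:

```python
def spoilage_alerts(records: dict[tuple[str, str], dict],
                    time_limit_days: float) -> list[str]:
    """Return a prompt for each target food material condition whose
    remaining shelf life has dropped below the time limit value."""
    alerts = []
    for (food_type, food_state), info in records.items():
        if info["shelf_life_days"] < time_limit_days:
            alerts.append(f"{food_state} {food_type}: about to spoil "
                          f"({info['shelf_life_days']:.1f} days left)")
    return alerts

def delete_if_none_remains(records: dict[tuple[str, str], dict],
                           condition: tuple[str, str],
                           remainder_confirmed: bool) -> None:
    """Drop the record for a condition when the user's voice response
    confirms no such food material remains in the storage chamber."""
    if not remainder_confirmed:
        records.pop(condition, None)

records = {("chives", "unprocessed"): {"shelf_life_days": 0.5, "volume_cc": 1000}}
print(spoilage_alerts(records, time_limit_days=1.0))
delete_if_none_remains(records, ("chives", "unprocessed"), remainder_confirmed=False)
```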
The embodiments of the present application are described taking as an example a food material condition that includes both the food material type and the food material state. Optionally, the food material condition satisfied by a food material may include only the food material type, in which case the controller simply omits the step of determining the food material state.
It should be noted that the embodiments of the present application take as an example a target food material condition that includes only a food material type condition and a food material state condition, where the controller records one piece of food material information for each combination of food material type and food material state. For example, the stored record table may contain only one correspondence between processed apples and food material information. Optionally, the target food material condition may further include conditions on other information about the food material, such as a storage location condition. In that case, the controller can record a piece of food material information for each combination of food material type, food material state, and storage location. For example, the stored record table may include one correspondence between processed apples stored on the first layer of the refrigerating compartment and food material information, and another correspondence between processed apples stored on the second layer of the refrigerating compartment and food material information; the storage location being the first layer and the storage location being the second layer then belong to two different target food material conditions. Conditions on still other information can be handled by analogy with the storage location condition and are not described in detail in the embodiments of the present application.
It should be noted that, in the embodiments of the present application, the terms first food material, second food material, and third food material are used only to distinguish food materials in different access scenarios, and their names may be interchanged arbitrarily; likewise, the first, second, and third times are used only to distinguish different times, and their names may be interchanged arbitrarily. In the above embodiments, the second food material is stored at the second time; optionally, the time at which a food material satisfying the target food material condition is taken out may also be referred to as the second time.
It should be understood that the terms "first", "second", "third", and the like in the description, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It is to be understood that data so termed are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can, for example, be implemented in orders other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
Embodiments of the present application also provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the method provided by the embodiments of the present application.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (11)
1. A refrigerator, characterized in that the refrigerator comprises:
a cabinet including a storage compartment having an opening;
a door movably connected with the cabinet and configured to shield the opening;
a camera configured to acquire images at the opening;
a code scanning module configured to collect graphic codes; and
a controller configured to:
input at least one frame of food material image, among multiple frames of images acquired by the camera, that includes a target food material into an identification model, to obtain an identification result of the food material image output by the identification model, wherein the identification model is configured to output a food material type based on an input food material image;
determine a target identification result according to the identification result of the at least one frame of food material image;
when the target identification result does not include a food material type and the code scanning module collects a graphic code within a target duration after the target identification result is determined, analyze the graphic code to obtain an analysis result; and
determine the food material type of the target food material according to the analysis result.
2. The refrigerator of claim 1, wherein the identification model is further configured to output a confidence level of the food material type based on the input image, and the controller is further configured to:
when at least one identification result including a food material type exists among the identification results of the at least one frame of food material image, generate the target identification result including the food material type of the target food material;
wherein the food material type of the target food material is the food material type in the identification result, among the at least one identification result, whose food material type has the highest confidence level.
3. The refrigerator according to claim 1 or 2, wherein the controller is further configured to:
determine an access state of the target food material and an identification state of the target food material according to the multiple frames of images, wherein the access state includes a food material storing state or a food material taking-out state, and the identification state includes a graphic-code-carrying state or a graphic-code-not-carrying state;
after the target identification result is determined, if a first condition is met, determine the similarity between each frame of food material image and each frame of reference image in at least one frame of reference image, to obtain at least one reference similarity; wherein the reference images include food materials that have been stored in the storage compartment and whose identification states are the graphic-code-carrying state, and the first condition includes: the identification state of the target food material is the graphic-code-carrying state, the target identification result does not include a food material type, the code scanning module does not collect a graphic code within the target duration, and the access state of the target food material is the food material taking-out state;
when the at least one reference similarity includes a similarity greater than a similarity threshold, determine a maximum similarity among the at least one reference similarity; and
determine the food material type of the food material in the reference image corresponding to the maximum similarity as the food material type of the target food material.
4. The refrigerator according to claim 3, further comprising:
a speaker configured to play, when a second condition is met, a prompt voice prompting the user to input the food material type of the target food material; wherein the second condition includes: the analysis result does not include any food material type; or each of the at least one reference similarity is less than the similarity threshold; or the identification state of the target food material is the graphic-code-carrying state, the target identification result does not include any food material type, the code scanning module does not collect a graphic code within the target duration, and the access state of the target food material is the food material storing state; and
a microphone configured to collect a response voice within a first duration after the speaker plays the prompt voice;
wherein the controller is further configured to determine the food material type of the target food material based on the response voice.
5. The refrigerator of claim 4, wherein the controller is further configured to:
determine a target character string corresponding to the graphic code; and
when information corresponding to the target character string exists in a graphic code information base, determine the information corresponding to the target character string as the analysis result, wherein the graphic code information base includes information corresponding to a plurality of character strings.
6. The refrigerator of claim 5, wherein the controller is further configured to:
when the second condition is met and the food material type of the target food material is determined based on the response voice, store the correspondence between the target character string corresponding to the graphic code and the food material type of the target food material into the graphic code information base.
7. The refrigerator of claim 1 or 2, wherein the code scanning module is located on a surface of the door facing away from the opening or on a surface of the door facing the opening.
8. The refrigerator of claim 1, wherein the controller is further configured to:
acquire a retraining instruction, wherein the retraining instruction carries the food material type of the target food material and indicates that the target identification result does not include the food material type of the target food material; and
train the identification model with target training data based on the retraining instruction to update the identification model, the target training data including: a target food material image in the at least one frame of food material image whose identification result does not include the food material type of the target food material, and the food material type of the target food material.
9. The refrigerator of claim 1, wherein the controller is further configured to:
acquire, from the images acquired by the camera, n frames of target images including a hand region, where n ≥ 1; and
determine the at least one frame of food material image according to the n frames of target images.
10. The refrigerator of claim 1, 8 or 9, wherein the controller is further configured to:
when a target food material satisfying a target food material condition is stored at a first time, determine food material information corresponding to the target food material condition, wherein the food material information includes at least one of a shelf life and a volume; and
at a second time after the first time, if an update condition is met, update the food material information corresponding to the target food material condition;
wherein the update condition includes: a food material satisfying the target food material condition is stored or taken out;
the updated shelf life corresponding to the target food material condition is T = T0 - (T2 - T1), where T0 represents the shelf life corresponding to the target food material condition before the update, T1 represents the first time, and T2 represents the second time; and
when a food material satisfying the target food material condition is stored at the second time, the updated volume of the food material information corresponding to the target food material condition is V = V0 + ΔV1; when a food material satisfying the target food material condition is taken out at the second time, the updated volume is V = V0 - ΔV2; where V0 represents the volume of the food material information corresponding to the target food material condition before the update, ΔV1 represents the volume of the food material satisfying the target food material condition stored at the second time, and ΔV2 represents the volume of the food material satisfying the target food material condition taken out at the second time.
11. The refrigerator according to claim 1, 8, 9 or 10, further comprising:
a shelf located in the storage compartment and configured to divide the storage compartment into a plurality of storage layers; and
a weight sensor configured to detect the weight of a corresponding shelf and of the objects carried on it;
wherein the controller is further configured to:
determine a movement track of a hand according to multiple frames of target images including a hand region acquired by the camera;
determine, according to the movement track, a target storage area passed by the hand among a plurality of storage areas, wherein each storage area includes at least two storage layers and the storage layers of different storage areas differ; and
determine, according to the weight detected by the weight sensor corresponding to at least one shelf located between the storage layers of the target storage area, a target storage layer in which the stored food material has changed.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201911416054.5A (granted as CN113124633B) | 2019-12-31 | 2019-12-31 | Refrigerator with a door |
Publications (2)
| Publication Number | Publication Date |
| --- | --- |
| CN113124633A | 2021-07-16 |
| CN113124633B | 2022-04-01 |
Family
ID=76769504
Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201911416054.5A (active, granted as CN113124633B) | 2019-12-31 | 2019-12-31 | Refrigerator with a door |
Cited By (2)
| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| JP2022162053A | 2020-09-03 | 2022-10-21 | パナソニックIpマネジメント株式会社 | Food management system and refrigerator |
| TWI841920B | 2021-09-06 | 2024-05-11 | 日商日立環球生活方案股份有限公司 | Refrigerator |
Citations (6)
| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN103712410A | 2012-09-28 | 2014-04-09 | Lg电子株式会社 | Electric product |
| CN106016892A | 2016-05-24 | 2016-10-12 | 青岛海尔股份有限公司 | Control method and system of intelligent refrigerator |
| JP2017009203A | 2015-06-23 | 2017-01-12 | シャープ株式会社 | Interior photographing device |
| CN106679321A | 2016-12-19 | 2017-05-17 | Tcl集团股份有限公司 | Intelligent refrigerator food management method and intelligent refrigerator |
| CN108154078A | 2017-11-20 | 2018-06-12 | 爱图瓴(上海)信息科技有限公司 | Food materials managing device and method |
| CN108647734A | 2018-05-15 | 2018-10-12 | 上海达显智能科技有限公司 | Food image big data acquisition method, acquisition system, and food recognition method |
Legal Events
| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |