CN111503991B - Method for identifying storage and taking positions of refrigerator food materials and refrigerator - Google Patents


Info

Publication number
CN111503991B
CN111503991B (application CN202010295608.7A)
Authority
CN
China
Prior art keywords
refrigerator
image
roi
food material
identifying
Prior art date
Legal status
Active
Application number
CN202010295608.7A
Other languages
Chinese (zh)
Other versions
CN111503991A (en
Inventor
赵启东
殷红杰
方春伦
杨柳
Current Assignee
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Co Ltd filed Critical Hisense Co Ltd
Priority claimed from application CN202010295608.7A
Publication of application CN111503991A
Application granted
Publication of granted patent CN111503991B
Legal status: Active

Classifications

    • F25D29/00: Arrangement or mounting of control or safety devices (F: Mechanical Engineering; F25: Refrigeration or Cooling; F25D: Refrigerators, Cold Rooms, Ice-Boxes)
    • G06V20/00: Scenes; scene-specific elements (G: Physics; G06: Computing; G06V: Image or Video Recognition or Understanding)
    • F25D2400/36: Visual displays
    • F25D2500/06: Stock management (Problems to be solved)
    • F25D2600/06: Controlling according to a predetermined profile (Control issues)

Abstract

The application relates to the technical field of household appliances, and in particular to a method for identifying the storage and retrieval positions of food materials in a refrigerator, and to the refrigerator itself. The method can mitigate, to some extent, three problems: ordinary color images carry no depth information, invalid image regions incur a large computational load, and storage positions cannot be identified once a refrigerator partition (shelf) is moved. The method comprises: acquiring a first image and a second image of the refrigerator door area; setting a region of interest on the first image to obtain a first ROI and on the second image to obtain a second ROI; performing partition detection based on the second ROI to obtain partition position information; detecting a motion region based on the first ROI; detecting the hand centroid motion track within the motion region; and identifying the food material storage or retrieval position from the hand centroid motion track and the partition position information.

Description

Method for identifying storage and taking positions of refrigerator food materials and refrigerator
Technical Field
The application relates to the technical field of household appliances, in particular to a method for identifying food material storing and taking positions of a refrigerator and the refrigerator.
Background
The refrigerator is the most common white-goods appliance in family life and was among the earliest household appliance products to be developed in China. Food material management has become a core function of the smart refrigerator; it covers the type, quantity, shelf life, and storage position of food materials. Storage position information tells the user in which partition of the refrigerator each food material is kept.
In some implementations of methods for identifying the access position of refrigerator food materials, food types are identified either through manual entry of food information or through a static identification scheme based on color images. In the static scheme, one or more cameras are mounted on each partition of the refrigerator; color images of the interior are captured before and after the user opens or closes the door, and the types of different food materials are then identified from those images.
However, conventional two-dimensional color images carry no depth information, food items occlude one another, and partition positions may be adjusted manually, so food positions are identified inaccurately or not at all.
Disclosure of Invention
The method for identifying the food material access position of a refrigerator, and the refrigerator, can to some extent solve the problems that ordinary color images lack depth information, that invalid image regions incur a large computational load, and that the access position cannot be identified when a partition is moved. This is achieved by constructing a first image, a second image, a first ROI (region of interest), and a second ROI; acquiring partition position information; constructing a confidence heat map; computing a background image template; and detecting the hand centroid motion track.
The embodiment of the application is realized as follows:
a first embodiment of the present application provides a method for identifying an access location of a refrigerator food material, the method comprising:
acquiring a first image and a second image of a refrigerator door area;
setting an interest region for the first image to obtain a first ROI, and setting an interest region for the second image to obtain a second ROI;
detecting a partition plate based on the second ROI to obtain partition plate position information;
detecting based on the first ROI to obtain a motion area, and detecting in the motion area to obtain a hand mass center motion track;
and identifying to obtain the food material storing and taking position according to the hand mass center motion track and the position information of the partition plate.
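As a rough, hypothetical sketch of the final identification step (function and parameter names are illustrative, not from the patent): with the camera mounted above the door and its optical axis vertical, the compartment can be inferred by bracketing the hand centroid's final depth between adjacent partition-edge depths.

```python
from bisect import bisect_left

def identify_compartment(hand_depth_m, partition_edge_depths_m):
    """Return the 1-based compartment index for a hand centroid depth.

    Depths are distances from the top-mounted camera, so compartment k
    lies between partition edge k-1 and partition edge k (assumed model).
    """
    edges = sorted(partition_edge_depths_m)
    return bisect_left(edges, hand_depth_m) + 1
```

For example, with partition edges at 0.4 m, 0.7 m, and 1.0 m from the camera, a hand reaching a depth of 0.55 m would be assigned to the second compartment.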
In a second embodiment of the present application, a method for identifying an access position of a refrigerator food material is provided, where the first image is a depth image and the second image is a color image.
In a third embodiment of the present application, a method for identifying an access position of refrigerator food materials is provided, wherein, when the partition position information changes, the updated information is stored in a position threshold configuration file of the refrigerator system.
In a fourth embodiment of the present application, a method for identifying an access position of refrigerator food materials is provided, where before setting a region of interest on the first image to obtain the first ROI, the method further includes downsampling the first image; before setting a region of interest on the second image to obtain the second ROI, the method further includes downsampling the second image.
In a fifth embodiment of the present application, a method for identifying an access position of refrigerator food materials is provided, where the motion region is obtained by detection based on the first ROI as follows: a background image template of the refrigerator is calculated from a plurality of consecutive first-ROI images, and the difference between the current first-ROI image and the background image template yields the motion region.
In a sixth embodiment of the present application, a method for identifying an access location of a refrigerator food is provided, wherein the background image template is dynamically updated after each food access.
A seventh embodiment of the present application provides a method for identifying an access position of a refrigerator food, wherein the partition detection is specifically implemented to detect an edge of each partition in the refrigerator, and the partition detection may be performed based on a deep learning model or based on hough line detection.
In an eighth embodiment of the present application, a method for identifying an access position of refrigerator food materials is provided, where, after the access position is identified, the method further includes: the user actively corrects the access position, a confidence heat map of the partitions is constructed, and if a partition's confidence is below a preset threshold, the position threshold configuration file is updated.
A ninth embodiment of the present application provides a refrigerator including:
a refrigerator main body configured to be provided with a depth camera, a controller;
a display screen provided to an outer case of the refrigerator main body, configured to display an interface;
a depth camera configured to capture a first image of a refrigerator door area and a second image of a refrigerator interior;
a controller configured to perform the method of any of the first through eighth embodiments described above and to display the food material information on the display screen.
In a tenth embodiment of the present application, there is provided a refrigerator, wherein the controller is further configured to display the food material information on another smart device.
An eleventh embodiment of the present application provides a refrigerator, wherein the food material information includes one or a combination of several of a food material storage location, recipe recommendation, and purchase reminder.
The technical solution provided by the application has the following beneficial effects: constructing the first and second images enables both recognition of the user's operation track and partition detection; constructing the first and second ROIs removes the influence of invalid regions and reduces the computational load of image recognition; acquiring partition position information and constructing a confidence heat map allows the position threshold configuration file to be updated promptly when a partition is moved; subtracting a background image template yields the motion region; and detecting the hand centroid motion track within that region identifies the food material storage or retrieval position.
Drawings
To explain the technical solution of the present application more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a functional schematic diagram of a refrigerator according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating a method for identifying an access location of a refrigerator food material according to an embodiment of the present application;
FIG. 3 shows a depth camera mounting position schematic of an embodiment of the present application;
FIG. 4 shows a schematic view of a depth camera mounting location according to another embodiment of the present application;
FIG. 5 shows a schematic diagram of a first ROI of a first image of an embodiment of the application;
FIG. 6 is a schematic diagram illustrating the detection of a partition based on a deep learning model according to an embodiment of the present application;
FIG. 7 shows a schematic diagram of detecting a partition plate based on a Hough line detection algorithm in the embodiment of the present application;
FIG. 8 is a schematic diagram illustrating depth division of partitions in each level of a refrigerator according to an embodiment of the present application;
FIG. 9 shows a depth image schematic of a first ROI of an embodiment of the application;
FIG. 10 is a schematic diagram illustrating a background image template of a refrigerator according to an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating a motion zone in accordance with an embodiment of the present application;
FIG. 12 is a schematic diagram illustrating a hand centroid location for an embodiment of the present application;
fig. 13 shows a confidence heat map of the refrigerator partitions in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment" or the like throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
Fig. 1 shows a functional schematic diagram of a refrigerator according to an embodiment of the present application.
By combining massive image and voice data with cloud computing and deep learning algorithms, a smart refrigerator can be given the ability to identify food storage positions and to interact with the user. Its distinguishing characteristic is that the more users and the more data the system learns from, the higher its recognition rate, the more intelligent its human-computer interaction, and the better the user experience.
The refrigerator interaction system at least comprises a sensing layer, a data layer and an application layer.
The sensing layer identifies food material, refrigerator, and user information through various sensors such as a color camera, a depth camera, radio-frequency identification (RFID), and barcode scanning.
The data layer may reflect food material information, refrigerator information, and user information. The food material information comprises the type, position, quantity, volume and shelf life of food materials; the refrigerator information comprises information such as temperature, humidity and the like; the user information includes user identity, birthday, health status and other information.
In the application layer, once the refrigerator system has the basic data on food materials, the refrigerator, and the user, it can provide different application services, including recipe recommendation, repurchase reminders for items used up, shortage reminders, expiration reminders, health management, and storage guidance. The user interacts with the refrigerator through a smart terminal and/or intelligent voice; the smart terminal can be a smartphone, computer, tablet, or similar device with a dedicated app installed.
Fig. 2 is a flowchart illustrating a method for identifying an access position of a refrigerator food material according to an embodiment of the present application.
In step 201, a first image and a second image of a refrigerator door area are acquired.
In some embodiments, the first image and the second image of the refrigerator door area are acquired by an image capture device of the refrigerator.
The first image covers the operation area used while the user stores or retrieves food. Typically this is the refrigerator door area, which includes the front and back of the refrigerator door and the region between the door and the user. By capturing and analyzing images of this operation area, the user's access actions and the food material information can be identified.
The second image is an image containing the partition position information. It covers at least the edge region of every partition near the refrigerator door body, i.e. the area from the highest partition to the lowest. Analyzing the second image identifies the structure inside the refrigerator, including the positions and the number of partitions. When that internal structure changes, the refrigerator system updates the partition position information from the second image collected in real time.
In some embodiments, the image capture device may be implemented as a depth camera.
An ordinary color camera records every object within its viewing angle, but the recorded data does not contain the distance from each object to the camera. Which objects are nearer or farther can only be inferred by semantic analysis of the image; there is no exact measurement.
A depth image acquired by a depth camera accurately gives the distance from each point to the camera. Combined with a point's (x, y) coordinates in the 2D image, the three-dimensional spatial coordinates of every point can be obtained, and the real scene can then be restored from those coordinates, realizing scene modeling.
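As an illustration of the back-projection described above, here is a standard pinhole-camera sketch; the intrinsic parameters fx, fy, cx, cy are assumed to come from camera calibration and are not specified in the patent.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth z gives the
    camera-frame 3D point (X, Y, Z)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

A pixel at the principal point maps to (0, 0, z); a pixel 100 columns to the right at depth 2 m maps 0.4 m off-axis when fx = 500.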
There are three main types of depth camera: time-of-flight (TOF), RGB binocular (stereo) vision, and structured light.
In some embodiments, the image capture device may be implemented as a structured light depth camera.
A structured-light depth camera projects light with known structural characteristics onto the object through a near-infrared laser and captures it with a dedicated infrared camera. Because the projected pattern deforms differently at different depths of the object surface, its phase information varies with depth; an arithmetic unit converts this structural change into depth information, yielding the three-dimensional structure.
An invisible infrared laser of a specific wavelength is generally used as the light source. Its light is projected onto the object with a certain coding, and an algorithm computes the distortion of the returned coded pattern to obtain the object's position and depth information.
FIG. 3 shows a depth camera mounting location schematic of an embodiment of the present application.
As shown in the figure, a pop-up motor mechanism module is mounted at the top of the refrigerator exterior, and a sensing module is mounted at the front of the exterior top, near the door body. The sensing module contains a structured-light depth camera whose optical axis is perpendicular to the ground; the camera's viewing-angle range is as shown in the figure.
It should be noted that the depth camera can be configured at the factory according to the actual situation. As soon as the user opens the refrigerator door, the depth camera starts and then collects the depth image and the color image of the door area in real time.
FIG. 4 shows a depth camera mounting location schematic of another embodiment of the present application.
A sensing module is provided at the top of the refrigerator interior. It contains a structured-light depth camera whose viewing-angle range is shown by the dotted line in the figure. When the user opens the refrigerator door, the camera starts immediately. It captures a color image of the partition edges near the door body, used for partition detection; this is the second image of the present application. Its viewing angle also covers the door area, i.e. the operation region where the user stores and retrieves food, from which the depth image of that region is acquired; this is the first image.
In step 202, a first ROI is obtained by setting a region of interest for the first image, and a second ROI is obtained by setting a region of interest for the second image.
In some embodiments, the ROI (Region of interest) is a specific Region divided in the first image and the second image, and is used to remove an invalid Region in the image, so as to further reduce the influence caused by the image analysis and calculation of the invalid Region, improve the image analysis efficiency, and reduce the analysis and calculation amount.
In some embodiments, the ROI may also be a specific region divided in the first image and the second image based on the installation location of the refrigerator scene and the depth camera, and is used to remove an invalid region in the image, so as to further reduce the influence on the invalid region image analysis and calculation, improve the image analysis efficiency, and reduce the analysis and calculation amount.
In some embodiments, the first image is a depth image. Before setting the interest region of the first image to obtain the first ROI, the method further comprises the step of carrying out reduction preprocessing on the first image.
The first image acquired by the depth camera is downsampled to reduce the amount of computation in the image recognition algorithm. The main purpose of downsampling is to reduce the high-resolution grayscale map into small maps, generating a thumbnail of the first image.
For example, a first image depthImage of size M×N is downsampled by a factor of r×s to obtain a small image of size (M/r)×(N/s). That is, each r×s window of the original image becomes one pixel, whose value can be the mean, maximum, minimum, or median of all pixels in the window; downsampling yields the smaller depth image depthSample.
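The r×s window reduction can be sketched in NumPy as follows; this is a minimal illustration, and the function name and default reduction operator are the editor's choices among the options listed above.

```python
import numpy as np

def downsample(image, r, s, reduce=np.mean):
    """Shrink an M x N image by r x s: every r x s window becomes one
    pixel. `reduce` may be np.mean, np.max, np.min or np.median, matching
    the options listed in the text."""
    m, n = image.shape
    m, n = m - m % r, n - n % s                 # crop so windows tile exactly
    blocks = image[:m, :n].reshape(m // r, r, n // s, s)
    return reduce(blocks, axis=(1, 3))
```

A 4×4 image downsampled by 2×2 yields a 2×2 thumbnail whose top-left pixel is the mean of the original top-left window.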
In some embodiments, a first ROI is set on the downsampled depth image depthSample.
Fig. 5 shows a schematic diagram of a first ROI of a first image according to an embodiment of the present application.
The first ROI focuses mainly on a refrigerator door region, which is a region between the refrigerator and the user, in the first image.
In some embodiments, the refrigerator door area can be taken as the trapezoidal region enclosed by the user's front side, the refrigerated compartment, and the two refrigerator doors, such as the trapezoidal frame shown in fig. 5.
The first ROI may be set to a region according to actual analysis needs.
In some embodiments, the second image is a color image. Before setting the interest region of the second image to obtain a second ROI, the method further comprises the step of carrying out reduction preprocessing on the second image.
The acquired second image is down sampled to reduce the computational load of the image recognition algorithm. The main purpose of downsampling is to reduce a high-resolution grayscale image into small images and generate thumbnails of the corresponding images.
For example, a color image of size M×N is downsampled by a factor of r×s to obtain a small image of size (M/r)×(N/s).
In some embodiments, the obtained resolution small image may be further subjected to image filtering analysis, such as mean filtering, median filtering, and the like, so as to improve stability and accuracy of the image analysis result.
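A minimal NumPy-only 3×3 median filter, as one example of the filtering mentioned above; the window size and the edge-replication padding are illustrative assumptions, not requirements of the patent.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge replication; removes impulse (speckle)
    noise while keeping edges sharper than a mean filter would."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)
```

A single bright impulse pixel is fully suppressed, since it is only one of nine values in every window that contains it.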
The second image sets a second ROI after down-sampling.
The second ROI is defined, so that the region irrelevant to the position of an analysis partition in the image can be removed, and the calculation amount of image recognition and analysis is reduced. The second ROI is mainly concerned with the edge region of the partition inside the refrigerator near the door body.
The second ROI may be set according to actual analysis requirements.
In step 203, a partition detection is performed based on the second ROI to obtain partition position information.
In some embodiments, the spacing of the refrigerator partitions is adjusted manually or automatically, so the positions and the number of the partitions change from time to time. The partition position information and the partition position thresholds at the moment of detection are identified from the real-time second image collected by the depth camera.
Firstly, the detection of the partition is performed based on the second ROI, and the detection of the partition can be realized by detecting the edge of the partition.
In some embodiments, partition detection is performed by a deep learning model, mainly a convolutional-neural-network detection model such as R-CNN, Fast R-CNN, Faster R-CNN, or the YOLO series; the application is not specifically limited in this respect.
In some embodiments, the deep learning model for partition detection can share the same model as the food material identification module, reducing computation and system complexity. This strategy fuses effectively with the food material identification model and achieves a high detection rate for partition edges. Fig. 6 shows a schematic diagram of partition detection based on the deep learning model in an embodiment of the present application.
In some embodiments, partition detection is achieved by a Hough-line-based detection algorithm. The edges of refrigerator partitions appear as straight lines in the image, so Hough line detection finds them effectively, with a small computational load and good real-time performance. Fig. 7 shows a schematic diagram of partition detection based on the Hough line detection algorithm in an embodiment of the present application.
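As a simplified, hypothetical stand-in for the Hough approach (in practice one would use a full Hough transform, e.g. OpenCV's HoughLinesP): since shelf edges are near-horizontal, the rows with the highest summed vertical-gradient magnitude are the most likely edge rows.

```python
import numpy as np

def shelf_edge_rows(gray, top_k=3):
    """Return candidate image rows containing horizontal shelf edges.

    Stand-in for Hough line detection: shelf edges are near-horizontal,
    so rows where the summed vertical gradient magnitude is largest are
    the most likely edge rows.
    """
    gy = np.abs(np.diff(gray.astype(float), axis=0))  # vertical gradient
    energy = gy.sum(axis=1)                           # per-row edge energy
    return np.sort(np.argsort(energy)[-top_k:])
```

On a synthetic image with a bright horizontal band, the two rows where intensity steps up and down are returned as the edge candidates.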
In some embodiments, partition detection can also be performed with an image-template method, detecting the partition edge information by analyzing the color, structure, and texture of the partition edges.
It should be noted that the method for detecting a separator described in the present application detects an edge area of a separator, including but not limited to using the above schemes.
In some embodiments, when the partition position information changes, the position threshold configuration file (a position threshold configuration table) is updated and stored in the refrigerator system.
The depth camera is calibrated at the factory so that the color image and the depth image it acquires are registered, and the depth value of each partition edge in the refrigerator can therefore be obtained in reverse from the color image. However, the positions of the depth camera and the partitions may shift during installation and transportation, causing the partition edges to appear inclined in the color image.
To ensure accurate detection of the partition edges, the depth position threshold of each partition layer can be set separately for different lateral positions.
Fig. 8 shows a depth division schematic diagram of each level partition of the refrigerator according to the embodiment of the application.
First, three position thresholds can be set for each partition layer.
As shown in the figure, the coordinate system in the scene uses a depth camera as an origin, the depth camera is located at a middle position above the refrigerator, and an optical axis Z of the depth camera is perpendicular to the ground.
By projecting the color image onto the depth image, the pixel row disp of each partition layer in the depth image is obtained, and the pixels of each layer are divided into three segments according to the preset lateral X position ranges.
Then, speckle filtering is applied to all depth values of each segment: erroneous depth values are removed, and the correct depth values are smoothed.
Finally, the mean of each segment is taken as that segment's depth distance value. Traversing the positions of all partition layers yields the depth distance value zij at each partition edge, where i is the partition layer number and j is the lateral X position range, as shown in Table 1.
Number of layers   Transverse X position range 1   Transverse X position range 2   Transverse X position range 3
First layer        z11                             z12                             z13
Second layer       z21                             z22                             z23
Third layer        z31                             z32                             z33
Fourth layer       z41                             z42                             z43
TABLE 1
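The per-segment averaging described above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the outlier-rejection step is a simple stand-in for the speckle and smoothing filtering mentioned in the text, and all names and parameters are assumptions.

```python
import numpy as np

def segment_depth_values(edge_depths, edge_x, x_bounds):
    """edge_depths: depth values sampled along one partition edge;
    edge_x: transverse X coordinate of each sample;
    x_bounds: (x0, x1, x2, x3) delimiting the three transverse ranges.
    Returns [z_i1, z_i2, z_i3], the mean filtered depth per segment."""
    z = []
    for lo, hi in zip(x_bounds[:-1], x_bounds[1:]):
        seg = edge_depths[(edge_x >= lo) & (edge_x < hi)]
        # Crude outlier rejection standing in for speckle filtering:
        # drop samples far (> 3 std) from the segment median.
        med, std = np.median(seg), np.std(seg)
        seg = seg[np.abs(seg - med) <= 3 * std + 1e-9]
        z.append(float(seg.mean()))  # segment depth distance value z_ij
    return z
```

Traversing all partition layers with such a routine would fill the z_ij entries of Table 1.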
The position threshold configuration table is shown in Table 2. The acquired depth distance value z_ij at each partition edge is compared with the position threshold configuration table stored in the system, and the difference between the two is calculated:

Δt_ij = abs(z_ij − d_ij)

If Δt_ij is greater than the preset threshold ΔT, the position threshold in transverse position range j of the layer-i refrigerator partition is updated in configuration Table 2: d_ij = z_ij.
If Δt_ij is less than or equal to the preset threshold ΔT, configuration Table 2 is not updated.
And traversing the positions of all the layer clapboards, and updating the position threshold configuration file stored by the system.
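The update rule above, comparing Δt_ij = abs(z_ij − d_ij) against the preset threshold ΔT for every partition position, might be sketched like this; the nested-list table layout and the function name are assumptions for illustration, not the patent's code.

```python
def update_threshold_table(d, z, delta_T):
    """d: stored position thresholds d[i][j]; z: freshly measured depth
    distance values z[i][j]; delta_T: preset tolerance ΔT.
    Overwrites d[i][j] with z[i][j] wherever abs(z_ij - d_ij) > ΔT,
    and returns the list of updated (i, j) positions."""
    updated = []
    for i, row in enumerate(z):
        for j, z_ij in enumerate(row):
            if abs(z_ij - d[i][j]) > delta_T:
                d[i][j] = z_ij
                updated.append((i, j))
    return updated
```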
Number of layers   Transverse X position range 1   Transverse X position range 2   Transverse X position range 3
First layer        d11                             d12                             d13
Second layer       d21                             d22                             d23
Third layer        d31                             d32                             d33
Fourth layer       d41                             d42                             d43
TABLE 2
In step 204, detection is performed based on the first ROI to obtain a motion region, and detection within the motion region yields the hand centroid motion trajectory.
Fig. 9 shows a depth image schematic diagram of a first ROI in an embodiment of the present application.
The motion region can be obtained by performing image analysis on the depth image depthROI of the first ROI.
First, a background image template of the refrigerator is calculated based on images of a plurality of consecutive first ROIs.
In order to ensure the accuracy of detecting the motion region in the first ROI, an accurate refrigerator background region needs to be extracted so as to distinguish the background region from the motion region in the first ROI.
After the refrigerator leaves the factory, the position of the depth camera is relatively fixed. In the process of accessing the food materials by the user, the depth of the fixed object in the depth image of the first ROI is a fixed value or fluctuates within a small range. The motion region of the first ROI can be reversely acquired by determining the fixed region.
The depth camera acquires continuous depth images, which can also be regarded as acquiring a continuous first ROI. The continuous depth images are numbered with frame numbers 1, 2, 3, …, i, i+1, ….
In some embodiments, a rectangular region RECT is defined in the depth image depthROI of the continuous first ROI, whose dimensions can be set according to the actual situation, as shown by the dashed box in Fig. 9.
In some embodiments, the difference of the valid depth values depthValue in the rectangular region RECT is calculated between two consecutive frames, or between frames several apart, and stored in the matrix matDiff_i.
The valid depth value depthValue satisfies the following condition:
dmin<depthValue<dmax
where dmin and dmax are the minimum and maximum values of the effective depth values in the rectangular region RECT.
The matrix matDiff_i that stores the difference values can be expressed as:

matDiff_i = abs(depthROI_i(RECT) − depthROI_{i−n}(RECT)), n = 1, 2 or 3, i > 3,

where the difference is calculated starting from the third frame of the depth images, and when n = 1 the difference is taken between two consecutive frames.
The number depthNum of pixels in matDiff_i larger than a preset minimum depth threshold depthT is counted.
If depthNum is larger than the preset minimum number threshold depthNumT, a moving object is considered to be present in the rectangular region RECT;
if depthNum is less than or equal to the preset minimum number threshold depthNumT, the region is considered to be in a static state, and the current depth image of the region is taken as the background:

areaFront = depthROI_i(RECT)
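The frame-difference test on the RECT region can be illustrated with a minimal sketch. The validity bounds dmin/dmax and the two thresholds mirror the text, while the array layout and function name are assumptions.

```python
import numpy as np

def rect_is_moving(prev_rect, curr_rect, d_min, d_max, depth_T, depth_num_T):
    """prev_rect, curr_rect: depth crops of the rectangular region RECT
    from two frames. Counts pixels whose valid depth changed by more
    than depth_T; returns True when that count exceeds depth_num_T."""
    # Only pixels valid in both frames (dmin < depthValue < dmax) count.
    valid = ((prev_rect > d_min) & (prev_rect < d_max) &
             (curr_rect > d_min) & (curr_rect < d_max))
    mat_diff = np.abs(curr_rect - prev_rect)          # matDiff_i
    depth_num = int(np.count_nonzero(valid & (mat_diff > depth_T)))
    return depth_num > depth_num_T
```

When this returns False, the region is static and the current crop could serve as the background template areaFront.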
Fig. 10 shows a schematic diagram of a background image template of a refrigerator according to an embodiment of the present application.
In some embodiments, the background image template is dynamically updated after each food material access.
Since the background region of the refrigerator can change during each food material access, the background image template areaFront is updated whenever depthNum meets the threshold condition, i.e. whenever:

depthNum ≤ depthNumT, in which case areaFront = depthROI_i(RECT)
and secondly, calculating the difference value between the first ROI image and the background image template to obtain a motion region.
After the background image template of the refrigerator is obtained, the difference between the depth image depthROI of the first ROI and the background image template areaFront can be calculated; this difference is the motion region areaMoving of the first ROI. Fig. 11 illustrates a schematic diagram of the motion region in the embodiment of the present application.
It can be seen that sporadic speckle noise remains in the motion region of Fig. 11; in some embodiments, this noise can be removed by speckle filtering.
And detecting based on the motion area to obtain a hand mass center motion trail.
In some embodiments, a sliding rectangular window may be used to search through the motion region area moving to detect a hand region of a user.
The sliding rectangular window may be sized p × q. By traversing areaMoving, the number of valid depth values within the sliding rectangular window is counted.
If this number meets a preset threshold, the sliding rectangular window at that moment is the hand region AreaTarget operated by the user.
In some embodiments, after the moving hand region AreaTarget is acquired, the depth values in the hand region are post-processed, mainly by speckle filtering, 3-sigma filtering, and the like, to reduce noise and computation.
The average of the hand region filtered depth values is then calculated as the hand centroid position H (x, y, z).
FIG. 12 is a schematic diagram illustrating a hand centroid location in an embodiment of the present application.
Then, the motion trail of the hand mass center of the user in the food material storing and taking process can be obtained based on the continuous hand mass center positions.
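Computing the hand centroid as the mean of the depth values in the detected hand region might look like the following sketch. The sliding-window search and the filtering steps are elided, and the names are illustrative assumptions, not the patent's code.

```python
import numpy as np

def hand_centroid(depth, mask):
    """depth: HxW depth image; mask: boolean HxW hand-region mask
    (e.g. the winning p x q sliding window AreaTarget).
    Returns H(x, y, z): mean pixel coordinates and mean depth of the
    masked pixels, i.e. the hand centroid position."""
    ys, xs = np.nonzero(mask)       # row/col indices of hand pixels
    zs = depth[ys, xs]              # (already filtered) depth values
    return float(xs.mean()), float(ys.mean()), float(zs.mean())
```

Calling this on each frame of the sequence yields the continuous centroid positions that form the motion trajectory.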
In step 205, the food material access position is identified according to the hand centroid motion trajectory and the partition position information.
The food material access position is identified based on the obtained refrigerator partition position information and the hand centroid motion trajectory, where the partition position information includes the set depth range of each partition area.
When the hand centroid position H(x, y, z) in the hand centroid motion trajectory falls within a set depth range, the position corresponding to the hand centroid indicates the partition layer being accessed, and the food material access position is thereby identified.
For example, if d11 < H(x, y, z) < d21, where the comparison is against the depth component of the centroid, the hand centroid position is identified as the second layer.
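The layer lookup implied by the example above (d11 < H < d21 identifies the second layer) can be sketched as counting how many partition-edge thresholds the centroid depth exceeds. The assumption that thresholds increase from the top shelf down (camera mounted above the door) is mine, not stated verbatim in the text.

```python
def identify_layer(z, thresholds):
    """z: depth component of the hand centroid H(x, y, z);
    thresholds: per-layer edge depths for one transverse X range,
    e.g. [d1j, d2j, d3j, d4j], assumed sorted ascending (top shelf
    nearest the camera). Returns the 1-based layer index, mirroring
    the example 'd11 < H < d21 -> second layer'."""
    count = sum(1 for d in thresholds if z > d)  # edges above the hand
    return count + 1
```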
In some embodiments, the identification result of the food material access position is notified to the user in real time through a voice component of the refrigerator and/or an intelligent terminal establishing network connection with the refrigerator, and the intelligent terminal may be an APP terminal.
In some embodiments, the user actively corrects the access location, constructs a partition confidence thermodynamic diagram, and updates the location threshold profile if the partition confidence level is less than a preset threshold.
The user can actively judge the accuracy of the identification result of the food material access position to actively correct errors.
A confidence thermodynamic diagram is constructed from the user's error-correction frequency. The error-correction frequency of the food material position recognition result is counted over a certain time period; the error-correction frequency at partition position (i, j) is denoted rateM_ij and expressed as:

rateM_ij = numM_ij / sum_ij

where i is the layer index of the refrigerator partitions;
j is the transverse X position range;
numM_ij is the total number of position corrections in transverse position range j of the layer-i refrigerator partition;
sum_ij is the total number of times food material is stored at that position.
The constructed confidence is expressed as:

Con_ij = 1 − rateM_ij
and constructing a confidence thermodynamic diagram according to the confidence obtained from all the positions of the partition.
Fig. 13 shows a confidence thermodynamic diagram of a refrigerator partition in an embodiment of the present application.
The confidence thermodynamic diagram is traversed. If the confidence Con_ij is smaller than a preset confidence threshold Con_th, the system is considered to have detected that the user frequently corrects the position through the APP or by voice; partition position detection continues, and the position threshold in transverse position range j of the layer-i refrigerator partition is updated in the configuration file.
If the confidence Con_ij is greater than or equal to the preset confidence threshold Con_th and the user performs no position correction, the food material access position identification result is considered correct; partition position detection stops, and the most recently stored position threshold configuration file is used for subsequent position identification.
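The confidence computation Con_ij = 1 − numM_ij / sum_ij and the Con_th test can be sketched as follows; the data layout (nested lists indexed by layer i and transverse range j) and the never-accessed fallback are assumptions for illustration.

```python
def build_confidence_map(numM, total):
    """numM[i][j]: corrections at layer i, transverse range j;
    total[i][j]: total accesses there. Positions never accessed are
    assumed fully confident. Returns Con[i][j] = 1 - rateM_ij."""
    return [[(1.0 - numM[i][j] / total[i][j]) if total[i][j] else 1.0
             for j in range(len(numM[i]))]
            for i in range(len(numM))]

def cells_needing_redetection(con, con_th):
    """Positions whose confidence falls below Con_th, i.e. where
    partition detection should continue and the threshold table
    should be updated."""
    return [(i, j) for i, row in enumerate(con)
            for j, c in enumerate(row) if c < con_th]
```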
In some embodiments, the identification of the user access action may be determined from the direction of the hand centroid motion trajectory.
The hand centroid motion trajectory is continuous and can be regarded as a sequence of vectors. By analyzing the vectors of the user's hand centroid motion trajectory in real time, the direction of hand motion can be obtained. For example, a distance threshold line is set at the position of the refrigerator door as the starting position of the access action: when the hand centroid crosses the threshold line from outside the refrigerator, the action is considered a storing action; when the hand leaves the refrigerator and moves away from the threshold line, it is considered a taking action. After the access action is identified, whether the user is storing or taking food material can be further determined by combining the in-hand recognition result with temporal features.
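A simplified stand-in for the direction test above: rather than analyzing trajectory vectors, this sketch classifies the action from the first crossing of the door threshold line. The per-frame inside/outside flags are assumed to come from comparing each centroid position against the threshold line; this is an illustrative simplification, not the patent's method.

```python
def classify_access(inside_flags):
    """inside_flags: per-frame booleans, True when the hand centroid
    is past the door threshold line (inside the refrigerator).
    Returns 'store' when the trajectory first crosses from outside to
    inside, 'take' when it first crosses from inside to outside,
    and None if no crossing is observed."""
    for prev, curr in zip(inside_flags, inside_flags[1:]):
        if curr and not prev:
            return "store"   # entered across the threshold line
        if prev and not curr:
            return "take"    # left and moved away from the line
    return None
```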
In some embodiments, based on the result of access-action identification and tracking, it is determined whether the action enters a preset key area. If the action enters or leaves the key area, the color image corresponding to that moment is extracted, and the image of the ROI area in the key frame is sent to the food material identification module of the refrigerator; in some embodiments, the food material identification module can use a CNN algorithm to identify the food material type. The key area can be set according to practical experience and hardware parameters, and is not described in detail in this application.
The application also provides a refrigerator, which comprises a refrigerator main body, a display screen, a depth camera and a controller.
The refrigerator main body is provided with a depth camera, a controller, and a display screen. The display screen is arranged on the outer shell of the refrigerator main body and is configured to display an interface. The depth camera is configured to capture a first image and a second image of the refrigerator door area. The controller is configured to execute any of the methods for identifying the food material access position of the refrigerator according to the embodiments of the present application and to display the food material information on the display screen.
In some embodiments, the controller is further configured to present the food material information to another smart device. For example, based on information such as food material type, action, and location, the controller sends the relevant food material information to a smart terminal of the user, such as a smartphone with a specific APP installed. In some embodiments, a food material service can be provided for the user through the user's smart terminal and the refrigerator's display screen, mainly covering functions such as food material storage-location query, recipe recommendation, and purchase reminders.
In some embodiments, the refrigerator includes a food material identification module that may be deployed locally or to the cloud. Because the food material identification module only analyzes the ROI area in the key frame image, the consumption of computing resources and the data volume of network transmission can be reduced, and meanwhile, the real-time property of food material type identification can be improved.
What has been described above includes examples of implementations of the invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but it is to be appreciated that many further combinations and permutations of the subject innovation are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Moreover, the foregoing description of illustrated implementations of the present application, including what is described in the "abstract," is not intended to be exhaustive or to limit the disclosed implementations to the precise forms disclosed. While specific implementations and examples are described herein for illustrative purposes, various modifications are possible which are considered within the scope of such implementations and examples, as those skilled in the relevant art will recognize.
In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary aspects of the claimed subject matter. In this regard, it will also be recognized that the innovation includes a system as well as a computer-readable storage medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
The above-described systems/circuits/modules have been described with respect to interaction between several components/blocks. It can be appreciated that such systems/circuits and components/blocks can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, in various permutations and combinations of the above. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers (e.g., a management layer) may be provided to communicatively couple to such sub-components in order to provide comprehensive functionality. Any components described herein may also interact with one or more other components not specifically described herein but known to those of skill in the art.
Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains certain errors necessarily resulting from the standard deviation found in their respective testing measurements. Moreover, all ranges disclosed herein are to be understood to encompass any and all subranges subsumed therein. For example, a range of "less than or equal to 11" can include any and all subranges between (and including) the minimum value of zero and the maximum value of 11, i.e., any and all subranges have a minimum value equal to or greater than zero and a maximum value of equal to or less than 11 (e.g., 1 to 5). In some cases, the values as described for the parameters can have negative values.
In addition, while a particular feature of the subject innovation may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "includes," "including," "has," "contains," variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term "comprising" as an open transition word without precluding any additional or other elements.
Reference throughout this specification to "one implementation" or "an implementation" means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation. Thus, the appearances of the phrases "in one implementation" or "in an implementation" in various places throughout this specification are not necessarily all referring to the same implementation. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations.
Furthermore, reference throughout this specification to "an item" or "a file" means that a particular structure, feature, or object described in connection with the implementation is not necessarily the same object. Further, "file" or "item" can refer to objects in various formats.
The terms "component," "module," "system," and the like as used herein are generally intended to refer to a computer-related entity, either hardware (e.g., a circuit), a combination of hardware and software, or an entity associated with an operating machine having one or more specific functionalities. For example, a component may be, but is not limited to being, a process running on a processor (e.g., a digital signal processor), a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. Although individual components are depicted in various implementations, it is to be appreciated that the components can be represented using one or more common components. Further, the design of each implementation can include different component placements, component selections, etc. to achieve optimal performance. Furthermore, "means" can take the form of specially designed hardware; generalized hardware made special by the execution of software thereon that enables the hardware to perform specific functions; software stored on a computer-readable medium; or a combination thereof.
Moreover, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration". Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word "exemplary" is intended to present concepts in a concrete fashion. As used herein, the term "X employs A or B" is intended to mean any of the natural inclusive permutations, unless otherwise indicated or clear from the context. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.

Claims (10)

1. A method for identifying an access position of food materials in a refrigerator, the method comprising:
acquiring a first image and a second image of a refrigerator door area;
setting an interest region for the first image to obtain a first ROI, and setting an interest region for the second image to obtain a second ROI;
detecting a partition plate based on the second ROI to obtain partition plate position information, wherein the partition plate position information is updated and stored in a position threshold configuration file of the refrigerator system when being changed;
detecting based on the first ROI to obtain a motion area, and detecting in the motion area to obtain a hand mass center motion track;
and identifying to obtain the food material storing and taking position according to the hand mass center motion track and the position information of the partition plate.
2. The method for identifying the food material access position of the refrigerator as claimed in claim 1, wherein the first image is a depth image and the second image is a color image.
3. The method for identifying the food material access position of the refrigerator as claimed in claim 1, wherein the position information of the partition is updated and stored in a position threshold configuration file of the refrigerator system when the position information of the partition changes.
4. The method for identifying the food material access position of the refrigerator as claimed in claim 1,
before setting a region of interest for the first image to obtain a first ROI, performing reduction preprocessing on the first image;
before setting the interest region of the second image to obtain a second ROI, the method further comprises the step of carrying out reduction preprocessing on the second image.
5. The method for identifying food material access positions for a refrigerator as claimed in claim 2, wherein the motion region is detected based on the first ROI, and the following steps are performed:
calculating to obtain a background image template of the refrigerator based on a plurality of continuous images of the first ROI;
and calculating the difference value between the first ROI image and the background image template to obtain a motion region.
6. The method for identifying food material access positions for the refrigerator as claimed in claim 5, wherein the background image template is dynamically updated after each food material access.
7. The method for identifying the food material access position of the refrigerator as claimed in claim 1, wherein the partition detection is specifically implemented as detecting the edge of each partition in the refrigerator, and the partition detection can be performed based on a deep learning model or based on Hough line detection.
8. The method for identifying food material access positions for a refrigerator as claimed in claim 1, further comprising the steps of, after identifying the food material access positions: and the user actively corrects the access position, constructs a confidence thermodynamic diagram of the partition plate, and updates the position threshold configuration file if the confidence of the partition plate is smaller than a preset threshold.
9. A refrigerator, characterized by comprising:
a refrigerator main body configured to be provided with a depth camera, a controller;
a display screen provided to an outer case of the refrigerator main body, configured to display an interface;
a depth camera configured to capture first and second images of a refrigerator door area;
a controller configured to perform the method according to any one of claims 1 to 8 and to present food material information to the display screen.
10. The refrigerator of claim 9, wherein the controller is further configured to present the food material information to another smart device.
CN202010295608.7A 2020-04-15 2020-04-15 Method for identifying storage and taking positions of refrigerator food materials and refrigerator Active CN111503991B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010295608.7A CN111503991B (en) 2020-04-15 2020-04-15 Method for identifying storage and taking positions of refrigerator food materials and refrigerator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010295608.7A CN111503991B (en) 2020-04-15 2020-04-15 Method for identifying storage and taking positions of refrigerator food materials and refrigerator

Publications (2)

Publication Number Publication Date
CN111503991A CN111503991A (en) 2020-08-07
CN111503991B true CN111503991B (en) 2021-10-08

Family

ID=71877608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010295608.7A Active CN111503991B (en) 2020-04-15 2020-04-15 Method for identifying storage and taking positions of refrigerator food materials and refrigerator

Country Status (1)

Country Link
CN (1) CN111503991B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11692767B2 (en) * 2020-10-07 2023-07-04 Haier Us Appliance Solutions, Inc. Refrigerator appliance and methods of user identification
US11692769B2 (en) * 2020-11-17 2023-07-04 Haier Us Appliance Solutions, Inc. Inventory management system for a refrigerator appliance
WO2022206043A1 (en) * 2021-03-31 2022-10-06 海信集团控股股份有限公司 Smart refrigerator, access action recognition method, device, and medium
CN115704622A (en) * 2021-08-05 2023-02-17 海信集团控股股份有限公司 Intelligent refrigerator and food material storing and taking action identification method
CN113915840B (en) * 2021-10-18 2022-12-16 海信冰箱有限公司 Refrigerator and food material positioning method thereof
CN116358211A (en) * 2021-12-28 2023-06-30 海信集团控股股份有限公司 Intelligent refrigerator, food management method, equipment and medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050026241A (en) * 2003-09-09 2005-03-15 주식회사 대우일렉트로닉스 Vegetables room open and closed device of refrigerator
JP5170554B2 (en) * 2008-09-29 2013-03-27 株式会社ダイフク Learning device in article storage facility
FR2963327B1 (en) * 2010-07-27 2012-08-24 Air Liquide DEVICE FOR STORING ARTICLES UNDER CONTROLLED ATMOSPHERE
TR201706864A3 (en) * 2017-05-10 2018-12-21 Arcelik As A COOLER
CN110472515B (en) * 2019-07-23 2021-04-13 创新先进技术有限公司 Goods shelf commodity detection method and system
CN110674789B (en) * 2019-10-12 2022-04-29 海信集团有限公司 Food material management method and refrigerator

Also Published As

Publication number Publication date
CN111503991A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN111503991B (en) Method for identifying storage and taking positions of refrigerator food materials and refrigerator
CN111460926B (en) Video pedestrian detection method fusing multi-target tracking clues
CN107563313B (en) Multi-target pedestrian detection and tracking method based on deep learning
US8548226B2 (en) Stereo image processing device and method
US8073196B2 (en) Detection and tracking of moving objects from a moving platform in presence of strong parallax
EP2858008A2 (en) Target detecting method and system
US20140177915A1 (en) Method and apparatus for detecting object
US20100295783A1 (en) Gesture recognition systems and related methods
CN105069804B (en) Threedimensional model scan rebuilding method based on smart mobile phone
CN111476194B (en) Detection method for working state of sensing module and refrigerator
CA2601832A1 (en) Human detection and tracking for security applications
CN106327488B (en) Self-adaptive foreground detection method and detection device thereof
WO2019007253A1 (en) Image recognition method, apparatus and device, and readable medium
CN110992424B (en) Positioning method and system based on binocular vision
CN114454875A (en) Urban road automatic parking method and system based on reinforcement learning
CN113139402B (en) A kind of refrigerator
CN113947770B (en) Method for identifying object placed in different areas of intelligent cabinet
CN111539282A (en) Method for identifying storage room article access position and refrigerator
EP2677462A1 (en) Method and apparatus for segmenting object area
CN110864670B (en) Method and system for acquiring position of target obstacle
CN109948489A (en) A kind of face identification system and method based on the fusion of video multiframe face characteristic
CN111160292B (en) Human eye detection method
CN109344685A (en) A kind of wisdom pallet and its intelligent positioning method for tracing
CN112784725A (en) Pedestrian anti-collision early warning method and device, storage medium and forklift
CN113622786B (en) Automatic door control method, system and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant