CN111950414A - Cabinet food identification system and identification method - Google Patents

Cabinet food identification system and identification method

Info

Publication number
CN111950414A
Authority
CN
China
Prior art keywords
food
cabinet
identification
server
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010759222.7A
Other languages
Chinese (zh)
Inventor
张元本
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Weilin Software Co ltd
Original Assignee
Guangzhou Weilin Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Weilin Software Co ltd
Priority to CN202010759222.7A
Publication of CN111950414A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/68 Food, e.g. fruit or vegetables

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Cold Air Circulating Systems And Constructional Details In Refrigerators (AREA)

Abstract

The invention discloses a cabinet food identification system and an identification method. The system comprises an identification mechanism, a video streaming server, an identification server and a storage server, wherein: the identification mechanism acquires image information of food entering and leaving the cabinet and transmits it to the video streaming server; the video streaming server receives the image information collected by the identification mechanism and distributes it to the identification server; the identification server receives the image information, recognizes the corresponding pictures, performs tracking judgment and identification judgment on them, and finally sends the judgment result to the storage server; and the storage server receives the judgment result and updates the food data information of the cabinet. The cabinet food identification system and identification method offer a higher degree of intelligence and a better identification effect.

Description

Cabinet food identification system and identification method
Technical Field
The invention relates in particular to a cabinet food identification system and an identification method.
Background
A smart home (home automation) uses the home as a platform and integrates home-life facilities through structured cabling, network communication, security, automatic control and audio/video technologies, building an efficient management system for household facilities and daily affairs. It improves home safety, convenience, comfort and aesthetics, and creates an environmentally friendly, energy-saving living environment.
In recent years, with the popularization of smart homes, more and more household devices have become intelligent. The refrigerator, cabinet, food cabinet or storage cabinet that stores food is one of the key points of food identification and management in the existing smart home. Conventional products adopt a built-in camera, so the shooting field of view is limited, the identification rate is low, the overall price is high, and some provide few or even no identification functions.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a cabinet food identification system and an identification method with a high degree of intelligence and a good identification effect.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a cabinet food identification system comprises an identification mechanism, a video streaming server, an identification server and a storage server, wherein:
the identification mechanism is used for acquiring image information of food entering and exiting the cabinet and transmitting the image information to the video streaming server;
the video streaming server receives the image information collected by the identification mechanism and distributes the image information to the identification server;
the identification server receives the image information, identifies the picture according to the image information, performs tracking judgment and identification judgment through the picture, and finally sends a judgment result to the storage server;
and the storage server receives the judgment result and updates the food data information in the cabinet.
Preferably, the cabinet is one of a cupboard, a refrigerator, a food cabinet and a storage cabinet.
Furthermore, the identification mechanism, the video streaming server, the identification server and the storage server are all remote cloud servers, and data transmission between them is performed over internal and external networks.
Furthermore, the identification mechanism comprises at least one camera arranged outside the box body and positioned at the top of the box body.
Preferably, the identification mechanism comprises at least one camera arranged in the box body and positioned at the bottom of the box body.
Preferably, the food data information includes the food type, cabinet ID, food characteristics, put-in time and take-out time.
Another technical problem to be solved by the present invention is to provide a cabinet food identification method applying the above cabinet food identification system, comprising the following steps:
1) acquiring images: the camera is aimed at the area in front of the cabinet door and photographs the articles put into or taken out of the cabinet; at least one camera is bound to the cabinet, each camera has an ID, and the cameras are connected to a remote server;
2) transmitting and distributing images: the images acquired for each cabinet are uploaded to the video streaming server and distributed to the identification server by the video streaming server;
3) identifying food: the identification model loaded on the identification server recognizes the corresponding picture; if the picture contains a target, the current cabinet ID and picture information are retained and a food tracking mode is started;
4) tracking food: the state of the food is judged from its moving trend over consecutive frames; after the model recognizes that the cabinet door is open, the food is judged to be put in when it approaches the cabinet and to be taken out when it moves away from the cabinet;
5) managing food: after obtaining the food state, the identification server gives feedback according to the different states; when the food is judged to be put in, the relevant food data is sent to the storage server and stored, the cabinet food information is updated, and the result is sent to the terminal for display; when the food is judged to be taken out, the relevant food data is sent to the storage server, the characteristics of foods of the same type returned by the storage server are received, the most likely food is screened out by comparison and returned to the storage server, the food information is stored and updated, and the result is sent to the terminal for display.
Preferably, in step 4), the food tracking method is as follows:
the in-and-out direction of the food is determined by the mounting position of the camera; let P(x, y) denote the coordinates of the food in the camera view, P_pre(x, y) the food coordinates in the previous frame, P_now(x, y) the food coordinates in the current frame, and P_diff(x, y) the difference between the two, given by P_diff(x, y) = P_now(x, y) - P_pre(x, y); when the food approaches the camera, P_diff(x, y) is positive, and when it moves away from the camera, P_diff(x, y) is negative; if the food coordinates P_diff(x, y) are positive for N consecutive frames, where N is an algorithm preset value, the state is regarded as food being put in, and if P_diff(x, y) is negative for N consecutive frames, the state is regarded as food being taken out.
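A minimal sketch of this tracking rule in Python is given below; the coordinate convention (larger y meaning closer to the camera), the data layout and the function names are illustrative assumptions, not part of the patent.

def movement_sign(p_pre, p_now):
    # Sign of P_diff = P_now - P_pre along the camera's in/out axis.
    # Assumption for this sketch: y grows as the food approaches the camera.
    dy = p_now[1] - p_pre[1]
    return 1 if dy > 0 else (-1 if dy < 0 else 0)

def judge_state(track, n=5):
    # track: list of (x, y) food centers over consecutive frames.
    # Returns 'put', 'take' or 'unsure' using the N-consecutive-frames rule.
    if len(track) < n + 1:
        return "unsure"
    signs = [movement_sign(a, b) for a, b in zip(track[-n - 1:-1], track[-n:])]
    if all(s > 0 for s in signs):
        return "put"    # approaching the camera for N frames
    if all(s < 0 for s in signs):
        return "take"   # moving away from the camera for N frames
    return "unsure"

# Example: centers drifting toward the camera are judged as "put".
print(judge_state([(100, 40), (101, 55), (99, 70), (100, 88), (98, 104), (99, 121)], n=5))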
Further, in step 5), the terminal includes a mobile phone APP, a WeChat applet and the display screen of the cabinet.
The invention has the following beneficial effects:
the device is simple, inexpensive and widely applicable, and can be deployed on any cabinet that stores food; computation is performed on dedicated cloud servers with strong computing power, so different kinds of articles can be recognized with a good identification effect; after identification, the food type, cabinet ID, food characteristics, put-in time and take-out time can be stored in the storage server, further increasing the intelligence of the home.
Drawings
FIG. 1 is a schematic diagram of the device installation in the cabinet food identification system and identification method of the present invention;
FIG. 2 is a schematic diagram of the server modules in the cabinet food identification system and identification method of the present invention;
FIG. 3 is a schematic overall flow chart of the cabinet food identification system and identification method of the present invention.
Detailed Description
The present invention is further described with reference to the following drawings and specific examples so that those skilled in the art can better understand the present invention and can practice the present invention, but the examples are not intended to limit the present invention.
Examples
The images are obtained mainly by cameras mounted on the refrigerator. As shown in the device installation diagram of fig. 1, camera 1 and camera 2 are mounted above and in front of the refrigerator so that food entering and leaving can be photographed when the refrigerator door is opened. Two cameras are used in this example; in an actual installation, the number and positions of the cameras are adjusted according to the size of the refrigerator and the installation environment.
For image transmission and distribution, the camera ID must be bound with the refrigerator ID before transmission. In this example all cameras are network cameras that can connect to a remote server. During initialization, the server binds the IDs of the connected cameras with the refrigerator, so that each subsequent connection can confirm which refrigerator the data comes from.
In this example, as shown in fig. 2, the cameras bound to the refrigerators upload image data to the video streaming server through the network. The relationship between cameras and a video streaming server is N to 1: each video streaming server receives the data uploaded by N cameras. Meanwhile, the video streaming server forwards the processed image data to the identification servers, and the relationship between a video streaming server and identification servers is 1 to N. The video streaming server acts as an intermediate server: on the one hand it connects to the terminal cameras, caching and pre-processing the image data transmitted by the different cameras; on the other hand it connects to the identification servers, distributing and scheduling the image data to different identification servers according to their processing progress.
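As a rough illustration of this 1-to-N distribution role, the sketch below buffers frames per camera and always forwards to the identification server with the smallest backlog; the class name, the queue layout and the send callback are hypothetical, not taken from the patent.

import heapq
from collections import deque

class StreamDistributor:
    # Illustrative only: caches frames from N cameras and dispatches them
    # to the least-loaded identification server (1-to-N relationship).
    def __init__(self, recognizer_ids):
        self.load = [(0, rid) for rid in recognizer_ids]   # (pending, id) min-heap
        heapq.heapify(self.load)
        self.buffers = {}                                  # camera_id -> deque of frames

    def receive(self, camera_id, frame):
        # Cache a frame coming from a terminal camera.
        self.buffers.setdefault(camera_id, deque()).append(frame)

    def dispatch_one(self, send):
        # Forward one buffered frame via send(recognizer_id, camera_id, frame).
        for camera_id, buf in self.buffers.items():
            if buf:
                pending, rid = heapq.heappop(self.load)
                send(rid, camera_id, buf.popleft())
                heapq.heappush(self.load, (pending + 1, rid))
                return True
        return False

    def mark_done(self, rid):
        # Called when an identification server reports completion, lowering its load.
        self.load = [(p - 1 if r == rid else p, r) for p, r in self.load]
        heapq.heapify(self.load)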
As shown in fig. 2, during the judgment process the identification server exchanges data with the storage server according to the processing situation. For example, when the food is judged to be taken out, the corresponding food records are fetched from the storage server and compared, and the comparison result is returned to the storage server for storage. When the food is judged to be put in, data including the food type, refrigerator ID and food characteristics are sent to the storage server for storage. The identification servers and the storage server are in an N-to-1 relationship: one storage server can be connected to several identification servers at the same time.
Specifically, the identification server is loaded with a pre-trained recognition model and receives the data transmitted from the video streaming server, including but not limited to the camera ID, image data and image timestamp. If the image does not contain a target, this recognition ends. If the image contains a target, information such as the current camera ID, image data, image timestamp, target area, type and confidence is stored in the storage server, and the food tracking mode is started.
The food tracking is mainly used to determine whether the food is entering or leaving the refrigerator. The in-and-out direction of the food is determined by the mounting position of the camera. Let P(x, y) denote the coordinates of the food in the camera view, P_pre(x, y) the food coordinates in the previous frame, P_now(x, y) the food coordinates in the current frame, and P_diff(x, y) the difference between the two, given by:
P_diff(x, y) = P_now(x, y) - P_pre(x, y)
When the food approaches the camera, P_diff(x, y) is positive; when it moves away from the camera, P_diff(x, y) is negative. The state of the food is judged from the moving trend of the same food over consecutive frames. For example, if the food coordinates P_diff(x, y) are positive for N consecutive frames, where N is an algorithm preset value, the state is regarded as food being put in; if P_diff(x, y) is negative for N consecutive frames, the state is regarded as food being taken out.
Referring to fig. 3, a cabinet food identification method includes the following steps:
1) acquiring images: the camera is aimed at the area in front of the cabinet door and photographs the articles put into or taken out of the cabinet; at least one camera is bound to the cabinet, each camera has an ID, and the cameras are connected to a remote server;
2) transmitting and distributing images: the images acquired for each cabinet are uploaded to the video streaming server and distributed to the identification server by the video streaming server;
3) identifying food: the identification model loaded on the identification server recognizes the corresponding picture; if the picture contains a target, the current cabinet ID and picture information are retained and a food tracking mode is started;
4) tracking food: the state of the food is judged from its moving trend over consecutive frames; after the model recognizes that the cabinet door is open, the food is judged to be put in when it approaches the cabinet and to be taken out when it moves away from the cabinet;
5) managing food: after obtaining the food state, the identification server gives feedback according to the different states; when the food is judged to be put in, the relevant food data is sent to the storage server and stored, the cabinet food information is updated, and the result is sent to the terminal for display; when the food is judged to be taken out, the relevant food data is sent to the storage server, the characteristics of foods of the same type returned by the storage server are received, the most likely food is screened out by comparison and returned to the storage server, the food information is stored and updated, and the result is sent to the terminal for display.
Food management mainly involves the data flow among several modules. Specifically, after obtaining the food state, the identification server gives different feedback according to the different states. When the food is judged to be put in, data including the food type, refrigerator ID and food characteristics are sent to the storage server and stored, the refrigerator food information is updated, and the result is sent to the terminal for display. When the food is judged to be taken out, the food data and refrigerator ID are sent to the storage server, the characteristics of foods of the same type returned by the storage server are received, the most likely food is screened out by comparison and returned to the storage server, the food information is stored and updated, and the result is sent to the terminal for display. When the traffic is not large, the video streaming server and the storage server can be deployed on the same server. The display terminal includes but is not limited to a mobile phone APP, a WeChat applet and the display screen of the intelligent refrigerator.
The camera acquisition system is as follows:
1. Acquiring video data from the camera:
When the camera detects that an article in front of the lens meets a trigger condition, such as a timed trigger, an event (article movement) trigger or a server soft trigger, it sends a signal to the video receiver. After receiving the signal, the video receiver establishes a socket connection with the camera, and once the connection is established the camera sends video data to the video receiver. The transmitted data include the video bytes, the camera id (cameraId) and a unique id for each video (requestId, used to distinguish videos of the same camera from different time periods).
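The items listed above (video bytes, cameraId and requestId) suggest a simple length-prefixed message; the framing below is only one workable guess at such a layout and is not the actual wire format used by the camera.

import struct

def pack_message(camera_id: str, request_id: str, video_bytes: bytes) -> bytes:
    # Hypothetical framing: lengths of cameraId and requestId, length of the
    # video payload, then the three fields back to back.
    cid = camera_id.encode("utf-8")
    rid = request_id.encode("utf-8")
    header = struct.pack("!HHI", len(cid), len(rid), len(video_bytes))
    return header + cid + rid + video_bytes

def unpack_message(payload: bytes):
    cid_len, rid_len, vid_len = struct.unpack("!HHI", payload[:8])
    offset = 8
    camera_id = payload[offset:offset + cid_len].decode("utf-8")
    offset += cid_len
    request_id = payload[offset:offset + rid_len].decode("utf-8")
    offset += rid_len
    video_bytes = payload[offset:offset + vid_len]
    return camera_id, request_id, video_bytes

# Round-trip check
msg = pack_message("cam-01", "req-20200731-001", b"raw h264 bytes")
assert unpack_message(msg) == ("cam-01", "req-20200731-001", b"raw h264 bytes")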
2. Decoding processing of video data
The video receiver forwards the acquired video data to the video segmentation processor. Because the video data sent by the camera each time is fragmentary and a single reception is not enough to compose one picture, the video segmentation processor has to process the raw data. After receiving a segment of data it starts a new thread keyed by the camera id (cameraId), continuously collects the raw video data arriving within the next minute, performs h264 decoding on the accumulated data every 3 seconds, and captures about 10 frames of pictures per second. After the first successful capture of each video (distinguished by requestId), a new video picture processor is created and placed into the queue of the video picture processor scheduling thread. Each captured picture is placed into a container in the video picture processor, and the video picture processor is marked as fully captured once the whole video has been captured.
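A rough sketch of this accumulate-and-decode step is shown below, assuming the buffered H.264 chunk can be written to a temporary file and decoded with OpenCV; the 3-second window and roughly 10 frames per second follow the description, while the class name, the file-based decoding path and the timing logic are assumptions.

import os
import tempfile
import time
import cv2  # opencv-python

class SegmentProcessor:
    # Accumulates raw H.264 bytes per cameraId, decodes every ~3 seconds and
    # keeps roughly 10 frames per second of the decoded video.
    def __init__(self, decode_interval=3.0, frames_per_second=10):
        self.buffers = {}          # camera_id -> bytearray of raw data
        self.last_decode = {}      # camera_id -> time of last decode
        self.decode_interval = decode_interval
        self.frames_per_second = frames_per_second

    def feed(self, camera_id, chunk):
        self.buffers.setdefault(camera_id, bytearray()).extend(chunk)
        now = time.time()
        if now - self.last_decode.get(camera_id, 0) >= self.decode_interval:
            self.last_decode[camera_id] = now
            return self._decode(camera_id)
        return []

    def _decode(self, camera_id):
        data = bytes(self.buffers[camera_id])
        with tempfile.NamedTemporaryFile(suffix=".h264", delete=False) as f:
            f.write(data)
            path = f.name
        frames = []
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25       # raw streams may report 0
        step = max(1, int(round(fps / self.frames_per_second)))
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                frames.append(frame)                # goes into the picture container
            index += 1
        cap.release()
        os.unlink(path)
        return frames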
3. Processing video pictures
When the server starts, a video picture processor scheduling thread is started. This thread continuously takes video picture processors out of its queue. When a video picture processor is taken out, a picture recognition interface caller is used; the caller provides the function of calling the picture recognition interface, which recognizes the pictures and returns the picture processing result (including the take-out or put-in action, the guessed item such as a food, and the quantity of such items), and the result is then assigned to the picture recognition result processor.
4. Picture recognition result processing
The picture recognition result processor stores the obtained results in the cloud, for example: 1 apple put into the refrigerator, 3 watermelons put into the refrigerator, 2 boxes of beef taken out.
The video processing system comprises the following components:
the whole system comprises an imaging module, a labeling module, a training module, an identification module, a trajectory tracking module and a behavior judgment module. The imaging module is placed at a fixed position through an external network camera, and shoots and captures a moving target in a visual field range.
The labeling module provides training data for the training module and is divided into three parts: the first labels the palm, the second labels hand-held articles, and the third labels different types of articles.
The training module can select different models to train on the labeled images, such as the two-stage rcnn series or the one-stage yolo series. The training categories include 3 classes: the palm, the palm holding an item, and the item itself. The first two classes are fixed, and the last class is selected according to actual needs and can be one article or several articles.
The recognition module uses the trained model to recognize the picture. The targets include the palm A, the palm holding an object B, and the held object C, and the module outputs the recognized target position box, category class and score. The reason for recognizing all three is that a hand-held object is easily occluded and has a low recognition rate, while the palm has a high recognition rate; using the palm as an auxiliary judgment condition makes it easier to reduce the misjudgment rate.
The track tracking module comprises the following specific steps:
1. and judging the object type and the position. And judging whether the class returned by the identification module contains the class palm class _ A or not, if not, considering that the frame has no target, and waiting for the detection of the next frame. If class _ A is included, save the data of box _ A, score _ A and proceed to check class _ B, class _ C. If the related data exists, the related data of class, box and score are saved.
2. Target combination. Because of multi-target detection, there may be multiple class_A, class_B and class_C detections. The intersection over union IOU_A_C of each class_A target with each class_C target is used to determine which targets belong together, so that wrong pairings are eliminated. The formula is:
IOU_A_C = area(box_A ∩ box_C) / area(box_A ∪ box_C)
The multiple IOU_A_C values form a set E(IOU_A_C), and the set E(IOU_A_C) is sorted in increasing order of IOU_A_C (illustrated in the sketch after step 3).
3. Deduplication. Because the set E(IOU_A_C) from the pairing step and the set E(class_B) of the multiple class_B detections both represent palm-held objects, duplicates may occur, and a target that appears in both sets is removed by non-maximum suppression (nms). nms normally removes overlapping boxes by score; the elements of E(class_B) carry a class_B score, but the elements of E(IOU_A_C) are IOU values and cannot be compared directly with score_B, so a comparable score is first set for the elements of E(IOU_A_C) and then the comparison is carried out.
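A compact sketch of steps 2 and 3 is given below: intersection-over-union pairing of palm boxes (class_A) with held-object boxes (class_C), followed by score-based non-maximum suppression against the direct class_B detections. The (x1, y1, x2, y2) box format, the thresholds and the reuse of score_C as the comparable score for paired targets are assumptions made for illustration.

def iou(box_a, box_c):
    # Intersection over union of two boxes given as (x1, y1, x2, y2).
    ax1, ay1, ax2, ay2 = box_a
    cx1, cy1, cx2, cy2 = box_c
    ix1, iy1 = max(ax1, cx1), max(ay1, cy1)
    ix2, iy2 = min(ax2, cx2), min(ay2, cy2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (cx2 - cx1) * (cy2 - cy1) - inter
    return inter / union if union > 0 else 0.0

def pair_targets(boxes_a, boxes_c, threshold=0.1):
    # Step 2: keep (iou, box_a, box_c) pairs above the threshold,
    # sorted in increasing IOU order like the set E(IOU_A_C).
    pairs = [(iou(a, c), a, c) for a in boxes_a for c in boxes_c]
    return sorted(p for p in pairs if p[0] >= threshold)

def nms(candidates, iou_threshold=0.5):
    # Score-based non-maximum suppression over (box, score) candidates.
    kept = []
    for box, score in sorted(candidates, key=lambda c: c[1], reverse=True):
        if all(iou(box, k) < iou_threshold for k, _ in kept):
            kept.append((box, score))
    return kept

def merge_held_objects(class_b, paired):
    # Step 3: merge direct class_B detections (box_B, score_B) with the
    # paired class_C boxes given as (box_C, score_C); reusing score_C here
    # makes the two sets comparable (an assumption for this sketch).
    return nms(class_b + paired)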
The behavior judgment module confirms the track over consecutive frames. A track has three states: take, put and unsure. The state is determined by the trend, i.e. by the position difference between the center point of the current target box_now and that of the target box_pre of the previous frame. Assuming that the suspected target set of the current frame is E(object) and the current track set is E(track), the specific judgment steps are as follows:
1. Trend determination:
if box_now.x > box_pre.x and box_now.y > box_pre.y: take
if box_now.x < box_pre.x and box_now.y < box_pre.y: put
if (box_now.x - box_pre.x) * (box_now.y - box_pre.y) < 0: unsure
2. Track update. If the moving trend of the first object obj of E(object) is consistent with some track_x in E(track), the object is considered to belong to that track and the track is updated. If not, the next target in E(object) is selected in turn for comparison with the tracks of E(track). If no track matches, the current object of E(object) is listed as a suspected track and added to E(track).
3. Track elimination. When the trend of a track in E(track) is the same over multiple consecutive frames, the behavior is determined and the track is deleted.
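The three steps of the behavior judgment module can be sketched as follows; the center-point matching rule, the matching distance and the confirmation count are assumptions chosen for illustration, not values given in the patent.

def trend(box_pre, box_now):
    # Boxes are (x, y) center points; mirrors the three trend rules above.
    dx, dy = box_now[0] - box_pre[0], box_now[1] - box_pre[1]
    if dx > 0 and dy > 0:
        return "take"
    if dx < 0 and dy < 0:
        return "put"
    return "unsure"

class Track:
    def __init__(self, center):
        self.centers = [center]
        self.trends = []

    def update(self, center):
        self.trends.append(trend(self.centers[-1], center))
        self.centers.append(center)

def judge(tracks, objects, confirm_frames=5, match_dist=80):
    # tracks: list of Track (the set E(track)); objects: current-frame
    # center points (the set E(object)). Updates the tracks, starts
    # suspected tracks, and returns the behaviors confirmed this frame.
    for obj in objects:
        matched = next((t for t in tracks
                        if abs(t.centers[-1][0] - obj[0]) < match_dist
                        and abs(t.centers[-1][1] - obj[1]) < match_dist), None)
        if matched:
            matched.update(obj)          # step 2: track update
        else:
            tracks.append(Track(obj))    # step 2: suspected new track
    confirmed = []
    for t in list(tracks):
        last = t.trends[-confirm_frames:]
        if len(last) == confirm_frames and len(set(last)) == 1 and last[0] != "unsure":
            confirmed.append(last[0])
            tracks.remove(t)             # step 3: track elimination
    return confirmed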
The above-described embodiments do not limit the scope of the present invention, and the invention is not limited to these embodiments; various other modifications, substitutions and alterations can be made to the above structure without departing from the basic technical concept of the invention, according to common technical knowledge and conventional means in the field.

Claims (9)

1. A cabinet food identification system, characterized in that it comprises an identification mechanism, a video streaming server, an identification server and a storage server, wherein:
the identification mechanism is used for acquiring image information of food entering and exiting the cabinet and transmitting the image information to the video streaming server;
the video streaming server receives the image information collected by the identification mechanism and distributes the image information to the identification server;
the identification server receives the image information, identifies the picture according to the image information, performs tracking judgment and identification judgment through the picture, and finally sends a judgment result to the storage server;
and the storage server receives the judgment result and updates the food data information in the cabinet.
2. A cabinet food identification system as claimed in claim 1, wherein: the cabinet is one of a cupboard, a freezer, a food cabinet and a storage cabinet.
3. A cabinet food identification system as claimed in claim 2, wherein: the identification mechanism, the video streaming server, the identification server and the storage server are all remote cloud servers, and data transmission between them is performed over internal and external networks.
4. A cabinet food identification system as claimed in claim 3 wherein: the identification mechanism comprises at least one camera arranged outside the box body and positioned at the top of the box body.
5. A cabinet food identification system as claimed in any of claims 1 to 4 wherein: the identification mechanism comprises at least one camera arranged in the box body and positioned at the bottom of the box body.
6. A cabinet food identification system as claimed in claim 5, wherein: the food data information includes the food type, cabinet ID, food characteristics, put-in time and take-out time.
7. A cabinet food identification method, characterized by applying the cabinet food identification system of claim 1, comprising the steps of:
1) acquiring images: the camera is aimed at the area in front of the cabinet door and photographs the articles put into or taken out of the cabinet; at least one camera is bound to the cabinet, each camera has an ID, and the cameras are connected to a remote server;
2) transmitting and distributing images: the images acquired for each cabinet are uploaded to the video streaming server and distributed to the identification server by the video streaming server;
3) identifying food: the identification model loaded on the identification server recognizes the corresponding picture; if the picture contains a target, the current cabinet ID and picture information are retained and a food tracking mode is started;
4) tracking food: the state of the food is judged from its moving trend over consecutive frames; after the model recognizes that the cabinet door is open, the food is judged to be put in when it approaches the cabinet and to be taken out when it moves away from the cabinet;
5) managing food: after obtaining the food state, the identification server gives feedback according to the different states; when the food is judged to be put in, the relevant food data is sent to the storage server and stored, the cabinet food information is updated, and the result is sent to the terminal for display; when the food is judged to be taken out, the relevant food data is sent to the storage server, the characteristics of foods of the same type returned by the storage server are received, the most likely food is screened out by comparison and returned to the storage server, the food information is stored and updated, and the result is sent to the terminal for display.
8. The cabinet food identification method of claim 7, wherein in step 4), the food tracking method comprises:
the in-and-out direction of the food is determined by the mounting position of the camera; let P(x, y) denote the coordinates of the food in the camera view, P_pre(x, y) the food coordinates in the previous frame, P_now(x, y) the food coordinates in the current frame, and P_diff(x, y) the difference between the two, given by P_diff(x, y) = P_now(x, y) - P_pre(x, y); when the food approaches the camera, P_diff(x, y) is positive, and when it moves away from the camera, P_diff(x, y) is negative; if the food coordinates P_diff(x, y) are positive for N consecutive frames, where N is an algorithm preset value, the state is regarded as food being put in, and if P_diff(x, y) is negative for N consecutive frames, the state is regarded as food being taken out.
9. The cabinet food identification method of claim 7, wherein in step 5), the intelligent terminal comprises a mobile phone APP, a WeChat applet and a display screen of the cabinet.
CN202010759222.7A 2020-07-31 2020-07-31 Cabinet food identification system and identification method Pending CN111950414A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010759222.7A CN111950414A (en) 2020-07-31 2020-07-31 Cabinet food identification system and identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010759222.7A CN111950414A (en) 2020-07-31 2020-07-31 Cabinet food identification system and identification method

Publications (1)

Publication Number Publication Date
CN111950414A true CN111950414A (en) 2020-11-17

Family

ID=73339867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010759222.7A Pending CN111950414A (en) 2020-07-31 2020-07-31 Cabinet food identification system and identification method

Country Status (1)

Country Link
CN (1) CN111950414A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723498A (en) * 2021-08-26 2021-11-30 广东美的厨房电器制造有限公司 Food maturity identification method, device, system, electric appliance, server and medium
TWI748788B (en) * 2020-12-08 2021-12-01 台灣松下電器股份有限公司 Judgment method for accessing items and smart refrigerator
CN113810577A (en) * 2021-09-13 2021-12-17 张元本 External camera system and application thereof
CN113837144A (en) * 2021-10-25 2021-12-24 广州微林软件有限公司 Intelligent image data acquisition and processing method for refrigerator

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403249A (en) * 2016-05-19 2017-11-28 阿里巴巴集团控股有限公司 Article control method, device, intelligent storage equipment and operating system
CN107679573A (en) * 2017-09-30 2018-02-09 深圳市锐曼智能装备有限公司 The article identification system and its method of wisdom counter
CN108391092A (en) * 2018-03-21 2018-08-10 四川弘和通讯有限公司 Danger identifying system based on deep learning
CN111415461A (en) * 2019-01-08 2020-07-14 虹软科技股份有限公司 Article identification method and system and electronic equipment


Similar Documents

Publication Publication Date Title
CN111950414A (en) Cabinet food identification system and identification method
CN112242940B (en) Intelligent management system and management method for food in bins
CN108416901A (en) Method and device for identifying goods in intelligent container and intelligent container
CN109472920A (en) A kind of automatic vending machine intelligent inventory management system and method
CN109241933A (en) Video linkage monitoring method, monitoring server, video linkage monitoring system
CN107222711A (en) Monitoring system, method and the client of warehoused cargo
CN102081890A (en) Information control device and method and information display system with information control device
CN108024134A (en) It is a kind of based on live data analysing method, device and terminal device
CN109829397A (en) A kind of video labeling method based on image clustering, system and electronic equipment
CN113135368A (en) Intelligent garbage front-end classification system and method
CN109272647A (en) The update method and device of automatic vending warehouse item state
CN108170781A (en) A kind of accurate positioning method and system of household storage article
CN109615158A (en) Asset management monitoring method, device, computer equipment and storage medium
CN108898591A (en) Methods of marking and device, electronic equipment, the readable storage medium storing program for executing of picture quality
CN108667942A (en) A kind of Intelligent House Light system and application method based on Internet of Things
CN109708427A (en) A kind of refrigerator intelligent management
CN107948512A (en) Image pickup method, device, readable storage medium storing program for executing and intelligent terminal
CN113038079A (en) Positioning video linkage system and method based on 5G + MEC
CN111950413A (en) Video identification processing method for grabbed objects
CN108765780A (en) A kind of doll machine and its application method based on recognition of face
CN111967352A (en) Multi-target tracking and behavior judgment device and method for handheld article
CN113888865B (en) Electronic device and vehicle information acquisition method
CN105282520A (en) Chip for augmented reality explanation device, and augmented reality information transmission and interaction method
CN206712974U (en) A kind of monitoring system of warehoused cargo
CN210199776U (en) Inventory system of intelligent sales counter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination