CN110826506A - Target behavior identification method and device - Google Patents

Target behavior identification method and device

Info

Publication number
CN110826506A
Authority
CN
China
Prior art keywords
target
image
images
determining
preset number
Prior art date
Legal status
Pending
Application number
CN201911096214.2A
Other languages
Chinese (zh)
Inventor
仇雪雅
臧云波
鲁邹尧
Current Assignee
Shanghai Second Network Technology Co Ltd
Original Assignee
Shanghai Second Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Second Network Technology Co Ltd filed Critical Shanghai Second Network Technology Co Ltd
Priority to CN201911096214.2A priority Critical patent/CN110826506A/en
Publication of CN110826506A publication Critical patent/CN110826506A/en
Pending legal-status Critical Current

Classifications

    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G06V40/161 Human faces — Detection; Localisation; Normalisation
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20081 Training; Learning
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G06T2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a target behavior identification method and device, wherein the method includes: acquiring a first image from a video and identifying, in the first image, a target area on a first object and a second object, where the first object uses the second object to acquire a target item; when the distance between the second object and the target area is smaller than a preset threshold, acquiring a preset number of second images that precede the first image in the video; and analyzing the preset number of second images and, when the second object coincides with a third object, determining that the first object has exhibited the target behavior, where the third object is a container that holds the target item. The invention solves the problem in the related art that identifying target behavior by manual inspection is inefficient, and improves the accuracy of target behavior identification.

Description

Target behavior identification method and device
Technical Field
The invention relates to the field of communication, in particular to a method and a device for identifying target behaviors.
Background
In the catering industry, restaurants generally have explicit rules strictly prohibiting staff from eating the food in the back kitchen. However, no matter how strict the management system is, loopholes remain, and cases of kitchen staff surreptitiously eating dishes are not uncommon. When kitchen staff sneak bites of a dish in the kitchen, the portion size is seriously affected, leading to inconsistent portions and greatly harming the consumer's dining experience. In addition, some cooks continue to use a wok to cook dishes after having eaten from it, which is extremely unsanitary.
In the related art, identification and supervision of a target behavior (e.g., kitchen staff surreptitiously eating food) relies mainly on manual work, with supervision typically the responsibility of the head chef. The head chef must make regular rounds of the kitchen to check whether anyone is sneaking food. However, this approach depends entirely on manual inspection and relies heavily on the subjective judgment and diligence of the person in charge. Because kitchen staff are numerous and the environment is complex, full supervision coverage is difficult to achieve by a head chef alone. Moreover, if the head chef is careless, or is in the habit of sneaking food as well, supervision easily becomes a formality, and supervision efficiency is low.
For the problem in the related art that manually identifying target behavior is inefficient, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a target behavior identification method and device, to at least solve the problem in the related art that manually identifying target behavior is inefficient.
According to an embodiment of the present invention, there is provided a method for identifying a target behavior, including:
acquiring a first image from a video and identifying, in the first image, a target area on a first object and a second object, where the first object uses the second object to acquire a target item;
when the distance between the second object and the target area is smaller than a preset threshold, acquiring a preset number of second images that precede the first image in the video;
and analyzing the preset number of second images and, when the second object coincides with a third object, determining that the first object has exhibited the target behavior, where the third object is a container that holds the target item.
According to an embodiment of the present invention, there is provided an apparatus for identifying a target behavior, including:
a first acquisition module, configured to acquire a first image from a video and identify, in the first image, a target area on a first object and a second object, where the first object uses the second object to acquire a target item;
a second acquisition module, configured to acquire a preset number of second images that precede the first image in the video when the distance between the second object and the target area is smaller than a preset threshold;
and a determining module, configured to analyze the preset number of second images and determine, when the second object coincides with a third object, that the first object has exhibited the target behavior, where the third object is a container that holds the target item.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
With the invention, a first image is acquired from a video, and a target area on a first object and a second object are identified in the first image, where the first object uses the second object to acquire a target item; when the distance between the second object and the target area is smaller than a preset threshold, a preset number of second images preceding the first image in the video are acquired; and the preset number of second images are analyzed, with the target behavior determined to exist for the first object when the second object coincides with a third object, where the third object is a container that holds the target item. This solves the problem in the related art that identifying target behavior by manual inspection is inefficient, and improves the accuracy of target behavior identification.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow diagram of a method of identifying a target behavior according to an embodiment of the invention;
fig. 2 is a block diagram of a structure of an apparatus for recognizing a target behavior according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The embodiment of the invention provides a method for identifying target behaviors. Fig. 1 is a flowchart of a method for identifying a target behavior according to an embodiment of the present invention, as shown in fig. 1, including:
Step S102: acquire a first image from a video and identify, in the first image, a target area on a first object and a second object, where the first object uses the second object to acquire a target item;
Step S104: when the distance between the second object and the target area is smaller than a preset threshold, acquire a preset number of second images that precede the first image in the video;
Step S106: analyze the preset number of second images and, when the second object coincides with a third object, determine that the first object has exhibited the target behavior, where the third object is a container that holds the target item.
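Steps S102-S106 can be sketched as follows. This is a minimal illustration only: the `Box` detection format, the `"mouth"`/`"hand"`/`"container"` keys, and the default threshold and frame count are hypothetical placeholders, not values specified by the patent.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Box:
    """Axis-aligned bounding box (x1, y1, x2, y2) in pixels."""
    x1: float
    y1: float
    x2: float
    y2: float

    @property
    def center(self):
        return ((self.x1 + self.x2) / 2, (self.y1 + self.y2) / 2)

    def overlaps(self, other: "Box") -> bool:
        # Two boxes coincide if their intersection area is non-zero.
        return (self.x1 < other.x2 and other.x1 < self.x2 and
                self.y1 < other.y2 and other.y1 < self.y2)

def center_distance(a: Box, b: Box) -> float:
    (ax, ay), (bx, by) = a.center, b.center
    return hypot(ax - bx, ay - by)

def target_behavior(frames, idx, threshold=50.0, preset_n=20):
    """S104 trigger: hand/utensil near the mouth in frame idx.
    S106 confirmation: one of the preset_n preceding frames shows the
    hand/utensil coinciding with the container.
    Each frame is a dict of Box detections (an assumed detector output)."""
    first = frames[idx]
    if center_distance(first["hand"], first["mouth"]) >= threshold:
        return False  # S104 condition not met; no further analysis
    second_images = frames[max(0, idx - preset_n):idx]
    return any(f["hand"].overlaps(f["container"]) for f in second_images)
```

In use, `frames` would be populated by a per-frame detector; here the function only encodes the decision logic that ties the three steps together.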
By the invention, a first image in a video is acquired, a target area on a first object in the first image is identified, and a second object is identified, wherein the first object is used for acquiring a target object by using the second object; under the condition that the distance between the second object and the target area is smaller than a preset threshold value, acquiring a preset number of second images which are positioned in front of the first image in the video; and analyzing the preset number of second images, and determining that the first object has target behaviors under the condition that the second object is coincident with a third object, wherein the third object is used for bearing the target object. Therefore, the problem that the efficiency of identifying the target behavior depending on manual work is low in the related technology can be solved, and the accuracy of identifying the target behavior is improved.
As an optional implementation, the video is a surveillance video of a designated area, and the first object may be a person located in that area. Optionally, the designated area may be a restaurant kitchen and the target item may be a dish. In this embodiment, the surveillance feed is analyzed in real time, which improves the coverage of target behavior identification.
It should be noted that, in the above embodiment, when the distance between the second object and the target area is smaller than the preset threshold, analysis continues on the preset number of second images preceding the first image in the video, and the first object is determined to have the target behavior when the second object coincides with the third object. For example, if the first image is the frame corresponding to 10:00 in the video, the 20 frames before 10:00 are acquired as second images and analyzed, further improving the accuracy of target behavior identification. Optionally, the third object may be a container that holds the target item, such as a plate or a wok.
As an optional implementation, the second object includes at least one of the following: a hand of the first object; a utensil used to take the target item. The target area is the mouth of the first object.
Since, in a real environment, the first object may take the target item either by hand or with a utensil, both the hand of the first object and second objects of the utensil type (chopsticks, spoons, woks, etc.) may be recognized when identifying the target behavior. The target area may be the mouth of the first object, i.e., a part such as the mouth or lips.
As an optional implementation, determining that the target behavior exists for the first object when the second object coincides with a third object includes: analyzing the preset number of second images to determine the movement track of the second object; and determining that the target behavior exists for the first object when the movement track coincides with the third object.
In the above embodiment, when the movement track of the second object coincides with the third object, i.e., the track intersects the third object at some point, there was a moment during the second object's movement at which it made contact with the third object, and the first object is therefore determined to have the target behavior.
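The track-coincidence test above can be sketched as a point-in-box check over sampled hand/utensil positions. This is a simplifying assumption not stated in the patent: the track is tested point-by-point at the frame rate rather than as a continuous curve, and boxes are plain `(x1, y1, x2, y2)` tuples.

```python
def point_in_box(pt, box):
    """pt = (x, y); box = (x1, y1, x2, y2), axis-aligned."""
    x, y = pt
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def trajectory_coincides(hand_centers, container_box):
    """The movement track coincides with the container if any sampled
    hand/utensil center over the preceding frames falls inside the
    container's bounding box."""
    return any(point_in_box(p, container_box) for p in hand_centers)
```

A denser frame sample (or interpolation between consecutive centers) would reduce the chance of a fast motion skipping over the container box entirely.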
As an optional implementation, determining that the target behavior exists for the first object when the second object coincides with a third object includes: analyzing the preset number of second images and determining that the target behavior exists for the first object when a third image exists among the preset number of second images, where the third image shows a portion in which the second object and the third object overlap.
In the above embodiment, when a portion in which the second object overlaps the third object is shown in the third image, i.e., the second object has made contact with the third object, it is likewise determined that the first object has the target behavior.
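The per-frame overlap criterion can be sketched with a bounding-box intersection area. The `(x1, y1, x2, y2)` tuple format and the strictly-positive-area rule are illustrative assumptions; a real system might instead threshold intersection-over-union.

```python
def intersection_area(a, b):
    """a, b = (x1, y1, x2, y2) axis-aligned boxes; 0.0 if disjoint."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def has_third_image(second_images):
    """second_images: list of (second_object_box, third_object_box) pairs,
    one pair per preceding frame. A 'third image' exists when any frame
    shows a non-zero overlap between the two boxes."""
    return any(intersection_area(s, t) > 0 for s, t in second_images)
```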
As an optional implementation, after the determining that the target behavior exists for the first object, the method further includes: and carrying out face recognition on the first object to obtain face information of the first object.
As an optional implementation manner, after the face information of the first object is obtained, the occurrence time of the target behavior and the face information of the first object are uploaded to a management platform, so as to instruct the management platform to record the occurrence time and the face information.
In the above embodiment, the occurrence time of the target behavior and the corresponding face information of the first object are recorded, so that the behavior of the first object can be managed and normalized conveniently according to the record.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above embodiments are further illustrated below with food stealing as the target behavior:
In this embodiment, a surveillance camera is deployed in the kitchen area to monitor kitchen operations in real time, and food-stealing behavior is recognized by analyzing the actions of the people in the surveillance footage. The embodiment covers two food-stealing scenarios: grabbing dishes by hand, and eating with utensils such as chopsticks, spoons, or woks.
With this embodiment, digital means and image recognition technology automatically identify the food-stealing behavior of back-kitchen staff.
The technical scheme of the invention is as follows:
Step 1: the camera automatically captures real-time footage and uploads the surveillance frames to a server. Hands (or common utensils such as woks, chopsticks, and spoons) are identified through image recognition;
Step 2: a face detection algorithm detects the face area in the frame, and the mouth area within the face is determined;
Step 3: whether the hand (or a utensil such as a wok, chopsticks, or spoon) overlaps the mouth area is identified; if a frame appears in which the hand (or utensil) overlaps the mouth, several preceding frames of the video are captured for analysis, and the movement track of the hand (or utensil) in those frames is analyzed;
Step 4: whether the movement track of the hand (or utensil) coincides with a food container such as a plate or a wok is analyzed; if it does, food-stealing behavior is determined to have occurred;
Step 5: the system automatically analyzes the face information to determine who stole the food.
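Step 2's mouth localisation can be approximated once a face box is available. The lower-third heuristic below is a common rule of thumb, not something the patent specifies; a production system would more likely use facial landmarks.

```python
def mouth_region(face_box):
    """Approximate the mouth area as the lower third of a detected face
    bounding box. face_box = (x1, y1, x2, y2) with y growing downward,
    as in image coordinates."""
    x1, y1, x2, y2 = face_box
    h = y2 - y1
    # Keep the full face width; take only the bottom third vertically.
    return (x1, y1 + 2 * h / 3, x2, y2)
```

The returned box would then serve as the "target area" against which the hand/utensil distance and overlap checks of steps 3-4 are evaluated.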
As an optional embodiment, when a utensil is used to steal food, identification can further proceed as follows:
Step 1: train a deep learning model on utensils such as woks, chopsticks, and spoons to obtain a recognition model for them, and use the model to identify such utensils in the surveillance image;
Step 2: analyze the surveillance image and identify whether a utensil overlaps the mouth area; if so, capture several preceding frames of the video and analyze whether the utensil coincides with a food container such as a plate or a wok; if it does, determine that food stealing has occurred, and have the system automatically analyze the face information to identify who stole the food.
Step 3: push information such as the offender's identity and the time of the offense in real time to the kitchen supervisor's terminal, and send the video of the incident to the supervisor as evidence.
In this embodiment, digital technology identifies food stealing in the kitchen by analyzing the coincidence of a person's hand or utensil with the mouth area, which reduces the gaps of traditional manual inspection, lowers the inspection workload, and allows food-stealing incidents to be flagged promptly.
Example 2
According to another embodiment of the present invention, there is provided a device for identifying a target behavior, which is used to implement the foregoing embodiments and preferred embodiments, and which has been described above and will not be described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 2 is a block diagram of a structure of an apparatus for identifying a target behavior according to an embodiment of the present invention, the apparatus including:
a first acquisition module 22, configured to acquire a first image from a video and identify, in the first image, a target area on a first object and a second object, where the first object uses the second object to acquire a target item;
a second acquisition module 24, configured to acquire a preset number of second images that precede the first image in the video when the distance between the second object and the target area is smaller than a preset threshold;
a determining module 26, configured to analyze the preset number of second images and determine, when the second object coincides with a third object, that the target behavior exists for the first object, where the third object is a container that holds the target item.
With the invention, a first image is acquired from a video, and a target area on a first object and a second object are identified in the first image, where the first object uses the second object to acquire a target item; when the distance between the second object and the target area is smaller than a preset threshold, a preset number of second images preceding the first image in the video are acquired; and the preset number of second images are analyzed, with the target behavior determined to exist for the first object when the second object coincides with a third object, where the third object is a container that holds the target item. This solves the problem in the related art that identifying target behavior by manual inspection is inefficient, and improves the accuracy of target behavior identification.
As an optional implementation, the second object includes at least one of the following: a hand of the first object; a utensil used to take the target item. The target area is the mouth of the first object.
As an optional implementation manner, the determining module 26 is further configured to: analyzing the second images of the preset number to determine the moving track of the second object; determining that the target behavior exists for the first object if the movement trajectory coincides with the third object.
As an optional implementation manner, the determining module 26 is further configured to: and analyzing the preset number of second images, and determining that the target behavior exists in the first object under the condition that a third image exists in the preset number of second images, wherein the third image is provided with a part where the second object and the third object overlap.
As an optional implementation manner, the apparatus further includes a recognition module, configured to perform face recognition on the first object, so as to obtain face information of the first object.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
Optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for identifying a target behavior, comprising:
acquiring a first image in a video, identifying a target area on a first object in the first image, and a second object, wherein the first object is used for acquiring a target item by using the second object;
under the condition that the distance between the second object and the target area is smaller than a preset threshold value, acquiring a preset number of second images which are positioned in front of the first image in the video;
and analyzing the preset number of second images, and determining that the first object has target behaviors under the condition that the second object is coincident with a third object, wherein the third object is used for bearing the target object.
2. The method of claim 1, wherein the second object comprises at least one of: a hand of the first object; a utensil used to take the target item; and wherein the target area is a mouth region of the first object.
3. The method of claim 1, wherein determining that the target behavior exists for the first object if the second object coincides with a third object comprises:
analyzing the second images of the preset number to determine the moving track of the second object;
determining that the target behavior exists for the first object if the movement trajectory coincides with the third object.
4. The method of claim 1, wherein determining that the target behavior exists for the first object if the second object coincides with a third object comprises:
and analyzing the preset number of second images, and determining that the target behavior exists in the first object under the condition that a third image exists in the preset number of second images, wherein the third image is provided with a part where the second object and the third object overlap.
5. The method of claim 1, wherein after the determining that the target behavior exists for the first object, the method further comprises:
and carrying out face recognition on the first object to obtain face information of the first object.
6. An apparatus for identifying a target behavior, comprising:
a first acquisition module for acquiring a first image in a video, identifying a target area on a first object in the first image, and a second object, wherein the first object is for acquiring a target item using the second object;
a second obtaining module, configured to obtain a preset number of second images located before the first image in the video when a distance between the second object and the target area is smaller than a preset threshold;
and the determining module is used for analyzing the preset number of second images and determining that the first object has target behaviors under the condition that the second object is superposed with a third object, wherein the third object is used for accommodating the target object.
7. The apparatus of claim 6, wherein the second object comprises at least one of: a hand of the first object, and dishware used for holding the target item; and the target area is a mouth of the first object.
8. The apparatus of claim 6, wherein the determining module is further configured to:
analyze the preset number of second images to determine a movement trajectory of the second object; and
determine that the target behavior exists for the first object in a case where the movement trajectory coincides with the third object.
9. The apparatus of claim 6, wherein the determining module is further configured to:
analyze the preset number of second images, and determine that the target behavior exists for the first object in a case where a third image exists among the preset number of second images, wherein the third image contains a region in which the second object and the third object overlap.
10. The apparatus according to claim 6, further comprising a recognition module configured to perform face recognition on the first object to obtain face information of the first object.
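The pipeline in the claims above — detect the hand (or utensil) and the mouth in the current frame, compare their distance against a preset threshold, and, when they come close, scan a buffer of the preceding frames for a frame in which the hand overlapped the food container — can be sketched roughly as follows. This is an illustrative reconstruction, not the patent's implementation: the detection input format, the box-overlap coincidence test, the threshold values, and all names (`detect_target_behavior`, `boxes_overlap`, etc.) are assumptions.

```python
from collections import deque

FRAME_BUFFER = 25          # the "preset number" of earlier frames kept (assumed)
DISTANCE_THRESHOLD = 40.0  # the "preset threshold" in pixels (assumed)

def center(box):
    """Center point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def distance(box_a, box_b):
    """Euclidean distance between two box centers."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def boxes_overlap(box_a, box_b):
    """True if two (x1, y1, x2, y2) boxes share any area."""
    return (box_a[0] < box_b[2] and box_b[0] < box_a[2] and
            box_a[1] < box_b[3] and box_b[1] < box_a[3])

def detect_target_behavior(frames):
    """frames: iterable of per-frame detection dicts, e.g.
    {'hand': (x1, y1, x2, y2), 'mouth': (...), 'container': (...)}
    (any key may be absent when that object was not detected).
    Returns True when the claimed condition fires: the hand comes
    close to the mouth, and in some buffered earlier frame the hand
    overlapped the food container (claim 4's "third image" test)."""
    history = deque(maxlen=FRAME_BUFFER)  # the "second images"
    for frame in frames:
        hand, mouth = frame.get('hand'), frame.get('mouth')
        if hand and mouth and distance(hand, mouth) < DISTANCE_THRESHOLD:
            # Look back through the buffered earlier frames.
            for past in history:
                p_hand, p_cont = past.get('hand'), past.get('container')
                if p_hand and p_cont and boxes_overlap(p_hand, p_cont):
                    return True
        history.append(frame)
    return False
```

In a deployment, the per-frame dicts would come from an object detector run on the video stream, and a positive result would trigger the face-recognition step of claim 5 to attribute the behavior to a specific person.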
CN201911096214.2A 2019-11-11 2019-11-11 Target behavior identification method and device Pending CN110826506A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911096214.2A CN110826506A (en) 2019-11-11 2019-11-11 Target behavior identification method and device

Publications (1)

Publication Number Publication Date
CN110826506A true CN110826506A (en) 2020-02-21

Family

ID=69553980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911096214.2A Pending CN110826506A (en) 2019-11-11 2019-11-11 Target behavior identification method and device

Country Status (1)

Country Link
CN (1) CN110826506A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709401A (en) * 2015-11-13 2017-05-24 中国移动通信集团公司 Diet information monitoring method and device
CN108021865A (en) * 2017-11-03 2018-05-11 阿里巴巴集团控股有限公司 Method and device for recognizing illegal behavior in an unattended scene
CN108108690A (en) * 2017-12-19 2018-06-01 深圳创维数字技术有限公司 Method, apparatus, device and storage medium for monitoring diet
CN108229407A (en) * 2018-01-11 2018-06-29 武汉米人科技有限公司 Behavior detection method and system in video analysis
CN108289201A (en) * 2018-01-24 2018-07-17 北京地平线机器人技术研发有限公司 Video data processing method, device and electronic equipment
CN108427914A (en) * 2018-02-08 2018-08-21 阿里巴巴集团控股有限公司 Entry and exit state detection method and device
CN109815851A (en) * 2019-01-03 2019-05-28 深圳壹账通智能科技有限公司 Kitchen hygiene detection method, device, computer equipment and storage medium
CN110110732A (en) * 2019-05-08 2019-08-09 杭州视在科技有限公司 Intelligent inspection algorithm for restaurant kitchens
CN110147717A (en) * 2019-04-03 2019-08-20 平安科技(深圳)有限公司 Human action recognition method and device
CN110363166A (en) * 2019-07-18 2019-10-22 上海秒针网络科技有限公司 Hand-washing monitoring method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Feng Yingying et al., "Research on Moving Target Tracking Methods in Intelligent Surveillance Video", 30 June 2018 *

Similar Documents

Publication Publication Date Title
US20170004357A1 (en) Systems and method for activity monitoring
CN109846303A (en) Service plate surplus automatic testing method, system, electronic equipment and storage medium
CN106871567A (en) Intelligent-refrigerator-based food recommendation processing method and device, and intelligent refrigerator
JP2015138452A (en) Device and program for cuisine residual quantity detection
CN109886555A (en) The monitoring method and device of food safety
CN111415470A (en) Article access method, server, intelligent distribution cabinet and computer readable medium
CN110633697A (en) Intelligent monitoring method for kitchen sanitation
CN110363813A (en) Object monitoring method and device, storage medium, and electronic device
EP3449400B1 (en) A food monitoring system
CN110351598A (en) Multimedia information transmission method and device
JP2021096766A (en) Information processing device, information processing system, notification method, and program
CN113591826B (en) Dining table cleaning intelligent reminding method based on computer vision
CN103116838B (en) The system and method for measuring service time interval
CN110287928A (en) Out of Stock detection method and device
CN110826506A (en) Target behavior identification method and device
CN110233897A (en) Information-pushing method and device
CN109615034A (en) Tracing method and device for reusable tableware
CN110807441A (en) Automatic hand washing monitoring method and device
CN111915452A (en) Monitoring system, method and device, monitoring processing equipment and storage medium
CN111539346A (en) Food quality detection method and device
KR20130025003A (en) Automated user behavior monitoring system for work environment
CN112949747A (en) Dish detection method, related equipment, system and storage medium
CN209118354U (en) Cafeteria service system
KR20210049704A (en) A method, device and program for measuring food
CN111260716A (en) Method, device, server and storage medium for determining commercial tenant seat interval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200221