CN117576490B - Kitchen environment detection method and device, storage medium and electronic equipment - Google Patents

Kitchen environment detection method and device, storage medium and electronic equipment

Info

Publication number
CN117576490B
Authority
CN
China
Prior art keywords
image
kitchen environment
target area
kitchen
determining
Prior art date
Legal status
Active
Application number
CN202410065645.7A
Other languages
Chinese (zh)
Other versions
CN117576490A (en)
Inventor
屠一凡
晏阳
曹青骊
Current Assignee
Lazas Network Technology Shanghai Co Ltd
Koubei Shanghai Information Technology Co Ltd
Original Assignee
Lazas Network Technology Shanghai Co Ltd
Koubei Shanghai Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Lazas Network Technology Shanghai Co Ltd, Koubei Shanghai Information Technology Co Ltd filed Critical Lazas Network Technology Shanghai Co Ltd
Priority to CN202410612466.0A (published as CN118628790A)
Priority to CN202410612456.7A (published as CN118506072A)
Priority to CN202410065645.7A (this application; published as CN117576490B)
Publication of CN117576490A
Application granted
Publication of CN117576490B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a kitchen environment detection method and device, a storage medium, and electronic equipment. The method comprises: determining a static reference base image corresponding to a collection device arranged in a kitchen, according to a first kitchen environment image set collected by the collection device; extracting an image of a target area from an acquired second kitchen environment image set according to the static reference base image; performing type prediction with the image of the target area as a candidate identification image, and determining the predicted type of the object in the target area; and inputting the image of the target area, as an image to be detected, into a type detection model corresponding to the predicted type for detection, and determining the safety state of the object in the kitchen environment. This reduces the consumption of computing resources and improves identification accuracy.

Description

Kitchen environment detection method and device, storage medium and electronic equipment
Technical Field
The application relates to the field of machine vision, in particular to a kitchen environment detection method and device. The application also relates to a computer storage medium and an electronic device.
Background
"Bright kitchen, lighted stove" is a common kitchen safety and sanitation practice. Originally it meant keeping the kitchen well lit during cooking so that the chef's working area could be clearly seen. The practice emphasizes the importance of food safety and hygiene: customers can watch the entire cooking process, which increases transparency and trust. Through an open, well-lit kitchen, customers can easily observe the chef's operations and be assured that food quality and hygiene meet the required standards.
The bright-kitchen practice is more than a sign; it is a concept that concerns the whole catering industry and food safety. It can raise public awareness of and trust in food safety and sanitation, and can also promote better management and higher service quality in catering enterprises. In today's dining market, more and more restaurants and food-ordering services adopt the bright-kitchen approach to improve customers' dining experience and satisfaction.
With the continuous development of internet technology, the bright-kitchen practice has extended from a supervision process in which customers can see the kitchen environment and operations, to recognition of image information collected by acquisition devices installed in the kitchen (i.e., by computer vision means), so that potential safety hazards can be found in real time. This helps merchants improve their safety management level and sanitation standards, reduces food safety risk, and ensures food quality and safety. In short, the bright-kitchen practice plays an important role in promoting food safety awareness and service quality.
Disclosure of Invention
The application provides a kitchen environment detection method to solve the problem of excessive consumption of computing resources during detection in the prior art.
The application provides a kitchen environment detection method, which comprises the following steps:
determining a static reference base image corresponding to a collection device arranged in a kitchen, according to a first kitchen environment image set collected by the collection device;
extracting an image of a target area from an acquired second kitchen environment image set according to the static reference base image; performing type prediction with the image of the target area as a candidate identification image, and determining the predicted type of the object in the target area;
and inputting the image of the target area, as an image to be detected, into a type detection model corresponding to the predicted type for detection, and determining the safety state of the object in the kitchen environment.
In some embodiments, determining the static reference base image corresponding to the acquisition device according to the first kitchen environment image set collected by the acquisition device arranged in the kitchen comprises:
performing object recognition on the images in the first kitchen environment image set to obtain identification images;
determining, by comparison between the identification images, whether a difference image exists among them;
if not, determining a randomly selected image from the first kitchen environment image set as the static reference base image.
In some embodiments, performing object recognition on the images in the first kitchen environment image set and obtaining identification images comprises:
performing object edge detection on the images in the first kitchen environment image set, and determining object segmentation regions in the images;
and performing object recognition on the segmentation regions, and obtaining identification images corresponding to the segmentation regions.
In some embodiments, further comprising:
when it is determined that a difference image does exist among the identification images, determining whether the difference falls within a difference threshold range determined according to the acquisition device's specification and/or acquisition angle;
if so, determining a randomly selected image from the first kitchen environment image set as the static reference base image.
In some embodiments, further comprising:
and when the difference is determined not to be within the difference threshold range determined according to the acquisition device's specification and/or acquisition angle, determining the image with the smallest number of objects among the identification images as the static reference base image corresponding to the acquisition device.
In some embodiments, determining a static reference base image corresponding to the acquisition device according to the first kitchen environment image set collected by the acquisition device arranged in the kitchen comprises:
randomly selecting an image from the first kitchen environment image set as a candidate static reference base image;
performing object recognition on the images in the first kitchen environment image set;
updating the candidate static reference base image with any image recognized in the first kitchen environment image set whose number of objects is smaller than the number of objects in the current candidate;
and determining whether the candidate static reference base image is the image with the fewest objects in the first kitchen environment image set; if so, determining the updated candidate as the static reference base image.
In some embodiments, performing type prediction with the image of the target area as the candidate identification image, and determining the predicted type of the object in the target area, comprises:
determining whether the area ratio between the target area and the static reference base image is greater than or equal to a preset threshold;
if so, taking the image of the target area as a candidate identification image and inputting it into a type prediction model for type prediction;
and determining the predicted object type of the target area according to the prediction result.
In some embodiments, performing type prediction with the image of the target area as the candidate identification image, and determining the predicted object type corresponding to the target area, comprises:
extracting feature data of the object in the candidate identification image according to a feature extraction mode corresponding to the acquisition time of the candidate identification image;
and performing type prediction according to the feature data, and obtaining the predicted object type corresponding to the target area.
In some embodiments, extracting feature data of the object in the candidate identification image according to the feature extraction mode corresponding to the acquisition time of the candidate identification image comprises:
when the acquisition time falls in a daytime period, extracting local feature data of the object in the candidate identification image using a local feature extraction mode;
and when the acquisition time falls in a nighttime period, extracting reflective feature data of the object in the candidate identification image using a light-reflection feature extraction mode.
In some embodiments, determining a static reference base image corresponding to the acquisition device according to the first kitchen environment image set collected by the acquisition device arranged in the kitchen comprises:
acquiring the first kitchen environment image set from the acquisition device according to a configured acquisition dimension of the kitchen environment image set, wherein the acquisition dimension comprises at least one of a time dimension, a meal type dimension, and a kitchen location dimension;
and determining the static reference base image corresponding to the acquisition device according to the first kitchen environment image set.
The application also provides a kitchen environment detection device, comprising:
a first determining unit, configured to determine a static reference base image corresponding to an acquisition device arranged in a kitchen according to a first kitchen environment image set collected by the acquisition device;
an extraction unit, configured to extract an image of a target area from an acquired second kitchen environment image set according to the static reference base image;
a second determining unit, configured to perform type prediction with the image of the target area as a candidate identification image, and determine the predicted object type corresponding to the target area;
and a third determining unit, configured to input the image of the target area, as an image to be detected, into a type detection model corresponding to the predicted object type for detection, and determine the safety state of the object in the kitchen environment.
The present application also provides a computer storage medium storing a computer program;
when executed, the program performs the kitchen environment detection method described above.
The application also provides an electronic device comprising:
a processor;
and a memory for storing a computer program which, when executed, performs the kitchen environment detection method described above.
Compared with the prior art, the application has the following advantages:
according to the kitchen environment detection method, firstly, the acquired images in the first kitchen environment image set are screened, the static reference base image is determined, then, the static reference base image is compared with the acquired images in the second kitchen environment image set, the image with the target area is extracted, and further, the consumption of calculation resources is reduced for the object type prediction of the subsequent target area, and on one hand, the second kitchen environment image is not required to be globally identified, and whether the target area exists can be determined only by comparing the second kitchen environment image with the static reference base image; on the other hand, the type prediction is only aimed at the extracted target area part rather than the whole image, so that a large amount of calculation resources are not consumed. Based on the type prediction result of the object corresponding to the target area, type detection is performed again, so that the accuracy of object type detection of the target area can be ensured, and the accuracy of safety state identification in the kitchen environment can be ensured.
Drawings
Fig. 1 is a flowchart of an embodiment of a kitchen environment detection method provided in the present application.
Fig. 2 is a schematic diagram of object edge segmentation in an embodiment of a kitchen environment detection method provided in the present application.
Fig. 3 is a schematic structural diagram of an embodiment of a kitchen environment detecting device provided in the present application.
Fig. 4 is a schematic structural diagram of an embodiment of an electronic device provided in the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be embodied in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific embodiments disclosed below.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. Terms such as "a", "an" and "the" used in this application and the appended claims do not limit number or order, but serve to distinguish items of the same type from one another.
As described in the background, the application scenario of the bright-kitchen practice motivates the inventive concept: cameras are arranged in the kitchens of catering merchants to detect the operation practices of kitchen staff, the condition of the kitchen environment, and so on. In the prior art, however, abnormal conditions are recognized locally on the cameras themselves. On the one hand, because the cameras come from different manufacturers, this approach is limited by camera performance and hardware, and different camera specifications produce different image proportions; when facing large amounts of collected data, performance constraints prevent finer-grained recognition. On the other hand, because camera hardware and manufacturers differ, local recognition capabilities also differ, so recognition standards are not uniform and recognition consistency is poor. The kitchen environment detection method provided by the application avoids the low detection accuracy and heavy computation caused by these camera performance and hardware problems. The method is described below.
As shown in fig. 1, which is a flowchart of an embodiment of the kitchen environment detection method provided in the present application, the method includes:
step S101: determining a static reference base image corresponding to a collection device arranged in a kitchen, according to a first kitchen environment image set collected by the collection device;
step S102: extracting an image of a target area from an acquired second kitchen environment image set according to the static reference base image;
step S103: performing type prediction with the image of the target area as a candidate identification image, and determining the predicted object type corresponding to the target area;
step S104: inputting the image of the target area, as an image to be detected, into a type detection model corresponding to the predicted object type for detection, and determining the safety state of the object in the kitchen environment.
Steps S101 to S104 are described in detail below with reference to specific examples.
Regarding step S101: determining the static reference base image corresponding to the acquisition device according to the first kitchen environment image set collected by the acquisition device arranged in the kitchen.
The acquisition device may be a camera or similar device for collecting image or video information. The first kitchen environment image set may comprise a plurality of continuous images, for example sequential frame images in video data, or a plurality of non-continuous images, for example images sent at configured acquisition times. The distinction between continuous and non-continuous images can also be understood as a difference in time interval: continuous images have a short interval between frames, such as one frame per second, while non-continuous images have a longer interval, such as one frame every five seconds. The images in the first kitchen environment image set may come from video stream data collected by a camera, which may contain frame images at different time intervals; they may equally be images transmitted by the acquisition device at configured transmission times. In this embodiment, the static reference base image is understood to be the reference base image determined from the first kitchen environment image set.
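To make the distinction concrete, the following is a minimal Python sketch of assembling such an image set from a camera's video stream; OpenCV is assumed, and `sample_frames`, its parameters, and the fallback FPS are illustrative choices, not part of the patent. A short `interval_s` yields a "continuous" set and a long one a "non-continuous" set:

```python
import cv2

def sample_frames(source, interval_s=1.0, max_frames=100):
    """Collect frames from an acquisition device at a fixed time interval.

    source: a device index or stream URL (illustrative, not from the patent).
    interval_s: ~1 s gives a "continuous" set, ~5 s a "non-continuous" one.
    """
    cap = cv2.VideoCapture(source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back when FPS is unreported
    step = max(1, int(round(fps * interval_s)))
    frames, idx = [], 0
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # keep one frame per interval
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```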
The purpose of step S101 is to determine a static reference base image from the first kitchen environment image set. Because different acquisition devices have different specifications and mounting angles, when multiple acquisition devices are arranged in a kitchen environment, a corresponding static reference base image needs to be established for each of them.
In this embodiment, the acquisition device may collect kitchen environment information along different acquisition dimensions, which may specifically include:
step S101-1: acquiring the first kitchen environment image set from the acquisition device according to a configured acquisition dimension of the kitchen environment image set, wherein the acquisition dimension comprises at least one of a time dimension, a meal type dimension, and a kitchen location dimension;
step S101-2: determining the static reference base image corresponding to the acquisition device according to the first kitchen environment image set.
A specific implementation of step S101-1 may include setting the acquisition frequency according to periods with a high incidence of food safety issues, for example: summer is such a period, so the acquisition frequency can be raised then to increase the detection frequency of the kitchen environment. The acquisition frequency may also be set according to food categories with a high incidence of safety issues, for example raising it for snack types such as malatang and barbecue to increase the detection frequency for kitchens in those segments of the catering industry. The acquisition frequency may likewise be set according to the location of the merchant's back kitchen, for example raising it for a back kitchen located below the second floor of the premises. Of course, the acquisition frequency may also be set according to the number of back-kitchen staff; the acquisition dimension is not limited to these specific food safety scenarios. A configuration sketch follows.
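As a rough illustration of how such acquisition dimensions might be configured on the processing side, consider the sketch below; the field names, category values, and the frequency-boost heuristic are all assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class AcquisitionConfig:
    season: str            # time dimension, e.g. "summer" is high-incidence
    meal_type: str         # meal type dimension, e.g. "barbecue", "malatang"
    kitchen_location: str  # kitchen location dimension, e.g. "below_2nd_floor"
    base_frames_per_minute: int = 2

def acquisition_frequency(cfg: AcquisitionConfig) -> int:
    """Raise the sampling rate for high-incidence dimensions (sketch heuristic)."""
    boost = 1
    if cfg.season == "summer":
        boost += 1
    if cfg.meal_type in {"barbecue", "malatang"}:
        boost += 1
    if cfg.kitchen_location == "below_2nd_floor":
        boost += 1
    return cfg.base_frames_per_minute * boost
```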
It should be noted that the acquisition devices deployed in the kitchen environment may include devices of multiple different specifications, from different providers or from the same provider. In either case, the collected kitchen environment image sets may exist in multiple formats, so an adaptation step is needed to process the continuous or non-continuous images acquired from the devices into images that a large CV model or multimodal large model can recognize. A large CV model is a large deep learning model for computer vision tasks, usually implemented with deep learning algorithms such as convolutional neural networks (CNNs). In recent years, with the development of deep learning and the growth of computing power, large CV models have achieved many important results in computer vision, in tasks such as image classification, object detection, and image segmentation. Their basic idea is to learn a mapping from an input image to an output result, for example identifying the type or position of an object in the image.
As the prior art shows, an acquisition device is limited by its own performance, so its image processing capability has certain limitations and its recognition results may be inaccurate. In this embodiment, to reduce the consumption of computing resources during anomaly detection, processing efficiency is improved on the service side (or processing side) that performs image detection. Specifically, a matched static reference base image is built for each acquisition device arranged in the kitchen environment, so that the amount of computation is reduced in the subsequent kitchen safety risk detection. In other words, this preparation stage is the key point: it provides a reliable basis for subsequent detection and reduces the consumption of computing resources while improving detection accuracy. The implementation may take several forms, described in turn below.
The first mode comprises the following steps:
step S101-11: object recognition is carried out on the image in the first kitchen environment image set, and a recognition image is obtained;
step S101-12: and determining whether a difference image exists in the identification images according to the comparison between the identification images.
Step S101-13: if not, determining the image selected randomly in the first kitchen environment image set as the static reference substrate image.
Wherein, for the step S101-11, there may be at least two cases in the present embodiment:
in the first case, when the first kitchen environment image set is a continuous sequence frame image in the video stream acquired by the acquisition device, object identification can be performed according to the context between the images, and an identification image is acquired.
The second case is that when the first kitchen environment image set is a discontinuous image acquired by the acquisition device, the edge detection may be used to determine the edge of the object in the image (as shown in fig. 2), which may specifically include:
step S101-111: detecting object edges of images in the first kitchen environment image set, and determining object segmentation areas in the images;
steps S101-112: and carrying out object recognition on the divided areas, and acquiring recognition images corresponding to the divided areas. The contour of the object in the divided region may be determined by object recognition, and the contour and the object in the contour may be used as the recognition image.
It should be noted that, for both continuous and discontinuous images, object recognition can be achieved through steps S101-111 and steps S101-112. To improve detection efficiency, successive images may be identified in a contextual fashion.
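A minimal sketch of the edge-detection path of steps S101-111 and S101-112 follows, assuming OpenCV; the Canny thresholds and the minimum contour area are illustrative values, and the object recognition applied afterwards to each region is left abstract:

```python
import cv2

def object_segmentation_regions(image, low=50, high=150, min_area=500):
    """Detect object edges and return bounding boxes of segmentation regions."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep only contours large enough to be objects rather than noise
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```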
The purpose of step S101-12 is to determine whether there is a difference or change between the identification images, which may be done, for example, by comparing contours and/or pixel values.
In the first mode, object recognition is performed on the images in the first kitchen environment image set, and the resulting identification images are compared to determine whether differences exist between them. If not, no object change has occurred and no living creature appears in the images, so one image can be randomly selected from the set as the static reference base image. Conversely, if a difference exists, an object change or living creature is present, and further judgment is needed: a difference does not necessarily mean a living creature has appeared. In a kitchen environment, object changes occur frequently during food processing; for example, if the image covers an operating area, the probability of image differences is high. The probability of change also correlates with the specification and/or mounting angle of the acquisition device; for example, a wide-angle lens and a fisheye lens cover the same region with different image proportions. To improve the accuracy of this judgment, the method may further include:
when it is determined that a difference image does exist among the identification images, determining whether the difference falls within a difference threshold range determined according to the acquisition device's specification and/or acquisition angle. The difference threshold range may be set according to the device specification and/or acquisition angle, or determined from historical experience values, for example between 5% and 30%.
If so, a randomly selected image from the first kitchen environment image set is determined as the static reference base image.
If not, the image with the smallest number of objects among the identification images is determined as the static reference base image corresponding to the acquisition device.
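Putting the first mode together, a sketch of the selection logic, assuming the recognition and comparison steps have already produced per-image object counts and pairwise difference ratios; all names are illustrative, and the 5%-30% default range follows the historical-experience example above:

```python
import random

def pick_static_base_mode_one(images, object_counts, diff_ratios,
                              diff_range=(0.05, 0.30)):
    """images: identification images; object_counts[i]: objects in images[i];
    diff_ratios: difference ratios between pairs of identification images."""
    if not diff_ratios or max(diff_ratios) == 0:
        return random.choice(images)           # no difference image exists
    lo, hi = diff_range
    if all(lo <= d <= hi for d in diff_ratios if d > 0):
        return random.choice(images)           # differences within threshold
    # differences exceed the threshold: take the image with fewest objects
    return images[min(range(len(images)), key=object_counts.__getitem__)]
```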
The second mode comprises:
step S101-21: randomly selecting an image from the first kitchen environment image set as a candidate static reference base image;
step S101-22: performing object recognition on the images in the first kitchen environment image set;
step S101-23: updating the candidate static reference base image with any image recognized in the first kitchen environment image set whose number of objects is smaller than that of the current candidate;
step S101-24: determining whether the candidate static reference base image is the image with the fewest objects in the first kitchen environment image set; if so, determining the updated candidate as the static reference base image.
In the second mode, object recognition may be performed in the same way as in the first mode.
In both modes, the static reference base image can be the image containing the fewest objects: as identification images are compared, the object counts are iteratively updated, and the static reference base image is determined once all images in the first kitchen environment image set have been compared. The first mode determines whether differences exist by comparison between identification images and, when there is no difference, fixes the static reference base image. The second mode instead starts from a randomly selected candidate and repeatedly replaces it with whichever image has fewer objects, updating the candidate until the minimum is found. A sketch of the second mode follows.
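This sketch assumes object recognition is exposed as a `count_objects` callable (an illustrative name standing in for step S101-22):

```python
import random

def pick_static_base_mode_two(images, count_objects):
    """Start from a random candidate and keep whichever image has fewer
    recognized objects, until the minimum-object image is the candidate."""
    candidate = random.choice(images)
    best = count_objects(candidate)
    for img in images:
        n = count_objects(img)
        if n < best:                 # fewer objects than the current candidate
            candidate, best = img, n
    return candidate
```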
The above describes the determination of a static reference base image. It should be understood that the process applies to different kinds of first kitchen environment image sets, whether the images are continuous or non-continuous. For continuous images, the static reference base image may also be determined as follows:
extracting the first frame of the sequential frame images as a reference image;
comparing the second frame with the first frame in sequence order, and extracting the differing part;
performing object recognition on the differing part and, if a target object exists, taking the part outside the target object's area as the static reference base image; the process repeats until all continuous images have been recognized. Regions outside the areas where target objects are identified can be saved as the static reference base image. It should be understood that for continuous images the static reference base image may be a partial stitching of pictures captured at different times, which requires faster contour and region identification than for non-continuous pictures.
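For this continuous-image variant, a rough sketch follows, assuming OpenCV; simple pixel differencing stands in for the object recognition of the differing part, and changed regions are masked out rather than stitched in from other frames:

```python
import cv2
import numpy as np

def base_from_sequence(frames, thresh=25):
    """Compare each later frame with the first, accumulate pixels that ever
    change, and keep only the stable remainder as the static reference base."""
    ref_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    stable = np.ones(ref_gray.shape, dtype=bool)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        stable &= cv2.absdiff(gray, ref_gray) < thresh  # drop changed pixels
    base = frames[0].copy()
    base[~stable] = 0  # regions where target objects appeared are masked out
    return base
```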
After the static reference base image is determined in step S101, images in which a target area exists can be extracted from the acquired second kitchen environment image set by comparison against the static reference base image. That is, step S102 is performed.
Regarding step S102: extracting an image of a target area from the acquired second kitchen environment image set according to the static reference base image.
A specific implementation of step S102 compares the images in the second kitchen environment image set with the static reference base image and extracts those containing a target area. The objects in the second set's images therefore do not need to be recognized one by one; only a comparison with the static reference base image is required, which reduces the cost in computing resources.
It should be noted that the purpose of this embodiment is to detect safety problems in the kitchen environment and thereby improve food safety. The extraction of the target area can therefore be tied to objects relevant to safety issues. For example, if no person appears in the static reference base image but a person appears in an image from the second kitchen environment image set, the person's region is extracted. Likewise, for still objects relevant to safety: if a trash can appears in both the static reference base image and a second-set image, the trash can is extracted as a target area so that its safety state can be detected later, such as whether the lid is closed or whether trash is overflowing. Of course, if the extracted target image is determined based on change, the static reference base image can be a base image in a safe state. As described above, the static reference base image contains the fewest objects among the images in the first kitchen environment image set, and it may be an image in which the still objects are in a safe state, for example with the trash can closed; even if a lid is open in the base image, however, the extraction of subsequent target areas, the type prediction, and the safety state identification are not affected.
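A minimal sketch of this comparison follows, assuming OpenCV; the difference threshold and minimum area are illustrative values:

```python
import cv2

def extract_target_regions(image, base, thresh=30, min_area=400):
    """Compare a second-set image against the static reference base image and
    crop the candidate target areas, so no global recognition is needed."""
    diff = cv2.absdiff(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(base, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:       # skip pixel-level noise
            x, y, w, h = cv2.boundingRect(c)
            crops.append(image[y:y + h, x:x + w])
    return crops
```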
Regarding step S103: performing type prediction with the image of the target area as a candidate identification image, and determining the predicted type of the object in the target area.
To improve the accuracy of the detection result, the target area may be further screened, which may specifically include:
step S103-11: determining whether the area ratio of the target area within the static reference base image is greater than or equal to a preset threshold. For example, suppose the target area is a reflective point: the area ratio of a device's indicator light in the static reference base image differs from that of a living creature's eye. Or suppose the target area is the contour of a mouse: its area ratio in the static reference base image is necessarily smaller than that of a human body contour used as the target area. In other words, a preliminary threshold judgment can decide whether type prediction is necessary at all, so that computing resources are not spent on recognition only to classify, say, another device's indicator light as a living creature's eye. The preset threshold may be a general empirical value, or may be derived from data involved in determining the static reference base image for the current camera, i.e. the data used when iteratively optimizing that image. For example, if a mouse typically accounts for 5%-7% of the changed area but the camera has a fisheye lens, some magnification and distortion occur, so the preset threshold range may be adjusted, for example to 3%-6%. The preset threshold range can thus be adjusted in real time with factors such as the camera's specification, performance, and mounting angle.
step S103-12: if so, taking the image of the target area as a candidate identification image and inputting it into a type prediction model for type prediction;
step S103-13: determining the predicted object type of the target area according to the prediction result.
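Steps S103-11 to S103-13 can be sketched as below; the 3% default echoes the fisheye-adjusted example above, and `predict_model` is an assumed callable wrapping the type prediction model:

```python
def predict_type(target_box, base_shape, predict_model, crop, min_ratio=0.03):
    """Gate type prediction on the target area's share of the base image."""
    x, y, w, h = target_box                      # bounding box of the target area
    area_ratio = (w * h) / float(base_shape[0] * base_shape[1])
    if area_ratio < min_ratio:
        return None                              # too small: skip prediction, save compute
    return predict_model(crop)                   # crop is the candidate identification image
```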
The specific identification process in step S103 may include:
step S103-21: extracting feature data of the object in the candidate identification image according to a feature extraction mode corresponding to the acquisition time of the candidate identification image;
step S103-22: performing type prediction according to the feature data, and obtaining the predicted object type corresponding to the target area.
The acquisition time in step S103-21 may fall in a daytime period or a nighttime period. In general, living creatures such as cats and mice are most likely to appear in the kitchen at night, and features can then be extracted via light reflection. Daytime is usually working time, when the focus is on detecting the conduct of kitchen staff and the state of equipment such as trash cans, so a local feature extraction mode can be used. Of course, neither period is limited to these two extraction modes; the local feature extraction mode can also be applied to nighttime feature extraction.
Accordingly, the specific implementation of step S103-21 may include:
step S103-21-1: when the acquisition time falls in a daytime period, extracting local feature data of the object in the candidate identification image using a local feature extraction mode;
step S103-21-2: when the acquisition time falls in a nighttime period, extracting reflective feature data of the object in the candidate identification image using a light-reflection feature extraction mode.
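A sketch of this time-dependent dispatch follows, assuming OpenCV; ORB descriptors and bright-spot statistics are assumed stand-ins for the local and light-reflection feature extraction modes named above, and the 06:00-18:00 daytime boundary is an assumption:

```python
import cv2
import numpy as np
from datetime import datetime

def local_features(crop):
    """Assumed stand-in: ORB keypoint descriptors as 'local features'."""
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    _, descriptors = cv2.ORB_create().detectAndCompute(gray, None)
    return descriptors

def reflective_features(crop):
    """Assumed stand-in: bright-spot (glint) statistics as 'reflective features'."""
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    return np.array([bright.mean(), cv2.countNonZero(bright)])

def extract_features(crop, captured_at: datetime):
    if 6 <= captured_at.hour < 18:       # daytime period (assumed boundary)
        return local_features(crop)
    return reflective_features(crop)     # nighttime period
```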
Regarding step S104: inputting the image of the target area, as an image to be detected, into a type detection model corresponding to the predicted type for detection, and determining the safety state of the object in the kitchen environment. Specifically, when the predicted type of the object in the target area is back-kitchen staff, the image of the target area is input as the image to be detected into a first type detection model (a detection model for the person type), which determines whether the staff member's dress meets the regulations; if not, safety state anomaly information is sent to the merchant. The model may also, or instead, determine whether the staff member is violating operational rules, and send safety state anomaly information to the merchant if so. When the object in the target area is a still object, such as a trash can, the image of the target area is input as the image to be detected into a second type detection model (a detection model for the still object type), which determines whether the object is in a safe state, for example whether the trash can is covered, whether the gas stove is off, or how the chopping board is being used. When the object in the target area is a living creature, such as a mouse, cat, dog, or cockroach, the image of the target area is input as the image to be detected into a third type detection model (a detection model for the living creature type), which determines whether the scene is in a safe state.
Of course, the above examples (still objects, living creatures, people, and so on) are only one combination of conventional scenarios. Kitchen detection related to food safety may also cover objects in the back kitchen that create safety hazards, such as items unrelated to the kitchen (a lighter, a charger, or other items likely to cause fire), the state of the gas stove, the state of the cookware, and so on. Anything bearing on food safety can be included in the content to be detected. A dispatch sketch follows.
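In this sketch of the model dispatch in step S104, `models` is an assumed mapping from predicted type to a detection callable, and the result strings are illustrative:

```python
def detect_safety_state(crop, predicted_type, models):
    """Route the target-area image to the type detection model matching the
    predicted type, e.g. "person", "still_object", or "living_creature"."""
    detector = models.get(predicted_type)
    if detector is None:
        return "unknown_type"
    # e.g. "dress_violation", "bin_uncovered", "pest_detected", or "safe"
    return detector(crop)
```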
The above describes an embodiment of the kitchen environment detection method provided by the application. In this method embodiment, the acquired images in the first kitchen environment image set are first screened to determine a static reference base image; the static reference base image is then compared with the images in the second kitchen environment image set to extract images containing a target area. This reduces the consumption of computing resources for the subsequent object type prediction of the target area: on the one hand, the second kitchen environment images do not need to be recognized globally, since whether a target area exists can be determined simply by comparing them with the static reference base image; on the other hand, type prediction is applied only to the extracted target area rather than the whole image, so it does not consume a large amount of computing resources. Type detection is then performed again based on the type prediction result for the object in the target area, which ensures the accuracy of object type detection for the target area and, in turn, the accuracy of safety state identification in the kitchen environment.
The foregoing is a specific description of an embodiment of the kitchen environment detection method provided in the present application. Corresponding to it, the application further discloses an embodiment of a kitchen environment detection device; please refer to fig. 3. Since the device embodiment is substantially similar to the method embodiment, it is described relatively simply, and relevant details can be found in the corresponding parts of the method embodiment description. The device embodiment described below is merely illustrative.
As shown in fig. 3, fig. 3 is a schematic structural diagram of an embodiment of a kitchen environment detection device provided in the present application, where the embodiment of the device may include:
a first determining unit 301, configured to determine a static reference base image corresponding to an acquisition device arranged in a kitchen according to a first kitchen environment image set collected by the acquisition device;
an extraction unit 302, configured to extract an image of a target area from an acquired second kitchen environment image set according to the static reference base image;
a second determining unit 303, configured to perform type prediction with the image of the target area as a candidate identification image, and determine the predicted object type corresponding to the target area;
and a third determining unit 304, configured to input the image of the target area, as an image to be detected, into a type detection model corresponding to the predicted object type for detection, and determine the safety state of the object in the kitchen environment.
In the first mode, the first determining unit 301 may include an acquisition subunit, a first determining subunit, and a second determining subunit;
the acquisition subunit is configured to perform object recognition on the images in the first kitchen environment image set to obtain identification images;
the first determining subunit is configured to determine, by comparison between the identification images, whether a difference image exists among them;
and the second determining subunit is configured to determine, when the determination result of the first determining subunit is no, a randomly selected image from the first kitchen environment image set as the static reference base image.
The acquisition subunit may include a region determining subunit and an image acquisition subunit;
the region determining subunit is configured to perform object edge detection on the images in the first kitchen environment image set and determine object segmentation regions in the images;
and the image acquisition subunit is configured to perform object recognition on the segmentation regions and obtain identification images corresponding to the segmentation regions.
The device may further include a difference determining subunit and a selecting subunit;
the difference determining subunit is configured to determine, when a difference image does exist among the identification images, whether the difference falls within a difference threshold range determined according to the acquisition device's specification and/or acquisition angle;
and the selecting subunit is configured to determine, when the determination result of the difference determining subunit is yes, a randomly selected image from the first kitchen environment image set as the static reference base image; and, when the determination result is no, to determine the image with the smallest number of objects among the identification images as the static reference base image corresponding to the acquisition device.
In the second mode, the first determining unit 301 may include a selecting subunit, a recognition subunit, an updating subunit, and a determining subunit;
the selecting subunit is configured to randomly select an image from the first kitchen environment image set as the candidate static reference base image;
the recognition subunit is configured to perform object recognition on the images in the first kitchen environment image set;
the updating subunit is configured to update the candidate static reference base image with any image recognized in the first kitchen environment image set whose number of objects is smaller than that of the current candidate;
and the determining subunit is configured to determine whether the candidate static reference base image is the image with the fewest objects in the first kitchen environment image set and, if so, to determine the updated candidate as the static reference base image.
The first determining unit 301 may include an acquisition subunit and a determining subunit;
the acquisition subunit is configured to acquire the first kitchen environment image set from the acquisition device according to a configured acquisition dimension of the kitchen environment image set, wherein the acquisition dimension comprises at least one of a time dimension, a meal type dimension, and a kitchen location dimension;
and the determining subunit is configured to determine the static reference base image corresponding to the acquisition device according to the first kitchen environment image set.
The second determining unit 303 may include an area ratio determining subunit, a prediction subunit, and a determination subunit;
the area ratio determining subunit is configured to determine whether the area ratio between the target area and the static reference base image is greater than or equal to a preset threshold;
the prediction subunit is configured to, when the determination result of the area ratio determining subunit is yes, take the image of the target area as a candidate identification image and input it into a type prediction model for type prediction;
and the determination subunit is configured to determine the predicted object type of the target area according to the prediction result.
The second determining unit 303 may include an extraction subunit and an obtaining subunit;
the extraction subunit is configured to extract feature data of the object in the candidate identification image according to a feature extraction mode corresponding to the acquisition time of the candidate identification image; specifically: when the acquisition time falls in a daytime period, extracting local feature data of the object in the candidate identification image using a local feature extraction mode; and when the acquisition time falls in a nighttime period, extracting reflective feature data of the object in the candidate identification image using a light-reflection feature extraction mode.
And the obtaining subunit is configured to perform type prediction according to the feature data and obtain the predicted object type corresponding to the target area.
For the content of the above device embodiment, reference may be made to steps S101 to S104 in the method embodiment above; details are not repeated here.
Based on the foregoing, the present application further provides a computer storage medium for storing a computer program;
the program executes the contents of steps S101 to S103 as referred to in the above-described kitchen environment detection method embodiment.
Based on the foregoing, the present application further provides an electronic device, as shown in fig. 4, including:
a processor 401;
a memory 402 for storing a computer program which, when executed, performs steps S101 to S104 as described in the kitchen environment detection method embodiment above.
It should be noted that the user information (including but not limited to user device information and personal information) and data (including but not limited to data used for analysis, stored data, and displayed data) involved in this application are information and data authorized by the user or fully authorized by all parties. The collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for users to choose to authorize or refuse.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
1. Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
2. It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
While preferred embodiments have been disclosed above, they are not intended to limit the invention. Any person skilled in the art may make possible changes and modifications without departing from the spirit and scope of the invention, so the scope of protection shall be defined by the claims of the present application.

Claims (8)

1. A kitchen environment detection method, comprising:
determining a static reference base image corresponding to a collection device arranged in a kitchen, according to a first kitchen environment image set collected by the collection device;
extracting an image of a target area from an acquired second kitchen environment image set according to the static reference base image;
performing type prediction with the image of the target area as a candidate identification image, and determining the predicted type of the object in the target area, comprising: extracting feature data of the object in the candidate identification image according to a feature extraction mode corresponding to the acquisition time of the candidate identification image; and performing type prediction according to the feature data to obtain the predicted object type corresponding to the target area;
inputting the image of the target area, as an image to be detected, into a type detection model corresponding to the predicted type for detection, and determining the safety state of the object in the kitchen environment;
wherein determining the static reference base image corresponding to the collection device arranged in the kitchen according to the first kitchen environment image set collected by the collection device comprises:
performing object recognition on the images in the first kitchen environment image set to obtain identification images;
determining, by comparison between the identification images, whether a difference image exists among them;
if not, determining a randomly selected image from the first kitchen environment image set as the static reference base image;
and further comprising: when it is determined that a difference image does exist among the identification images, determining whether the difference falls within a difference threshold range determined according to the acquisition device's specification and/or acquisition angle; if so, determining a randomly selected image from the first kitchen environment image set as the static reference base image, and if not, determining the image with the smallest number of objects among the identification images as the static reference base image corresponding to the acquisition device.
2. The kitchen environment detection method according to claim 1, wherein performing object recognition on the images in the first kitchen environment image set and obtaining identification images comprises:
performing object edge detection on the images in the first kitchen environment image set, and determining object segmentation regions in the images;
and performing object recognition on the segmentation regions, and obtaining identification images corresponding to the segmentation regions.
3. The kitchen environment detection method according to claim 1, wherein performing type prediction by taking the image of the target area as a candidate recognition image and determining the object prediction type corresponding to the target area comprises:
determining whether an area ratio between the target area and the static reference base image is greater than or equal to a preset threshold;
if so, taking the image of the target area as the candidate recognition image, and inputting the candidate recognition image into a type prediction model for type prediction;
and determining the object prediction type corresponding to the target area according to a prediction result.
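Illustrative only: a sketch of the area-ratio gate of claim 3. The 1% ratio and the predict_type callable wrapping the type prediction model are assumptions, not values from the patent.

def predict_if_large_enough(target_crop, base_image, predict_type,
                            ratio_threshold=0.01):
    # Only crops occupying at least ratio_threshold of the base image
    # area are promoted to candidate recognition images and sent to the
    # type prediction model (via the hypothetical predict_type callable).
    th, tw = target_crop.shape[:2]
    bh, bw = base_image.shape[:2]
    if (th * tw) / float(bh * bw) >= ratio_threshold:
        return predict_type(target_crop)
    return None  # area too small: skip prediction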
4. The kitchen environment detection method according to claim 1, wherein extracting the feature data of the object in the candidate recognition image according to the feature extraction mode corresponding to the acquisition time of the candidate recognition image comprises:
when the acquisition time falls within a daytime period, extracting local feature data of the object in the candidate recognition image according to a local feature extraction mode;
and when the acquisition time falls within a nighttime period, extracting reflection feature data of the object in the candidate recognition image according to a reflection feature extraction mode.
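Illustrative only: a sketch of the time-dependent dispatch in claim 4. The 06:00-18:00 day boundaries and both extractor callables are assumptions, not taken from the patent.

from datetime import datetime, time

def extract_features(crop, capture_time, local_extractor, reflection_extractor,
                     day_start=time(6, 0), day_end=time(18, 0)):
    # Daytime captures go through local feature extraction; nighttime
    # captures go through reflection feature extraction.
    t = capture_time.time() if isinstance(capture_time, datetime) else capture_time
    if day_start <= t < day_end:
        return local_extractor(crop)
    return reflection_extractor(crop)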
5. The kitchen environment detection method according to claim 1, wherein determining the static reference base image corresponding to the acquisition device arranged in the kitchen according to the first kitchen environment image set acquired by the acquisition device comprises:
acquiring the first kitchen environment image set of the acquisition device according to a configured acquisition dimension of the kitchen environment image set, wherein the acquisition dimension comprises at least one of a time dimension, a meal type dimension, and a kitchen location dimension;
and determining the static reference base image corresponding to the acquisition device according to the first kitchen environment image set.
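Illustrative only: one way the configured acquisition dimensions of claim 5 might be represented in code; all field names and example values are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AcquisitionConfig:
    # At least one of the three dimensions recited in claim 5 is set;
    # an empty list means the dimension is not used for this device.
    time_periods: List[str] = field(default_factory=lambda: ["06:00-10:00", "10:00-14:00"])
    meal_types: List[str] = field(default_factory=lambda: ["breakfast", "lunch"])
    kitchen_locations: List[str] = field(default_factory=lambda: ["stove", "prep_counter"])

config = AcquisitionConfig()  # image sets would then be gathered per dimension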
6. A kitchen environment detection device, comprising:
a first determining unit, configured to determine a static reference base image corresponding to an acquisition device according to a first kitchen environment image set acquired by the acquisition device arranged in a kitchen;
an extraction unit, configured to extract an image of a target area from an acquired second kitchen environment image set according to the static reference base image;
a second determining unit, configured to perform type prediction by taking the image of the target area as a candidate recognition image, and determine an object prediction type corresponding to the target area, including: extracting feature data of an object in the candidate recognition image according to a feature extraction mode corresponding to the acquisition time of the candidate recognition image; and performing type prediction according to the feature data to obtain the object prediction type corresponding to the target area;
a third determining unit, configured to input the image of the target area as an image to be detected into a type detection model corresponding to the object prediction type for detection, and determine a safety state of the object in the kitchen environment;
the first determining unit is specifically configured to:
perform object recognition on the images in the first kitchen environment image set to obtain recognition images;
determine, by comparing the recognition images with one another, whether a difference image exists among the recognition images;
if not, determine a randomly selected image in the first kitchen environment image set as the static reference base image;
and further: when a difference image exists among the recognition images, determine whether the difference falls within a difference threshold range determined according to the specification and/or the acquisition angle of the acquisition device; if so, determine a randomly selected image from the first kitchen environment image set as the static reference base image; if not, determine the image containing the fewest objects among the recognition images as the static reference base image corresponding to the acquisition device.
7. A computer storage medium storing a computer program;
wherein the computer program, when executed, performs the kitchen environment detection method according to any one of claims 1 to 5.
8. An electronic device, comprising:
a processor;
a memory for storing a computer program which, when executed by the processor, performs the kitchen environment detection method according to any one of claims 1 to 5.
CN202410065645.7A 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment Active CN117576490B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202410612466.0A CN118628790A (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment
CN202410612456.7A CN118506072A (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment
CN202410065645.7A CN117576490B (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410065645.7A CN117576490B (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202410612456.7A Division CN118506072A (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment
CN202410612466.0A Division CN118628790A (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN117576490A (en) 2024-02-20
CN117576490B (en) 2024-04-05

Family

ID=89892259

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202410612456.7A Pending CN118506072A (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment
CN202410065645.7A Active CN117576490B (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment
CN202410612466.0A Pending CN118628790A (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202410612456.7A Pending CN118506072A (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202410612466.0A Pending CN118628790A (en) 2024-01-16 2024-01-16 Kitchen environment detection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (3) CN118506072A (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3487163B1 (en) * 2016-07-13 2023-07-05 SCREEN Holdings Co., Ltd. Image processing method, image processor, and image capturing device

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989743A (en) * 2015-02-13 2016-10-05 杭州海存信息技术有限公司 Nighttime detection for parking vehicles
CN106056584A (en) * 2016-05-24 2016-10-26 努比亚技术有限公司 Foreground-background segmenting device and foreground-background segmenting method
WO2019101021A1 (en) * 2017-11-23 2019-05-31 腾讯科技(深圳)有限公司 Image recognition method, apparatus, and electronic device
GB201721296D0 (en) * 2017-12-19 2018-01-31 Sony Interactive Entertainment Inc Image generating device and method of generating an image
CN109005368A (en) * 2018-10-15 2018-12-14 Oppo广东移动通信有限公司 A kind of generation method of high dynamic range images, mobile terminal and storage medium
CN110264470A (en) * 2019-05-07 2019-09-20 平安科技(深圳)有限公司 Goods train tarpaulin monitoring method, device, terminal and storage medium
CN112241649A (en) * 2019-07-16 2021-01-19 浙江宇视科技有限公司 Target identification method and device
CN110633697A (en) * 2019-09-30 2019-12-31 华中科技大学 Intelligent monitoring method for kitchen sanitation
CN113516612A (en) * 2020-04-13 2021-10-19 阿里巴巴集团控股有限公司 Data processing method, device, equipment and storage medium
CN113706436A (en) * 2020-05-20 2021-11-26 天津科技大学 Target detection method based on self-supervision generation and antagonistic learning background modeling
CN112649900A (en) * 2020-11-27 2021-04-13 上海眼控科技股份有限公司 Visibility monitoring method, device, equipment, system and medium
CN115620244A (en) * 2021-07-13 2023-01-17 中国移动通信有限公司研究院 Image detection method, device and equipment based on vehicle-road cooperation and storage medium
CN113569676A (en) * 2021-07-16 2021-10-29 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113780110A (en) * 2021-08-25 2021-12-10 中国电子科技集团公司第三研究所 Method and device for detecting weak and small targets in image sequence in real time
CN114463654A (en) * 2022-02-09 2022-05-10 阿里巴巴(中国)有限公司 State detection method, device, equipment and computer storage medium
CN114898279A (en) * 2022-06-10 2022-08-12 深圳市商汤科技有限公司 Object detection method and device, computer equipment and storage medium
CN115424352A (en) * 2022-09-07 2022-12-02 沈阳双杰网络科技集团有限公司 Method for identifying kitchen pest invasion based on computer vision
CN115661194A (en) * 2022-09-22 2023-01-31 内蒙古智诚物联股份有限公司 Moving object extraction method, system, electronic device and medium
CN115565103A (en) * 2022-09-23 2023-01-03 深圳市亚略特科技股份有限公司 Dynamic target detection method and device, computer equipment and storage medium
CN116486359A (en) * 2023-04-26 2023-07-25 吉林大学 All-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method
CN116740607A (en) * 2023-06-08 2023-09-12 商汤人工智能研究中心(深圳)有限公司 Video processing method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A two-stage unsupervised crack image segmentation method; Liang Fengjiao et al.; Journal of Beijing Jiaotong University; 2023-05-31; full text *
A survey of moving object detection algorithms based on machine vision; Zhang Dongmei; Wu Jie; Li Piding; Intelligent Computer and Applications; 2020-03-01 (03); full text *

Also Published As

Publication number Publication date
CN117576490A (en) 2024-02-20
CN118506072A (en) 2024-08-16
CN118628790A (en) 2024-09-10

Similar Documents

Publication Publication Date Title
US11756131B1 (en) Automated damage assessment and claims processing
US10395120B2 (en) Method, apparatus, and system for identifying objects in video images and displaying information of same
EP3509014A1 (en) Detecting objects in images
US11620678B2 (en) Advertising method, device and system, and computer-readable storage medium
CN110796646A (en) Method and device for detecting defects of screen area of electronic device
WO2019085064A1 (en) Medical claim denial determination method, device, terminal apparatus, and storage medium
JP7376489B2 (en) Methods and systems for classifying foods
CN110381293A (en) Video monitoring method, device and computer readable storage medium
JP2021526269A (en) Object tracking methods and equipment, electronics and storage media
CN111178116A (en) Unmanned vending method, monitoring camera and system
US20210158356A1 (en) Fraud Mitigation Using One or More Enhanced Spatial Features
CN111369557A (en) Image processing method, image processing device, computing equipment and storage medium
CN117576490B (en) Kitchen environment detection method and device, storage medium and electronic equipment
US11527091B2 (en) Analyzing apparatus, control method, and program
US11594079B2 (en) Methods and apparatus for vehicle arrival notification based on object detection
CN108090391A (en) The recognition methods of Quick Response Code and device
CN112418159A (en) Attention mask based diner monitoring method and device and electronic equipment
KR20200124887A (en) Method and Apparatus for Creating Labeling Model with Data Programming
CN114419544A (en) Intelligent monitoring method, device, equipment and medium based on image recognition
KR20210031444A (en) Method and Apparatus for Creating Labeling Model with Data Programming
US20220366182A1 (en) Techniques for detection/notification of package delivery and pickup
JP7477590B1 (en) Information processing device, information processing method, and information processing program
CN117893815A (en) Data analysis processing system, processing method, storage medium and electronic equipment
CN118506356A (en) Automatic detection method, device, medium and equipment for comprehensive treatment of food safety
CN113435419B (en) Illegal garbage discarding behavior detection method, device and application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant