CN116246215B - Method for identifying new articles based on visual algorithm, barrel cover and intelligent recycling bin - Google Patents


Info

Publication number
CN116246215B
CN116246215B (Application CN202310527110.2A)
Authority
CN
China
Prior art keywords
frame
suspicious region
new
suspicious
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310527110.2A
Other languages
Chinese (zh)
Other versions
CN116246215A (en)
Inventor
张彦钧
王锐
罗诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaoshou Innovation Hangzhou Technology Co ltd
Original Assignee
Xiaoshou Innovation Hangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaoshou Innovation Hangzhou Technology Co ltd filed Critical Xiaoshou Innovation Hangzhou Technology Co ltd
Priority to CN202310527110.2A priority Critical patent/CN116246215B/en
Publication of CN116246215A publication Critical patent/CN116246215A/en
Application granted granted Critical
Publication of CN116246215B publication Critical patent/CN116246215B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02W CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W30/00 Technologies for solid waste management
    • Y02W30/10 Waste collection, transportation, transfer or storage, e.g. segregated refuse collecting, electric or hybrid propulsion

Abstract

The invention relates to the technical field of article recovery and provides a method for identifying newly delivered articles based on a visual algorithm, a barrel cover, and an intelligent recycling bin. The method comprises the following steps: light correction, frame-difference map generation, suspicious region detection, suspicious region investigation, and new article confirmation, wherein, according to the suspicious region investigation result, the candidate box with the highest intersection-over-union and its article category are found on the item detection map of the rear frame at the coordinates of the newly delivered article, yielding the newly delivered article. Processing with a Gaussian filtering algorithm or a gray-scale algorithm reduces the influence of unbalanced illumination on the rear frame, solving the problem of inaccurate identification of new articles caused by illumination changes between frames. Detecting, comparing and ranking suspicious regions of the frame-difference map improves the accuracy and robustness of the algorithm and reduces the interference caused by the displacement of articles already in the bin; taking the candidate box with the highest intersection-over-union at the coordinates of the identified new article, together with its article category, as the new article lowers the error rate of new-article identification.

Description

Method for identifying new articles based on visual algorithm, barrel cover and intelligent recycling bin
Technical Field
The invention relates to the technical field of article recovery, in particular to a method for identifying new articles based on a visual algorithm, a barrel cover and an intelligent recycling bin.
Background
At present, intelligent recycling bins use traditional visual algorithms to identify newly delivered articles: the bin photographs the interior before and after a user delivers an article and judges the type of the new article from the two frames. Detecting a moving object from the frame-difference map of two consecutive frames is a commonly used algorithm. However, relying solely on the frame-difference map to judge new articles in the bin causes various problems in practice. Illumination changes between the two frames, displacement of articles already in the bin, and camera vibration make accurate identification difficult; when a new article is delivered into a poorly lit area or existing articles shift, the front and rear frames captured by the camera are unclear, the frame-difference computation becomes inaccurate, and the error rate of new-article identification rises. In addition, pose changes of articles lead to misjudgment of new articles.
Disclosure of Invention
In order to solve the problems, the invention provides the following technical scheme:
the invention provides a method for identifying newly delivered articles based on a visual algorithm, comprising the following steps:
an acquisition step: obtaining, through vision hardware, the front and rear frames captured before and after the intelligent recycling bin receives a delivered article;
a light correction step: adaptively correcting the illumination imbalance of the front and rear frames to obtain corrected front and rear frames;
a frame-difference map generation step: applying the frame difference method to the corrected front and rear frames to obtain a frame-difference map;
a suspicious region detection step: detecting whether the corrected front and rear frames contain suspicious regions formed by the displacement of articles, and outputting a plurality of suspicious regions;
a suspicious region investigation step: extracting the image patches of the corresponding regions from the front and rear frames according to the suspicious region coordinates and performing similarity processing to obtain the coordinates of the newly delivered article;
a new article confirmation step: according to the suspicious region investigation result, finding, on the item detection map of the rear frame, the candidate box with the highest intersection-over-union at the coordinates of the newly delivered article, together with its article category; this is the newly delivered article.
Further, the adaptive correction obtains the illumination component through multi-scale Gaussian filtering based on Retinex theory and applies a two-dimensional Gamma function to adjust the brightness of the V component of the HSV space of the two illumination-imbalanced frames, thereby obtaining the corrected front and rear frames.
Further, the suspicious region detection step comprises: detecting, through a target detection algorithm, whether the frame-difference map of the corrected front and rear frames contains suspicious regions formed by the displacement of articles, and outputting the coordinates of a plurality of suspicious regions; in addition, a target detection algorithm is applied to the front and rear frames to detect the articles in the bin. The yolov5 target detection algorithm may be used to detect the suspicious regions directly.
Further, the suspicious region investigation step is specifically implemented as follows:
extracting the front and rear frames and the coordinates of the plurality of suspicious regions;
calculating and ranking the image similarity of the corresponding regions in the front and rear frames;
determining the region with the lowest similarity as the coordinates of the newly delivered article.
Further, the method further comprises:
an illumination balance judgment step: if the illumination in the front and rear frames is balanced, the article detection step is executed directly; otherwise the light correction step is executed;
an article detection step: detecting the articles in the front and rear frames with a target detection algorithm.
Further, the task of the target detection algorithm is to find all objects of interest in the image and determine their category and location. Specifically, a yolov5 target detection algorithm is used to train a neural network to identify the coordinates and categories of target articles in the bin.
Further, if no suspicious region is detected in the suspicious region detection step, it is determined that no new article has been delivered.
The invention provides an intelligent recycling bin cover that judges newly delivered articles using the above method for identifying new articles based on a visual algorithm.
The invention provides an intelligent recycling bin comprising a bin body and the intelligent recycling bin cover, wherein the intelligent recycling bin cover is arranged above the bin body.
The invention has the following beneficial effects:
(1) The invention reduces the influence of unbalanced illumination on the rear frame through Gaussian filtering or gray-scale processing, solving the problem of inaccurate identification of new articles caused by illumination changes between frames;
(2) The invention improves the accuracy and robustness of the algorithm by detecting, comparing and ranking suspicious regions on the frame-difference map, and reduces the interference caused by the displacement of articles already in the bin;
(3) The invention confirms the new article using the candidate box with the highest intersection-over-union (IoU) at the coordinates of the confirmed new article, together with its article category, which lowers the error rate of the overall new-article identification;
(4) The invention obtains the difference map of the front and rear frames by applying pixel-offset processing and difference processing to the two frames and judges whether a suspicious region exists in the frame-difference map, which effectively solves the misjudgment of new articles caused by pose changes of articles, improves robustness to interference, and makes the method suitable for identifying newly delivered articles under various interference conditions.
Drawings
Fig. 1 is a flowchart of identifying a newly delivered article in embodiment 1.
Fig. 2 is a front frame view and a rear frame view of the intelligent recycling bin obtained in embodiment 1.
Fig. 3 is a flow chart of suspicious region investigation in embodiment 2.
Fig. 4 is a schematic diagram of suspicious region extraction in the front frame image and the rear frame image in embodiment 2.
Fig. 5 is a flowchart of identifying a newly delivered article in embodiment 3.
Fig. 6 is a schematic diagram of the structure of the intelligent recycling bin cover in embodiment 4.
Fig. 7 is a schematic diagram of the structure of the intelligent recycling bin in embodiment 5.
Detailed Description
The following detailed description of the embodiments of the invention, taken in conjunction with the accompanying drawings, is illustrative only and not limiting. The examples are intended to help those skilled in the art better understand and reproduce the technical solutions of the invention; the scope of the invention remains defined by the claims.
As shown in fig. 1 (embodiment 1), this embodiment provides a method for identifying newly delivered articles based on a visual algorithm, comprising:
S11, acquiring, through vision hardware, the front and rear frames captured before and after the intelligent recycling bin receives a delivered article, the front frame picture a and the rear frame picture b being shown in fig. 2;
The vision hardware comprises a camera, in particular an AIoT-based camera.
The intelligent recycling bin comprises an intelligent recycling bin cover in which a delivery opening is formed. A camera is arranged around the delivery opening and photographs the delivered article so that the content of the delivery can be identified from the image. A matching fill light is arranged around the camera to maintain image brightness when the camera captures images of the recycled articles.
At present, algorithms for judging newly delivered articles are based on the frame-difference map, but unbalanced illumination and displacement of articles make the frame-difference map difficult to compute accurately: when a new article is delivered into a poorly lit area or existing articles shift, the front and rear frames captured by the camera are unclear, so the frame-difference computation is inaccurate.
S12, light correction: adaptively correcting the illumination imbalance of the front and rear frames to obtain corrected front and rear frames. The adaptive correction obtains the illumination component through multi-scale Gaussian filtering based on Retinex theory and applies a two-dimensional Gamma function to adjust the brightness of the V component of the HSV space of the two illumination-imbalanced frames, thereby obtaining the corrected front and rear frames. The adaptive correction also reduces the misjudgment of new articles caused by hardware jitter. As an alternative implementation, a gray-scale algorithm may be used to adaptively correct the illumination-imbalanced parts of the front and rear frames; this adaptive illumination correction is obtained by applying gray-scale correction and background subtraction to the unevenly illuminated images based on an OpenCV homomorphic filtering approach.
Alternatively, a Gaussian filtering algorithm may be used to adaptively correct the illumination imbalance of the front and rear frames: a maximum filter is used to estimate a more accurate local illumination map, the local illumination images are classified by brightness, and the average brightness of each class is used as an illumination normalization adjustment factor based on retinal modeling, yielding front and rear frames with adaptively corrected illumination.
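As an illustration of the Retinex-based variant of this step, the following Python sketch (not taken from the patent) estimates the illumination component of the HSV value channel with multi-scale Gaussian filtering and rebalances it with a two-dimensional Gamma map. The function name, the filter scales and the 0.5 base of the Gamma map are illustrative assumptions; OpenCV and NumPy are assumed to be available.

    import cv2
    import numpy as np

    def adaptive_illumination_correction(bgr, scales=(15, 81, 201)):
        # Estimate the illumination component of the V channel with
        # multi-scale Gaussian filtering (Retinex-style), then rebalance
        # the V channel with a two-dimensional Gamma map.
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        v = hsv[:, :, 2]

        illum = np.zeros_like(v)
        for sigma in scales:                      # multi-scale Gaussian estimate
            illum += cv2.GaussianBlur(v, (0, 0), sigma)
        illum /= len(scales)

        # Per-pixel Gamma: brighten under-lit regions, damp over-lit ones.
        mean_illum = float(np.mean(illum))
        gamma = np.power(0.5, (mean_illum - illum) / max(mean_illum, 1e-6))
        hsv[:, :, 2] = np.clip(255.0 * np.power(v / 255.0, gamma), 0, 255)

        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

Applying such a function to both the front and the rear frame before computing the frame difference is one way to realize the correction described above.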
S13, frame-difference map generation: applying the frame difference method to the corrected front and rear frames to obtain a frame-difference map;
The frame difference method is a form of background subtraction, but it needs no background modeling: the background model is simply the previous frame, so it is very fast.
S14, suspicious region detection: detecting, through a target detection algorithm, whether the frame-difference map contains suspicious regions formed by the displacement of articles, and outputting a plurality of suspicious regions; specifically, detecting whether the frame-difference map of the corrected front and rear frames contains suspicious regions formed by the displacement of articles and outputting the coordinates of a plurality of suspicious regions. In addition, a target detection algorithm is applied to the front and rear frames to detect the articles in the bin. The yolov5 target detection algorithm may be used to detect the suspicious regions directly.
The task of object detection is to find all objects of interest in the image and determine their category and location.
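The patent names yolov5 as one possible detector. The sketch below shows how such a detector could be loaded through the public torch.hub interface of the ultralytics/yolov5 repository and applied either to the frame-difference map (to propose suspicious regions) or to the front and rear frames (to list the articles in the bin). The weight file name bin_items.pt and the confidence threshold are assumptions, not values from the patent.

    import torch

    # Hypothetical weights fine-tuned on in-bin imagery; the path is an assumption.
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='bin_items.pt')

    def detect_regions(image_bgr, conf=0.25):
        # Returns one row per detected box: (x1, y1, x2, y2, confidence, class).
        model.conf = conf
        results = model(image_bgr[..., ::-1])   # the hub model expects RGB ordering
        return results.xyxy[0].cpu().numpy()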
S15, suspicious region investigation: extracting the image patches of the corresponding regions from the front and rear frames according to the suspicious region coordinates and performing similarity processing to obtain the coordinates of the newly delivered article;
S16, new article confirmation: according to the suspicious region investigation result, finding, on the rear frame corresponding to the coordinates of the newly delivered article, the candidate box with the highest intersection-over-union, together with its article category; this is the newly delivered article.
If no suspicious region is detected in the suspicious region detection step, it is determined that no new article has been delivered.
Intersection over Union (IoU) is a concept used in object detection: it measures the overlap between the predicted bounding box and the ground-truth bounding box, i.e. the ratio of their intersection to their union.
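For reference, a plain-Python computation of IoU between two axis-aligned boxes in (x1, y1, x2, y2) form; this is the standard definition rather than code taken from the patent.

    def iou(box_a, box_b):
        # Intersection over Union of two (x1, y1, x2, y2) boxes.
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

In the confirmation step, the candidate box whose IoU with the coordinates of the newly delivered article is highest is selected, and its category is taken as the category of the new article.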
In this embodiment, the front and rear frames around the delivery of the new article are acquired through vision hardware, the illumination-imbalanced parts are adaptively corrected so that the frame-difference map computed from the acquired frames is accurate, the coordinates of the newly delivered article are then output by detecting, comparing and ranking suspicious regions on the frame-difference map so that the new article is identified accurately, and finally the new article is identified by selecting the candidate box and article category with the highest IoU, so the error rate of the overall new-article identification is low.
As shown in fig. 3 (embodiment 2), this embodiment provides a suspicious region investigation method, implemented as follows:
S21, extracting the front and rear frames and the coordinates of the plurality of suspicious regions. Applying a target detection algorithm to the acquired front and rear frames yields the contours of several target edges, i.e. several coordinates (rectangular boxes), as shown in fig. 4: a first front-frame suspicious region A11 and a second front-frame suspicious region A12 in the front-frame suspicious region map c, and a first rear-frame suspicious region B11 and a second rear-frame suspicious region B12 in the rear-frame suspicious region map d.
S22, calculating and ranking the image similarity of the corresponding regions in the front and rear frames;
S23, determining the region with the lowest similarity as the coordinates of the newly delivered article.
For example, as shown in fig. 4, two suspicious regions are detected by the suspicious region detection step. The two front-frame patches of the suspicious regions are the first front-frame suspicious region A11 and the second front-frame suspicious region A12, and the two corresponding rear-frame patches are the first rear-frame suspicious region B11 and the second rear-frame suspicious region B12. The image similarity between the first front-frame suspicious region A11 and the first rear-frame suspicious region B11 is computed as 50%, and the image similarity between the second front-frame suspicious region A12 and the second rear-frame suspicious region B12 is computed as 60%; the region corresponding to the lower-similarity pair A11/B11 is therefore judged to be the region where the new article appears.
Preferably, the image similarity calculation may use a conventional algorithm, such as a cosine similarity algorithm, a hashing algorithm, a histogram comparison, or a structural similarity measure.
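To make the investigation step concrete, the sketch below crops each suspicious region from both frames, scores each pair with one of the conventional measures named above (histogram correlation via OpenCV), and sorts the regions so that the least similar pair, i.e. the likely new article, comes first. The function name, the histogram bin counts and the choice of histogram correlation are assumptions, not requirements of the patent.

    import cv2

    def rank_suspicious_regions(front_bgr, rear_bgr, regions):
        # regions: list of integer (x1, y1, x2, y2) boxes from the detection step.
        scored = []
        for (x1, y1, x2, y2) in regions:
            a = front_bgr[y1:y2, x1:x2]
            b = rear_bgr[y1:y2, x1:x2]
            hist_a = cv2.calcHist([a], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
            hist_b = cv2.calcHist([b], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
            cv2.normalize(hist_a, hist_a)
            cv2.normalize(hist_b, hist_b)
            score = cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL)
            scored.append((score, (x1, y1, x2, y2)))
        scored.sort(key=lambda s: s[0])          # lowest similarity first
        return scored                            # scored[0][1] is the new article's box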
As shown in fig. 5 (embodiment 3), this embodiment provides a method for an intelligent recycling bin to identify newly delivered articles based on a visual algorithm, which, on the basis of embodiment 1, further comprises:
an illumination balance judgment step: if the illumination in the front and rear frames is balanced, the article detection step is executed directly; otherwise the light correction step of embodiment 1 is executed;
an article detection step: detecting the articles in the front and rear frames with a target detection algorithm;
The article detection is carried out as follows: a yolov5 target detection algorithm is used to train a neural network to identify the coordinates and categories of target articles in the bin.
As shown in fig. 6 (embodiment 4), this embodiment provides an intelligent recycling bin cover 100 that judges newly delivered articles using the above method for identifying new articles based on a visual algorithm.
As shown in fig. 7 (embodiment 5), this embodiment provides an intelligent recycling bin comprising a bin body 101 and an intelligent recycling bin cover 100, wherein the intelligent recycling bin cover 100 is arranged above the bin body 101.
In summary, the invention reduces the influence of unbalanced illumination on the rear frame through Gaussian filtering or gray-scale processing, solving the problem of inaccurate identification of new articles caused by illumination changes between frames; it improves the accuracy and robustness of the algorithm by detecting, comparing and ranking suspicious regions on the frame-difference map, and reduces the interference caused by the displacement of articles already in the bin; it confirms the new article using the candidate box and article category with the highest IoU at the coordinates of the confirmed new article, which lowers the error rate of the overall new-article identification; and it obtains the difference map of the front and rear frames through pixel-offset processing and difference processing and judges whether a suspicious region exists in the frame-difference map, which effectively solves the misjudgment of new articles caused by pose changes of articles, improves robustness to interference, and makes the method suitable for identifying newly delivered articles under various interference conditions.
It should be noted that technical features not described in detail in the present invention may be implemented by any prior art.
While preferred embodiments of the present application have been described, those skilled in the art may make additional variations and modifications to these embodiments once they learn of the basic inventive concept. The appended claims are therefore intended to be construed as covering the preferred embodiments and all such alterations and modifications as fall within the scope of the application.

Claims (7)

1. A method for an intelligent recycling bin to identify newly delivered articles based on a visual algorithm, characterized by comprising the following steps:
an acquisition step: acquiring, through vision hardware, the front and rear frames captured before and after the intelligent recycling bin receives a delivered article;
a light correction step: adaptively correcting the illumination imbalance of the front and rear frames to obtain corrected front and rear frames;
a frame-difference map generation step: applying the frame difference method to the corrected front and rear frames to obtain a frame-difference map;
a suspicious region detection step: detecting whether the frame-difference map contains suspicious regions formed by the displacement of articles, and outputting a plurality of suspicious regions;
a suspicious region investigation step: extracting the image patches of the corresponding regions from the front and rear frames according to the suspicious region coordinates and performing similarity processing to obtain the coordinates of the newly delivered article;
the suspicious region investigation step being specifically implemented as follows:
S21, extracting the front and rear frames and the coordinates of the plurality of suspicious regions; acquiring the front and rear frames and applying a target detection algorithm to obtain the contours of several target edges, i.e. several rectangular boxes, comprising a first front-frame suspicious region and a second front-frame suspicious region in the front-frame suspicious region map, and a first rear-frame suspicious region and a second rear-frame suspicious region in the rear-frame suspicious region map;
S22, calculating and ranking the image similarity of the corresponding regions in the front and rear frames;
S23, determining the region with the lowest similarity as the coordinates of the newly delivered article;
wherein two suspicious regions are detected by the suspicious region detection step, the two front-frame patches of the suspicious regions being the first front-frame suspicious region and the second front-frame suspicious region, and the two corresponding rear-frame patches being the first rear-frame suspicious region and the second rear-frame suspicious region; the image similarity between the first front-frame suspicious region and the first rear-frame suspicious region is calculated, the image similarity between the second front-frame suspicious region and the second rear-frame suspicious region is calculated, and the suspicious region with the lower similarity is judged to be the region where the new article appears;
a new article confirmation step: according to the suspicious region investigation result, finding, on the item detection map of the rear frame corresponding to the coordinates of the newly delivered article, the candidate box with the highest intersection-over-union, together with its article category; this is the newly delivered article;
wherein the adaptive correction comprises:
obtaining the illumination component through multi-scale Gaussian filtering based on Retinex theory;
applying a two-dimensional Gamma function to adjust the brightness of the V component of the HSV space of the two illumination-imbalanced frames to obtain the corrected front and rear frames;
or using a Gaussian filtering algorithm to adaptively correct the illumination imbalance of the front and rear frames: a maximum filter is used to estimate a more accurate local illumination map, the local illumination images are classified by brightness, and the average brightness of each class is used as an illumination normalization adjustment factor based on retinal modeling, yielding front and rear frames with adaptively corrected illumination.
2. The method for an intelligent recycling bin to identify newly delivered articles based on a visual algorithm according to claim 1, wherein the suspicious region detection step comprises:
detecting, through a target detection algorithm, whether the frame-difference map of the corrected front and rear frames contains suspicious regions formed by the displacement of articles, and outputting the coordinates of a plurality of suspicious regions; in addition, applying a target detection algorithm to the front and rear frames to detect the articles in the bin.
3. The method for an intelligent recycling bin to identify newly delivered articles based on a visual algorithm according to claim 1, further comprising:
an illumination balance judgment step: if the illumination in the front and rear frames is balanced, executing the article detection step directly; otherwise executing the light correction step;
an article detection step: detecting the articles in the front and rear frames with a target detection algorithm.
4. The method for an intelligent recycling bin to identify newly delivered articles based on a visual algorithm according to claim 3, wherein the task of the target detection algorithm is to find all objects of interest in the image and determine their category and location.
5. The method for an intelligent recycling bin to identify newly delivered articles based on a visual algorithm according to claim 1, wherein if no suspicious region is detected in the suspicious region detection step, it is determined that no new article has been delivered.
6. An intelligent recycling bin cover, characterized in that the intelligent recycling bin cover judges newly delivered articles using the method for an intelligent recycling bin to identify newly delivered articles based on a visual algorithm according to any one of claims 1-5.
7. An intelligent recycling bin, characterized by comprising a bin body and the intelligent recycling bin cover according to claim 6, wherein the intelligent recycling bin cover is arranged above the bin body.
CN202310527110.2A 2023-05-11 2023-05-11 Method for identifying new articles based on visual algorithm, barrel cover and intelligent recycling bin Active CN116246215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310527110.2A CN116246215B (en) 2023-05-11 2023-05-11 Method for identifying new articles based on visual algorithm, barrel cover and intelligent recycling bin

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310527110.2A CN116246215B (en) 2023-05-11 2023-05-11 Method for identifying new articles based on visual algorithm, barrel cover and intelligent recycling bin

Publications (2)

Publication Number Publication Date
CN116246215A CN116246215A (en) 2023-06-09
CN116246215B true CN116246215B (en) 2024-01-09

Family

ID=86635342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310527110.2A Active CN116246215B (en) 2023-05-11 2023-05-11 Method for identifying new articles based on visual algorithm, barrel cover and intelligent recycling bin

Country Status (1)

Country Link
CN (1) CN116246215B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104658249A (en) * 2013-11-22 2015-05-27 上海宝康电子控制工程有限公司 Method for rapidly detecting vehicle based on frame difference and light stream
CN105512667A (en) * 2014-09-22 2016-04-20 中国石油化工股份有限公司 Method for fire identification through infrared and visible-light video image fusion
CN106373320A (en) * 2016-08-22 2017-02-01 中国人民解放军海军工程大学 Fire identification method based on flame color dispersion and continuous frame image similarity
KR101822924B1 (en) * 2016-11-28 2018-01-31 주식회사 비젼인 Image based system, method, and program for detecting fire
WO2018130016A1 (en) * 2017-01-10 2018-07-19 哈尔滨工业大学深圳研究生院 Parking detection method and device based on monitoring video
CN112499017A (en) * 2020-11-18 2021-03-16 苏州中科先进技术研究院有限公司 Garbage classification method and device and garbage can
CN112784759A (en) * 2021-01-25 2021-05-11 揭阳市聆讯软件有限公司 Elevator human detection identification method based on artificial intelligence similarity comparison
CN113673454A (en) * 2021-08-26 2021-11-19 北京声智科技有限公司 Remnant detection method, related device, and storage medium
CN113989743A (en) * 2021-10-29 2022-01-28 青岛海信智慧生活科技股份有限公司 Garbage overflow detection method, detection equipment and system
CN114067242A (en) * 2021-11-11 2022-02-18 上海皓维电子股份有限公司 Method and device for automatically detecting garbage random throwing behavior and electronic equipment
CN114764786A (en) * 2022-03-14 2022-07-19 什维新智医疗科技(上海)有限公司 Real-time focus area detection device based on ultrasonic video streaming
CN115330993A (en) * 2022-10-18 2022-11-11 小手创新(杭州)科技有限公司 Recovery system new-entry discrimination method based on low computation amount
CN115601594A (en) * 2022-10-18 2023-01-13 小手创新(杭州)科技有限公司(Cn) Visual data processing method and system based on intelligent recovery
CN115830545A (en) * 2022-12-13 2023-03-21 苏州市伏泰信息科技股份有限公司 Intelligent supervision method and system for garbage classification

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108292366B (en) * 2015-09-10 2022-03-18 美基蒂克艾尔有限公司 System and method for detecting suspicious tissue regions during endoscopic surgery
US10346982B2 (en) * 2016-08-22 2019-07-09 Koios Medical, Inc. Method and system of computer-aided detection using multiple images from different views of a region of interest to improve detection accuracy


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Change Detection in Feature Space Using Local Binary Similarity Patterns; Guillaume-Alexandre Bilodeau et al.; 2013 International Conference on Computer and Robot Vision; pp. 106-112 *
Small infrared target detection utilizing Local Region Similarity Difference map; He Qi et al.; Infrared Physics & Technology; Vol. 71; pp. 131-139 *
Retinex color image enhancement based on adaptive bidimensional empirical mode decomposition; 南栋 et al.; Journal of Computer Applications; Vol. 31, No. 6; pp. 1552-1559 *
Research on intelligent video detection algorithms for ice coating and foreign objects on transmission lines; 任贵新; China Master's Theses Full-text Database, Engineering Science and Technology II; Vol. 2019, No. 8; C042-1035 *
Research on automatic detection and early warning of foreign object intrusion on railway tracks; 宝才文; China Master's Theses Full-text Database, Engineering Science and Technology I; Vol. 2023, No. 2; B026-304 *

Also Published As

Publication number Publication date
CN116246215A (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN109376631B (en) Loop detection method and device based on neural network
CN112287866A (en) Human body action recognition method and device based on human body key points
US9767383B2 (en) Method and apparatus for detecting incorrect associations between keypoints of a first image and keypoints of a second image
CN112287868B (en) Human body action recognition method and device
CN110428442B (en) Target determination method, target determination system and monitoring security system
CN104408707A (en) Rapid digital imaging fuzzy identification and restored image quality assessment method
TWI668669B (en) Object tracking system and method thereof
CN108830133A (en) Recognition methods, electronic device and the readable storage medium storing program for executing of contract image picture
CN111415339B (en) Image defect detection method for complex texture industrial product
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN112287867A (en) Multi-camera human body action recognition method and device
Bullkich et al. Moving shadow detection by nonlinear tone-mapping
US11216905B2 (en) Automatic detection, counting, and measurement of lumber boards using a handheld device
CN109886195A (en) Skin identification method based on depth camera near-infrared single color gradation figure
CN110807354A (en) Industrial production line product counting method
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN110956616B (en) Object detection method and system based on stereoscopic vision
CN116246215B (en) Method for identifying new articles based on visual algorithm, barrel cover and intelligent recycling bin
CN112861645A (en) Infrared camera dim light environment compensation method and device and electronic equipment
CN109741370B (en) Target tracking method and device
US20030044067A1 (en) Apparatus and methods for pattern recognition based on transform aggregation
CN113723432B (en) Intelligent identification and positioning tracking method and system based on deep learning
Jia et al. A novel moving cast shadow detection of vehicles in traffic scene
CN110345919A (en) Space junk detection method based on three-dimensional space vector and two-dimensional plane coordinate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant