CN115504121A - AI-based system and method for automatically identifying garbage delivery behaviors of users - Google Patents


Info

Publication number
CN115504121A
Authority
CN
China
Prior art keywords
garbage
user
delivery
camera
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211164639.4A
Other languages
Chinese (zh)
Inventor
鲍承德
王远喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lianyun Zhihui Technology Co., Ltd.
Original Assignee
Zhejiang Lianyun Zhihui Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lianyun Zhihui Technology Co., Ltd.
Priority to CN202211164639.4A
Publication of CN115504121A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B65: CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65F: GATHERING OR REMOVAL OF DOMESTIC OR LIKE REFUSE
    • B65F1/00: Refuse receptacles; Accessories therefor
    • B65F1/14: Other constructional features; Accessories
    • B65F1/1484: Other constructional features; Accessories relating to the adaptation of receptacles to carry identification means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • B65F2210/00: Equipment of refuse receptacles
    • B65F2210/138: Identification means
    • B65F2210/176: Sorting means

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of garbage recycling, in particular to an AI-based system and method for automatically identifying users' garbage delivery behaviors. The technical scheme requires only a first camera, a second camera, a computing terminal and a loudspeaker, the computing terminal being connected to a cloud. Specifically, the first camera performs real-time monitoring and captures a user's delivery behavior once the user enters the shooting area. After a delivery behavior is confirmed, face recognition is performed on the user to confirm the user's identity. The system then judges whether the user has delivered the garbage into a garbage can and whether the classified delivery was performed correctly. In the case of careless delivery or incorrect classification, the uncivilized behavior is recorded at the cloud and voice supervision is performed. This scheme requires no fixed can body, and the area can be detected and supervised around the clock, effectively saving social resources.

Description

System and method for automatically identifying user garbage delivery behavior based on AI (Artificial Intelligence)
Technical Field
The invention relates to the field of garbage recycling, in particular to a system and a method for automatically identifying a user's garbage delivery behavior based on AI.
Background
With the development of the national economy, people's living standards continue to improve, and the types and quantity of garbage generated in daily life keep increasing, so the time and monetary costs of relying on manpower alone for recycling grow day by day; how to classify and recycle garbage quickly, accurately and efficiently has become an urgent problem. The garbage cans and recycling devices currently common in communities include the double-box classification garbage can, the single plastic garbage can and the push-type garbage truck. The latter two have no classified input openings, and even the double-box classification garbage can, whose classification standards and input marks are not obvious, does not achieve a good garbage-recycling effect. In this context, intelligent garbage cans have been successively popularized in many residential communities. When an intelligent garbage can is used, the user's identity is registered first, the weight of the garbage thrown by the user is identified and scored, and once the score accumulates to a certain value the user can exchange it for commodities.
In the prior art, an intelligent garbage can is generally adopted for garbage recovery; it can realize classified garbage collection and, further, automatic garbage identification and scoring. Specifically, reference is made to the automatically scoring intelligent garbage can and its operation method described in invention patent publication CN108706247A. However, the manufacturing cost of such an intelligent garbage can is high, so it is difficult to promote on a large scale. In addition, the industry currently supervises and urges users' garbage-classification behavior through manual supervision and persuasion, which requires a large investment of manpower and financial resources, cannot achieve 24-hour all-weather supervision, and consumes a great deal of time.
Disclosure of Invention
In order to solve the above problems, a first object of the present invention is to provide a method for automatically identifying a user's garbage delivery behavior based on AI. No fixed can body is needed, which reduces cost; and the region can be detected and supervised in real time, 24 hours a day, effectively saving social resources.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for automatically identifying a user garbage delivery behavior based on AI comprises the following steps:
step 1, a first camera monitors in real time, detects whether a user has a behavior of delivering garbage when the user is identified to enter a delivery range, and executes step 2 if the user has the delivery behavior;
step 2, the first camera carries out face snapshot on the user, and transmits a picture containing a face image to the cloud for user information identification;
step 3, at least one second camera shoots a plurality of pictures representing different classified trash can mouths and surrounding areas of the trash can mouths, and whether new trash exists is judged based on comparison of the pictures before and after delivery; if new garbage is added, further judging whether the garbage is positioned in the opening of the garbage can; if the garbage is in the opening of the bucket, executing step 4; if the garbage is outside the bucket opening, executing step 5;
step 4, judging to be matched with the current garbage can opening area based on the garbage type based on the garbage image characteristics acquired by the first camera and/or the second camera; if not, executing step 5;
and 5, binding and feeding back the video clip obtained by the first camera, the picture shot by the second camera and the user information of face recognition to a cloud end, and simultaneously carrying out language supervision through a loudspeaker.
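The patent itself contains no code; the following minimal Python sketch only illustrates how steps 1 to 5 could be wired together. Every callable and name here is a hypothetical placeholder we introduce for illustration, injected from outside, so the sketch reflects the control flow of the scheme, not any concrete detector.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DeliveryPipeline:
    # All five callables are placeholders for the modules described above.
    detect_delivery: Callable[[object], bool]                   # step 1
    identify_user: Callable[[object], str]                      # step 2 (cloud face ID)
    locate_garbage: Callable[[object, object], Optional[str]]   # step 3: "in"/"out"/None
    category_matches: Callable[[object], bool]                  # step 4
    report_violation: Callable[[str, object], None]             # step 5 (cloud record + voice prompt)

    def handle(self, frame, before_photo, after_photo) -> None:
        if not self.detect_delivery(frame):                     # step 1: no delivery behavior
            return
        user = self.identify_user(frame)                        # step 2: who is delivering
        where = self.locate_garbage(before_photo, after_photo)  # step 3: compare before/after
        if where is None:                                       # nothing new appeared
            return
        if where == "out" or not self.category_matches(after_photo):
            self.report_violation(user, after_photo)            # step 5: bind evidence, supervise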
Compared with the prior published scheme CN108706247A, this method needs no intelligent garbage can; it requires only the first camera, the second camera, a computing terminal and a loudspeaker, the computing terminal being connected to the cloud. Specifically, the first camera first performs real-time monitoring and captures the user's delivery behavior when the user enters the shooting area. After a delivery behavior is confirmed, face recognition is performed on the user to confirm the user's identity. Then, whether the user has delivered the garbage into the garbage can and whether the classified delivery was performed correctly are judged. In the case of careless delivery or incorrect classification, the uncivilized behavior is recorded at the cloud and voice supervision is performed.
Based on this scheme, no fixed can body is needed, which reduces cost; and the region can be detected and supervised in real time, 24 hours a day, effectively saving social resources.
In addition, the scheme can identify both delivery by the user at the can mouth and delivery outside the can, i.e., long-distance throwing.
Preferably, in step 1, the first camera judges the delivery behavior of the user's upper limb from the continuous photos or video it shoots; after an upper-limb delivery behavior is detected, a continuous image sequence is acquired and checked for a moving object. If one exists, the user is considered to be delivering garbage, and step 2 is executed. This scheme thus, on the one hand, identifies the motion of the user's upper limbs to judge whether a delivery behavior exists and, on the other hand, identifies whether a moving object is present in the image.
In a specific scheme, the identification of human contour features and upper-limb image features in an image belongs to the prior art. On this basis, when a user throws garbage the limbs change; each limb carries key points, a PAF (part affinity field) is generated between every two key points, and a forearm PAF is generated between the wrist key point and the elbow key point. For limb c of person k, with key-point positions x_{j1,k} and x_{j2,k}, the ground-truth PAF at an image point p is

L*_{c,k}(p) = v, if p lies on the limb; 0, otherwise,

i.e., at every point on the limb its value is the unit vector pointing from body part j1 to j2,

v = (x_{j2,k} - x_{j1,k}) / ||x_{j2,k} - x_{j1,k}||_2,

and the vector is zero at all other points.
The set of points on the limb is defined as the set of points within a distance threshold of the line segment, i.e., those points p for which 0 <= v · (p - x_{j1,k}) <= l_{c,k} and |v_perp · (p - x_{j1,k})| <= sigma_l, where the limb width sigma_l is a distance in pixels, the limb length is l_{c,k} = ||x_{j2,k} - x_{j1,k}||_2, and v_perp is the vector perpendicular to v.
The final PAF at each location averages the PAFs of the K individuals, L*_c(p) = (1/n_c(p)) * sum_k L*_{c,k}(p), where n_c(p) is the number of non-zero vectors at point p over all K individuals. If two arms cross but the hands are visible, the PAF value and direction at the intersection of the two arms will differ; the final PAF is computed from the real body key points.
Preferably, during the period in which the user throws garbage, a moving object observed as pixel motion on the imaging plane has an instantaneous velocity. Using the change of pixels in the image sequence over the time domain and the correlation between adjacent frames, the correspondence between the previous frame and the current frame is found and the motion information of objects between adjacent frames is calculated; whether motion appears in the picture is thereby judged, preliminarily determining whether the user has a garbage-throwing behavior.
In the above steps, the constraint equation of the object motion estimation algorithm is derived as follows.
Consider the light intensity I(x, y, t) of a pixel in the first frame (where t represents the time dimension). It moves a distance (dx, dy) to the next frame, taking time dt. Because it is the same pixel point, the light intensity of the pixel is assumed unchanged before and after the motion, that is:

I(x, y, t) = I(x + dx, y + dy, t + dt)

Performing a Taylor expansion on the right-hand side gives:

I(x + dx, y + dy, t + dt) = I(x, y, t) + (∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)dt + ε

where ε represents a second-order infinitesimal term, which is negligible. Combining the two formulas and dividing by dt gives:

(∂I/∂x)(dx/dt) + (∂I/∂y)(dy/dt) + (∂I/∂t) = 0

Let u and v be the velocity components of the optical flow along the X axis and the Y axis respectively, i.e. u = dx/dt and v = dy/dt, and let

I_x = ∂I/∂x, I_y = ∂I/∂y, I_t = ∂I/∂t

denote the partial derivatives of the gray level of a pixel point along the X, Y and T directions. In summary, we finally obtain:

I_x·u + I_y·v + I_t = 0

where I_x, I_y and I_t can be obtained from the image data, and (u, v) is the optical flow vector to be solved.
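As one concrete way to apply this constraint (our sketch, not a solver specified by the patent), OpenCV's Farneback dense optical flow estimates (u, v) per pixel; thresholding the flow magnitude gives a simple motion check. The two threshold values are illustrative assumptions.

import cv2
import numpy as np

def has_motion(prev_bgr, curr_bgr, mag_thresh=2.0, frac_thresh=0.01):
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)     # |(u, v)| per pixel
    moving = np.mean(mag > mag_thresh)     # fraction of fast-moving pixels
    return moving > frac_thresh            # True -> a moving object is likely present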
Preferably, step 3 specifically includes the following sub-steps:
3.1, before and after delivery, a second camera shoots photos covering the mouths of a plurality of differently classified garbage cans and their surrounding areas;
3.2, the newly added image features in the post-delivery photo are found by comparing the photos before and after delivery;
3.3, the edge range of each garbage can mouth is obtained from the post-delivery photo;
3.4, whether the newly added image features lie inside or outside the range enclosed by the can-mouth edge in the image is judged; if inside the can mouth, step 4 is executed; if outside the can mouth, step 5 is executed.
In this scheme, the photos shot by the second camera make it possible to judge whether newly added garbage exists and whether the area where it fell is inside or outside the can, so the uncivilized behavior of delivering garbage outside the can can be identified; a sketch of this check follows.
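A hedged OpenCV sketch of sub-steps 3.2 to 3.4 under two assumptions we introduce: the can-mouth edge is available as a contour (e.g., marked once at installation), and a simple absolute-difference mask suffices to find what is new.

import cv2
import numpy as np

def locate_new_garbage(before, after, mouth_contour, diff_thresh=30):
    """Return "in", "out", or None according to where the new object fell."""
    gray_b = cv2.cvtColor(before, cv2.COLOR_BGR2GRAY)
    gray_a = cv2.cvtColor(after, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_b, gray_a)                       # sub-step 3.2: what changed
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return None                                          # no new garbage appeared
    c = max(cnts, key=cv2.contourArea)                       # largest changed region
    M = cv2.moments(c)
    if M["m00"] == 0:
        return None
    cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]        # centroid of the new region
    inside = cv2.pointPolygonTest(mouth_contour, (cx, cy), False) >= 0   # sub-step 3.4
    return "in" if inside else "out"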
Preferably, one of the following two schemes can be adopted in step 3:
Scheme 1: a single second camera shoots a photo covering the mouths of a plurality of differently classified garbage cans and their surrounding areas. This scheme requires the garbage cans to be set up side by side, to reduce the second camera's blind angles.
Scheme 2: a second camera is set above each garbage can mouth; each shoots a photo of the corresponding can mouth and its surrounding area, and the photos of two adjacent second cameras at least partially overlap. The garbage cans in this scheme can be dispersed, each only needing its corresponding second camera aimed at it, but the photos of adjacent second cameras must at least partially overlap so that no blind angle appears in the delivery area.
Preferably, in step 4, when the delivered garbage is a garbage bag, the color of the garbage bag is judged from the image acquired by the first camera and/or the second camera, and whether a bag of that color matches the classification of the garbage can currently delivered to is judged. For obtaining the bag's characteristics from the image and judging its color, reference may be made to the door-opening method based on identifying the garbage-bag color described in the prior patent CN108706247A.
When the delivered garbage is loose (unbagged) garbage, the garbage image features obtained from the images of the first camera and/or the second camera are sent to the cloud to judge the garbage type, and a match with the current garbage can mouth area is judged based on that type. For obtaining the garbage features from the image and determining the garbage type, reference may be made to the automatic garbage scoring operation method described in the prior patent CN108706247A.
Preferably, a sign marking the garbage type corresponding to the can is provided on the garbage can mouth; the sign appears in the photos shot by the second camera and can be recognized by the computing terminal. On the basis of the above bag-color or garbage-type judgment, the computing terminal reads from the sign which color of garbage bag or which kind of garbage should be delivered to this can-mouth area, and combines the two results to judge whether the garbage classification is correct. A color-matching sketch follows.
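A minimal sketch of the bag-color check, assuming the dominant hue of the bag region is compared with the expected color decoded from the can's sign; the hue ranges are illustrative assumptions, not values from the patent or from CN108706247A.

import cv2
import numpy as np

HUE_RANGES = {"green": (35, 85), "blue": (90, 130), "red": (0, 10)}  # OpenCV H in [0, 180)

def bag_color(bgr_roi):
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
    h = int(np.argmax(hist))               # dominant hue bin of the bag region
    for name, (lo, hi) in HUE_RANGES.items():
        if lo <= h <= hi:
            return name
    return "unknown"

def classification_correct(bgr_roi, expected_color):
    # expected_color would be read from the sign on the can mouth
    return bag_color(bgr_roi) == expected_color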
According to this scheme, after the user's garbage-throwing behavior is preliminarily judged in step 1, a deep learning algorithm is further used in step 3 to judge whether the garbage in the can differs from the previous frame image, finally determining whether the user performed a garbage-throwing action.
Scheme of the change detection task based on deep learning (an inference sketch follows this outline):
make training samples -> train the model -> traverse each pixel in the picture with the trained model to obtain the result
1. Screening training samples
First, a difference map is generated from the before and after images.
Second, the difference map is segmented to generate a rough binary map.
Third, a portion of pixels is randomly selected from the generated rough binary map; each pixel, together with its neighborhood, is taken as a changed sample or an unchanged sample according to its category.
2. Training the model
The prepared changed samples and unchanged samples are input into a neural network model for training.
3. Obtaining the change map
The trained model traverses the neighborhood of each pixel in the picture and classifies it into two classes, i.e., the neighborhood (3×3, 5×5, 7×7 or 9×9) of each pixel is binary-classified, yielding the detection result.
Finally, whether the user performed a garbage-throwing action is judged.
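A minimal PyTorch sketch of the patch-wise binary classifier just outlined; the patent fixes no architecture, so the layer sizes here are our assumption, and only the per-pixel neighborhood traversal mirrors the text.

import torch
import torch.nn as nn

class PatchChangeNet(nn.Module):
    def __init__(self, k=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * k * k, 2),          # two classes: changed vs. unchanged
        )
    def forward(self, patches):                # (N, 2, k, k): stacked before/after patches
        return self.net(patches)

def change_map(model, before, after, k=5):
    """Slide a k x k window over the before/after pair; classify every pixel."""
    pair = torch.stack([before, after], dim=0).unsqueeze(0)    # (1, 2, H, W)
    patches = nn.functional.unfold(pair, k, padding=k // 2)    # (1, 2*k*k, H*W)
    n = patches.shape[-1]
    patches = patches.transpose(1, 2).reshape(n, 2, k, k)
    with torch.no_grad():
        labels = model(patches).argmax(1)                      # 0/1 per pixel
    return labels.reshape(before.shape)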
The system for automatically identifying a user's garbage delivery behavior based on AI adopts the above method.
Drawings
Fig. 1 is a scene diagram of the system and method of the invention.
Description of reference numerals:
1. computing terminal; 2. loudspeaker; 3. face recognition and action behavior recognition camera; 4. garbage recognition camera; 5. garbage can I; 6. garbage can II; 7. garbage can III; 8. garbage bag.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "clockwise", "counterclockwise", and the like, indicate orientations or positional relationships based on those shown in the drawings, merely for convenience of description and simplicity of description, and do not indicate or imply that the device or element so referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless explicitly defined otherwise.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, denote fixed, detachable, or integral connections; mechanical or electrical connections; direct connections, indirect connections through intervening media, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. "beneath," "under" and "beneath" a first feature includes the first feature being directly beneath and obliquely beneath the second feature, or simply indicating that the first feature is at a lesser elevation than the second feature.
Example 1:
referring to the attached figure 1, the method for automatically identifying the garbage delivery behavior of the user based on AI comprises the following steps:
step 1, a first camera monitors in real time, detects whether a user has a garbage delivery behavior when the user is identified to enter a delivery range, and executes step 2 if the user has the garbage delivery behavior;
step 2, the first camera carries out face snapshot on the user, and transmits a picture containing a face image to the cloud for user information identification;
step 3, at least one second camera shoots a plurality of pictures representing different classified trash can mouths and surrounding areas of the trash can mouths, and whether new trash exists is judged based on comparison of the pictures before and after delivery; if new garbage is added, further judging whether the garbage is positioned in the opening of the garbage can; if the garbage is in the opening of the bucket, executing step 4; if the garbage is outside the bucket opening, executing step 5;
step 4, judging to be matched with the current garbage can opening area based on the garbage type based on the garbage image characteristics acquired by the first camera and/or the second camera; if not, executing step 5;
and 5, binding and feeding back the video clip obtained by the first camera, the picture shot by the second camera and the user information of face recognition to a cloud end, and simultaneously carrying out language supervision through a loudspeaker.
Compared with the prior published scheme CN108706247A, this method needs no intelligent garbage can; it requires only the first camera, the second camera, a computing terminal and a loudspeaker, the computing terminal being connected to the cloud. Specifically, the first camera first performs real-time monitoring and captures the user's delivery behavior when the user enters the shooting area. After a delivery behavior is confirmed, face recognition is performed on the user to confirm the user's identity. Then, whether the user has delivered the garbage into the garbage can and whether the classified delivery was performed correctly are judged. In the case of careless delivery or incorrect classification, the uncivilized behavior is recorded at the cloud and voice supervision is performed.
Based on this scheme, no fixed can body is needed, which reduces cost; and the region can be detected and supervised in real time, 24 hours a day, effectively saving social resources.
In addition, the scheme can identify both delivery by the user at the can mouth and delivery outside the can, i.e., long-distance throwing.
Preferably, in step 1, the first camera judges the delivery behavior of the user's upper limb from the continuous photos or video it shoots; after an upper-limb delivery behavior is detected, a continuous image sequence is acquired and checked for a moving object. If one exists, the user is considered to be delivering garbage, and step 2 is executed. This scheme thus, on the one hand, identifies the motion of the user's upper limbs to judge whether a delivery behavior exists and, on the other hand, identifies whether a moving object is present in the image.
In a specific scheme, the identification of human contour features and upper-limb image features in an image belongs to the prior art. On this basis, when a user throws garbage the limbs change; each limb carries key points, a PAF is generated between every two key points, and a forearm PAF is generated between the wrist key point and the elbow key point. For limb c of person k, with key-point positions x_{j1,k} and x_{j2,k}, the ground-truth PAF at an image point p is

L*_{c,k}(p) = v, if p lies on the limb; 0, otherwise,

i.e., at every point on the limb its value is the unit vector pointing from body part j1 to j2,

v = (x_{j2,k} - x_{j1,k}) / ||x_{j2,k} - x_{j1,k}||_2,

and the vector is zero at all other points.
The set of points on the limb is defined as the set of points within a distance threshold of the line segment, i.e., those points p for which 0 <= v · (p - x_{j1,k}) <= l_{c,k} and |v_perp · (p - x_{j1,k})| <= sigma_l, where the limb width sigma_l is a distance in pixels, the limb length is l_{c,k} = ||x_{j2,k} - x_{j1,k}||_2, and v_perp is the vector perpendicular to v.
The final PAF at each location averages the PAFs of the K individuals, L*_c(p) = (1/n_c(p)) * sum_k L*_{c,k}(p), where n_c(p) is the number of non-zero vectors at point p over all K individuals. If two arms cross but the hands are visible, the PAF value and direction at the intersection of the two arms will differ; the final PAF is computed from the real body key points.
Preferably, during the period in which the user throws garbage, a moving object observed as pixel motion on the imaging plane has an instantaneous velocity. Using the change of pixels in the image sequence over the time domain and the correlation between adjacent frames, the correspondence between the previous frame and the current frame is found and the motion information of objects between adjacent frames is calculated; whether motion appears in the picture is thereby judged, preliminarily determining whether the user has a garbage-throwing behavior.
In the above steps, the constraint equation of the object motion estimation algorithm is derived as follows.
Consider the light intensity I(x, y, t) of a pixel in the first frame (where t represents the time dimension). It moves a distance (dx, dy) to the next frame, taking time dt. Because it is the same pixel point, the light intensity of the pixel is assumed unchanged before and after the motion, that is:

I(x, y, t) = I(x + dx, y + dy, t + dt)

Performing a Taylor expansion on the right-hand side gives:

I(x + dx, y + dy, t + dt) = I(x, y, t) + (∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)dt + ε

where ε represents a second-order infinitesimal term, which is negligible. Combining the two formulas and dividing by dt gives:

(∂I/∂x)(dx/dt) + (∂I/∂y)(dy/dt) + (∂I/∂t) = 0

Let u and v be the velocity components of the optical flow along the X axis and the Y axis respectively, i.e. u = dx/dt and v = dy/dt, and let

I_x = ∂I/∂x, I_y = ∂I/∂y, I_t = ∂I/∂t

denote the partial derivatives of the gray level of a pixel point along the X, Y and T directions. In summary, we finally obtain:

I_x·u + I_y·v + I_t = 0

where I_x, I_y and I_t can be obtained from the image data, and (u, v) is the optical flow vector to be solved.
Preferably, step 3 specifically includes the following sub-steps:
3.1, before and after delivery, a second camera shoots photos covering the mouths of a plurality of differently classified garbage cans and their surrounding areas;
3.2, the newly added image features in the post-delivery photo are found by comparing the photos before and after delivery;
3.3, the edge range of each garbage can mouth is obtained from the post-delivery photo;
3.4, whether the newly added image features lie inside or outside the range enclosed by the can-mouth edge in the image is judged; if inside the can mouth, step 4 is executed; if outside the can mouth, step 5 is executed.
In this scheme, the photos shot by the second camera make it possible to judge whether newly added garbage exists and whether the area where it fell is inside or outside the can, so the uncivilized behavior of delivering garbage outside the can can be identified.
Preferably, one of the following two schemes can be adopted in step 3:
Scheme 1: a single second camera shoots a photo covering the mouths of a plurality of differently classified garbage cans and their surrounding areas. This scheme requires the garbage cans to be set up side by side, to reduce the second camera's blind angles.
Scheme 2: a second camera is set above each garbage can mouth; each shoots a photo of the corresponding can mouth and its surrounding area, and the photos of two adjacent second cameras at least partially overlap. The garbage cans in this scheme can be dispersed, each only needing its corresponding second camera aimed at it, but the photos of adjacent second cameras must at least partially overlap so that no blind angle appears in the delivery area.
In step 4, when the delivered garbage is a garbage bag, the color of the garbage bag is judged from the image acquired by the first camera and/or the second camera, and whether a bag of that color matches the classification of the garbage can currently delivered to is judged. For obtaining the bag's characteristics from the image and judging its color, reference may be made to the door-opening method based on identifying the garbage-bag color described in the prior patent CN108706247A. When the delivered garbage is loose (unbagged) garbage, the garbage image features obtained from the images of the first camera and/or the second camera are sent to the cloud to judge the garbage type, and a match with the current garbage can mouth area is judged based on that type. For obtaining the garbage features from the image and determining the garbage type, reference may be made to the automatic garbage scoring operation method described in the prior patent CN108706247A.
In addition, in step 4, a sign marking the garbage type corresponding to the can is provided on the garbage can mouth; the sign appears in the photos shot by the second camera and can be recognized by the computing terminal. On the basis of the above bag-color or garbage-type judgment, the computing terminal reads from the sign which color of garbage bag or which kind of garbage should be delivered to this can-mouth area, and combines the two results to judge whether the garbage classification is correct.
In this scheme, after the user's garbage-throwing behavior is preliminarily judged in step 1, a deep learning algorithm is used in step 3 to judge whether the garbage in the can differs from the previous frame image, finally determining whether the user performed a garbage-throwing action.
Scheme of the change detection task based on deep learning (a training sketch follows this outline):
make training samples -> train the model -> traverse each pixel in the picture with the trained model to obtain the result
1. Screening training samples
First, a difference map is generated from the before and after images.
Second, the difference map is segmented to generate a rough binary map.
Third, a portion of pixels is randomly selected from the generated rough binary map; each pixel, together with its neighborhood, is taken as a changed sample or an unchanged sample according to its category.
2. Training the model
The prepared changed samples and unchanged samples are input into a neural network model for training.
3. Obtaining the change map
The trained model traverses the neighborhood of each pixel in the picture and classifies it into two classes, i.e., the neighborhood (3×3, 5×5, 7×7 or 9×9) of each pixel is binary-classified, yielding the detection result.
Finally, whether the user performed a garbage-throwing action is judged.
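Complementing the inference sketch given earlier, here is a hedged sketch of the training step ("2. Training the model"): the changed/unchanged patches harvested from the rough binary map form a supervised dataset. The optimizer, learning rate and epoch count are illustrative assumptions, not values fixed by the patent.

import torch
import torch.nn as nn

def train_change_model(model, patches, labels, epochs=10, lr=1e-3):
    """patches: (N, 2, k, k) before/after pairs; labels: (N,) 0=unchanged, 1=changed."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(patches), labels)   # binary change/no-change objective
        loss.backward()
        opt.step()
    return model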
Example 2:
Referring to Fig. 1, the present embodiment relates to a system for automatically identifying a user's garbage delivery behavior based on AI, which adopts the method described in Example 1.
In the description of this specification, reference to "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, such schematic expressions do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention.

Claims (10)

1. A method for automatically identifying a user's garbage delivery behavior based on AI, comprising the following steps:
Step 1, a first camera monitors in real time; when a user is identified entering the delivery range, it detects whether the user exhibits a garbage delivery behavior, and if so, step 2 is executed;
Step 2, the first camera takes a face snapshot of the user and transmits a picture containing the face image to the cloud for user information identification;
Step 3, at least one second camera shoots photos covering the mouths of a plurality of differently classified garbage cans and their surrounding areas, and whether new garbage has appeared is judged by comparing the photos before and after delivery; if new garbage has appeared, it is further judged whether the garbage lies inside a garbage can mouth; if inside the can mouth, step 4 is executed; if outside the can mouth, step 5 is executed;
Step 4, based on the garbage image characteristics acquired by the first camera and/or the second camera, the garbage type is determined and checked for a match with the category of the current garbage can mouth area; if they do not match, step 5 is executed;
Step 5, the video clip obtained by the first camera, the photos shot by the second camera and the face-recognized user information are bound together and fed back to the cloud, and voice supervision is performed through the loudspeaker.
2. The method for automatically identifying a user's garbage delivery behavior based on AI according to claim 1, wherein:
in step 1, the first camera judges the delivery behavior of the user's upper limb from the continuous photos or video it shoots; after an upper-limb delivery behavior is detected, a continuous image sequence is acquired and checked for a moving object; if one exists, the user is considered to be delivering garbage, and step 2 is executed.
3. The method for automatically identifying a user's garbage delivery behavior based on AI according to claim 2, wherein:
when a user throws garbage the limbs change; each limb carries key points, a PAF is generated between every two key points, and a forearm PAF is generated between the wrist key point and the elbow key point; for limb c of person k, with key-point positions x_{j1,k} and x_{j2,k}, the ground-truth PAF at an image point p is

L*_{c,k}(p) = v, if p lies on the limb; 0, otherwise,

i.e., at every point on the limb its value is the unit vector pointing from body part j1 to j2,

v = (x_{j2,k} - x_{j1,k}) / ||x_{j2,k} - x_{j1,k}||_2,

and the vector is zero at all other points;
the set of points on the limb is defined as the set of points within a distance threshold of the line segment, i.e., those points p for which 0 <= v · (p - x_{j1,k}) <= l_{c,k} and |v_perp · (p - x_{j1,k})| <= sigma_l, where the limb width sigma_l is a distance in pixels, the limb length is l_{c,k} = ||x_{j2,k} - x_{j1,k}||_2, and v_perp is the vector perpendicular to v;
the final PAF at each location averages the PAFs of the K individuals, L*_c(p) = (1/n_c(p)) * sum_k L*_{c,k}(p), where n_c(p) is the number of non-zero vectors at point p over all K individuals; if two arms cross but the hands are visible, the PAF value and direction at the intersection of the two arms will differ; the final PAF is computed from the real body key points.
4. The method for automatically identifying a user's garbage delivery behavior based on AI according to claim 2, wherein:
during the period in which the user throws garbage, a moving object observed as pixel motion on the imaging plane has an instantaneous velocity; using the change of pixels in the image sequence over the time domain and the correlation between adjacent frames, the correspondence between the previous frame and the current frame is found and the motion information of objects between adjacent frames is calculated, whereby whether motion appears in the picture is judged, preliminarily determining whether the user has a garbage-throwing behavior.
5. The method for automatically identifying a user's garbage delivery behavior based on AI according to claim 4, wherein:
the constraint equation of the object motion estimation algorithm is as follows:
consider the light intensity I(x, y, t) of a pixel in the first frame (where t represents the time dimension); it moves a distance (dx, dy) to the next frame, taking time dt; because it is the same pixel point, the light intensity of the pixel is assumed unchanged before and after the motion, that is:

I(x, y, t) = I(x + dx, y + dy, t + dt)

performing a Taylor expansion on the right-hand side gives:

I(x + dx, y + dy, t + dt) = I(x, y, t) + (∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)dt + ε

where ε represents a second-order infinitesimal term, which is negligible; combining the two formulas and dividing by dt gives:

(∂I/∂x)(dx/dt) + (∂I/∂y)(dy/dt) + (∂I/∂t) = 0

letting u and v be the velocity components of the optical flow along the X axis and the Y axis respectively, i.e. u = dx/dt and v = dy/dt, and letting

I_x = ∂I/∂x, I_y = ∂I/∂y, I_t = ∂I/∂t

denote the partial derivatives of the gray level of a pixel point along the X, Y and T directions, we finally obtain:

I_x·u + I_y·v + I_t = 0

where I_x, I_y and I_t can be obtained from the image data, and (u, v) is the optical flow vector to be solved.
6. The method for automatically identifying a user's garbage delivery behavior based on AI according to claim 1, wherein step 3 specifically includes the following sub-steps:
3.1, before and after delivery, a second camera shoots photos covering the mouths of a plurality of differently classified garbage cans and their surrounding areas;
3.2, the newly added image features in the post-delivery photo are found by comparing the photos before and after delivery;
3.3, the edge range of each garbage can mouth is obtained from the post-delivery photo;
3.4, whether the newly added image features lie inside or outside the range enclosed by the can-mouth edge in the image is judged; if inside the can mouth, step 4 is executed; if outside the can mouth, step 5 is executed.
7. The method for automatically identifying a user's garbage delivery behavior based on AI according to claim 1, wherein:
in step 3, a single second camera shoots a photo covering the mouths of a plurality of differently classified garbage cans and their surrounding areas; or a second camera is set above each garbage can mouth, each second camera shoots a photo of the corresponding can mouth and its surrounding area, and the photos shot by two adjacent second cameras at least partially overlap.
8. The method for automatically identifying a user's garbage delivery behavior based on AI according to claim 1, wherein:
in step 4, when the delivered garbage is a garbage bag, the color of the garbage bag is judged from the image acquired by the first camera and/or the second camera, and whether a bag of that color matches the classification of the garbage can currently delivered to is judged; when the delivered garbage is loose (unbagged) garbage, the garbage image features obtained from the images of the first camera and/or the second camera are sent to the cloud to judge the garbage type, and a match with the current garbage can mouth area is judged based on that type.
9. The method for automatically identifying a user's garbage delivery behavior based on AI according to claim 8, wherein:
a sign marking the garbage type corresponding to the can is provided on the garbage can mouth; the sign appears in the photos shot by the second camera and can be recognized by the computing terminal.
10. An AI-based system for automatically identifying users' garbage delivery behaviors, characterized by adopting the method for automatically identifying a user's garbage delivery behavior based on AI according to any one of claims 1-9.
Application CN202211164639.4A, filed 2022-09-23; publication CN115504121A (pending): AI-based system and method for automatically identifying garbage delivery behaviors of users

Priority Applications (1)

Application Number: CN202211164639.4A; Priority/Filing Date: 2022-09-23; Title: AI-based system and method for automatically identifying garbage delivery behaviors of users

Publications (1)

Publication Number: CN115504121A; Publication Date: 2022-12-23

Family ID: 84506439

Country Status: CN (1), CN115504121A, pending

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476085A (en) * 2020-02-29 2020-07-31 国网江苏省电力有限公司苏州供电分公司 Garbage classification putting system and method based on image recognition and point reward
CN113148479A (en) * 2021-05-16 2021-07-23 曾禹博 Traceable garbage classification method and system based on block chain
CN113887519A (en) * 2021-10-29 2022-01-04 平安国际智慧城市科技股份有限公司 Artificial intelligence-based garbage throwing identification method, device, medium and server



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination