CN111814517B - Garbage delivery detection method and related product - Google Patents


Info

Publication number
CN111814517B
CN111814517B (granted publication of application CN201910290934.6A)
Authority
CN
China
Prior art keywords: target, garbage, determining, matching, preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910290934.6A
Other languages
Chinese (zh)
Other versions
CN111814517A (en)
Inventor
黄涛
李瑜
蒋小林
吕胜军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jiajia Classification Technology Co ltd
Original Assignee
Shenzhen Jiajia Classification Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jiajia Classification Technology Co., Ltd.
Priority to CN201910290934.6A
Publication of CN111814517A
Application granted
Publication of CN111814517B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/40 — Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the application discloses a garbage delivery detection method and a related product, wherein the method comprises: obtaining a video sequence when a target user delivers garbage; performing behavior extraction on the video sequence to obtain target delivery behavior characteristic parameters; judging the target delivery behavior characteristic parameters; and when the target delivery behavior does not meet a preset requirement, confirming that the garbage delivery behavior of the target user is unreasonable. By adopting the method and the device, garbage delivery behavior can be detected.

Description

Garbage delivery detection method and related product
Technical Field
The application relates to the technical field of electronics, in particular to a garbage delivery detection method and a related product.
Background
With the widespread use of electronic devices (such as mobile phones, tablet computers, and the like), the electronic devices have more and more applications and more powerful functions, and the electronic devices are developed towards diversification and personalization, and become indispensable electronic products in the life of users.
In daily life, most people still handle garbage quite haphazardly; for example, garbage is often delivered in a non-standard way. How to detect garbage delivery behavior is therefore a problem urgently awaiting a solution.
Disclosure of Invention
The embodiment of the application provides a garbage delivery detection method and a related product, which can realize the detection of garbage delivery behaviors.
In a first aspect, an embodiment of the present application provides a garbage delivery detection method, including:
acquiring a video sequence of a target user when delivering garbage;
behavior extraction is carried out on the video sequence to obtain target delivery behavior characteristic parameters;
judging the characteristic parameters of the target delivery behaviors;
and when the target delivery behavior does not meet the preset requirement, confirming that the target user garbage delivery behavior is unreasonable.
In a second aspect, an embodiment of the present application provides a garbage delivery detecting device, where the device includes:
the acquisition unit is used for acquiring a video sequence when a target user delivers garbage;
the extraction unit is used for carrying out behavior extraction on the video sequence to obtain target delivery behavior characteristic parameters;
the judging unit is used for judging the target delivery behavior characteristic parameters;
and the determining unit is used for determining that the target user garbage delivery behavior is unreasonable when the target delivery behavior does not meet the preset requirement.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that the garbage delivery detection method and related product described in the embodiments of the present application obtain a video sequence when a target user delivers garbage, perform behavior extraction on the video sequence to obtain target delivery behavior characteristic parameters, judge the target delivery behavior characteristic parameters, and confirm that the target user's garbage delivery behavior is unreasonable when the target delivery behavior does not meet preset requirements, thereby being capable of detecting garbage delivery behavior.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1A is a schematic flow chart diagram illustrating a garbage delivery detection method as disclosed in an embodiment of the present application;
FIG. 1B is a schematic illustration of an interface presentation of a garbage classification platform disclosed in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of another garbage delivery detection method disclosed in an embodiment of the present application;
fig. 3 is a schematic structural diagram of another electronic device disclosed in the embodiments of the present application;
FIG. 4A is a schematic structural diagram of a garbage delivery detecting device disclosed in the embodiments of the present application;
FIG. 4B is a schematic structural diagram of another garbage delivery detecting device disclosed in the embodiments of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiment of the present application may include various handheld devices (e.g., smart phones), vehicle-mounted devices, smart trash cans, biochemical degradation devices, Virtual Reality (VR)/Augmented Reality (AR) devices, wearable devices, computing devices, or other processing devices connected to wireless modems, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), research and development/test platforms, servers, and so on. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 1A, fig. 1A is a schematic flow chart of a garbage delivery detection method according to an embodiment of the present application; the method includes the following steps 101-104.
101. And acquiring a video sequence of the target user when delivering the garbage.
The target user can be a registered user, specifically, the target user can log in the garbage classification platform by scanning the two-dimensional code to complete registration, and the two-dimensional code can be a two-dimensional code corresponding to the garbage classification platform. The two-dimensional code may be a website or a page link, and in a specific implementation, after the target two-dimensional code is scanned, a registration page of the garbage classification platform may be entered, and the user is prompted to complete a registration operation, specifically, identity information of the target user is input.
Further, the registration operation can be completed according to the identity information, and the identification information for tracing the garbage bags can be formulated for the target user according to the identity information. The identification information may be at least one of: two-dimensional codes, character strings, watermarks, LOGO, patterns or other invisible marks, and the like, which are not limited herein. The identification information can be branded on the garbage bag. In the embodiment of the present application, the garbage bags may include a dry garbage bag for containing dry garbage, such as paper, packaging material, etc., and a wet garbage bag for containing wet garbage, such as pericarp, vegetable leaves, leftovers, etc., which are not limited herein. The dry garbage bag and the wet garbage bag can be different in color and material.
Further optionally, the registration page includes a face input control, as shown in fig. 1B. In a specific implementation, when a touch operation on the face input control is detected, the camera may be started and shooting performed with first shooting parameters to obtain a face image, and after the face image is recognized, the garbage can may be opened.
Optionally, in the step 101, obtaining a video sequence when the target user delivers garbage includes the following steps:
11. acquiring a target face image of the target user;
12. acquiring target environment parameters corresponding to the target face image;
13. determining a target matching threshold corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the matching threshold;
14. extracting the contour of the target face image to obtain a first contour;
15. extracting feature points of the target face image to obtain a first feature point set;
16. acquiring the brightness of a target environment;
17. determining a target weight distribution factor corresponding to the target ambient brightness according to a preset mapping relation between the ambient brightness and the weight distribution factor, wherein the target weight distribution factor comprises a target contour weight factor and a target characteristic point weight factor;
18. determining a contour matching threshold value according to the target contour weight factor and the target matching threshold value;
19. determining a feature point matching threshold according to the target feature point weight factor and the target matching threshold;
20. acquiring a second contour and a second feature point set corresponding to a preset face template, wherein the preset face template is any one face template in a preset database;
21. matching the first contour with the second contour to obtain a first matching value;
22. matching the first characteristic point set and the second characteristic point set to obtain a second matching value;
23. when the first matching value is larger than the contour matching threshold and the second matching value is larger than the feature point matching threshold, determining a target matching value according to the first matching value, the second matching value and the target weight distribution factor;
24. and when the target matching value is larger than the target matching threshold value, acquiring a video sequence when delivering rubbish within a preset time period corresponding to the preset face template.
Wherein, the environment parameter may be at least one of the following: ambient brightness, ambient color temperature, geographical location, weather, humidity, temperature, magnetic field disturbance parameters, and the like, which are not limited herein. The feature point extraction may be at least one of the following: Harris corner detection, Scale Invariant Feature Transform (SIFT), the SURF feature extraction algorithm, etc., and the contour extraction may be at least one of the following: Hough transform, Fourier transform, pyramid transform, etc., which are not limited herein. The mapping relationship between preset environment parameters and matching thresholds can be stored in the electronic device in advance. Of course, the electronic device may also pre-store a mapping relationship between preset ambient brightness and weight distribution factors, where each weight distribution factor includes a contour weight factor and a feature point weight factor, and the contour weight factor and the feature point weight factor sum to 1, specifically as follows:
Ambient brightness | Weight distribution factor
K1 | (A1, B1)
K2 | (A2, B2)
… | …
Kn | (An, Bn)

where K1, K2, …, Kn represent ambient brightness values, A1, A2, …, An represent contour weight factors, and B1, B2, …, Bn represent feature point weight factors.
In a specific implementation, the target environment parameters corresponding to the target face image can be obtained, and the target matching threshold corresponding to the target environment parameters is determined according to the mapping relationship between preset environment parameters and matching thresholds, so that the matching threshold is adapted to the environment, which is more conducive to improving matching accuracy. Contour extraction is performed on the target face image to obtain a first contour, and feature point extraction is performed on the target face image to obtain a first feature point set. The target ambient brightness is acquired, and the target weight distribution factor corresponding to the target ambient brightness is determined according to the mapping relationship between preset ambient brightness and weight distribution factors, the target weight distribution factor including a target contour weight factor and a target feature point weight factor. A contour matching threshold is determined from the target contour weight factor and the target matching threshold, namely, contour matching threshold = target contour weight factor × target matching threshold; likewise, a feature point matching threshold is determined from the target feature point weight factor and the target matching threshold, namely, feature point matching threshold = target feature point weight factor × target matching threshold. A second contour and a second feature point set corresponding to a preset face template are acquired, the preset face template being any face template in a preset database. The first contour is matched with the second contour to obtain a first matching value, and the first feature point set is matched with the second feature point set to obtain a second matching value. When the first matching value is greater than the contour matching threshold and the second matching value is greater than the feature point matching threshold, a target matching value is determined from the first matching value, the second matching value and the target weight distribution factor, namely, target matching value = first matching value × target contour weight factor + second matching value × target feature point weight factor. When the target matching value is greater than the target matching threshold, it is confirmed that the target face image is successfully matched with the preset face template, and a video sequence of garbage delivery within a preset time period corresponding to the preset face template is then acquired; the preset time period can be set by the user or defaulted by the system. Of course, when the first matching value is less than or equal to the contour matching threshold, or the second matching value is less than or equal to the feature point matching threshold, or the target matching value is less than or equal to the target matching threshold, it is determined that the matching between the target face image and the preset face template fails. In this way, the face can be accurately identified.
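The threshold and weighting arithmetic in steps 11-24 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the two matching values are assumed to be similarity scores in [0, 1], and the weights are taken from the ambient-brightness lookup table above.

```python
def face_match(first_matching_value, second_matching_value,
               target_threshold, contour_weight, point_weight):
    """Weighted face-matching decision sketched from steps 11-24.

    The matching values are assumed similarity scores in [0, 1]; the
    contour/feature-point weights come from the brightness table and
    sum to 1.
    """
    # Partial thresholds are the weighted shares of the target threshold.
    contour_threshold = contour_weight * target_threshold
    point_threshold = point_weight * target_threshold
    if (first_matching_value <= contour_threshold
            or second_matching_value <= point_threshold):
        return False  # either partial match failing fails the whole match
    # Target matching value = weighted sum of the two partial matches.
    target_value = (first_matching_value * contour_weight
                    + second_matching_value * point_weight)
    return target_value > target_threshold
```

For example, with equal weights (0.5, 0.5) and a target threshold of 0.7, a contour score of 0.9 and a feature-point score of 0.8 pass both partial thresholds (0.35 each) and yield a target value of 0.85, so the match succeeds.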
Further, identity information corresponding to a preset face template can be obtained, and in the embodiment of the application, the identity information may be at least one of the following: name, gender, age, identification number, mobile phone number, graduation number, academic number, student number, driver's license number, home address, native place, work experience, bank card number, social account number, family relationship, etc., without limitation.
102. And performing behavior extraction on the video sequence to obtain target delivery behavior characteristic parameters.
Wherein, the target delivery behavior characteristic parameter may include at least one of the following: garbage bag volume, garbage bag delivery position, garbage bag saturation degree, garbage bag wrinkling degree, garbage bag color, garbage bag surface cleanliness, garbage bag weight, garbage bag humidity, etc., without limitation. The garbage bag volume can be obtained by first building a 3D image and computing the volume from that 3D image; the garbage bag delivery position can be the landing position of the garbage bag; the garbage bag saturation degree can be the ratio between the actual volume and a standard volume, where the standard volume can be set in advance; the garbage bag wrinkling degree can be derived from the contour quantity or contour distribution density; the garbage bag color can be obtained through color recognition; and the garbage bag surface cleanliness can be computed as follows: take the original garbage bag image as a background image, take (garbage bag image at delivery − original garbage bag image) as a difference image, extract the color components of the difference image, divide the color components into a plurality of independent regions, determine the number of feature points in each independent region, determine the target independent regions whose number of feature points is greater than a preset number, and take the ratio of the number of target independent regions to the total number of independent regions as the garbage bag surface cleanliness. The garbage bag weight can be detected by a pressure sensor of the garbage can, and the garbage bag humidity can be reflected by the moisture content on the surface of the garbage bag.
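The surface-cleanliness computation described above can be sketched as follows. This is an assumed, simplified rendering: the region size, the preset feature-point count, and the intensity cutoff used as a stand-in for "feature points" are all illustrative values, not taken from the patent.

```python
def surface_cleanliness(original, delivered, region=2, preset_number=1,
                        intensity_cutoff=30):
    """Sketch of the surface-cleanliness measure described above.

    `original` and `delivered` are same-sized grayscale images as nested
    lists; region size, preset count, and cutoff are assumed values.
    """
    h, w = len(original), len(original[0])
    # Difference image: delivery image minus the original (background) image.
    diff = [[abs(delivered[y][x] - original[y][x]) for x in range(w)]
            for y in range(h)]
    target_regions = total_regions = 0
    for y in range(0, h - region + 1, region):
        for x in range(0, w - region + 1, region):
            total_regions += 1
            # Count strong difference pixels as stand-in "feature points".
            points = sum(1 for dy in range(region) for dx in range(region)
                         if diff[y + dy][x + dx] > intensity_cutoff)
            if points > preset_number:
                target_regions += 1
    # Ratio of target regions to all regions, as described in the text.
    return target_regions / total_regions
```

On a 4×4 image split into four 2×2 regions, soiling one region yields a ratio of 0.25.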
Optionally, in the step 102, performing behavior extraction on the video sequence to obtain target delivery behavior characteristic parameters may include the following steps:
31. analyzing the video sequence to obtain a plurality of video images;
32. determining the number of the garbage bags carried by the target user according to the plurality of video images to obtain N garbage bags, wherein N is a positive integer;
33. performing target segmentation on the plurality of video images to obtain a plurality of garbage bag images;
34. dividing the garbage bag images into N types to obtain N types of garbage bag images, wherein each type of garbage bag image corresponds to a garbage bag;
35. extracting features of the N types of garbage bag images to obtain N feature sets, wherein each type of garbage bag image corresponds to one feature set;
36. and screening and integrating the N characteristic sets to obtain target delivery behavior characteristic parameters.
A video is composed of consecutive images, so the video sequence can be parsed to obtain multiple frames of video images, and the number of garbage bags carried by the target user can be identified from the video images to obtain N garbage bags; naturally, when a user delivers several garbage bags in one action, the delivery behavior of one or more of those bags may be unreasonable. Furthermore, target segmentation can be performed on the multiple video images to obtain multiple garbage bag images; one video image may contain one or more garbage bags, and, after all, the bags are sometimes seen from different angles and may partially occlude one another. The garbage bag images can then be divided into N classes to obtain N types of garbage bag images, each type corresponding to one garbage bag, and feature extraction is performed on the N types of garbage bag images. The feature extraction comprises feature point extraction or contour extraction; the feature point extraction may be at least one of the following: Harris corner detection, SIFT, the SURF feature extraction algorithm, etc., and the contour extraction may be at least one of the following: Hough transform, Fourier transform, pyramid transform, etc., without limitation. N feature sets are thus obtained, each type of garbage bag image corresponding to one feature set. Each feature set includes some unstable features, so these features need to be screened: specifically, for feature points, the modulus of each feature point may be calculated, a modulus threshold set, and the feature points whose modulus is greater than the threshold retained; for contours, the length of each contour may be determined, a contour length threshold set, and the contours whose length is greater than the threshold retained; and so on.
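The screening step at the end of the paragraph above can be sketched as follows; the threshold values are assumed examples, feature points are represented as coordinate tuples whose modulus is their Euclidean norm, and contours as point lists whose length is their point count.

```python
import math

def screen_features(feature_points, contours,
                    modulus_threshold=1.0, length_threshold=10):
    """Sketch of the screening step above: keep feature points whose
    modulus exceeds a modulus threshold and contours longer than a
    length threshold. Both thresholds are assumed example values."""
    kept_points = [p for p in feature_points
                   if math.hypot(*p) > modulus_threshold]
    kept_contours = [c for c in contours if len(c) > length_threshold]
    return kept_points, kept_contours
```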
103. And judging the target delivery behavior characteristic parameters.
In specific implementation, the target delivery behavior characteristic parameters can be input into a preset neural network model for operation to obtain a behavior probability value; when the behavior probability value is within a preset probability range, it is determined that the target user's garbage delivery behavior is unreasonable, and otherwise, when the behavior probability value is not within the preset probability range, it is determined that the target user's garbage delivery behavior is reasonable. The preset probability range can be set by the user or defaulted by the system.
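The judgment in step 103 can be sketched as follows; `model` stands in for the unspecified preset neural network model (any callable returning a behavior probability value), and the preset probability range is an assumed example.

```python
def judge_delivery(feature_params, model, preset_range=(0.5, 1.0)):
    """Sketch of step 103: run the (assumed) preset model on the
    delivery-behavior characteristic parameters and classify the
    behavior by whether the probability falls in the preset range."""
    probability = model(feature_params)
    low, high = preset_range
    # Inside the preset range -> the delivery behavior is unreasonable.
    return "unreasonable" if low <= probability <= high else "reasonable"
```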
The unreasonable garbage delivery behaviors can be at least one of the following behaviors: waste is unsorted (e.g., dry waste and wet waste are not sorted), and the waste delivery location is incorrect (e.g., not thrown into a trash bin).
104. And when the target delivery behavior does not meet the preset requirement, confirming that the target user garbage delivery behavior is unreasonable.
The preset requirement may be set by the user or defaulted by the system; the preset requirement may be that dry and wet garbage is sorted, that garbage is delivered to a specified location, that no bag of garbage is overweight, that the saturation degree of each bag of garbage is within a certain range, and the like, which is not limited herein. In specific implementation, when the target delivery behavior does not meet the preset requirement, it is confirmed that the target user's garbage delivery behavior is unreasonable.
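The preset-requirement check can be sketched as a conjunction of the example conditions above; the field names, weight limit, and saturation range are illustrative assumptions, not values from the patent.

```python
def meets_preset_requirement(delivery, max_weight=5.0,
                             saturation_range=(0.1, 0.9)):
    """Sketch of the preset-requirement check; field names, the weight
    limit, and the saturation range are illustrative assumptions."""
    return (delivery["dry_wet_sorted"]             # dry/wet garbage sorted
            and delivery["at_specified_location"]  # delivered to the right spot
            and delivery["bag_weight"] <= max_weight
            and saturation_range[0] <= delivery["saturation"]
            <= saturation_range[1])
```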
Optionally, before the step 101, the following steps may be further included:
a1, shooting the target user to obtain a first face image;
a2, carrying out image segmentation on the first face image to obtain a face region image;
a3, extracting the features of the face region image to obtain a feature point set;
a4, inputting the feature point set into a preset artificial neural network model to obtain a target value;
a5, when the target value is in a preset range, confirming that the target user is a child;
a6, when the target user is a child, executing the step of obtaining the video sequence when the target user delivers the garbage.
The preset artificial neural network model and the preset range can be set by the user or defaulted by the system. In a specific implementation, the target user can be photographed to obtain a first face image; the first face image contains not only the face but possibly also a background region, so image segmentation can be performed on the first face image to obtain a face region image. Feature extraction is performed on the face region image to obtain a feature point set, and the feature point set is input into the preset artificial neural network model to obtain a target value, which is a probability value. When the target value is within the preset range, the target user is confirmed to be a child, and when the target user is a child, step 101 is executed. In this way, the garbage delivery behavior of children can be detected, and children's garbage delivery behavior can be standardized.
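The A1-A6 pipeline can be sketched as follows; `segment`, `extract`, and `model` stand in for the unspecified segmentation, feature-extraction, and preset artificial neural network steps, and the preset range is an assumed example.

```python
def is_child(face_image, segment, extract, model, preset_range=(0.0, 0.3)):
    """Pipeline sketch of steps A1-A6 under assumed helper callables."""
    face_region = segment(face_image)      # drop the background region
    feature_points = extract(face_region)  # feature point set
    target_value = model(feature_points)   # probability-like target value
    low, high = preset_range
    return low <= target_value <= high
```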
Optionally, in the step a1, the capturing the target user to obtain the first face image may include the following steps:
a11, detecting the target distance and the walking speed between the target user and the camera;
a12, when the target distance is smaller than the preset distance, determining the shooting time according to the walking speed;
a13, estimating a shooting position according to the shooting time;
a14, pre-estimating shooting parameters between the target user and the camera according to the shooting position to obtain target shooting parameters;
and A15, shooting according to the target shooting parameters to obtain the first face image.
In the embodiment of the present application, the shooting parameters may be at least one of the following: focal length, exposure duration, aperture size, screen fill-light parameters, camera angle parameters, ISO, and the like, which are not limited herein. The screen fill-light parameter may be at least one of the following: screen brightness, screen wallpaper, screen color temperature, screen light-emitting area, and the like, without limitation. The camera angle parameter may be at least one of the following: camera rotation angle, camera focusing angle, camera shooting range angle (e.g., wide-angle mode or non-wide-angle mode), camera rotation direction, camera rotation speed, and the like, which are not limited herein. The preset distance can be set by the user or defaulted by the system.
In a specific implementation, the target distance and the walking speed between the target user and the camera can be detected through a dual camera or a ranging sensor, and a mapping relationship between speed and shooting moment can be stored in advance. Further, when the target distance is less than the preset distance, the shooting moment corresponding to the walking speed is determined according to that mapping relationship. After the shooting moment is determined, the position the user will likely have walked to when the shooting moment arrives can be predicted, i.e., the shooting position is estimated. Different positions imply different ambient light and different angles, so the shooting parameters between the target user and the camera corresponding to the shooting position can be determined to obtain the target shooting parameters: specifically, the target environment parameters corresponding to the shooting position are obtained, and the target shooting parameters corresponding to the target environment parameters are determined according to a mapping relationship between environment parameters and shooting parameters. Shooting is then performed according to the target shooting parameters to obtain the first face image.
Based on the above embodiments of the present application, children's garbage delivery behavior can be detected. Of course, an alarm reminder can also be issued when a wrong delivery behavior occurs, or the event can be reported to the backend, which pushes guidance prompts to a parent's APP. A positive incentive mechanism can be provided for correct delivery: for example, the parent's APP sets up a dedicated child account, and after a correct delivery record is reported, the environmental-protection credit of the child's account increases; the account is associated with child-oriented products of all kinds, including entertainment and education, or can be linked to a school education tracking system, etc., which is not limited herein.
Optionally, after the step 104, the following steps may be further included:
b1, sending a garbage delivery request to a server, wherein the garbage delivery request carries target position information;
b2, receiving load state information of a plurality of garbage cans corresponding to the target position information sent by the server;
b3, acquiring the delivered garbage amount of the target user;
b4, selecting a target garbage can from the plurality of garbage cans according to the garbage amount;
and B5, generating a navigation route between the target position information and the target garbage can.
The garbage amount can be at least one of the following: garbage volume, garbage weight, garbage type, and the like, which is not limited herein. In a specific implementation, a garbage delivery request carrying the target position information may be sent to the server, and load state information of a plurality of garbage cans near the target position information may then be received from the server. The load state information can be understood as the degree to which a garbage can is filled; for example, it may indicate that a garbage can is full, or that 50% of its space is occupied, and so on. Furthermore, the garbage amount delivered by the target user can be acquired, a garbage can among the plurality of garbage cans capable of containing that garbage amount can be taken as the target garbage can, and a navigation route between the target position information and the target garbage can can be generated, so that the user can conveniently and quickly find a suitable garbage can.
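A minimal sketch of the capacity check implied above, assuming each garbage can reports its capacity and current load in the same units (all field names and numbers are illustrative, not from the disclosure):

```python
# Illustrative sketch: pick a bin whose remaining capacity can hold the
# delivered garbage volume. Field names and values are assumptions.

def pick_bin(bins, garbage_volume):
    """bins: list of dicts with 'id', 'capacity' and 'load' (same units).
    Returns the id of the first bin with enough free space, else None."""
    for b in bins:
        if b["capacity"] - b["load"] >= garbage_volume:
            return b["id"]
    return None

bins = [
    {"id": "A", "capacity": 100, "load": 95},  # nearly full
    {"id": "B", "capacity": 100, "load": 50},  # half full
]
```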
Further optionally, in the step B4, the selecting the target garbage can from the plurality of garbage cans according to the garbage amount includes:
b41, determining the cleaning time of each garbage can in the plurality of garbage cans to obtain a plurality of cleaning time points;
b42, determining a navigation route between the target position information and each of the plurality of garbage cans, and determining the route corresponding to each navigation route to obtain a plurality of routes;
b43, determining the arrival time point of the target user at each garbage can according to the plurality of routes to obtain a plurality of arrival time points;
b44, estimating target load state information of each of the plurality of garbage cans at the moment the target user reaches it according to the plurality of cleaning time points and the plurality of arrival time points to obtain a plurality of pieces of target load state information;
and B45, selecting, from the plurality of pieces of target load state information, the garbage can which can contain the garbage amount and whose corresponding arrival time point is the closest as the target garbage can.
In a specific implementation, the cleaning time of each of the plurality of garbage cans can be determined to obtain a plurality of cleaning time points. In general, considering that garbage cans fill up, the garbage is cleaned at certain time intervals, and the time point of each cleaning can be understood as a cleaning time point. Further, a navigation route between the target position information and each of the plurality of garbage cans can be determined, and the route corresponding to each navigation route can be determined to obtain a plurality of routes. Since an average estimate of the user's walking speed can be obtained through the user's APP, the arrival time point of the target user at each garbage can can be determined according to the plurality of routes to obtain a plurality of arrival time points. The target load state information of each of the plurality of garbage cans at the moment the target user reaches it can then be estimated according to the plurality of cleaning time points and the plurality of arrival time points to obtain a plurality of pieces of target load state information; that is, it can easily be estimated whether each garbage can will have been cleaned before the user arrives, so that the load state of a garbage can at the moment the user deposits garbage into it is known. Furthermore, the garbage can which can contain the garbage amount and whose corresponding arrival time point is the closest can be selected from the plurality of pieces of target load state information as the target garbage can, so that the user can complete the garbage delivery quickly and conveniently.
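The steps B41 to B45 can be sketched as follows; the schedule fields, the "cleaned means empty" rule, and all numbers are illustrative assumptions, not values from the disclosure:

```python
# Hedged sketch of steps B41-B45: predict each bin's load at the user's
# arrival time, accounting for scheduled cleaning, then pick the soonest
# reachable bin with room. All names and numbers are illustrative.

def predict_load(load_now, cleaning_time, arrival_time):
    """If the bin is cleaned before the user arrives, assume it is empty."""
    return 0.0 if cleaning_time <= arrival_time else load_now

def choose_bin(bins, garbage_volume):
    """bins: list of dicts with 'id', 'capacity', 'load',
    'cleaning_time' and 'arrival_time' (seconds from now)."""
    candidates = []
    for b in bins:
        load_on_arrival = predict_load(b["load"], b["cleaning_time"], b["arrival_time"])
        if b["capacity"] - load_on_arrival >= garbage_volume:
            candidates.append(b)
    if not candidates:
        return None
    # among bins with room, prefer the closest arrival time point
    return min(candidates, key=lambda b: b["arrival_time"])["id"]
```

A full bin that will be cleaned before the user arrives can thus still be chosen, which is the point of estimating the load at arrival rather than now.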
Optionally, after the step 104, the following steps may be further included:
c1, photographing the garbage to be delivered to obtain a target garbage image;
c2, carrying out image segmentation on the target garbage image to obtain a garbage area image;
c3, analyzing the garbage area image to obtain target garbage characteristic parameters;
c4, determining target control parameters corresponding to the target garbage characteristic parameters according to the mapping relation between preset garbage characteristic parameters and control parameters of the garbage degradation equipment;
c5, sending the target control parameters to a server, and guiding the garbage degradation equipment to process the garbage to be delivered according to the target control parameters by the server.
The mapping relation between preset garbage characteristic parameters and control parameters of the garbage degradation equipment can be prestored. The garbage characteristic parameters can be at least one of the following: garbage type, garbage volume, and the like, which is not limited herein, and the garbage type can be dry garbage or wet garbage. The control parameters of the garbage degradation equipment can be at least one of the following: degradation agent type, degradation agent amount, degradation temperature, degradation power, degradation stirring speed, degradation mode, and the like, which is not limited herein. In a specific implementation, the garbage to be delivered is photographed to obtain a target garbage image, image segmentation is performed on the target garbage image to obtain a garbage area image, and the garbage area image is analyzed, where the analysis can be at least one of the following: feature point extraction, substance identification, and the like, finally obtaining target garbage characteristic parameters. Target control parameters corresponding to the target garbage characteristic parameters are determined according to the mapping relation between the preset garbage characteristic parameters and the control parameters of the garbage degradation equipment, and the target control parameters are sent to the server, which guides the garbage degradation equipment to process the garbage to be delivered according to the target control parameters, so that the garbage can be processed efficiently.
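For illustration, the prestored mapping relation can be modeled as a lookup table keyed by the analysed garbage features; every entry below is an assumption introduced for demonstration, not an actual control setting from the disclosure:

```python
# Illustrative mapping from garbage characteristic parameters to degradation
# equipment control parameters. Table entries are assumptions only.

CONTROL_TABLE = {
    ("wet", "small"): {"agent": "enzyme-A", "temp_c": 45, "stir_rpm": 30},
    ("wet", "large"): {"agent": "enzyme-A", "temp_c": 55, "stir_rpm": 60},
    ("dry", "small"): {"agent": "none",     "temp_c": 25, "stir_rpm": 10},
    ("dry", "large"): {"agent": "none",     "temp_c": 25, "stir_rpm": 20},
}

def control_params(garbage_type, volume_class):
    """Look up target control parameters for the analysed garbage features;
    returns None when the feature combination has no prestored entry."""
    return CONTROL_TABLE.get((garbage_type, volume_class))
```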
It can be seen that the garbage delivery detection method and the related product described in the embodiments of the present application acquire a video sequence of a target user when delivering garbage, perform behavior extraction on the video sequence to obtain target delivery behavior characteristic parameters, judge the target delivery behavior characteristic parameters, and confirm that the garbage delivery behavior of the target user is unreasonable when the target delivery behavior does not meet preset requirements, so that the garbage delivery behavior can be detected.
Consistent with the above, fig. 2 is a schematic flow chart of a garbage delivery detection method disclosed in the embodiments of the present application. The garbage delivery detection method comprises the following steps 201 to 209.
201. Shooting a target user to obtain a first face image;
202. carrying out image segmentation on the first face image to obtain a face region image;
203. extracting the features of the face region image to obtain a feature point set;
204. inputting the feature point set into a preset artificial neural network model to obtain a target value;
205. when the target value is in a preset range, confirming that the target user is a child;
206. when the target user is a child, acquiring a video sequence of the target user when delivering the garbage;
207. behavior extraction is carried out on the video sequence to obtain target delivery behavior characteristic parameters;
208. judging the characteristic parameters of the target delivery behaviors;
209. when the target delivery behavior does not meet the preset requirement, confirming that the garbage delivery behavior of the target user is unreasonable.
The detailed description of the steps 201 to 209 may refer to the corresponding description of the garbage delivery detection method described in fig. 1A, and is not repeated herein.
The method for detecting the garbage delivery described in the embodiment of the application includes the steps of shooting a target user to obtain a first face image, carrying out image segmentation on the first face image to obtain a face area image, carrying out feature extraction on the face area image to obtain a feature point set, inputting the feature point set into a preset artificial neural network model to obtain a target value, confirming that the target user is a child when the target value is within a preset range, obtaining a video sequence of the target user when the target user delivers garbage when the target user is the child, carrying out behavior extraction on the video sequence to obtain target delivery behavior feature parameters, judging the target delivery behavior feature parameters, and confirming that the garbage delivery behavior of the target user is unreasonable when the target delivery behavior does not meet preset requirements, so that the garbage delivery behavior can be detected.
Referring to fig. 3, fig. 3 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application, and as shown in the drawing, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing the following steps:
acquiring a video sequence of a target user when delivering garbage;
behavior extraction is carried out on the video sequence to obtain target delivery behavior characteristic parameters;
judging the characteristic parameters of the target delivery behaviors;
and when the target delivery behavior does not meet the preset requirement, confirming that the target user garbage delivery behavior is unreasonable.
It can be seen that, the electronic device described in the embodiment of the present application obtains a video sequence when a target user delivers garbage, performs behavior extraction on the video sequence to obtain target delivery behavior characteristic parameters, determines the target delivery behavior characteristic parameters, and confirms that the target user has an unreasonable garbage delivery behavior when the target delivery behavior does not meet preset requirements, so that detection of the garbage delivery behavior can be achieved.
In one possible example, in the performing behavior extraction on the video sequence to obtain target delivery behavior feature parameters, the program includes instructions for performing the following steps:
analyzing the video sequence to obtain a plurality of video images;
determining the number of the garbage bags carried by the target user according to the plurality of video images to obtain N garbage bags, wherein N is a positive integer;
performing target segmentation on the plurality of video images to obtain a plurality of garbage bag images;
dividing the garbage bag images into N types to obtain N types of garbage bag images, wherein each type of garbage bag image corresponds to a garbage bag;
extracting the characteristics of the N types of garbage bag images to obtain N characteristic sets, wherein each garbage bag image corresponds to one characteristic set;
and screening and integrating the N characteristic sets to obtain target delivery behavior characteristic parameters.
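The extraction steps above (parse into images, count bags, segment, group into N classes, extract per-bag features, then screen and integrate) can be sketched as a pipeline skeleton; the detector, segmenter, and feature functions are placeholders supplied by the caller, not the patent's models:

```python
# Skeleton of the behaviour-extraction pipeline. The helper callables are
# hypothetical stand-ins for the detection/segmentation models.

def extract_delivery_features(video_sequence, detect_bags, segment_bag, bag_features):
    frames = list(video_sequence)  # parse the sequence into video images
    # N = the number of garbage bags carried (max detections over frames)
    n = max((len(detect_bags(f)) for f in frames), default=0)
    # one crop per (frame, bag index): target segmentation
    crops = [segment_bag(f, i) for f in frames for i in range(n)]
    # group crops into N classes, one class per bag index (stride-n slices)
    classes = [crops[i::n] for i in range(n)] if n else []
    # extract a feature set per class (i.e., per bag)
    feature_sets = [[bag_features(c) for c in cls] for cls in classes]
    # screening/integration: keep one representative feature per bag
    return [fs[0] for fs in feature_sets if fs]
```

This keeps only the first feature per bag as the "screening and integrating" step; the disclosure does not specify that rule, so it is an editorial assumption.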
In one possible example, in the obtaining of the video sequence when the target user delivers garbage, the program includes instructions for:
acquiring a target face image of the target user;
acquiring target environment parameters corresponding to the target face image;
determining a target matching threshold corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the matching threshold;
extracting the contour of the target face image to obtain a first contour;
extracting feature points of the target face image to obtain a first feature point set;
acquiring the brightness of a target environment;
determining a target weight distribution factor corresponding to the target ambient brightness according to a preset mapping relation between the ambient brightness and the weight distribution factor, wherein the target weight distribution factor comprises a target contour weight factor and a target characteristic point weight factor;
determining a contour matching threshold value according to the target contour weight factor and the target matching threshold value;
determining a feature point matching threshold according to the target feature point weight factor and the target matching threshold;
acquiring a second contour and a second feature point set corresponding to a preset face template, wherein the preset face template is any one face template in a preset database;
matching the first contour with the second contour to obtain a first matching value;
matching the first characteristic point set and the second characteristic point set to obtain a second matching value;
when the first matching value is larger than the contour matching threshold and the second matching value is larger than the feature point matching threshold, determining a target matching value according to the first matching value, the second matching value and the target weight distribution factor;
and when the target matching value is greater than the target matching threshold, acquiring a video sequence of garbage delivery within a preset time period corresponding to the preset face template.
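The weighted two-stage matching decision above can be sketched as follows. The disclosure does not fix how the sub-thresholds are derived from the weight factors, so multiplying the target threshold by each weight, and assuming the two weights sum to one, are editorial assumptions:

```python
# Hedged sketch of the contour / feature-point matching decision.
# Scores, thresholds, and weights are illustrative placeholders.

def accept_match(contour_score, feature_score,
                 target_threshold, contour_weight, feature_weight):
    """Two-stage test: each score must clear its weighted sub-threshold,
    then the weight-fused score must clear the target matching threshold."""
    contour_threshold = target_threshold * contour_weight
    feature_threshold = target_threshold * feature_weight
    if contour_score <= contour_threshold or feature_score <= feature_threshold:
        return False
    fused = contour_score * contour_weight + feature_score * feature_weight
    return fused > target_threshold
```

Under this scheme, bright scenes could weight the feature-point score more heavily and dim scenes the contour score, matching the ambient-brightness mapping described above.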
In one possible example, the program further includes instructions for performing the steps of:
shooting the target user to obtain a first face image;
carrying out image segmentation on the first face image to obtain a face region image;
extracting the features of the face region image to obtain a feature point set;
inputting the feature point set into a preset artificial neural network model to obtain a target value;
when the target value is in a preset range, confirming that the target user is a child;
and when the target user is a child, executing the step of acquiring the video sequence of the target user when delivering the garbage.
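As a hedged sketch, the child determination above can be modeled as a range test on the neural network output, gating whether the delivery video sequence is acquired at all; the range endpoints and the age interpretation are assumptions, not values from the disclosure:

```python
# Illustrative age gate: the model's "target value" is compared against a
# preset range to decide whether child-specific monitoring runs.
# The endpoints below are assumptions for demonstration.

CHILD_RANGE = (0.0, 12.0)  # e.g. if the model regresses an apparent age

def is_child(target_value, value_range=CHILD_RANGE):
    low, high = value_range
    return low <= target_value <= high

def maybe_monitor(target_value, acquire_video):
    """Only acquire the delivery video sequence when the user is a child."""
    return acquire_video() if is_child(target_value) else None
```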
In one possible example, in the capturing of the target user to obtain the first face image, the program includes instructions for:
detecting a target distance and a walking speed between the target user and a camera;
when the target distance is smaller than a preset distance, determining a shooting moment according to the walking speed;
estimating a shooting position according to the shooting time;
estimating shooting parameters between the target user and the camera according to the shooting position to obtain target shooting parameters;
shooting according to the target shooting parameters to obtain the first face image.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 4A, fig. 4A is a schematic structural diagram of a garbage delivery detection device disclosed in the embodiment of the present application, where the garbage delivery detection device 400 includes: an obtaining unit 401, an extracting unit 402, a judging unit 403, and a determining unit 404, wherein,
an obtaining unit 401, configured to obtain a video sequence when a target user delivers spam;
an extracting unit 402, configured to perform behavior extraction on the video sequence to obtain target delivery behavior feature parameters;
a judging unit 403, configured to judge the target delivery behavior characteristic parameter;
a determining unit 404, configured to determine that the target user spam delivery behavior is unreasonable when the target delivery behavior does not meet preset requirements.
It can be seen that the garbage delivery detection device described in the embodiment of the present application obtains a video sequence when a target user delivers garbage, performs behavior extraction on the video sequence to obtain target delivery behavior characteristic parameters, determines the target delivery behavior characteristic parameters, and determines that the target user has an unreasonable garbage delivery behavior when the target delivery behavior does not meet preset requirements, so that detection of the garbage delivery behavior can be achieved.
In a possible example, in the aspect of performing behavior extraction on the video sequence to obtain target delivery behavior feature parameters, the extraction unit 402 is specifically configured to:
analyzing the video sequence to obtain a plurality of video images;
determining the number of the garbage bags carried by the target user according to the plurality of video images to obtain N garbage bags, wherein N is a positive integer;
performing target segmentation on the plurality of video images to obtain a plurality of garbage bag images;
dividing the garbage bag images into N types to obtain N types of garbage bag images, wherein each type of garbage bag image corresponds to a garbage bag;
extracting the characteristics of the N types of garbage bag images to obtain N characteristic sets, wherein each garbage bag image corresponds to one characteristic set;
and screening and integrating the N characteristic sets to obtain target delivery behavior characteristic parameters.
In one possible example, in terms of the acquiring the video sequence when the target user delivers garbage, the acquiring unit 401 is specifically configured to:
acquiring a target face image of the target user;
acquiring target environment parameters corresponding to the target face image;
determining a target matching threshold corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the matching threshold;
extracting the contour of the target face image to obtain a first contour;
extracting feature points of the target face image to obtain a first feature point set;
acquiring the brightness of a target environment;
determining a target weight distribution factor corresponding to the target ambient brightness according to a preset mapping relation between the ambient brightness and the weight distribution factor, wherein the target weight distribution factor comprises a target contour weight factor and a target characteristic point weight factor;
determining a contour matching threshold value according to the target contour weight factor and the target matching threshold value;
determining a feature point matching threshold according to the target feature point weight factor and the target matching threshold;
acquiring a second contour and a second feature point set corresponding to a preset face template, wherein the preset face template is any one face template in a preset database;
matching the first contour with the second contour to obtain a first matching value;
matching the first characteristic point set and the second characteristic point set to obtain a second matching value;
when the first matching value is larger than the contour matching threshold and the second matching value is larger than the feature point matching threshold, determining a target matching value according to the first matching value, the second matching value and the target weight distribution factor;
and when the target matching value is greater than the target matching threshold, acquiring a video sequence of garbage delivery within a preset time period corresponding to the preset face template.
In one possible example, as shown in fig. 4B, fig. 4B is a modified structure of the garbage delivery detection device depicted in fig. 4A, which, compared with fig. 4A, may further include: a shooting unit 405, a segmentation unit 406, and an input unit 407, specifically as follows:
the shooting unit 405 is used for shooting the target user to obtain a first face image;
the segmentation unit 406 is used for performing image segmentation on the first face image to obtain a face region image;
the extracting unit 402 is configured to perform feature extraction on the face region image to obtain a feature point set;
the input unit 407 is used for inputting the feature point set into a preset artificial neural network model to obtain a target value;
the determining unit 404 is configured to determine that the target user is a child when the target value is within a preset range;
when the target user is a child, the obtaining unit 401 executes the step of obtaining the video sequence when the target user delivers the garbage.
In one possible example, in the aspect of capturing the target user to obtain the first face image, the capturing unit 405 is specifically configured to:
detecting a target distance and a walking speed between the target user and a camera;
when the target distance is smaller than a preset distance, determining a shooting moment according to the walking speed;
estimating a shooting position according to the shooting time;
estimating shooting parameters between the target user and the camera according to the shooting position to obtain target shooting parameters;
shooting according to the target shooting parameters to obtain the first face image.
It should be noted that the electronic device described in the embodiments of the present application is presented in the form of functional units. The term "unit" as used herein is to be understood in its broadest possible sense, and the objects used to implement the functions described by each "unit" may be, for example, an application-specific integrated circuit (ASIC), a discrete circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Embodiments of the present application also provide a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the garbage delivery detection methods as recited in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the garbage delivery detection methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes: various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash disk, ROM, RAM, magnetic or optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A garbage delivery detection method, the method comprising:
acquiring a video sequence of a target user when delivering garbage;
behavior extraction is carried out on the video sequence to obtain target delivery behavior characteristic parameters;
judging the characteristic parameters of the target delivery behaviors;
when the target delivery behavior does not meet the preset requirement, confirming that the target user garbage delivery behavior is unreasonable;
sending a garbage delivery request to a server, wherein the garbage delivery request carries target position information;
receiving load state information of a plurality of garbage cans corresponding to the target position information sent by the server;
acquiring the garbage amount delivered by the target user;
selecting a target garbage can from the plurality of garbage cans according to the garbage amount;
generating a navigation route between the target location information and the target trash can;
wherein the selecting a target garbage can from the plurality of garbage cans according to the garbage amount comprises:
determining the cleaning time of each garbage can in the plurality of garbage cans to obtain a plurality of cleaning time points;
determining a navigation route between the target position information and each of the plurality of garbage cans, and determining the route corresponding to each navigation route to obtain a plurality of routes;
determining the arrival time point of the target user at each garbage can according to the plurality of routes to obtain a plurality of arrival time points;
estimating target load state information of each of the plurality of garbage cans at the moment the target user reaches it according to the plurality of cleaning time points and the plurality of arrival time points to obtain a plurality of pieces of target load state information;
and selecting, from the plurality of pieces of target load state information, the garbage can which can contain the garbage amount and whose corresponding arrival time point is the closest as the target garbage can.
2. The method of claim 1, wherein the performing behavior extraction on the video sequence to obtain target delivery behavior feature parameters comprises:
analyzing the video sequence to obtain a plurality of video images;
determining the number of the garbage bags carried by the target user according to the plurality of video images to obtain N garbage bags, wherein N is a positive integer;
performing target segmentation on the plurality of video images to obtain a plurality of garbage bag images;
dividing the garbage bag images into N types to obtain N types of garbage bag images, wherein each type of garbage bag image corresponds to a garbage bag;
extracting the characteristics of the N types of garbage bag images to obtain N characteristic sets, wherein each garbage bag image corresponds to one characteristic set;
and screening and integrating the N characteristic sets to obtain target delivery behavior characteristic parameters.
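The dividing-into-N-types and feature-extraction steps above can be read, under assumptions, as grouping per-frame garbage-bag feature vectors into N groups (one per physical bag) and pooling each group into one feature set. A minimal sketch follows; the k-means grouping and mean-pooling are illustrative choices, not taken from the claim.

```python
import numpy as np

def group_bag_images(bag_features, n_bags):
    """Divide per-frame garbage-bag feature vectors into n_bags groups
    with a crude k-means, then pool each group into one feature set
    by averaging its members."""
    feats = np.asarray(bag_features, dtype=float)
    # crude initialisation: the first n_bags detections as centroids
    centroids = feats[:n_bags].copy()
    for _ in range(10):
        # assign each detection to its nearest centroid
        labels = np.argmin(
            ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        for k in range(n_bags):
            if (labels == k).any():
                centroids[k] = feats[labels == k].mean(axis=0)
    # one pooled feature set per bag
    return [feats[labels == k].mean(axis=0) for k in range(n_bags)]
```

Given detections that form two well-separated clusters, the function returns two pooled feature sets near the cluster centres.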
3. The method according to claim 1 or 2, wherein the obtaining of the video sequence of the target user when delivering the garbage comprises:
acquiring a target face image of the target user;
acquiring target environment parameters corresponding to the target face image;
determining a target matching threshold corresponding to the target environment parameter according to a preset mapping relation between environment parameters and matching thresholds;
extracting the contour of the target face image to obtain a first contour;
extracting feature points of the target face image to obtain a first feature point set;
acquiring a target ambient brightness;
determining a target weight distribution factor corresponding to the target ambient brightness according to a preset mapping relation between ambient brightness and weight distribution factors, wherein the target weight distribution factor comprises a target contour weight factor and a target feature point weight factor;
determining a contour matching threshold value according to the target contour weight factor and the target matching threshold value;
determining a feature point matching threshold according to the target feature point weight factor and the target matching threshold;
acquiring a second contour and a second feature point set corresponding to a preset face template, wherein the preset face template is any one face template in a preset database;
matching the first contour with the second contour to obtain a first matching value;
matching the first feature point set with the second feature point set to obtain a second matching value;
when the first matching value is larger than the contour matching threshold and the second matching value is larger than the feature point matching threshold, determining a target matching value according to the first matching value, the second matching value and the target weight distribution factor;
and when the target matching value is larger than the target matching threshold, acquiring a video sequence of garbage delivery within a preset time period corresponding to the preset face template.
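The threshold logic of the steps above can be sketched as below. Note that the weighted-sum combination of the two matching values is one plausible reading of "determining a target matching value according to the first matching value, the second matching value and the target weight distribution factor", not the claimed formula itself.

```python
def face_match(contour_score, point_score, env_threshold, w_contour, w_point):
    """Brightness-dependent weighted face match, following the claimed
    steps: each score must clear its own weighted share of the overall
    threshold, and the combined (target) matching value must clear the
    overall threshold."""
    contour_thr = w_contour * env_threshold   # contour matching threshold
    point_thr = w_point * env_threshold       # feature point matching threshold
    if contour_score <= contour_thr or point_score <= point_thr:
        return False
    # target matching value: weighted combination of the two scores
    target = w_contour * contour_score + w_point * point_score
    return target > env_threshold
```

A strong contour score cannot compensate for a feature-point score below its own threshold: both per-modality gates must pass before the combined value is even computed.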
4. The method according to claim 1 or 2, characterized in that the method further comprises:
shooting the target user to obtain a first face image;
carrying out image segmentation on the first face image to obtain a face region image;
extracting the features of the face region image to obtain a feature point set;
inputting the feature point set into a preset artificial neural network model to obtain a target value;
when the target value is in a preset range, confirming that the target user is a child;
and when the target user is a child, executing the step of acquiring the video sequence of the target user when delivering the garbage.
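A toy stand-in for the "preset artificial neural network model" and range check above; the one-hidden-layer network, its weights, and the child range are illustrative assumptions, not the trained model of the claim.

```python
import numpy as np

def mlp_age_score(features, w1, b1, w2, b2):
    """Tiny feed-forward pass standing in for the preset neural network:
    one hidden layer with ReLU, one linear output (the 'target value')."""
    h = np.maximum(0.0, np.asarray(features, dtype=float) @ w1 + b1)
    return float(h @ w2 + b2)

def is_child(score, child_range=(0.0, 12.0)):
    """Confirm the target user is a child when the target value falls
    in the preset range."""
    lo, hi = child_range
    return lo <= score <= hi
```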
5. The method according to claim 1 or 2, wherein the shooting the target user to obtain a first face image comprises:
detecting a target distance and a walking speed between the target user and a camera;
when the target distance is smaller than a preset distance, determining a shooting moment according to the walking speed;
estimating a shooting position according to the shooting moment;
estimating shooting parameters between the target user and the camera according to the shooting position to obtain target shooting parameters;
shooting according to the target shooting parameters to obtain the first face image.
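The timing estimate in claim 5 can be sketched as follows; the trigger distance, the ideal shot distance, and the toy focus formula are assumptions introduced for illustration.

```python
def plan_shot(distance_m, speed_m_s, trigger_distance_m=3.0, shot_distance_m=1.5):
    """Once the user is inside the trigger distance, estimate when they
    will reach the ideal shot distance (the shooting moment), where they
    will be then (the shooting position), and a toy shooting parameter."""
    if distance_m >= trigger_distance_m or speed_m_s <= 0:
        return None  # too far away, or not approaching
    delay_s = max(0.0, (distance_m - shot_distance_m) / speed_m_s)
    position_m = shot_distance_m       # distance from camera at the shot
    focus = 1.0 / position_m           # toy focus parameter ~ 1 / subject distance
    return {"delay_s": delay_s, "position_m": position_m, "focus": focus}
```

For a user 2.5 m away walking at 0.5 m/s, the shot is scheduled 2 s ahead, at the 1.5 m mark.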
6. A waste delivery detection device, the device comprising:
the acquisition unit is used for acquiring a video sequence when a target user delivers garbage;
the extraction unit is used for carrying out behavior extraction on the video sequence to obtain target delivery behavior characteristic parameters;
the judging unit is used for judging the target delivery behavior characteristic parameters;
the determining unit is used for determining that the target user garbage delivery behavior is unreasonable when the target delivery behavior does not meet the preset requirement;
sending a garbage delivery request to a server, wherein the garbage delivery request carries target position information;
receiving load state information of a plurality of garbage cans corresponding to the target position information sent by the server;
acquiring the garbage amount delivered by the target user;
selecting a target garbage can from the plurality of garbage cans according to the garbage amount;
generating a navigation route between the target location information and the target trash can;
wherein the selecting a target garbage can from the plurality of garbage cans according to the garbage amount comprises:
determining the cleaning time of each garbage can in the plurality of garbage cans to obtain a plurality of cleaning time points;
determining a navigation route between the target position information and each garbage can in the plurality of garbage cans, and determining the corresponding route of each navigation route to obtain a plurality of routes;
determining the arrival time point of the target user at each garbage can according to the plurality of routes to obtain a plurality of arrival time points;
estimating the target load state information of each garbage can in the plurality of garbage cans at the time the target user arrives, according to the plurality of cleaning time points and the plurality of arrival time points, to obtain a plurality of target load state information;
and selecting, as the target garbage can, the garbage can which can accommodate the garbage amount and whose target load state information corresponds to the closest arrival time point among the plurality of target load state information.
7. The apparatus according to claim 6, wherein, in the performing behavior extraction on the video sequence to obtain target delivery behavior feature parameters, the extraction unit is specifically configured to:
analyzing the video sequence to obtain a plurality of video images;
determining the number of the garbage bags carried by the target user according to the plurality of video images to obtain N garbage bags, wherein N is a positive integer;
performing target segmentation on the plurality of video images to obtain a plurality of garbage bag images;
dividing the garbage bag images into N types to obtain N types of garbage bag images, wherein each type of garbage bag image corresponds to a garbage bag;
extracting the characteristics of the N types of garbage bag images to obtain N characteristic sets, wherein each garbage bag image corresponds to one characteristic set;
and screening and integrating the N characteristic sets to obtain target delivery behavior characteristic parameters.
8. The apparatus according to claim 6 or 7, wherein, in the obtaining of the video sequence when the target user delivers the garbage, the obtaining unit is specifically configured to:
acquiring a target face image of the target user;
acquiring target environment parameters corresponding to the target face image;
determining a target matching threshold corresponding to the target environment parameter according to a preset mapping relation between environment parameters and matching thresholds;
extracting the contour of the target face image to obtain a first contour;
extracting feature points of the target face image to obtain a first feature point set;
acquiring a target ambient brightness;
determining a target weight distribution factor corresponding to the target ambient brightness according to a preset mapping relation between ambient brightness and weight distribution factors, wherein the target weight distribution factor comprises a target contour weight factor and a target feature point weight factor;
determining a contour matching threshold value according to the target contour weight factor and the target matching threshold value;
determining a feature point matching threshold according to the target feature point weight factor and the target matching threshold;
acquiring a second contour and a second feature point set corresponding to a preset face template, wherein the preset face template is any one face template in a preset database;
matching the first contour with the second contour to obtain a first matching value;
matching the first feature point set with the second feature point set to obtain a second matching value;
when the first matching value is larger than the contour matching threshold and the second matching value is larger than the feature point matching threshold, determining a target matching value according to the first matching value, the second matching value and the target weight distribution factor;
and when the target matching value is larger than the target matching threshold, acquiring a video sequence of garbage delivery within a preset time period corresponding to the preset face template.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-5.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN201910290934.6A 2019-04-11 2019-04-11 Garbage delivery detection method and related product Active CN111814517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910290934.6A CN111814517B (en) 2019-04-11 2019-04-11 Garbage delivery detection method and related product


Publications (2)

Publication Number Publication Date
CN111814517A CN111814517A (en) 2020-10-23
CN111814517B true CN111814517B (en) 2021-10-22

Family

ID=72844191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910290934.6A Active CN111814517B (en) 2019-04-11 2019-04-11 Garbage delivery detection method and related product

Country Status (1)

Country Link
CN (1) CN111814517B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114640606A (en) * 2020-12-01 2022-06-17 中移物联网有限公司 Abnormity processing method and controller for Internet of things card terminal
JP7107401B1 (en) 2021-02-22 2022-07-27 日本電気株式会社 Object measuring device, object measuring method and computer program
CN113111769A (en) * 2021-04-09 2021-07-13 平安国际智慧城市科技股份有限公司 Garbage illegal putting behavior monitoring method and device and computer equipment
CN117292207B (en) * 2023-11-24 2024-03-15 杭州臻善信息技术有限公司 Garbage identification method and system based on big data image processing

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI511056B (en) * 2011-09-20 2015-12-01 Altek Corp Feature data compression apparatus, multi-directional face detection system and detection method thereof
CN106904385B (en) * 2017-04-21 2019-04-12 杭州轻松互连科技发展有限公司 Garbage classification canonical system and system
CN109151375B (en) * 2017-06-16 2020-07-24 杭州海康威视数字技术股份有限公司 Target object snapshot method and device and video monitoring equipment
CN107640480A (en) * 2017-10-19 2018-01-30 广东拜登网络技术有限公司 The method and apparatus and storage medium and terminal device of refuse classification retrospect
CN107914996A (en) * 2017-12-19 2018-04-17 北京星锐智能科技有限公司 Separate waste collection system
CN108706246A (en) * 2018-05-31 2018-10-26 深圳市零度智控科技有限公司 Intelligent refuse classification reclaimer, control method, device and storage medium
CN109146498A (en) * 2018-09-04 2019-01-04 深圳市宇墨科技有限公司 Face method of payment and relevant apparatus
CN108861238A (en) * 2018-09-14 2018-11-23 金威建设集团有限公司 A kind of dustbin with identification garbage classification function



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant