CN110040394A - A kind of interactive intelligent rubbish robot and its implementation - Google Patents
Info
- Publication number
- CN110040394A (application CN201910283134.1A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- image
- target gesture
- initial
- chip microcomputer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 239000010813 municipal solid waste Substances 0.000 title claims abstract description 48
- 230000002452 interceptive effect Effects 0.000 title claims abstract description 21
- 238000000034 method Methods 0.000 claims abstract description 42
- 238000000605 extraction Methods 0.000 claims abstract description 29
- 238000007781 pre-processing Methods 0.000 claims abstract description 10
- 238000001514 detection method Methods 0.000 claims description 20
- 230000006698 induction Effects 0.000 claims description 14
- 238000001914 filtration Methods 0.000 claims description 9
- 230000011218 segmentation Effects 0.000 claims description 9
- 239000011159 matrix material Substances 0.000 claims description 7
- 238000013139 quantization Methods 0.000 claims description 7
- 230000007306 turnover Effects 0.000 claims description 7
- 230000008859 change Effects 0.000 claims description 5
- 238000010586 diagram Methods 0.000 claims description 5
- 238000003708 edge detection Methods 0.000 claims description 5
- 230000001960 triggered effect Effects 0.000 claims description 3
- 238000004364 calculation method Methods 0.000 description 3
- 238000009434 installation Methods 0.000 description 3
- 230000033001 locomotion Effects 0.000 description 3
- 230000003993 interaction Effects 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 230000003116 impacting effect Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65F—GATHERING OR REMOVAL OF DOMESTIC OR LIKE REFUSE
- B65F1/00—Refuse receptacles; Accessories therefor
- B65F1/14—Other constructional features; Accessories
- B65F1/1468—Means for facilitating the transport of the receptacle, e.g. wheels, rolls
- B65F1/1473—Receptacles having wheels
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65F—GATHERING OR REMOVAL OF DOMESTIC OR LIKE REFUSE
- B65F1/00—Refuse receptacles; Accessories therefor
- B65F1/14—Other constructional features; Accessories
- B65F1/16—Lids or covers
- B65F1/1623—Lids or covers with means for assisting the opening or closing thereof, e.g. springs
- B65F1/1638—Electromechanically operated lids
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65F—GATHERING OR REMOVAL OF DOMESTIC OR LIKE REFUSE
- B65F2210/00—Equipment of refuse receptacles
- B65F2210/168—Sensing means
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mechanical Engineering (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Psychiatry (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Manipulator (AREA)
Abstract
The invention discloses an interactive intelligent garbage robot and an implementation method thereof. The method includes: acquiring an initial image through a camera; performing image preprocessing on the initial image to obtain a target gesture image; performing feature extraction on the target gesture image to obtain gesture features; performing gesture recognition on the gesture features based on a preset gesture recognition model to obtain a user gesture instruction; according to the user gesture instruction, sending a first control signal to the driving device and a second control signal to the steering engine through the single chip microcomputer; controlling the driving wheels to work through the driving device according to the first control signal; and driving the barrel cover to turn over through the steering engine according to the second control signal. The robot body of the invention is movable and highly intelligent, greatly reduces the workload of sanitation workers, improves work efficiency, and can be widely applied in the technical field of intelligent equipment.
Description
Technical Field
The invention relates to the technical field of intelligent equipment, in particular to an interactive intelligent garbage robot and an implementation method thereof.
Background
In order to maintain environmental sanitation outside a train station and in its waiting hall, a number of trash cans are generally placed on the station's front square and in the waiting hall. Every holiday the passenger flow of the railway station increases sharply, and the amount of garbage increases accordingly. Sanitation personnel therefore need to clean up trash on the ground and on the seats frequently to prevent it from affecting the passenger experience. However, because the front square and the waiting hall of a train station cover large areas, the trash cans are also spaced far apart. As a result, people are less inclined to carry their garbage to a trash can, more garbage is thrown on the floor or on the seats without much thought, picking it up is time-consuming and labor-intensive, the working efficiency of sanitation personnel is low, and other sanitation work is crowded out.
Traditional trash cans can no longer meet people's needs today. It has therefore become a market trend to design an intelligent trash can that can move and provide an interactive experience.
Disclosure of Invention
In view of this, the embodiment of the present invention provides an interactive intelligent garbage robot with a high degree of intelligence and an implementation method thereof, so as to reduce the workload of sanitation workers.
On one hand, the embodiment of the invention provides an interactive intelligent garbage robot which comprises a robot body, wherein the robot body comprises a shell, a barrel body and a barrel cover, a driving chassis is arranged at the bottom of the shell, a single chip microcomputer and a battery module are installed in the driving chassis, a driving wheel and a driving device are arranged at the lower part of the driving chassis, a camera is arranged on the shell, and a steering engine is arranged on the barrel cover;
the camera is used for shooting a gesture image signal of a user and sending the shot gesture image signal to the single chip microcomputer;
the single chip microcomputer is used for sending a triggered control signal to the driving device and the steering engine according to the gesture image signal sent by the camera;
the driving device is used for driving the driving wheel to move according to a control signal of the single chip microcomputer;
the steering engine is used for controlling the barrel cover to turn over according to a control signal of the single chip microcomputer;
and the battery module is used for providing a working power supply for the camera, the singlechip, the driving device and the steering engine.
Further, still include: the obstacle avoidance module is used for acquiring an obstacle avoidance detection signal in real time and sending the obstacle avoidance detection signal to the single chip microcomputer;
and/or the positioning system is used for acquiring the position signal of the robot body in real time and sending the position signal to the single chip microcomputer.
Further, still include:
the voice recognition and broadcast system is used for recognizing the voice of the user and broadcasting the voice according to the control signal of the singlechip;
and/or the infrared sensor is used for acquiring infrared induction signals around the robot body in real time and sending the infrared induction signals to the single chip microcomputer.
On the other hand, the embodiment of the invention also provides an implementation method of the interactive intelligent garbage robot, which comprises the following steps:
acquiring an initial image through a camera;
carrying out image preprocessing on the initial image to obtain a target gesture image, wherein the image preprocessing comprises gesture tracking operation and gesture segmentation operation;
extracting features of the target gesture image to obtain gesture features;
performing gesture recognition on the gesture features based on a preset gesture recognition model to obtain a user gesture instruction;
according to the gesture command of the user, a first control signal is sent to the driving device through the single chip microcomputer, and a second control signal is sent to the steering engine through the single chip microcomputer;
controlling the driving wheel to work through the driving device according to the first control signal;
and driving the barrel cover to turn over through the steering engine according to a second control signal.
Further, the step of performing image preprocessing on the initial image to obtain the target gesture image includes the following steps:
carrying out target detection on the initial image by adopting a frame difference method to obtain a target gesture;
tracking the target gesture by adopting a CamShift target tracking algorithm to obtain an initial target gesture graph;
and carrying out segmentation processing on the initial target gesture image to obtain a target gesture image.
Further, the step of tracking the target gesture by using a CamShift target tracking algorithm to obtain an initial target gesture graph includes the following steps:
determining an initial search window of the target gesture;
calculating the zeroth-order moment of the initial search window;
calculating the centroid of the initial search window according to the zeroth-order moment of the initial search window;
dynamically adjusting the initial search window according to the centroid of the initial search window;
and acquiring an initial target gesture graph according to the adjusted search window.
Further, the step of segmenting the initial target gesture image to obtain the target gesture image includes the following steps:
constructing a color space between the initial target gesture graph and the color attribute;
carrying out binarization processing on the initial target gesture image according to the color space to obtain a target gesture image;
the step of constructing a color space between the initial target gesture diagram and the color attribute specifically includes:
calculating the brightness, hue and saturation of the color space;
and constructing a color space between the initial target gesture graph and the color attribute according to the calculated brightness, hue and saturation.
Further, the step of segmenting the initial target gesture image to obtain the target gesture image further includes the following steps:
carrying out first noise processing on the target gesture image by a mean filtering method;
performing second noise processing on the result of the first noise processing by a median filtering method;
detecting the neighborhood of each pixel in the second noise processing result through an edge detection operator, and carrying out quantization processing on the gray change rate of the neighborhood;
and carrying out image contour extraction on the result of the quantization processing through a Laplace edge extraction algorithm to obtain an optimized target gesture image.
Further, the step of extracting the features of the target gesture image to obtain the gesture features comprises the following steps:
performing first feature extraction on a target gesture image based on a Hu invariant moment feature extraction method;
and performing second feature extraction on the result of the first feature extraction by using a feature extraction method based on the Fourier descriptor to obtain the gesture feature.
Further, the method also comprises the following steps:
acquiring an obstacle avoidance detection signal in real time through an obstacle avoidance module, and sending the obstacle avoidance detection signal to the single chip microcomputer;
and/or acquiring a position signal of the robot body in real time through a positioning system, and sending the position signal to the single chip microcomputer;
and/or recognizing the voice of the user, and performing voice broadcast through a voice recognition and broadcast system according to the control signal of the singlechip;
and/or acquiring infrared induction signals around the robot body in real time through an infrared sensor, and sending the infrared induction signals to the single chip microcomputer.
One or more of the above-described embodiments of the present invention have the following advantages: the embodiment of the invention comprises a robot body; a camera is used for acquiring gesture image signals, the single chip microcomputer triggers control signals that make the driving device move the robot body to the user's position, and the steering engine is controlled to open the barrel cover; the robot body is movable and highly intelligent, greatly reducing the workload of sanitation workers and improving work efficiency.
Drawings
FIG. 1 is a flow chart of the steps of an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an intelligent garbage robot according to an embodiment of the present invention;
fig. 3 is a block diagram of an intelligent garbage robot according to an embodiment of the present invention.
Detailed Description
The invention will be further explained and illustrated below with reference to the drawings and the embodiments in the description. The step numbers in the embodiments of the present invention are set for convenience of illustration only; the order between the steps is not limited in any way, and the execution order of the steps in the embodiments can be adjusted adaptively according to the understanding of those skilled in the art.
The embodiment of the invention provides an interactive intelligent garbage robot which comprises a robot body, wherein the robot body comprises a shell, a barrel body and a barrel cover, a driving chassis is arranged at the bottom of the shell, a single chip microcomputer and a battery module are installed in the driving chassis, a driving wheel and a driving device are arranged at the lower part of the driving chassis, a camera is arranged on the shell, and a steering engine is arranged on the barrel cover;
the camera is used for shooting a gesture image signal of a user and sending the shot gesture image signal to the single chip microcomputer;
the single chip microcomputer is used for sending a triggered control signal to the driving device and the steering engine according to the gesture image signal sent by the camera;
the driving device is used for driving the driving wheel to move according to a control signal of the single chip microcomputer;
the steering engine is used for controlling the barrel cover to turn over according to a control signal of the single chip microcomputer;
and the battery module is used for providing a working power supply for the camera, the singlechip, the driving device and the steering engine.
Further, as a preferred embodiment, the method further comprises: the obstacle avoidance module is used for acquiring an obstacle avoidance detection signal in real time and sending the obstacle avoidance detection signal to the single chip microcomputer; so that the singlechip triggers a corresponding control signal to control the work of the driving device and avoid the robot body from impacting an obstacle.
And/or the positioning system is used for acquiring the position signal of the robot body in real time and sending the position signal to the single chip microcomputer.
Further, as a preferred embodiment, the method further comprises:
the voice recognition and broadcast system is used for recognizing the voice of the user and broadcasting the voice according to the control signal of the singlechip;
and/or the infrared sensor is used for acquiring infrared induction signals around the robot body in real time and sending the infrared induction signals to the single chip microcomputer.
The embodiment of the invention also provides an implementation method of the interactive intelligent garbage robot, which comprises the following steps:
acquiring an initial image through a camera;
carrying out image preprocessing on the initial image to obtain a target gesture image, wherein the image preprocessing comprises gesture tracking operation and gesture segmentation operation;
extracting features of the target gesture image to obtain gesture features;
performing gesture recognition on the gesture features based on a preset gesture recognition model to obtain a user gesture instruction;
according to the gesture command of the user, a first control signal is sent to the driving device through the single chip microcomputer, and a second control signal is sent to the steering engine through the single chip microcomputer;
controlling the driving wheel to work through the driving device according to the first control signal;
and driving the barrel cover to turn over through the steering engine according to a second control signal.
Further, as a preferred embodiment, the step of performing image preprocessing on the initial image to obtain the target gesture image includes the following steps:
carrying out target detection on the initial image by adopting a frame difference method to obtain a target gesture;
tracking the target gesture by adopting a CamShift target tracking algorithm to obtain an initial target gesture graph;
and carrying out segmentation processing on the initial target gesture image to obtain a target gesture image.
Further, as a preferred embodiment, the step of tracking the target gesture by using a CamShift target tracking algorithm to obtain an initial target gesture graph includes the following steps:
determining an initial search window of the target gesture;
calculating the zeroth-order moment of the initial search window;
calculating the centroid of the initial search window according to the zeroth-order moment of the initial search window;
dynamically adjusting the initial search window according to the centroid of the initial search window;
and acquiring an initial target gesture graph according to the adjusted search window.
Further, as a preferred embodiment, the step of segmenting the initial target gesture image to obtain the target gesture image includes the following steps:
constructing a color space between the initial target gesture graph and the color attribute;
carrying out binarization processing on the initial target gesture image according to the color space to obtain a target gesture image;
the step of constructing a color space between the initial target gesture diagram and the color attribute specifically includes:
calculating the brightness, hue and saturation of the color space;
and constructing a color space between the initial target gesture graph and the color attribute according to the calculated brightness, hue and saturation.
Further, as a preferred embodiment, the step of segmenting the initial target gesture image to obtain the target gesture image further includes the following steps:
carrying out first noise processing on the target gesture image by a mean filtering method;
performing second noise processing on the result of the first noise processing by a median filtering method;
detecting the neighborhood of each pixel in the second noise processing result through an edge detection operator, and carrying out quantization processing on the gray change rate of the neighborhood;
and carrying out image contour extraction on the result of the quantization processing through a Laplace edge extraction algorithm to obtain an optimized target gesture image.
Further, as a preferred embodiment, the step of extracting features of the target gesture image to obtain the gesture features includes the following steps:
performing first feature extraction on a target gesture image based on a Hu invariant moment feature extraction method;
and performing second feature extraction on the result of the first feature extraction by using a feature extraction method based on the Fourier descriptor to obtain the gesture feature.
Further as a preferred embodiment, the method further comprises the following steps:
acquiring an obstacle avoidance detection signal in real time through an obstacle avoidance module, and sending the obstacle avoidance detection signal to the single chip microcomputer;
and/or acquiring a position signal of the robot body in real time through a positioning system, and sending the position signal to the single chip microcomputer;
and/or recognizing the voice of the user, and performing voice broadcast through a voice recognition and broadcast system according to the control signal of the singlechip;
and/or acquiring infrared induction signals around the robot body in real time through an infrared sensor, and sending the infrared induction signals to the single chip microcomputer.
The following describes in detail the specific implementation steps of the implementation method of the interactive intelligent garbage robot of the present invention with reference to the attached drawing 1 of the specification:
S1, acquiring an image of the gesture through the camera;
S2, performing target detection on the gesture by a frame difference method: a difference image is obtained by subtracting two (or more) adjacent frames of the video, and the difference image is then binarized. The formulas are as follows:

Dk(x, y) = |Ik(x, y) - Ik-1(x, y)|,

Fk(x, y) = 1 if Dk(x, y) > T0, otherwise Fk(x, y) = 0,

wherein Ik(x, y) is the grayscale image acquired at the k-th frame and (x, y) represents the image coordinates; Dk(x, y) represents the difference image between two adjacent frames at time k; T0 is the binarization threshold; and Fk(x, y) is the binary difference map.
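By way of illustration only (not part of the claimed method), the frame-difference detection of step S2 can be sketched with OpenCV as follows; the binarization threshold T0 = 25 and the camera index 0 are assumed values that the patent does not specify.

```python
import cv2

# Minimal sketch of step S2: frame differencing followed by binarization.
def frame_difference(prev_gray, curr_gray, t0=25):
    """Return the binary difference map Fk between two consecutive grayscale frames."""
    diff = cv2.absdiff(curr_gray, prev_gray)                     # Dk(x, y) = |Ik - Ik-1|
    _, binary = cv2.threshold(diff, t0, 255, cv2.THRESH_BINARY)  # Fk(x, y)
    return binary

cap = cv2.VideoCapture(0)                                        # camera index is an assumption
_, frame_prev = cap.read()
_, frame_curr = cap.read()
motion_mask = frame_difference(
    cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY),
    cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY),
)
cap.release()
```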
S3, tracking the target gesture by adopting a CamShift target tracking algorithm, wherein the method mainly comprises the following steps:
S31, calculating the zeroth-order moment of the initial search window:

M00 = Σx Σy I(x, y),

wherein the first-order moments of x and y are as follows:

M10 = Σx Σy x·I(x, y), M01 = Σx Σy y·I(x, y),

wherein I(xi, yi) is the pixel value at the image coordinate (xi, yi), the sums are taken over the search window, and si is the image area of the search window selected for the i-th tracking target.
S32, calculating the centroid of the initial search window, namely the centroid of the target, as follows:

xic = M10 / M00, yic = M01 / M00,

wherein (xic, yic) represents the coordinates of the centroid of the target.
And S33, adaptively adjusting the search window according to the calculated centroid to obtain the initial target gesture graph.
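The CamShift tracking of step S3 can be sketched with OpenCV's built-in routine, which internally carries out the moment and centroid computations of steps S31-S33; the initial window coordinates and the hue-histogram settings below are assumptions for illustration only.

```python
import cv2

# Hedged sketch of step S3: CamShift tracking of the detected hand region.
def track_hand(frame_bgr, track_window):
    x, y, w, h = track_window                          # assumed initial search window
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    roi = hsv[y:y + h, x:x + w]
    roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])   # hue histogram of the hand
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # cv2.CamShift iterates mean shift: it computes the zeroth-order moment and the
    # centroid of the window, then re-centres and re-sizes it (steps S31-S33).
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_rect, new_window = cv2.CamShift(back_proj, track_window, criteria)
    return rot_rect, new_window
```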
And S4, carrying out segmentation processing on the initial target gesture image to obtain a target gesture image.
Specifically, in a suitable color space, human skin tones cluster into a compactly distributed class. To improve the accuracy of color-based segmentation, skin tones can be clustered using two color degrees of freedom. The present embodiment establishes a mapping between RGB image data and color attributes using the LHS (Lightness, Hue, Saturation) color space.
The step S4 specifically includes the following steps:
S41, calculating the lightness (brightness):

l(c) = ωmin·min(c) + ωmid·mid(c) + ωmax·max(c),
wherein: min (c) min (R, G, B);
mid(c)=mid(R,G,B);
max(c)=max(R,G,B);
min (c), mid (c), max (c) represent R, G, B values from small to large in sequence; min (R, G, B) represents the minimum of R, G, B; mid (R, G, B) represents the median value in R, G, B; max (R, G, B) represents the maximum value of R, G, B.
S42, hue (hue) h (c):
h(c)=(k(c)+f(c))×60
wherein k(c) denotes the index of the hue sector (color cell) and f(c) denotes the fractional angle computed within that sector.
S43, calculating the saturation from the lightness and the channel extremes, wherein l(c) represents the lightness; x represents a color defined in GLHS that has the same hue as c but a different lightness; l(c) - min(c) is the difference between the lightness and the minimum of R, G, B; and max(c) - l(c) is the difference between the maximum of R, G, B and the lightness;
and S44, constructing a color space between the initial target gesture graph and the color attribute according to the calculated brightness, hue and saturation.
Specifically, the hue and saturation values of each pixel are calculated from its RGB values; it is then judged whether both the hue and the saturation fall within the user's skin-color interval; if so, the pixel is set to black, otherwise it is set to white;
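The skin-color decision described above can be sketched as follows; the numeric hue and saturation intervals are assumptions, since the patent does not give the skin-color thresholds.

```python
import numpy as np

# Sketch of the skin-color binarization of step S4: pixels whose hue and saturation
# both fall in the assumed skin interval become black, all others white.
HUE_RANGE = (0.0, 50.0)     # degrees, assumed skin-hue interval
SAT_RANGE = (0.10, 0.60)    # assumed skin-saturation interval

def skin_binarize(hue, sat):
    """hue and sat are per-pixel arrays computed from RGB as in steps S41-S43."""
    in_skin = ((hue >= HUE_RANGE[0]) & (hue <= HUE_RANGE[1]) &
               (sat >= SAT_RANGE[0]) & (sat <= SAT_RANGE[1]))
    return np.where(in_skin, 0, 255).astype(np.uint8)   # black = gesture, white = background
```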
in a further preferred embodiment, the step S4 further includes the steps of:
S45, performing first noise processing on the target gesture image through a mean filtering method;

Specifically, the mean filtering formula is:

G(x, y) = (1/M) · Σ(i,j)∈S g(i, j),

wherein S is the set of points in the neighborhood of the point (x, y), M is the total number of points in the set S, g(i, j) represents the gray value of a pixel in the neighborhood, and G(x, y) represents the filtered output corresponding to the pixel at (x, y).
S46, performing second noise processing on the result of the first noise processing by a median filtering method;
Specifically, the median filtering formula is:

Vout = median{a1, a2, ..., an},

wherein a1, a2, ..., an are the gray values of the points in the neighborhood.
S47, detecting the neighborhood of each pixel in the second noise processing result through an edge detection operator, and carrying out quantization processing on the gray change rate of the neighborhood;
Specifically, the edge detection operator examines the neighborhood of each pixel and quantizes the gray change rate; this robot adopts the Sobel operator and the Laplacian-of-Gaussian operator.
And S48, performing image contour extraction on the quantized result through a Laplace edge extraction algorithm to obtain an optimized target gesture image.
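Steps S45-S48 can be sketched as the following OpenCV pipeline; the kernel sizes and the Otsu threshold are assumptions, and the OpenCV 4 return signature of findContours is assumed.

```python
import cv2

# Hedged sketch of steps S45-S48: mean filter, median filter, Sobel gray-change
# rate, Laplacian-of-Gaussian edge response and contour extraction.
def refine_gesture_image(img):
    smoothed = cv2.blur(img, (3, 3))                        # S45: mean (average) filtering
    smoothed = cv2.medianBlur(smoothed, 3)                  # S46: median filtering
    sobel_x = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
    sobel_y = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
    gray_change_rate = cv2.magnitude(sobel_x, sobel_y)      # S47: quantized gray-change rate
    log_edges = cv2.Laplacian(cv2.GaussianBlur(smoothed, (5, 5), 0), cv2.CV_64F)
    _, edge_map = cv2.threshold(cv2.convertScaleAbs(log_edges), 0, 255,
                                cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(edge_map, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # S48: contour extraction
    return contours, gray_change_rate
```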
S5, extracting the features of the target gesture image to obtain gesture features;
the step S5 includes the steps of:
S51, carrying out first feature extraction on the target gesture image based on the Hu invariant moment feature extraction method;
specifically, the present embodiment employs seven moment invariants, and the specific expression is:
φ1 = μ20 + μ02,

φ2 = (μ20 - μ02)² + (2μ11)²,

φ3 = (μ30 - 3μ12)² + (3μ21 - μ03)²,

φ4 = (μ30 + μ12)² + (μ21 + μ03)²,

φ5 = (μ30 - 3μ12)(μ30 + μ12)[(μ30 + μ12)² - 3(μ21 + μ03)²] + (3μ21 - μ03)(μ21 + μ03)[3(μ30 + μ12)² - (μ21 + μ03)²],

φ6 = (μ20 - μ02)[(μ30 + μ12)² - (μ21 + μ03)²] + 4μ11(μ30 + μ12)(μ21 + μ03),

φ7 = (3μ21 - μ03)(μ30 + μ12)[(μ30 + μ12)² - 3(μ21 + μ03)²] - (μ30 - 3μ12)(μ21 + μ03)[3(μ30 + μ12)² - (μ21 + μ03)²],

The normalized central moments are:

ηpq = μpq / (μ00)^ρ, with ρ = (p + q)/2 + 1 and p + q = 2, 3, …,

wherein ηpq denotes the normalized central moment of order p + q and μpq represents the central moment of order p + q.
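For illustration, step S51 maps directly onto OpenCV's moment routines; the sketch below assumes the optimized target gesture image is a binary mask, which the patent does not state explicitly.

```python
import cv2

# Sketch of step S51: the seven Hu invariant-moment features of the gesture image.
def hu_features(binary_gesture_img):
    m = cv2.moments(binary_gesture_img, binaryImage=True)  # geometric and central moments
    hu = cv2.HuMoments(m).flatten()                        # the seven invariants φ1..φ7
    return hu
```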
And S52, performing second feature extraction on the result of the first feature extraction by using a feature extraction method based on Fourier descriptors to obtain the gesture features.
Specifically, let the two-dimensional image f(x, y) have M rows and N columns. A one-dimensional discrete Fourier transform of length N is first performed along the row variable y, and a Fourier transform of length M is then performed on the variable x along the column direction, yielding the Fourier transform of the image, as shown in the formula:

F(u, v) = Σx=0..M-1 Σy=0..N-1 f(x, y)·e^(-j2π(ux/M + vy/N)),

wherein F(u, v) represents the Fourier transform of f(x, y).
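A minimal sketch of the transform of step S52 using NumPy, performing the row-wise then column-wise one-dimensional transforms described above; truncating to a low-frequency block and normalizing by the DC term to form the descriptor are assumptions.

```python
import numpy as np

# Hedged sketch of step S52: 2-D DFT of the M x N gesture image as two passes of 1-D FFTs.
def fourier_descriptor(img, keep=16):
    f_rows = np.fft.fft(img, axis=1)      # length-N transform along the row variable y
    f_full = np.fft.fft(f_rows, axis=0)   # length-M transform along the column variable x
    mag = np.abs(f_full)
    mag /= (mag[0, 0] + 1e-12)            # normalize by the DC term (assumed, for scale invariance)
    return mag[:keep, :keep].flatten()    # low-frequency block used as the descriptor (assumed)
```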
S6, performing gesture recognition on the gesture features based on a preset gesture recognition model to obtain a user gesture instruction;
specifically, in the embodiment, gesture modeling based on a 3D model is first adopted, and the method first models the motion and posture of the palm and the arm by using three-dimensional structural features, and then estimates the gesture model parameters from the motion and posture model parameters.
Then, after the gesture segmentation, gesture feature processing and gesture modeling described above, the gesture is recognized by image feature matching using the geometric features of the gesture (including Hu moments, convexity and the like) and the relative position features of the fingertips.
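A hedged sketch of the matching in step S6 is given below; the gesture names, the placeholder template vectors and the Euclidean nearest-neighbour rule are illustrative assumptions, since the patent only states that recognition uses a preset model and image feature matching.

```python
import numpy as np

# Sketch of step S6: nearest-neighbour matching of the combined feature vector
# (Hu moments + Fourier descriptor) against stored gesture templates.
FEATURE_DIM = 7 + 256                     # assumed: 7 Hu invariants + a 16x16 Fourier block

GESTURE_TEMPLATES = {                     # placeholder vectors, in practice learned offline
    "come_here": np.zeros(FEATURE_DIM),
    "open_lid": np.ones(FEATURE_DIM),
}

def recognize_gesture(feature_vec):
    """Return the name of the closest template, i.e. the user gesture instruction."""
    distances = {name: float(np.linalg.norm(feature_vec - tpl))
                 for name, tpl in GESTURE_TEMPLATES.items()}
    return min(distances, key=distances.get)
```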
S7, sending a first control signal to the driving device through the single chip microcomputer according to the gesture command of the user, and sending a second control signal to the steering engine through the single chip microcomputer;
S8, controlling the driving wheel to work through the driving device according to the first control signal;
and S9, driving the barrel cover to turn over through the steering engine according to the second control signal.
Preferably, this embodiment further comprises the steps of:
acquiring an obstacle avoidance detection signal in real time through an obstacle avoidance module, and sending the obstacle avoidance detection signal to the single chip microcomputer;
acquiring a position signal of the robot body in real time through a positioning system, and sending the position signal to a single chip microcomputer;
according to the control signal of the singlechip, voice broadcasting is carried out through a voice recognition and broadcasting system;
the method comprises the steps of acquiring infrared induction signals around a robot body in real time through an infrared sensor, and sending the infrared induction signals to a single chip microcomputer.
The following describes in detail the specific working principle of the interactive intelligent garbage robot of the present invention with reference to the attached drawing 2 of the specification:
as shown in fig. 2, the robot body of the present embodiment includes a housing 4 and a tub cover 1 for closing the housing 4, and a driving chassis 6 is fixed to the bottom of the housing 4. The outer casing 4 may be designed in an external shape as required, for example, it may be designed in a conventional cylindrical shape, and in this case, the lid 1 and the base may also be configured in a circular structure for matching with the outer casing, which is not limited herein.
The driving chassis 6 is arranged in the lower end of the shell 4, can be fixed in the shell 4 through bolts or other fixing modes, can be detached, and is convenient for replacing and maintaining the electrical equipment arranged on the driving chassis 6. The driving chassis 6 can be separated from the space for containing garbage in the shell 4 by a partition plate and the like, so that the leaked moisture is prevented from damaging electrical equipment.
The driving wheels 7 are connected to the driving device in a conventional manner, and each driving wheel 7 may be provided with its own driving motor 8; at least three driving wheels 7 are provided so that the interactive intelligent garbage robot moves stably. The specific number and installation positions of the driving wheels 7 can be chosen according to the load and the moving range of the interactive intelligent garbage robot, which is not limited herein.
An MG996R steering engine 12 for turning the barrel cover 1 over is arranged on the barrel cover 1; the steering engine 12 may be installed at the hinge where the barrel cover 1 is connected to the shell 4, so that the steering engine 12 can conveniently control the turning of the barrel cover 1. A gear mounted on the steering engine 12 meshes with a cover gear arranged on the barrel cover 1, so that the barrel cover 1 receives the turning force provided by the steering engine 12.
An inner container is arranged in the shell 4, with a gap formed between the shell 4 and the inner container, so that the inner container can be easily removed.
As shown in fig. 3, the interactive intelligent garbage robot of the present invention includes a driving device, a single chip, a camera, a voice recognition and broadcast system, an obstacle avoidance module, an infrared sensor, and a positioning system, which are connected to each other to form a control system of the interactive intelligent garbage robot.
The RER-USB8MP02G camera of this embodiment has a built-in recognition chip. While working, the robot moves continuously by means of the driving device; during this movement the camera continuously captures video and images of its surroundings and transmits them to the built-in main control chip to obtain a gesture recognition signal. The main control chip sends this signal to the single chip microcomputer, which then sends control signals to the steering engine, the driving device, the voice recognition and broadcast system and so on, so that they carry out the next step of work.
As shown in fig. 2, the RER-USB8MP02G camera 3 is fixed by screws in the gap between the shell 4 and the inner container barrel 11, below the SR602 pyroelectric human-body infrared sensor 2.
The voice recognition and broadcast system is composed of an LD3320 voice recognition module, a WT588D voice module and a loudspeaker 5. After recognizing the user's voice, the voice recognition module sends a signal to the single chip microcomputer so that control actions such as those of S6-S9 are carried out; the voice module drives the loudspeaker according to the control signal sent by the stm32 single chip microcomputer, thereby carrying out the voice broadcast.
As shown in fig. 2, the obstacle avoidance module of this embodiment consists of ultrasonic sensors 13 distributed around the robot. When an ultrasonic sensor 13 detects that the robot has come close to a person or an obstacle, the stm32 single chip microcomputer outputs a PWM control signal to the motor driving circuit, and the driving circuit drives the wheel motors, so that the two motors either stop the trash can automatically or steer it around the obstacle.
An SR602 pyroelectric human-body infrared sensor 2 is arranged above the barrel cover. The pyroelectric human-body infrared sensor 2 detects whether an object blocks its infrared beam and thus recognizes whether a person is reaching over the top of the robot. When a person's hand or an object is held above the barrel cover 1 in preparation for throwing garbage, the pyroelectric human-body infrared sensor 2 obtains an infrared detection signal; according to this signal, the stm32 single chip microcomputer 10 transmits a PWM control signal to the MG996R steering engine 12 so that it operates, opening the trash can cover 1 and closing it again automatically.
The ATK1218-BD GPS + BeiDou positioning device in the positioning system of this embodiment is used to position the robot and plan its route when it works outdoors. When the robot works outdoors, after the positioning system 9 has planned the route, the single chip microcomputer 10 sends control signals to the driving device, and the robot is then controlled to travel along the established route.
In summary, the interactive intelligent garbage robot and the implementation method thereof of the invention have the following advantages:
1. The invention has an autonomous movement function. The garbage can moves automatically indoors by means of the ultrasonic sensors, and avoids obstacles and navigates automatically outdoors by means of the GPS/BeiDou positioning system and the infrared sensor. A garbage can that moves along a fixed route reduces the number of fixed garbage cans that have to be installed, and solves the awkward situation in which people who want to throw garbage away cannot find a garbage can.
2. The invention has a human-machine interaction function. When the robot's camera recognizes that a person is beckoning or making another gesture, or when its voice recognition system recognizes a related control utterance from the user, such as "come here, I need to throw garbage", the robot controls the driving device through the single chip microcomputer so that it moves over, improving the user's garbage-throwing experience.
3. The invention has an automatic lid-opening function. When the garbage can recognizes that a person is calling it, it automatically opens the can cover once it has moved in front of the person, so that the contamination of hands caused by manually opening the cover is avoided, making the garbage can more environmentally friendly and safer.
4. The invention has an intelligent voice recognition and broadcast system. Each time the robot finishes a task, or after people perform different actions, the robot broadcasts an appropriate prompt from its voice library, enhancing the human-computer interaction experience.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An interactive intelligent garbage robot is characterized in that: the robot comprises a robot body, wherein the robot body comprises a shell, a barrel body and a barrel cover, a driving chassis is arranged at the bottom of the shell, a single chip microcomputer and a battery module are installed in the driving chassis, a driving wheel and a driving device are arranged at the lower part of the driving chassis, a camera is arranged on the shell, and a steering engine is arranged on the barrel cover;
the camera is used for shooting a gesture image signal of a user and sending the shot gesture image signal to the single chip microcomputer;
the single chip microcomputer is used for sending a triggered control signal to the driving device and the steering engine according to the gesture image signal sent by the camera;
the driving device is used for driving the driving wheel to move according to a control signal of the single chip microcomputer;
the steering engine is used for controlling the barrel cover to turn over according to a control signal of the single chip microcomputer;
and the battery module is used for providing a working power supply for the camera, the singlechip, the driving device and the steering engine.
2. An interactive intelligent garbage robot as claimed in claim 1, characterized in that: further comprising:
the obstacle avoidance module is used for acquiring an obstacle avoidance detection signal in real time and sending the obstacle avoidance detection signal to the single chip microcomputer;
and/or the positioning system is used for acquiring the position signal of the robot body in real time and sending the position signal to the single chip microcomputer.
3. An interactive intelligent garbage robot as claimed in claim 1, characterized in that: further comprising:
the voice recognition and broadcast system is used for recognizing the voice of the user and broadcasting the voice according to the control signal of the singlechip;
and/or the infrared sensor is used for acquiring infrared induction signals around the robot body in real time and sending the infrared induction signals to the single chip microcomputer.
4. An implementation method of an interactive intelligent garbage robot is characterized in that: the method comprises the following steps:
acquiring an initial image through a camera;
carrying out image preprocessing on the initial image to obtain a target gesture image, wherein the image preprocessing comprises gesture tracking operation and gesture segmentation operation;
extracting features of the target gesture image to obtain gesture features;
performing gesture recognition on the gesture features based on a preset gesture recognition model to obtain a user gesture instruction;
according to the gesture command of the user, a first control signal is sent to the driving device through the single chip microcomputer, and a second control signal is sent to the steering engine through the single chip microcomputer;
controlling the driving wheel to work through the driving device according to the first control signal;
and driving the barrel cover to turn over through the steering engine according to a second control signal.
5. The method of claim 4, wherein the method comprises the following steps: the step of carrying out image preprocessing on the initial image to obtain the target gesture image comprises the following steps:
carrying out target detection on the initial image by adopting a frame difference method to obtain a target gesture;
tracking the target gesture by adopting a CamShift target tracking algorithm to obtain an initial target gesture graph;
and carrying out segmentation processing on the initial target gesture image to obtain a target gesture image.
6. The method of claim 5, wherein the method comprises the following steps: the step of tracking the target gesture by adopting a CamShift target tracking algorithm to obtain an initial target gesture graph comprises the following steps:
determining an initial search window of the target gesture;
calculating the zeroth-order moment of the initial search window;
calculating the centroid of the initial search window according to the zeroth-order moment of the initial search window;
dynamically adjusting the initial search window according to the centroid of the initial search window;
and acquiring an initial target gesture graph according to the adjusted search window.
7. The method of claim 5, wherein the method comprises the following steps: the step of segmenting the initial target gesture image to obtain the target gesture image comprises the following steps:
constructing a color space between the initial target gesture graph and the color attribute;
carrying out binarization processing on the initial target gesture image according to the color space to obtain a target gesture image;
the step of constructing a color space between the initial target gesture diagram and the color attribute specifically includes:
calculating the brightness, hue and saturation of the color space;
and constructing a color space between the initial target gesture graph and the color attribute according to the calculated brightness, hue and saturation.
8. The method of claim 5, wherein the method comprises the following steps: the step of segmenting the initial target gesture image to obtain the target gesture image further comprises the following steps:
carrying out first noise processing on the target gesture image by a mean filtering method;
performing second noise processing on the result of the first noise processing by a median filtering method;
detecting the neighborhood of each pixel in the second noise processing result through an edge detection operator, and carrying out quantization processing on the gray change rate of the neighborhood;
and carrying out image contour extraction on the result of the quantization processing through a Laplace edge extraction algorithm to obtain an optimized target gesture image.
9. The method of claim 4, wherein the method comprises the following steps: the step of extracting the features of the target gesture image to obtain the gesture features comprises the following steps:
performing first feature extraction on a target gesture image based on a Hu invariant moment feature extraction method;
and performing second feature extraction on the result of the first feature extraction by using a feature extraction method based on the Fourier descriptor to obtain the gesture feature.
10. The method of claim 4, wherein the method comprises the following steps: further comprising the steps of:
acquiring an obstacle avoidance detection signal in real time through an obstacle avoidance module, and sending the obstacle avoidance detection signal to the single chip microcomputer;
and/or acquiring a position signal of the robot body in real time through a positioning system, and sending the position signal to the single chip microcomputer;
and/or recognizing the voice of the user, and performing voice broadcast through a voice recognition and broadcast system according to the control signal of the singlechip;
and/or acquiring infrared induction signals around the robot body in real time through an infrared sensor, and sending the infrared induction signals to the single chip microcomputer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910283134.1A CN110040394A (en) | 2019-04-10 | 2019-04-10 | A kind of interactive intelligent rubbish robot and its implementation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910283134.1A CN110040394A (en) | 2019-04-10 | 2019-04-10 | A kind of interactive intelligent rubbish robot and its implementation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110040394A true CN110040394A (en) | 2019-07-23 |
Family
ID=67276597
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910283134.1A Pending CN110040394A (en) | 2019-04-10 | 2019-04-10 | A kind of interactive intelligent rubbish robot and its implementation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110040394A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110525845A (en) * | 2019-08-30 | 2019-12-03 | 北京星和众工设备技术股份有限公司 | A kind of intelligence recycling dustbin ends-opening method and intelligent recycling dustbin |
CN111470204A (en) * | 2020-04-20 | 2020-07-31 | 东华大学 | Intelligent environment-friendly classification garbage can |
CN111674759A (en) * | 2020-06-02 | 2020-09-18 | 江苏联翼环境科技有限公司 | Intelligent garbage recycling bin with action recognition function |
CN112173497A (en) * | 2020-11-10 | 2021-01-05 | 珠海格力电器股份有限公司 | Control method and device of garbage collection equipment |
CN113303735A (en) * | 2020-02-27 | 2021-08-27 | 佛山市云米电器科技有限公司 | Maintenance station and sweeping robot |
CN113303708A (en) * | 2020-02-27 | 2021-08-27 | 佛山市云米电器科技有限公司 | Control method for maintenance device, and storage medium |
CN113753443A (en) * | 2021-08-17 | 2021-12-07 | 奥谱毫芯(深圳)科技有限公司 | Trash can capable of being automatically controlled to be opened and closed based on gesture recognition |
CN113796783A (en) * | 2021-09-26 | 2021-12-17 | 黄福平 | Household environment control management system based on artificial intelligence |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110004015A (en) * | 2009-07-07 | 2011-01-13 | 송세경 | Intelligent mobile restaurant robot for serving custom and counting money |
CN102339379A (en) * | 2011-04-28 | 2012-02-01 | 重庆邮电大学 | Gesture recognition method and gesture recognition control-based intelligent wheelchair man-machine system |
CN103927016A (en) * | 2014-04-24 | 2014-07-16 | 西北工业大学 | Real-time three-dimensional double-hand gesture recognition method and system based on binocular vision |
KR101465896B1 (en) * | 2013-09-26 | 2014-11-26 | 성균관대학교산학협력단 | Mobile terminal for generating control commands using front side camera and rear side camera |
CN104326195A (en) * | 2014-11-10 | 2015-02-04 | 安徽省新方尊铸造科技有限公司 | Intelligent garbage can with automatic demand judgment function |
CN108502404A (en) * | 2018-04-04 | 2018-09-07 | 广州艾可机器人有限公司 | A kind of public space intelligent garbage collecting apparatus |
-
2019
- 2019-04-10 CN CN201910283134.1A patent/CN110040394A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110004015A (en) * | 2009-07-07 | 2011-01-13 | 송세경 | Intelligent mobile restaurant robot for serving custom and counting money |
CN102339379A (en) * | 2011-04-28 | 2012-02-01 | 重庆邮电大学 | Gesture recognition method and gesture recognition control-based intelligent wheelchair man-machine system |
KR101465896B1 (en) * | 2013-09-26 | 2014-11-26 | 성균관대학교산학협력단 | Mobile terminal for generating control commands using front side camera and rear side camera |
CN103927016A (en) * | 2014-04-24 | 2014-07-16 | 西北工业大学 | Real-time three-dimensional double-hand gesture recognition method and system based on binocular vision |
CN104326195A (en) * | 2014-11-10 | 2015-02-04 | 安徽省新方尊铸造科技有限公司 | Intelligent garbage can with automatic demand judgment function |
CN108502404A (en) * | 2018-04-04 | 2018-09-07 | 广州艾可机器人有限公司 | A kind of public space intelligent garbage collecting apparatus |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110525845A (en) * | 2019-08-30 | 2019-12-03 | 北京星和众工设备技术股份有限公司 | A kind of intelligence recycling dustbin ends-opening method and intelligent recycling dustbin |
CN113303735A (en) * | 2020-02-27 | 2021-08-27 | 佛山市云米电器科技有限公司 | Maintenance station and sweeping robot |
CN113303708A (en) * | 2020-02-27 | 2021-08-27 | 佛山市云米电器科技有限公司 | Control method for maintenance device, and storage medium |
CN111470204A (en) * | 2020-04-20 | 2020-07-31 | 东华大学 | Intelligent environment-friendly classification garbage can |
CN111674759A (en) * | 2020-06-02 | 2020-09-18 | 江苏联翼环境科技有限公司 | Intelligent garbage recycling bin with action recognition function |
CN112173497A (en) * | 2020-11-10 | 2021-01-05 | 珠海格力电器股份有限公司 | Control method and device of garbage collection equipment |
CN113753443A (en) * | 2021-08-17 | 2021-12-07 | 奥谱毫芯(深圳)科技有限公司 | Trash can capable of being automatically controlled to be opened and closed based on gesture recognition |
CN113796783A (en) * | 2021-09-26 | 2021-12-17 | 黄福平 | Household environment control management system based on artificial intelligence |
CN113796783B (en) * | 2021-09-26 | 2022-10-25 | 黄福平 | Household environment control management system based on artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110040394A (en) | A kind of interactive intelligent rubbish robot and its implementation | |
US10810456B2 (en) | Apparatus and methods for saliency detection based on color occurrence analysis | |
US11363929B2 (en) | Apparatus and methods for programming and training of robotic household appliances | |
CN107139179B (en) | Intelligent service robot and working method | |
CN106020227A (en) | Control method and device for unmanned aerial vehicle | |
JP6942177B2 (en) | Systems and methods for initializing the robot to autonomously follow the learned path | |
CN114080583B (en) | Visual teaching and repetitive movement manipulation system | |
Natarajan et al. | Hand gesture controlled drones: An open source library | |
WO2011002654A1 (en) | Panoramic attention for humanoid robots | |
MohaimenianPour et al. | Hands and faces, fast: mono-camera user detection robust enough to directly control a UAV in flight | |
CN110939351A (en) | Visual intelligent control method and visual intelligent control door | |
JP2004030629A (en) | Face detection apparatus, face detection method, robotic device, program, and recording medium | |
Ismail et al. | Vision-based system for line following mobile robot | |
Nüchter et al. | Automatic classification of objects in 3d laser range scans | |
CN116363693A (en) | Automatic following method and device based on depth camera and vision algorithm | |
CN118385157A (en) | Visual classified garbage automatic sorting system based on deep learning and self-adaptive grabbing | |
Christensen et al. | Integrating vision based behaviours with an autonomous robot | |
EP2336948A1 (en) | A method for multi modal object recognition based on self-referential classification strategies | |
CN114800615A (en) | Robot real-time scheduling system and method based on multi-source perception | |
Durdu et al. | Morphing estimated human intention via human-robot interactions | |
CN113723475B (en) | Design and implementation method of intelligent shoe management system of robot based on vision | |
Moy | Gesture-based interaction with a pet robot | |
CN213965069U (en) | Table tennis entertainment service system based on target detection and single-view-angle score judgment | |
Neo et al. | A natural language instruction system for humanoid robots integrating situated speech recognition, visual recognition and on-line whole-body motion generation | |
CN116612531A (en) | Human-computer interaction method based on human skeleton key point detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20190723 |