CN110561442B - Intelligent toy distribution and arrangement robot and working method thereof - Google Patents


Info

Publication number
CN110561442B
CN110561442B (application CN201910709628.1A)
Authority
CN
China
Prior art keywords
module
identification
user
label
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910709628.1A
Other languages
Chinese (zh)
Other versions
CN110561442A (en)
Inventor
李弇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Leerle Toys Co.,Ltd.
Original Assignee
Suzhou Zhaoxuan Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhaoxuan Digital Technology Co ltd
Priority to CN201910709628.1A
Publication of CN110561442A
Application granted
Publication of CN110561442B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00: Manipulators not otherwise provided for
    • B25J 11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J 11/008: Manipulators for service tasks

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Toys (AREA)

Abstract

An intelligent toy distribution and arrangement robot, and a working method thereof. The robot comprises a machine body together with a control processor, a label module, a user identification module and an operation module. The label module comprises a label generating module, a marking module and a label identification module: the label generating module generates labels containing specific information, the marking module marks a generated label onto the target object, and the label identification module identifies labels. The user identification module comprises a face recognition module, a feature recognition module and a recognition integration module; the face recognition module and the feature recognition module output the face recognition result and the feature recognition result to the recognition integration module, which combines the two and outputs the integrated recognition features of the target user. The operation module comprises a moving module and a grabbing module.

Description

Intelligent toy distribution and arrangement robot and working method thereof
Technical Field
The invention relates to the field of service robots, in particular to an intelligent toy distribution and arrangement robot and a working method thereof.
Background
With the development of science and technology, robots have become increasingly intelligent and their range of application increasingly wide. Robots fall into two broad categories: industrial robots and special-purpose robots. Special-purpose robots include service robots, which can assist people's production and daily life to a great extent, relieve users of many repetitive and tedious tasks, and improve the efficiency of both production and daily life.
In a kindergarten or children's playground there are many toys of various kinds and shapes. Because many people use the toys, and children rarely think of putting them back, the toys end up scattered around the premises, and tidying them up is a major chore for the staff. In addition, children who use the toys regularly have fixed favorites, and finding a favorite toy among such a wide variety is time-consuming and laborious, and can be difficult for a child. A device that can organize toys and distribute them according to each user's preference is therefore needed.
Disclosure of Invention
The purpose of the invention is as follows:
in order to solve the problems mentioned in the background art, the invention provides an intelligent toy distribution and arrangement robot and a working method thereof.
The technical scheme is as follows:
an intelligent toy dispensing grooming robot comprising: the organism still includes: the system comprises a control processor, a label module, a user identification module and an operation module;
the tag module includes: the system comprises a label generating module, a marking module and a label identifying module;
the label generation module is used for generating labels containing specific information, and the specifications of the labels are consistent; the label generation module integrates prefabricated content in the label;
the marking module is used for marking the generated label on the target object;
the label identification module is used for identifying the label and acquiring the content in the label;
the subscriber identity module comprises: the system comprises a face recognition module, a feature recognition module and a recognition comprehensive module;
the face recognition module is used for carrying out face recognition, and the face recognition module acquires an image of a user and recognizes the facial image of the user;
the feature identification module is configured to identify user features, where the user features include: characteristic objects, apparel characteristics, morphological characteristics;
the face recognition module and the feature recognition module respectively output face recognition results and feature recognition results to the recognition integration module; the recognition integration module is used for integrating the face recognition result and the feature recognition result and outputting the integrated recognition feature of the target user; the comprehensive identification features comprise face identification and feature identification;
the operation module comprises a moving module and a grabbing module;
the moving module is combined with the machine body and used for moving the machine body;
the grabbing module is used for grabbing the target object, and the grabbing module is used for grabbing and placing the target object;
the control processor is respectively connected with the label identification module, the label module and the identification comprehensive module;
the identification comprehensive module outputs an identification result to the control processor, and the label identification module outputs an identification result to the control processor; the control processor outputs a marking signal to the label module;
the control processor is also connected with the moving module and the grabbing module; the control processor outputs a movement signal to the movement module, and the movement module moves according to the movement signal; the control processor outputs a grabbing signal and/or a placing signal to the grabbing module, the grabbing module grabs the target object according to the grabbing signal, and the grabbing module places the target object according to the placing signal.
A preferred embodiment of the present invention includes:
a target identification module; the target identification module is used for identifying a target object, and the target object is a toy; the target identification module is connected with the control processor, and the target identification module outputs the identification result of the target object to the control processor.
A preferred embodiment of the present invention includes:
the label comprises an adhesive label; the label is an RFID label, and the label identification module is an RFID identifier; the identification distance of the label identification module is a preset identification distance.
A preferred embodiment of the present invention includes:
a receiving module for temporarily receiving a target object; the storage module comprises a plurality of storage units, and the storage units are used for storing different types of targets.
A preferred embodiment of the present invention includes:
the characteristic identification of the user is time-efficient; and the identification integration module pairs the result of the face identification with the result of the feature identification module within preset time, and after the pairing, the identification integration module judges the identity of the current user according to any one of the result of the face identification and the result of the feature identification.
A working method of an intelligent toy distribution and arrangement robot comprises the following steps:
if an unfamiliar user appears, the face recognition module acquires and recognizes a facial image of the user and applies a first mark to the user;
within a preset time, the feature recognition module acquires and recognizes the user features of the user and applies a second mark to the user;
the recognition integration module acquires the face recognition result and the feature recognition result, pairs the user bearing the first mark with the user bearing the second mark, and assigns a number to the successfully paired user;
if the face recognition module successfully recognizes the facial image of the user, the recognition integration module extracts the number of that user;
if the feature recognition module successfully recognizes the user features of the user, the recognition integration module extracts the number of that user.
The method comprises the following steps:
the control processor outputs a marking signal to a label generating module, and the label generating module generates a label with a target number;
the marking module attaches the generated label to the target object;
the tag identification module identifies a tag on a target object and reads tag information;
and after the identification is successful, the label identification module outputs the read label information to the control processor.
A preferred embodiment of the present invention includes:
the target identification module recognizes the preliminarily detected target object and captures the features of the target object;
the control processor outputs an identification signal to the tag identification module, and the tag identification module identifies tag information of the target object;
the control processor pairs the label information of the target object with the features of the target object.
A preferred embodiment of the present invention includes:
the control processor acquires the user's identity and the target object used by the user;
if the contact time between the user and the target object exceeds the preset contact time, the control processor pairs the target object with the user;
the control processor ranks the target objects by contact degree according to the user's contact time with each target object;
the control processor provides target objects to the user according to the contact-degree ranking.
A preferred embodiment of the present invention includes:
if the separation between the target object and the user exceeds the preset separation time, the control processor outputs a moving signal to the moving module;
the moving module drives the machine body to move toward the target object;
the control processor outputs a grabbing signal to the grabbing module, and the grabbing module grabs the target object according to the grabbing signal;
after the target object is successfully grabbed, the control processor outputs a reset signal to the moving module, and the moving module drives the machine body to move to the storage position;
the control processor outputs a placing signal to the grabbing module, and the grabbing module places the target object at the storage position.
The invention realizes the following beneficial effects:
By identifying both users and toys, the robot learns each user's degree of preference for each toy, and distributes toys according to the user's historical preferences.
The user is identified from both facial features and other user features, which avoids the limitations of face recognition alone and improves the recognition success rate.
The toy is identified from both its label and its own features, so that recognition is not defeated when the label is occluded or damaged, and the success rate of identification is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a system block diagram of an intelligent toy distribution and organization robot provided by the invention.
Fig. 2 is a block diagram of a tag module of an intelligent toy distribution and organization robot provided by the invention.
Fig. 3 is a block diagram of a user identification module of an intelligent toy distribution and organization robot provided by the invention.
Fig. 4 is a block diagram of an operation module of an intelligent toy distribution and organization robot provided by the invention.
Fig. 5 is a working flow chart of a label module of the intelligent toy distribution and organization robot provided by the invention.
Fig. 6 is a flow chart of the work flow of the identification module of the intelligent toy distribution and organization robot provided by the invention.
Fig. 7 is a working flow chart of an operation module of the intelligent toy distribution and organization robot provided by the invention.
Fig. 8 is a user preference operation diagram of the intelligent toy distribution and organization robot provided by the invention.
Fig. 9 is a target object distribution flow chart of the intelligent toy distribution and organization robot provided by the invention.
Wherein: 1. the system comprises a control processor, 2, a label module, 21, a label generation module, 22, a marking module, 23, a label identification module, 3, a user identification module, 31, a face identification module, 32, a feature identification module, 33, an identification integration module, 4, an operation module, 41, a moving module and 42, and a grabbing module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Example one
This embodiment is described with reference to Figs. 1-9.
An intelligent toy distribution and arrangement robot comprises a machine body and further comprises: a control processor 1, a tag module 2, a user identification module 3 and an operation module 4.
The tag module 2 includes: a label generation module 21, a marking module 22 and a label identification module 23.
The label generating module 21 is configured to generate a label containing specific information, and the label specifications are consistent. The tag generation module 21 integrates the prefabricated content within the tag.
The label may be a sticker label or a connection label, and is connected to the target object, specifically, the label may include an identification code such as a barcode or a two-dimensional code, or a radio frequency label such as an RFID label. Integrating prefabricated information within the tag, such as: the number of the object, the name of the object, the storage position of the object, and the like.
If the tag is an RFID (radio frequency) tag, an information entry device is disposed in the tag generation module 21 and is used to write information into the tag.
If the label is an identification code, a device for printing the identification code is provided in the label generation module 21, and is used for printing the identification code on the label.
The marking module 22 is used for marking the generated tag on the target object. The marking module 22 joins the tag carrying the entered information to the target object, by adhesion or by a physical connection.
The tag identification module 23 is used for identifying the tag and obtaining the content in the tag. If the tag is an identification code, the tag identification module 23 may be a code scanning device, and if the tag is a radio frequency tag, the tag identification module 23 may be a radio frequency tag reading device.
The user identification module 3 comprises: a face recognition module 31, a feature recognition module 32 and a recognition integration module 33.
The face recognition module 31 is configured to perform face recognition: it acquires an image of the user and recognizes the user's facial image. The face recognition module 31 is a general face recognition device, used to acquire the user's image and perform recognition according to facial features.
The feature identification module 32 is configured to identify user features, where the user features include: characteristic objects, clothing characteristics, morphological characteristics.
The feature recognition module 32 obtains an image of a user and extracts features from that image, for example apparel features. The extraction of user features supplements the facial recognition of the user.
Combining face recognition with feature recognition synthesizes recognition of the same user from all angles, so the user can be identified even when not facing the camera.
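The fallback logic described here can be sketched in Python; the lookup-table representation and the mark names are illustrative assumptions, not part of the patent.

```python
def identify_user(face_result, feature_result, registry):
    """Look up a user number, trying face recognition first and
    falling back to feature recognition (e.g. apparel) when the
    face result is unavailable or unrecognized."""
    if face_result is not None and face_result in registry["faces"]:
        return registry["faces"][face_result]
    if feature_result is not None and feature_result in registry["features"]:
        return registry["features"][feature_result]
    return None  # unfamiliar user: start the first/second marking flow

# Toy registry mapping recognition results to user numbers (illustrative).
registry = {
    "faces": {"face_A": 1},
    "features": {"red_jacket": 1, "blue_cap": 2},
}
```

With this registry, a user seen only from behind in a blue cap still resolves to user 2 through the feature path.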
The identification integration module 33 is respectively connected with the face identification module 31 and the feature identification module 32, and the face identification module 31 and the feature identification module 32 respectively output a face identification result and a feature identification result to the identification integration module 33. The recognition integration module 33 is configured to integrate the face recognition result and the feature recognition result, and output an integrated recognition feature of the target user. The comprehensive identification features comprise face identification and feature identification.
The operation module 4 includes a moving module 41 and a grasping module 42.
The moving module 41 is combined with the body for movement of the body. The moving module 41 drives the machine body to move, and according to the instruction of the control processor 1, the moving module 41 realizes the movement of various routes.
The grabbing module 42 is used for grabbing and placing the target object. The grabbing module 42 grabs and places the target object according to instructions from the control processor 1, both to collect objects for tidying and to distribute objects to users.
The control processor 1 is connected with the tag identification module 23, the tag module 2 and the identification integration module 33 respectively.
The identification integration module 33 outputs an identification result to the control processor 1, and the tag identification module 23 outputs an identification result to the control processor 1. The control processor 1 outputs a tag signal to the tag module 2.
The control processor 1 is also connected to the moving module 41 and the grasping module 42. The control processor 1 outputs a movement signal to the movement module 41, and the movement module 41 moves according to the movement signal. The control processor 1 outputs a grabbing signal and/or a placing signal to the grabbing module 42, the grabbing module 42 grabs the object according to the grabbing signal, and the grabbing module 42 places the object according to the placing signal.
As a preferable mode of the present embodiment, the method includes:
and an object identification module. The target identification module is used for identifying a target object, and the target object is a toy. The target identification module is connected with the control processor 1, and the target identification module outputs an identification result of a target object to the control processor 1.
The target identification module acquires the information of the target object by identifying the characteristics of the target object. The characteristics of the target object are matched with the label information of the target object, and the information of the target object can be acquired when the label information cannot be identified.
As a preferable mode of the present embodiment, the method includes:
the label comprises an adhesive label. The tag is an RFID tag, and the tag identification module 23 is an RFID identifier. The identification distance of the tag identification module 23 is a preset identification distance.
The preset identification distance may be set to 1-3m, within which the tag identification module 23 can identify the tag.
As a preferable mode of the present embodiment, the method includes:
a receiving module for temporarily receiving a target. The storage module comprises a plurality of storage units, and the storage units are used for storing different types of targets.
The storage module is a storage device on the machine body and used for temporarily placing the target object when the machine body collects and arranges the target object.
The object may be classified into categories such as: hard toys, soft toys, movable toys, immovable toys, etc. Different kinds of objects are placed in different storage units, so that subsequent arrangement is facilitated.
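The category-to-unit assignment just described can be sketched as a small mapping; the unit numbering and the catch-all fallback are illustrative assumptions.

```python
# Toy categories named in the text, mapped to storage unit indices.
# The unit numbering itself is an illustrative assumption.
STORAGE_UNITS = {"hard": 0, "soft": 1, "movable": 2, "immovable": 3}

def storage_unit_for(category):
    """Choose the storage unit for a collected toy; a category not in
    the table falls back to a catch-all unit after the named ones."""
    return STORAGE_UNITS.get(category, len(STORAGE_UNITS))
```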
As a preferable mode of the present embodiment, the method includes:
the characteristic identification of the user is time-efficient. The recognition integration module 33 pairs the result of the face recognition with the result of the feature recognition module 32 within a predetermined time, and after the pairing, the recognition integration module 33 determines the current user identity according to any one of the result of the face recognition and the result of the feature recognition.
The predetermined time may be within the same day, or within 6 hours, and may specifically be set to 6 to 24 hours. Because the user's features may change, the recorded user features must have a limited validity period.
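The validity window can be sketched as a simple timestamp check; the 6-hour default used here is one point in the 6-24 hour range the text allows.

```python
FEATURE_TTL_S = 6 * 3600  # predetermined validity window (6 h; the text allows 6-24 h)

def feature_valid(recorded_at_s, now_s):
    """A recorded user feature (e.g. today's clothing) is trusted only
    within the predetermined time, since the user's features may change."""
    return now_s - recorded_at_s <= FEATURE_TTL_S
```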
Example two
This embodiment is described with reference to Figs. 5-9.
A working method of an intelligent toy distribution and arrangement robot comprises the following steps:
If an unfamiliar user appears, the face recognition module 31 acquires and recognizes a facial image of the user and applies a first mark to the user.
Within a predetermined time, the feature identification module 32 obtains and identifies a user feature of the user and second tags the user.
The recognition integration module 33 acquires the face recognition result and the feature recognition result, pairs the user bearing the first mark with the user bearing the second mark, and assigns a number to the successfully paired user.
If the face recognition module 31 successfully recognizes the facial image of the user, the recognition integration module 33 extracts the number of the user.
If the feature identification module 32 successfully identifies the user feature of the user, the identification integration module 33 extracts the number of the user.
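The marking, pairing and numbering flow above can be sketched as a small registry; the class and field names are illustrative assumptions.

```python
class UserRegistry:
    """Sketch of the flow in the text: a face mark and a feature mark
    observed within the time window are paired, and the paired user
    receives a number that either mark can later recall."""

    def __init__(self):
        self.next_number = 1
        self.by_face = {}     # first mark (face) -> user number
        self.by_feature = {}  # second mark (features) -> user number

    def pair(self, face_mark, feature_mark):
        """Number a successfully paired unfamiliar user."""
        number = self.next_number
        self.next_number += 1
        self.by_face[face_mark] = number
        self.by_feature[feature_mark] = number
        return number

    def lookup(self, face_mark=None, feature_mark=None):
        """Extract the number from either successful recognition result."""
        if face_mark in self.by_face:
            return self.by_face[face_mark]
        return self.by_feature.get(feature_mark)
```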
As a preferable mode of the present embodiment, the method includes the steps of:
the control processor 1 outputs a tag signal to the tag generation module 21, which tag generation module 21 generates a tag with the object number.
The labeling module 22 attaches the generated label to the target object.
The tag identification module 23 identifies a tag on the object and reads tag information.
After the identification is successful, the tag identification module 23 outputs the read tag information to the control processor 1.
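The tag generation and read-back steps above can be sketched as follows; representing a tag as a dictionary, and the field names, are illustrative assumptions (a real RFID reader is not modeled).

```python
def generate_tag(number, name, storage_position):
    """Tag generation: embed the prefabricated content (object number,
    object name, storage position) that the text says each tag carries."""
    return {"number": number, "name": name, "storage_position": storage_position}

def read_tag(tag):
    """Stand-in for the RFID reader: return the tag content that the
    tag identification module forwards to the control processor."""
    return dict(tag)
```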
As a preferable mode of the present embodiment, the method includes:
and the target identification module identifies the primarily identified target object and captures the characteristics of the target object.
The control processor 1 outputs an identification signal to the tag identification module 23, and the tag identification module 23 identifies tag information of the object.
The control processor 1 pairs the tag information of the object with the object characteristics.
As a preferable mode of the present embodiment, the method includes:
the control processor 1 obtains the identity of the user and the target object used by the user.
If the contact time between the user and the target object exceeds the preset contact time, the control processor 1 pairs the target object with the user.
The control processor 1 ranks the target objects by contact degree according to the user's contact time with each target object.
The control processor 1 provides target objects to the user according to the contact-degree ranking.
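The contact-time pairing and ranking steps can be sketched as follows; the preset contact time value is an assumption.

```python
PRESET_CONTACT_S = 60  # preset contact time; the value is an assumption

def rank_preferences(contact_log):
    """contact_log maps toy id -> total contact seconds for one user.
    Only toys whose contact time exceeds the preset contact time are
    paired with the user; they are ranked by descending contact time,
    giving the order in which toys are offered to the user."""
    paired = {toy: t for toy, t in contact_log.items() if t > PRESET_CONTACT_S}
    return sorted(paired, key=paired.get, reverse=True)
```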
As a preferable mode of the present embodiment, the method includes:
if the separation of the target object from the user exceeds the preset separation time, the control processor 1 outputs a movement signal to the movement module 41.
The moving module 41 drives the body to move toward the target object.
The control processor 1 outputs a grasping signal to the grasping module 42, and the grasping module 42 grasps the target object according to the grasping signal.
After the target object is successfully grasped, the control processor 1 outputs a reset signal to the moving module 41, and the moving module 41 drives the machine body to move to the storage position.
The control processor 1 outputs a placing signal to the grasping module 42, and the grasping module 42 places the target object at the storage position.
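The retrieval sequence above can be sketched as a signal list; the signal names and the preset separation time value are illustrative assumptions.

```python
def tidy_up(separation_s, preset_separation_s=120):
    """Return the signal sequence the control processor issues when a
    toy has been left unattended beyond the preset separation time:
    move to the toy, grab it, return to storage, and place it."""
    if separation_s <= preset_separation_s:
        return []  # toy still in use: no action
    return ["move_to_target", "grab", "reset_to_storage", "place"]
```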
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (7)

1. An intelligent toy distribution and arrangement robot, comprising a machine body, characterized by further comprising: a control processor (1), a label module (2), a user identification module (3) and an operation module (4);
the label module (2) comprises: a label generation module (21), a marking module (22) and a label identification module (23);
the label generation module (21) is used for generating labels containing specific information, and the specifications of the labels are consistent; the tag generation module (21) integrates prefabricated content within the tag;
the marking module (22) is used for marking the generated label on the target object;
the label identification module (23) is used for identifying the label and acquiring the content in the label;
the user identification module (3) comprises: a face recognition module (31), a feature recognition module (32) and a recognition integration module (33);
the face recognition module (31) is used for carrying out face recognition, and the face recognition module (31) acquires an image of a user and recognizes a facial image of the user;
the feature identification module (32) is configured to identify user features, the user features including: characteristic objects, apparel characteristics, morphological characteristics;
the identification integration module (33) is respectively connected with the face identification module (31) and the feature identification module (32), and the face identification module (31) and the feature identification module (32) respectively output a face identification result and a feature identification result to the identification integration module (33); the recognition integration module (33) is used for integrating the face recognition result and the feature recognition result and outputting the integrated recognition feature of the target user; the comprehensive identification features comprise face identification and feature identification;
the operation module (4) comprises a moving module (41) and a grabbing module (42);
the moving module (41) is combined with the machine body and used for moving the machine body;
the grabbing module (42) is used for grabbing and placing the target object;
the control processor (1) is respectively connected with the label identification module (23), the label module (2) and the identification comprehensive module (33);
the identification integration module (33) outputs an identification result to the control processor (1), and the tag identification module (23) outputs an identification result to the control processor (1); the control processor (1) outputs a marking signal to the tag module (2);
the control processor (1) is also connected with the moving module (41) and the grabbing module (42); the control processor (1) outputs a movement signal to the movement module (41), and the movement module (41) moves according to the movement signal; the control processor (1) outputs grabbing signals and/or placing signals to the grabbing module (42), the grabbing module (42) grabs the target object according to the grabbing signals, and the grabbing module (42) places the target object according to the placing signals;
the feature identification of the user has a limited validity period; the recognition integration module (33) pairs the face recognition result with the result of the feature recognition module (32) within a preset time, and after the pairing, the recognition integration module (33) determines the identity of the current user from either the face recognition result or the feature recognition result;
the working method of the intelligent toy distribution and arrangement robot comprises the following steps: the control processor (1) acquires the identity of a user and a target object used by the user; if the contact time between the user and the target object exceeds the preset contact time, the control processor (1) pairs the target object with the user; the control processor (1) ranks the target objects by contact degree according to the user's contact time with each target object; the control processor (1) provides target objects to the user according to the contact-degree ranking.
2. The intelligent toy distribution and arrangement robot as claimed in claim 1, further comprising:
a target identification module; the target identification module is used for identifying a target object, and the target object is a toy; the target identification module is connected with the control processor (1), and the target identification module outputs the identification result of the target object to the control processor (1).
3. The intelligent toy distribution and arrangement robot as claimed in claim 1, wherein:
the label is an adhesive RFID label, and the tag identification module (23) is an RFID reader; the identification distance of the tag identification module (23) is a preset identification distance.
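The patent only states that reads are limited to a preset identification distance; one common way to enforce such a gate in software is to estimate tag distance from received signal strength with a log-distance path-loss model. The model, its constants, and the function name below are all assumptions for illustration.

```python
def within_preset_distance(rssi_dbm, preset_distance_m,
                           rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Accept an RFID read only if the tag's estimated distance lies
    within the preset identification distance. Distance is estimated
    from RSSI via a log-distance path-loss model (an assumption; the
    claim does not specify how the distance limit is realized)."""
    est_m = 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))
    return est_m <= preset_distance_m
```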
4. The intelligent toy distribution and arrangement robot as claimed in claim 1, further comprising:
a storage module for temporarily holding the target object; the storage module comprises a plurality of storage units, and the storage units store different types of target objects.
5. A working method of an intelligent toy distribution and arrangement robot, using the intelligent toy distribution and arrangement robot as claimed in claim 1, wherein the working method comprises the following steps:
when an unfamiliar user appears, the face recognition module (31) acquires and recognizes a face image of the user, and applies a first mark to the user;
within a preset time, the feature identification module (32) acquires and identifies the user features of the user, and applies a second mark to the user;
the identification integration module (33) acquires the face recognition result and the feature recognition result, pairs the user bearing the first mark with the user bearing the second mark, and assigns a number to each successfully paired user;
if the face recognition module (31) successfully recognizes the face image of the user, the identification integration module (33) extracts the number of the user;
if the feature identification module (32) successfully identifies the user features of the user, the identification integration module (33) extracts the number of the user;
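The dual-mark numbering scheme in the steps above can be sketched as a small registry: a first mark from face recognition and a second mark from feature recognition made within a preset window are paired under one user number, which either modality can later recover alone. `IdentityRegistry` and all identifiers are illustrative assumptions.

```python
class IdentityRegistry:
    """Sketch of the claim-5 identity flow. A face mark and a feature
    mark paired within the preset time share one user number; lookups
    by face or by feature both return that number."""

    def __init__(self, preset_window=10.0):
        self.window = preset_window
        self.pending_face = {}   # face_id -> timestamp of first mark
        self.by_face = {}        # face_id -> user number
        self.by_feature = {}     # feature_id -> user number
        self._next = 1

    def first_mark(self, face_id, t):
        self.pending_face[face_id] = t

    def second_mark(self, face_id, feature_id, t):
        t0 = self.pending_face.get(face_id)
        if t0 is None or t - t0 > self.window:
            return None          # pairing failed: window expired
        number = self._next
        self._next += 1
        self.by_face[face_id] = number
        self.by_feature[feature_id] = number
        del self.pending_face[face_id]
        return number

    def lookup(self, face_id=None, feature_id=None):
        """Either recognition result alone recovers the number."""
        if face_id in self.by_face:
            return self.by_face[face_id]
        return self.by_feature.get(feature_id)
```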
if the target object has been separated from the user for longer than a preset separation time, the control processor (1) outputs a moving signal to the moving module (41);
the moving module (41) drives the machine body to move towards the target object;
the control processor (1) outputs a grabbing signal to the grabbing module (42), and the grabbing module (42) grabs the target object according to the grabbing signal;
after the target object is successfully grabbed, the control processor (1) outputs a reset signal to the moving module (41), and the moving module (41) drives the machine body to move to the accommodating position;
the control processor (1) outputs a placing signal to the grabbing module (42), and the grabbing module (42) places the target object at the accommodating position.
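The retrieval sequence above (move to toy, grab, return, place) can be sketched as a single function. The callbacks stand in for the moving and grabbing modules; all names, the callback protocol, and the default threshold are illustrative assumptions, not the patent's control scheme.

```python
def tidy_up(toy, separated_for, preset_separation=60.0,
            move=None, grab=None, place=None):
    """Sketch of the claim-5 retrieval steps: if a toy has been apart
    from its user longer than the preset separation time, move to it,
    grab it, return to the storage position, and place it there."""
    steps = []
    move = move or (lambda target: steps.append(("move", target)))
    grab = grab or (lambda t: steps.append(("grab", t)) or True)
    place = place or (lambda t: steps.append(("place", t)))
    if separated_for <= preset_separation:
        return steps              # toy still in use; do nothing
    move(toy)                     # moving signal: drive body to the toy
    if grab(toy):                 # grabbing signal: pick up the toy
        move("storage")           # reset signal: return to storage
        place(toy)                # placing signal: deposit the toy
    return steps
```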
6. The working method of the intelligent toy distribution and arrangement robot as claimed in claim 5, characterized by comprising the following steps:
the control processor (1) outputs a marking signal to a label generating module (21), and the label generating module (21) generates a label carrying the number of the target object;
the marking module (22) attaches the generated label to the target object;
the tag identification module (23) identifies a tag on a target object and reads tag information;
after the identification is successful, the tag identification module (23) outputs the read tag information to the control processor (1).
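The generate–attach–read-back loop of this claim can be sketched as follows; `write_tag` and `read_tag` stand in for the label generating/marking modules and the tag identification module. The function name and tag layout are assumptions for illustration.

```python
def label_and_verify(target_number, write_tag, read_tag):
    """Sketch of the claim-6 labelling loop: generate a label carrying
    the target number, attach it via write_tag, then read it back via
    read_tag; the tag information is reported (to stand in for output
    to the control processor) only if the read-back matches."""
    tag = {"target_number": target_number}
    write_tag(tag)                 # marking module attaches the label
    info = read_tag()              # tag identification module reads it
    if info and info.get("target_number") == target_number:
        return info                # identification succeeded
    return None                    # identification failed
```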
7. A working method of an intelligent toy distribution and arrangement robot, using the intelligent toy distribution and arrangement robot as claimed in claim 2, the working method comprising:
the target identification module performs preliminary identification of the target object and captures the features of the target object;
the control processor (1) outputs an identification signal to the tag identification module (23), and the tag identification module (23) identifies tag information of the target object;
the control processor (1) pairs the tag information of the target object with the features of the target object.
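The final pairing step can be sketched as a registry keyed by the tag's target number, associating each read tag with the visual features captured for that object. The function, the registry layout, and the feature dictionary are illustrative assumptions.

```python
def pair_tag_with_features(tag_info, features, registry=None):
    """Sketch of the claim-7 step: the control processor pairs the tag
    information read from a toy with the features the target
    identification module captured, stored under the target number."""
    registry = {} if registry is None else registry
    registry[tag_info["target_number"]] = {
        "tag": tag_info,
        "features": features,
    }
    return registry
```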
CN201910709628.1A 2019-08-02 2019-08-02 Intelligent toy distribution and arrangement robot and working method thereof Active CN110561442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910709628.1A CN110561442B (en) 2019-08-02 2019-08-02 Intelligent toy distribution and arrangement robot and working method thereof

Publications (2)

Publication Number Publication Date
CN110561442A CN110561442A (en) 2019-12-13
CN110561442B true CN110561442B (en) 2021-06-01

Family

ID=68774459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910709628.1A Active CN110561442B (en) 2019-08-02 2019-08-02 Intelligent toy distribution and arrangement robot and working method thereof

Country Status (1)

Country Link
CN (1) CN110561442B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101515391A (en) * 2009-03-20 2009-08-26 北京理工大学 Automatic book management system based on closed stacks
CN106919971A (en) * 2017-03-03 2017-07-04 广州市幼儿师范学校 Quick Response Code intelligent archive cabinet and archive management system
CN106981119A (en) * 2017-05-05 2017-07-25 江苏速度信息科技股份有限公司 Entrance guard management system and method based on body shape
CN107000208A (en) * 2014-12-16 2017-08-01 亚马逊技术股份有限公司 The robot crawl of article in inventory system
CN108062622A (en) * 2017-12-08 2018-05-22 歌尔股份有限公司 A kind of children collect method of discrimination, methods of marking and system to toy fancy grade
CN109157845A (en) * 2018-08-31 2019-01-08 北京小米移动软件有限公司 A kind of toy storage method and device, user terminal, toy combination
CN109376621A (en) * 2018-09-30 2019-02-22 北京七鑫易维信息技术有限公司 A kind of sample data generation method, device and robot

Also Published As

Publication number Publication date
CN110561442A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
CN116600947A (en) Multi-mode comprehensive information identification mobile double-arm robot device, system and method
CN109961053A (en) Reminding method, apparatus and system
CN109300351B (en) Associating a tool with a pick gesture
CN111145257B (en) Article grabbing method and system and article grabbing robot
CN105512700A (en) RFID and video technology-based analysis method and analysis system thereof
Hoang et al. A solution based on combination of RFID tags and facial recognition for monitoring systems
Maekawa et al. WristSense: wrist-worn sensor device with camera for daily activity recognition
CN111507246A (en) Method, device, system and storage medium for selecting marked object through gesture
CN110561442B (en) Intelligent toy distribution and arrangement robot and working method thereof
CN111402036A (en) Customer track obtaining method in bank outlets and outlet management center system
CN109034008A (en) A kind of show ground management system based on feature identification
CN108197563A (en) For obtaining the method and device of information
CN106484109A (en) A kind of gesture detecting method docking close-target object based on back-scattered signal
Ohba et al. Facial expression communication with FES
CN113709364B (en) Camera identifying equipment and object identifying method
CN108133160A (en) Safe swimming monitoring system based on RFID
CN202221573U (en) Personal belongings management system
CN114022905A (en) Attribute-aware domain expansion pedestrian re-identification method and system
CN112232219A (en) Face recognition check-in system based on LBP (local binary pattern) feature algorithm
Martínez-Zarzuela et al. Action recognition system based on human body tracking with depth images
CN114255321A (en) Method and device for collecting pet nose print, storage medium and electronic equipment
CN109492719B (en) Device and method for assisting alzheimer's patient in positioning object
CN113894050A (en) Logistics piece sorting method, sorting equipment and storage medium
CN109084750A (en) A kind of air navigation aid and electronic equipment
Lömker et al. A multimodal system for object learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230804

Address after: 276000 Shengli Town, Tancheng County, Linyi City, Shandong Province

Patentee after: Shandong Leerle Toys Co.,Ltd.

Address before: 215400 Building 1, group 9, niqiao village, Shaxi Town, Taicang City, Suzhou City, Jiangsu Province

Patentee before: SUZHOU ZHAOXUAN DIGITAL TECHNOLOGY Co.,Ltd.