CN111390921A - Multifunctional robot for a library - Google Patents


Info

Publication number
CN111390921A
Authority
CN
China
Prior art keywords
robot
sub
information
main body
book
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010183484.3A
Other languages
Chinese (zh)
Other versions
CN111390921B (en)
Inventor
阮光册
郭欣欣
李双彤
柳思如
Current Assignee
China Southern Power Grid Internet Service Co ltd
Original Assignee
East China Normal University
Priority date
Filing date
Publication date
Application filed by East China Normal University
Priority to CN202010183484.3A
Publication of CN111390921A
Application granted
Publication of CN111390921B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a multifunctional robot for a library, comprising a mounting bracket, a robot main body, a storage mechanism, a control mechanism and a moving mechanism, wherein the storage mechanism is arranged at a side of the robot main body. The robot main body comprises two mechanical arms, an information input mechanism, an information output mechanism, a first image acquisition mechanism and a first information acquisition mechanism; the storage mechanism comprises a storage platform, a second image acquisition mechanism and a second information acquisition mechanism. Beyond its book-sorting, book-classifying and book-handling functions, the robot interacts with deaf-mute users through image recognition and the mechanical arms, and with blind users, ordinary users and staff through the information input and output mechanisms. It can thus serve a wide range of user groups, improves the library's intelligent management and service capabilities, lowers the barrier to use for deaf-mute and blind readers, and makes the required information easier to obtain.

Description

Multifunctional robot for a library
Technical Field
The invention relates to the technical field of library intelligent robots, in particular to a multifunctional robot for a library.
Background
In the course of library management, book circulation is generally handled manually: books must be classified, sorted and reshelved by hand. This mode of management involves tedious steps and consumes a great deal of the library staff's working time and energy.
To facilitate book retrieval, libraries generally place several computers in a designated area for readers to use. However, these computers are only suited to ordinary readers. Deaf-mute readers typically communicate and read through sign language, and some can read but cannot use pinyin, so they cannot use these computers; library staff generally cannot communicate in sign language either, so deaf-mute readers cannot obtain the information they need in the library. Blind readers, although able to communicate verbally with others, cannot read an ordinary screen, and these computers provide no braille input or output, so blind readers likewise cannot obtain the information they need effectively and quickly.
In addition, because the computers are fixed at a specific location, readers away from them cannot obtain the position of a book in real time and can only search for it like a needle in a haystack, which is very inefficient and frustrating for the reader.
Therefore, a management device for libraries is needed that can automatically sort, classify, arrange and shelve books, that can communicate with special user groups, and that lets readers obtain book information in real time, improving the library's management efficiency and the reader's convenience.
Disclosure of Invention
The invention aims to provide a multifunctional robot for a library, aiming at the defects in the prior art.
To achieve this object, the invention adopts the following technical solution:
a multi-function robot for a library, comprising:
a mounting bracket;
a robot main body provided at an upper portion of the mounting bracket;
a storage mechanism disposed above the mounting bracket and located at a side of the robot main body;
a control mechanism disposed between the robot main body and the mounting bracket;
the moving mechanism is arranged at the lower part of the mounting bracket;
wherein the robot main body includes:
the two mechanical arms are symmetrically arranged at the left part and the right part of the robot main body;
an information input mechanism provided at the front and rear of the robot main body;
the information output mechanism is arranged at the front part and the back part of the robot main body and is positioned at the upper part of the information input mechanism;
a first image acquisition mechanism rotatably mounted on a top of the robot main body;
a first information acquisition mechanism mounted to the mechanical arm near the storage mechanism;
the storage mechanism includes:
a storage platform;
a second image acquisition mechanism arranged at the upper part of the storage platform;
a second information acquisition mechanism mounted to the storage platform.
Preferably, the robot main body further includes:
a third image acquisition mechanism mounted to the mechanical arm.
Preferably, the information input mechanism includes:
the keyboard unit comprises a plurality of keys, and the upper surfaces of the keys comprise characters and Braille corresponding to the characters.
Preferably, the information output mechanism includes:
a display panel;
the Braille display panel is arranged at the lower part of the display panel.
Preferably, the information input mechanism further comprises:
and the handwriting unit comprises a pen and a handwriting area, and the handwriting area is arranged on the side part of the keyboard unit or is integrated in the information output mechanism.
Preferably, the receiving platform further comprises:
a mobile carriage, the mobile carriage comprising:
the first end of the first sub-bracket is slidably arranged on the accommodating platform;
a second sub-mount having a first end slidably nested with a second end of the first sub-mount;
the first end of the third sub-bracket is fixedly connected with the second end of the second sub-bracket;
a fourth sub-mount, a first end of the fourth sub-mount slidably nested with a second end of the third sub-mount;
the second image acquisition mechanism is rotatably mounted at the second end of the fourth sub-bracket.
Preferably, the robot arm includes:
a first sub-arm in a first rotational connection with the robot body;
a second sub-arm in a second rotational connection with the first sub-arm;
the manipulator is in third rotating connection with the second sub-arm;
wherein the first rotational connection, the second rotational connection and the third rotational connection have different rotational directions.
Preferably, the robot arm further comprises:
a third sub-arm in a fourth rotational connection with the first sub-arm and in a fifth rotational connection with the second sub-arm;
wherein the first rotational connection, the third rotational connection and the fifth rotational connection have different rotational directions; the fourth rotational connection has the same rotational direction as the first rotational connection or the fifth rotational connection.
Preferably, the storage platform comprises a book returning unclassified area, a book returning classified area and a book borrowing area;
the storage platform further comprises:
the bookshelf is arranged in the book returning unclassified region, the book returning classified region and the book borrowing region, and the bookshelf comprises a plurality of accommodating cavities with upward openings and arranged regularly.
Preferably, the accommodating cavity includes:
a first chamber located at an upper part of the accommodating cavity;
a second chamber located at a lower part of the accommodating cavity;
a partition movably disposed between the first chamber and the second chamber;
and a push plate disposed in the second chamber, reciprocating between the bottom of the second chamber and the partition.
Preferably, the robot further comprises:
and the lifting mechanism is arranged at the lower part of the mounting bracket, or the lifting mechanism is arranged between the mounting bracket and the robot main body.
By adopting the technical scheme, compared with the prior art, the invention has the following technical effects:
Beyond its book-sorting, book-classifying and book-handling functions, the robot interacts with deaf-mute users through image recognition and the mechanical arms, and with blind users, ordinary users and staff through the information input and output mechanisms. It can thus serve a wide range of user groups, improves the library's intelligent management and service capabilities, lowers the barrier to use for deaf-mute and blind readers, and makes the required information easier to obtain.
Drawings
FIG. 1 is a schematic diagram of an exemplary embodiment of the present invention.
Fig. 2 is a schematic view of a use state of an exemplary embodiment of the present invention.
Fig. 3 is a schematic view of the front side of the robot main body of one exemplary embodiment of the present invention.
Fig. 4 is a schematic view of the rear side of the robot main body of one exemplary embodiment of the present invention.
Fig. 5 is a schematic view of the front side of one embodiment of the robot main body of the present invention.
Fig. 6 is a schematic view of the rear side of one embodiment of the robot main body of the present invention.
Fig. 7 is a schematic view of the front side of another embodiment of the robot main body of the present invention.
Fig. 8 is a schematic view of the rear side of another embodiment of the robot main body of the present invention.
Fig. 9 is a schematic view of a receiving platform of an exemplary embodiment of the present invention.
Fig. 10 is a partial sectional view of a bookshelf according to an exemplary embodiment of the present invention.
Fig. 11 is a schematic view of a usage of a bookshelf according to an exemplary embodiment of the present invention.
FIG. 12 is a circuit connection block diagram of an exemplary embodiment of the present invention.
Wherein the reference numerals are: the robot includes a mounting bracket 100, a robot main body 200, a robot arm 210, a first sub-arm 211, a second sub-arm 212, a third sub-arm 213, a robot hand 214, an information input mechanism 220, an information output mechanism 230, a first image acquisition mechanism 240, a first information acquisition mechanism 250, a third image acquisition mechanism 260, a storage mechanism 300, a storage platform 310, a second image acquisition mechanism 320, a second information acquisition mechanism 330, a moving bracket 340, a first sub-bracket 341, a second sub-bracket 342, a third sub-bracket 343, a fourth sub-bracket 344, a bookshelf 350, a storage cavity 351, a first chamber 352, a second chamber 353, a partition 354, a push plate 355, a restriction element 356, a control mechanism 400, a moving mechanism 500, a lifting mechanism 600, and a book 700.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
Example 1
An exemplary embodiment of the present invention, as shown in fig. 1 to 2, is a multi-function robot for a library, comprising a mounting bracket 100, a robot main body 200, a storage mechanism 300, a control mechanism 400 and a moving mechanism 500, wherein the robot main body 200, the storage mechanism 300, the control mechanism 400 and the moving mechanism 500 are respectively mounted on the upper side and the lower side of the mounting bracket 100.
As shown in fig. 3, 4 and 12, the robot main body 200 includes two robot arms 210, an information input mechanism 220, an information output mechanism 230, a first image acquisition mechanism 240 and a first information acquisition mechanism 250, the two robot arms 210 are symmetrically disposed at left and right sides of the robot main body 200, the information input mechanism 220 is disposed at front and rear sides of the robot main body 200, the information output mechanism 230 is disposed at front and rear sides of the robot main body 200 and at an upper side of the corresponding information input mechanism 220, the first image acquisition mechanism 240 is rotatably mounted at a top of the robot main body 200, and the first information acquisition mechanism 250 is mounted at one of the robot arms 210.
As shown in fig. 3 and 4, the robot arm 210 includes a first sub-arm 211, a second sub-arm 212, and a robot arm 214, a first end of the first sub-arm 211 is connected to the robot main body 200 in a first rotation, a first end of the second sub-arm 212 is connected to a second end of the first sub-arm 211 in a second rotation, and a first end of the robot arm 214 is connected to a second end of the second sub-arm 212 in a third rotation, wherein a rotation direction of the first rotation, a rotation direction of the second rotation, and a rotation direction of the third rotation are different.
Specifically, with the robot arm 210 in its initial state, i.e. unrotated and perpendicular to the horizontal plane, construct a three-dimensional space with XYZ axes: the rotation plane of the first rotational connection is parallel to the Y-Z plane, that of the second rotational connection is parallel to the X-Z plane, and that of the third rotational connection is parallel to the X-Y plane. That is, the first sub-arm 211 swings the robot arm 210 in the front-rear direction, and the second sub-arm 212 swings it in the left-right direction.
The manipulator 214 comprises a first finger assembly and second finger assemblies, each rotatably connected to the manipulator 214. The first finger assembly, serving as the thumb, comprises two rotatably connected finger segments; each second finger assembly, serving as the index, middle, ring and little fingers, comprises three rotatably connected finger segments. The manipulator 214 is thus a bionic hand capable of normal human hand movements such as grasping and pointing, and can flexibly perform sign-language gestures to interact with deaf-mute users in sign language.
In a specific embodiment, as shown in fig. 5 and 6, the robot arm 210 further includes a third sub-arm 213, a first end of the third sub-arm 213 is in a fourth rotational connection with a second end of the first sub-arm 211, and a second end of the third sub-arm 213 is in a fifth rotational connection with a first end of the second sub-arm 212, wherein a rotational direction of the first rotational connection, a rotational direction of the fifth rotational connection, and a rotational direction of the third rotational connection are different, and a rotational direction of the first rotational connection is the same as a rotational direction of the fourth rotational connection.
Specifically, with the robot arm 210 in its initial state, i.e. unrotated and perpendicular to the horizontal plane, construct a three-dimensional space with XYZ axes: the rotation planes of the first and fourth rotational connections are parallel to the Y-Z plane, that of the fifth rotational connection is parallel to the X-Z plane, and that of the third rotational connection is parallel to the X-Y plane. That is, the first sub-arm 211 and the third sub-arm 213 swing the robot arm 210 in the front-rear direction, and the second sub-arm 212 swings it in the left-right direction.
In another specific embodiment, as shown in fig. 7 and 8, the mechanical arm 210 further includes a third sub-arm 213, a first end of the third sub-arm 213 is in a fourth rotational connection with a second end of the first sub-arm 211, and a second end of the third sub-arm 213 is in a fifth rotational connection with a first end of the second sub-arm 212, wherein a rotational direction of the first rotational connection, a rotational direction of the fifth rotational connection, and a rotational direction of the third rotational connection are different, and a rotational direction of the fifth rotational connection is the same as a rotational direction of the fourth rotational connection.
Specifically, with the robot arm 210 in its initial state, i.e. unrotated and perpendicular to the horizontal plane, construct a three-dimensional space with XYZ axes: the rotation plane of the first rotational connection is parallel to the Y-Z plane, those of the fourth and fifth rotational connections are parallel to the X-Z plane, and that of the third rotational connection is parallel to the X-Y plane. That is, the first sub-arm 211 swings the robot arm 210 in the front-rear direction, and the third sub-arm 213 and the second sub-arm 212 swing it in the left-right direction.
By providing the third sub-arm 213, the working range of the robot arm 210 can be expanded.
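The claim that the third sub-arm expands the working range can be checked with simple planar kinematics: the maximum reach of a serial chain is the sum of its link lengths, so inserting a link extends it. A sketch only; the link lengths below are invented, since the patent gives no dimensions.

```python
import math

def reach(lengths):
    """Maximum radial reach of a serial chain (all joints aligned)."""
    return sum(lengths)

def tip(lengths, angles):
    """Planar forward kinematics; angles are cumulative joint angles in radians."""
    x = y = 0.0
    total = 0.0
    for l, a in zip(lengths, angles):
        total += a
        x += l * math.cos(total)
        y += l * math.sin(total)
    return x, y

two_link = reach([0.30, 0.25])          # first + second sub-arm (hypothetical metres)
three_link = reach([0.30, 0.20, 0.25])  # with a third sub-arm inserted
```

Any nonzero third link strictly increases the fully extended reach, which is the effect the paragraph above describes.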
As shown in figs. 3 and 4, the information input mechanism 220 is rotatably provided at the front and rear sides of the robot main body 200. Specifically, when not in use, the information input mechanism 220 lies flat against the robot main body 200 (perpendicular to the horizontal plane); in use, it is rotated until perpendicular to the robot main body 200 (parallel to the horizontal plane), facilitating information input by the user.
By providing the information input mechanisms 220 on the front side and the rear side of the robot main body 200, the user can conveniently input information in most cases without adjusting his or her position or posture.
Specifically, the information input mechanism 220 includes a keyboard unit (not shown in the drawings) comprising a plurality of keys, the upper surface of each key bearing a character (e.g. an English character or a function character) together with the corresponding braille. The keyboard unit is thus convenient for both ordinary users and blind users, improving the convenience of operation.

The keyboard unit defaults to a braille input state; an ordinary user switches it to a Chinese or English input state before use.
Further, the information input mechanism 220 includes a handwriting unit (not shown in the drawings) comprising a pen and a handwriting area. The pen is tethered to the robot main body by a constraint unit to prevent loss; the handwriting area is disposed at a side of the keyboard unit (e.g. left, right, or both sides), or is integrated into the information output mechanism 230.
By providing both the keyboard unit and the handwriting unit, the user's input options are enriched; the mechanism suits user groups of different education levels, lowers the difficulty of use and improves input efficiency.
As shown in fig. 3 and 4, the robot main body 200 includes a head part, and information output mechanisms 230 are provided on front and rear sides of the head part. The information output mechanism 230 includes a display panel and a braille display panel, wherein the braille display panel is located at a lower side of the display panel. The display panel is used for displaying information input by the general user through the information input mechanism 220 or information input by the deaf-mute user through the first image acquisition mechanism 240, and displaying relevant feedback (such as a retrieval result) for the information. The braille display panel is used to display information input by the blind user through the information input mechanism 220 and to display relevant feedback for the information.
When the keyboard unit is in the braille input state, both the display panel and the braille display panel show the information input by the blind user and the relevant feedback, so that the blind user can read the content through the braille display panel while library staff can see the user's needs on the display panel. When the keyboard unit is in the Chinese or English input state, only the display panel displays the user's input and the relevant feedback.
The braille display panel comprises a plurality of holes and a plurality of protrusions arranged in the holes. When not displaying, each protrusion is hidden inside its hole; when displaying, the protrusions corresponding to the content to be shown move upward and project above the surface to form braille dots, which the blind user can read by touch.
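The dot patterns such a panel must raise follow standard six-dot braille (dots 1-3 down the left column, 4-6 down the right). As an illustration only, since the patent specifies no encoding, the mapping from letters to raised dots can be sketched via the Unicode braille block, where dot n corresponds to bit (n - 1) above U+2800:

```python
# Grade 1 braille, partial letter table for illustration.
LETTER_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

def to_cell(ch):
    """Return the raised-dot set and the Unicode braille character for ch."""
    dots = LETTER_DOTS[ch.lower()]
    bits = sum(1 << (d - 1) for d in dots)   # dot n -> bit (n - 1)
    return dots, chr(0x2800 + bits)

def render(text):
    """Render a string as a row of braille cells."""
    return "".join(to_cell(c)[1] for c in text)
```

A controller for the panel would drive the protrusion in hole n of a cell exactly when dot n is in the computed set.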
As shown in figs. 3 and 4, the first image capturing mechanism 240 is rotatably disposed on the top of the robot main body 200, i.e. on top of its head, and is used to capture the surrounding environment and the sign-language gestures of deaf-mute users, so that the control mechanism 400 can perform sign-language analysis, sign-language-to-text conversion, text analysis, semantic analysis, text-to-sign-language conversion and similar operations.
Specifically, the first image acquisition mechanism 240 is a camera that can acquire both still and moving image information. Further, it may acquire both colour and black-and-white image information for depth processing.
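The patent does not disclose the recognition algorithm itself; its classifications point to recurrent neural networks trained by backpropagation. Purely as an illustration, the post-processing step that a control mechanism might apply, turning noisy per-frame gesture labels into a sign sequence, could look like the sketch below (the frame labels and sign names are invented):

```python
from collections import Counter

def smooth(frame_labels, window=5):
    """Majority-vote over a sliding window to suppress single-frame noise."""
    out = []
    for i in range(len(frame_labels)):
        lo = max(0, i - window // 2)
        chunk = frame_labels[lo:i + window // 2 + 1]
        out.append(Counter(chunk).most_common(1)[0][0])
    return out

def collapse(labels):
    """Collapse runs of identical labels into one sign each."""
    words = []
    for lab in labels:
        if not words or words[-1] != lab:
            words.append(lab)
    return words

# Hypothetical noisy per-frame classifier output for the signs "find", "book".
frames = ["find", "find", "book", "find", "find", "book", "book", "book", "book"]
signs = collapse(smooth(frames))
```

The smoothed-and-collapsed sequence would then feed the text-analysis and semantic-analysis stages named above.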
As shown in fig. 12, the first information acquiring mechanism 250 is mounted on the manipulator 214 of the robot arm 210 nearer the storage mechanism 300, and reads the tag (e.g. a radio-frequency identification tag) built into a book when the manipulator 214 grasps it, so as to identify the book's information (e.g. classification number and title) and thereby determine where the book should be placed.
Specifically, the first information acquiring mechanism 250 may be a radio frequency identification reader.
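How the tag information maps to a placement position is not specified in the patent. A hypothetical sketch, in which the call-number format, the `SHELF_ROWS` table and the hashing rule are all invented for illustration:

```python
# Hypothetical layout: each top-level class letter gets a shelf row,
# with columns assigned from the rest of the call number.
SHELF_ROWS = {"I": 0, "K": 1, "T": 2}

def shelf_position(call_number, cols=10):
    """Return a (row, column) slot for a call number like 'T392/17'."""
    row = SHELF_ROWS[call_number[0]]
    # Illustrative only: a real system would keep an explicit
    # catalogue-to-slot table rather than hash the call number.
    col = sum(ord(c) for c in call_number[1:]) % cols
    return row, col
```

The control mechanism would pass the resulting slot to the arm as the target accommodating cavity.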
Further, as shown in figs. 3 and 4, the robot main body 200 further includes a third image capturing mechanism 260 mounted on the manipulator 214 of the robot arm 210, used to position a book and capture its spine image, which is compared against the tag information to avoid placing the book in an incorrect position.
Specifically, the third image acquisition mechanism 260 is a camera that is also capable of acquiring still image information and moving image information.
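One plausible way to implement the spine-versus-tag comparison, assuming the spine title has already been extracted by OCR (a step the patent does not detail), is a simple string-similarity check; `difflib` from the Python standard library is used here purely for illustration:

```python
from difflib import SequenceMatcher

def titles_match(spine_title, tag_title, threshold=0.8):
    """True if the OCR'd spine title and the tag's stored title are
    similar enough to be considered the same book."""
    ratio = SequenceMatcher(None, spine_title.lower(), tag_title.lower()).ratio()
    return ratio >= threshold
```

A mismatch would signal the control mechanism to re-check the book rather than shelve it.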
As shown in figs. 9 and 12, the receiving mechanism 300 includes a receiving platform 310, a second image acquiring mechanism 320, a second information acquiring mechanism 330, a moving bracket 340 and a bookshelf 350. The bookshelf 350 is mounted above the receiving platform 310; the moving bracket 340 is slidably mounted on the receiving platform 310 at one side of the bookshelf 350; the second image acquiring mechanism 320 is arranged at one end of the moving bracket 340 and acquires image information of books in the bookshelf 350; the second information acquiring mechanism 330 is mounted on the moving bracket 340 and reads the tags built into books in the bookshelf 350.
The movable bracket 340 includes a first sub-bracket 341, a second sub-bracket 342, a third sub-bracket 343, and a fourth sub-bracket 344, wherein a first end of the first sub-bracket 341 is slidably installed in the sliding track of the receiving platform 310, a first end of the second sub-bracket 342 is slidably nested at a second end of the first sub-bracket 341, a first end of the third sub-bracket 343 is vertically and fixedly connected with a second end of the second sub-bracket 342, a first end of the fourth sub-bracket 344 is slidably nested at a second end of the third sub-bracket 343, and the second image capturing mechanism 320 is installed at a second end of the fourth sub-bracket 344.
Specifically, the moving bracket 340 as a whole is L-shaped: the second sub-bracket 342 reciprocates along the axial direction of the first sub-bracket 341 to adjust the bracket's height (or length), and the fourth sub-bracket 344 reciprocates along the axial direction of the third sub-bracket 343 to adjust its width.
By sliding the movable bracket 340 and adjusting the movable bracket 340, the second image capturing mechanism 320 captures an image of a book placed in the bookshelf 350 to locate the book.
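A natural scan strategy for the moving bracket is a serpentine sweep over the shelf's grid of cavities, which minimises travel between rows. This is an assumption; the patent does not prescribe a path, and the grid size below is arbitrary:

```python
def scan_path(rows, cols):
    """Yield (row, col) cells of the bookshelf in serpentine order:
    left-to-right on even rows, right-to-left on odd rows."""
    for r in range(rows):
        rng = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in rng:
            yield r, c

# Example sweep of a 2 x 3 grid of accommodating cavities.
path = list(scan_path(2, 3))
```

At each cell the bracket would pause so the second image and information acquiring mechanisms can record that cavity's book.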
The storage platform 310 includes a book-returning unclassified area, a book-returning classified area and a book-borrowing area, and a bookshelf 350 is disposed in each. The unclassified returns area is near the moving bracket 340; the classified returns area is away from the moving bracket 340 and near the robot main body 200; the borrowing area is away from both the moving bracket 340 and the robot main body 200.
As shown in fig. 10, the bookshelf 350 includes several regularly arranged accommodating cavities 351 with upward openings for placing books. In order to increase the number of books accommodated in the bookshelf 350, each accommodating chamber 351 includes a first chamber 352, a second chamber 353, a partition 354 and a push plate 355, wherein the first chamber 352 and the second chamber 353 are spaced apart by the partition 354, and the push plate 355 is disposed at the bottom of the second chamber 353.
Specifically, the partition 354 consists of two rotating plates rotatably mounted in the accommodating cavity 351 and symmetrically arranged. When a book 700 moves from the first chamber 352 to the second chamber 353, both plates rotate downward; when a book 700 moves from the second chamber 353 to the first chamber 352, both plates rotate upward.
The push plate 355 reciprocates linearly up and down in the second chamber 353, and may reciprocate under the action of electric force (e.g., a motor) or mechanical force (e.g., hydraulic pressure). In the present embodiment, the movement of the push rod 355 under the elastic restriction of the restriction member 356 is exemplified.
Specifically, the constraining member 356 is a spring having one end constrained to the push plate 355 and the other end connected to the bottom of the second chamber 353. In the case where the accommodating chamber 351 does not accommodate the book 700, the pusher 355 abuts against the spacer 354; in the case that the accommodating cavity 351 accommodates a book 700, the book 700 is located inside the first accommodating cavity 352, and the push plate 355 still abuts against the partition 354; under the condition that the accommodating cavity 351 accommodates two books 700, the first book 700 is located inside the first accommodating cavity 352, the second book 700 exerts a downward acting force on the first book 700, so that the first book 700 exerts a downward acting force on the push plate 355 to press the restriction element 356, the first book 700 enters the second accommodating cavity 353, the second book 700 enters the first accommodating cavity 352, and the push plate 355 abuts against the bottom of the second accommodating cavity 353 under the action of gravity of the two books 700.
As shown in fig. 11, when the book 700 in the first receiving chamber 352 is removed, the book 700 in the second receiving chamber 353 is moved upward by the elastic force of the restriction member 356 and finally positioned in the first receiving chamber 352.
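The spring behaves like a plate dispenser: its preload holds the push plate 355 against the partition 354 under one book's weight, but two books compress it to the chamber bottom. A quick statics check of this behaviour, with all numerical values (book mass, preload, spring constant, travel) assumed for illustration since the patent specifies none:

```python
# Toy statics check for the spring-loaded push plate 355 (all values assumed;
# the patent gives no masses or spring constants).
G = 9.81            # gravitational acceleration, m/s^2
BOOK_MASS = 0.8     # assumed mass of one book, kg
TRAVEL = 0.05       # assumed push-plate travel from partition to chamber bottom, m

def plate_position(n_books, preload_n, k):
    """Return push-plate compression in metres for n stacked books.

    preload_n: spring preload force (N) holding the plate against the partition.
    k: spring constant (N/m).
    The plate only moves once the books' weight exceeds the preload,
    and cannot travel further than the chamber depth.
    """
    weight = n_books * BOOK_MASS * G
    if weight <= preload_n:
        return 0.0                      # plate stays against partition 354
    return min((weight - preload_n) / k, TRAVEL)

# With a preload of 10 N and k = 100 N/m, one book (about 7.8 N) leaves the
# plate at the partition, while two books (about 15.7 N) press it to the bottom.
```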
As shown in fig. 12, the control mechanism 400 is coupled to the robot main body 200, the receiving platform 300 and the moving mechanism 500 respectively, and is used for controlling the robot main body 200, the receiving platform 300 and the moving mechanism 500 to perform corresponding actions respectively.
The control mechanism 400 is a central control system that receives data and transmits control commands to the different components.
The control mechanism 400 acquires the data transmitted by the information input mechanism 220, the first image acquisition mechanism 240, the first information acquisition mechanism 250 and the third image acquisition mechanism 260 of the robot main body 200, and accordingly either controls the mechanical arms 210 of the robot main body 200 or sends data to the information output mechanism 230 so that it displays the corresponding information.
The control mechanism 400 controls the movable bracket 340 to slide and adjust, acquires the data transmitted by the second image acquisition mechanism 320 and the second information acquisition mechanism 330, and transmits the processed data to the robot main body 200 to control its mechanical arm 210 to classify and pick up books.
The control mechanism 400 controls the moving mechanism 500 to move to a predetermined place based on the result of book classification.
Specifically, the moving mechanism 500 consists of several universal wheels or Mecanum wheels, enabling both movement and changes of direction.
Further, to accommodate bookshelves of different heights in the library, the multifunctional robot also includes a lifting mechanism 600. The lifting mechanism 600 may be provided at the lower portion of the mounting bracket 100, adjusting the height of both the robot main body 200 and the storage platform 300 on the upper side of the mounting bracket 100; alternatively, it may be provided between the mounting bracket 100 and the robot main body 200, adjusting only the height of the robot main body 200.
Specifically, the control mechanism 400 controls the lifting mechanism 600 to perform height adjustment, wherein the lifting mechanism 600 may be a hydraulic lifting system or a motor-driven articulated lifting system.
The method of using the invention covers book management and intelligent interaction.
The book management procedure is as follows:
book classification: the control mechanism 400 moves the movable bracket 340, and the second image acquisition mechanism 320 and the second information acquisition mechanism 330 photograph and read each book 700 in the first chamber 352 of the bookshelf 350 in the book returning unclassified area to obtain its spine information, book classification number and similar data. The control mechanism 400 receives and processes this information, classifies the book 700 by its classification number and return position, and controls the manipulator 210 to move the book from the first chamber 352 of the bookshelf 350 in the book returning unclassified area into an accommodating cavity 351 of the bookshelf 350 in the book returning classified area. The control mechanism 400 then moves the movable bracket 340 again, and the two acquisition mechanisms photograph and read the book 700 that has risen from the second chamber 353 into the first chamber 352 of the bookshelf 350 in the book returning unclassified area; the control mechanism 400 classifies it in the same way and controls the manipulator 210 to move it into an accommodating cavity 351 of the bookshelf 350 in the book returning classified area;
book returning: the control mechanism 400 controls the moving mechanism 500 to move the multifunctional robot to a predetermined place. On arrival, the control mechanism 400 controls the mechanical arm 210 of the robot main body 200 nearest the storage platform 300 to grasp a book 700 held in the bookshelf 350 of the book returning classified area, while the third image acquisition mechanism 260 and the first information acquisition mechanism 250 acquire the grasped book's information. Based on the height of the library bookshelf, the control mechanism 400 actuates the lifting mechanism 600 so that the mechanical arm 210 of the robot main body 200 is aligned with the book returning position. The control mechanism 400 then controls the manipulator 214 of the arm farther from the storage platform 300 to shift the books on the library shelf and open up the return position: for example, the third image acquisition mechanism 260 on the mechanical arm 210 aligns with the return position, and the first and second finger parts of the manipulator 214 are inserted between two books to push one aside, leaving a gap into which the other manipulator 214 conveniently places the book to be returned;
book borrowing: the control mechanism 400 receives the borrowing information and controls the moving mechanism 500 to move the multifunctional robot to the specified place. On arrival, the control mechanism 400 actuates the lifting mechanism 600 based on the height of the library bookshelf so that the mechanical arm 210 of the robot main body 200 is aligned with the borrowing position; it then controls the mechanical arm 210 nearest the storage platform 300 to take the requested book, uses the third image acquisition mechanism 260 and the first information acquisition mechanism 250 to acquire the grasped book's information, and places the book into an accommodating cavity 351 of the bookshelf 350 in the book borrowing area.
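The classification step amounts to mapping each scanned call number to a target slot in the classified area. A minimal sketch of that mapping, where the call-number scheme and the consecutive-slot layout are assumptions for illustration, not details given by the patent:

```python
# Toy book-sorting pass: group scanned books by the letter class of their
# call number (Chinese Library Classification style, e.g. "TP391"), then
# assign each group consecutive slots in the classified area.
def sort_returned_books(scanned):
    """scanned: list of (book_id, call_number) pairs read by the second
    image/information acquisition mechanisms. Returns a move plan as a
    list of (book_id, target_slot)."""
    by_class = {}
    for book_id, call_no in scanned:
        by_class.setdefault(call_no[0], []).append((call_no, book_id))
    plan, slot = [], 0
    for cls in sorted(by_class):                 # classes in shelf order
        for call_no, book_id in sorted(by_class[cls]):
            plan.append((book_id, slot))         # manipulator 210 moves book here
            slot += 1
    return plan
```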
The intelligent interaction procedure is as follows:
interaction with a deaf-mute user: the deaf-mute user can interact with the robot main body 200 either through sign language or through the information input mechanism 220;
when the deaf-mute user stands in front of the robot main body 200 and signs, the first image acquisition mechanism 240 of the robot main body 200 captures the sign-language gestures; after processing by the control mechanism 400, the information corresponding to the signs is displayed on the information output mechanism 230. Based on that information, the control mechanism 400 searches the database for a matching answer, processes it, displays it on the information output mechanism 230 and performs the corresponding sign-language gestures with the two mechanical arms 210, completing the interaction. If no matching answer is found, the control mechanism 400 alerts a staff member, who can enter an answer remotely or through the information input mechanism 220 on the rear side of the robot main body 200; the control mechanism 400 then displays the answer on the information output mechanism 230 and signs it with the two mechanical arms 210 to complete the interaction;
when sign language is not used, the deaf-mute user can enter information through the information input mechanism 220 on the front or rear side of the robot main body 200, such as the keyboard unit or the handwriting unit. The control mechanism 400 searches the database for the corresponding answer, processes it and displays it on the information output mechanism 230. If no answer is found, the control mechanism 400 alerts a staff member, who enters an answer remotely or through the information input mechanism 220 on the side opposite the user, and the control mechanism 400 displays the answer on the information output mechanism 230 to complete the interaction;
interaction with a blind user: the blind user enters information through the information input mechanism 220 on the front or rear side of the robot main body 200, using the Braille on the keyboard unit. The control mechanism 400 searches the database for the corresponding answer, processes it and presents it on the Braille display panel of the information output mechanism 230. If no answer is found, the control mechanism 400 alerts a staff member, who enters an answer remotely or through the information input mechanism 220 on the side opposite the user, and the control mechanism 400 presents the answer on the Braille display panel of the information output mechanism 230 to complete the interaction;
interaction with an ordinary user: the ordinary user enters information through the information input mechanism 220 on the front or rear side of the robot main body 200, such as the keyboard unit (switching its input state first) or the handwriting unit. The control mechanism 400 searches the database for the corresponding answer, processes it and displays it on the information output mechanism 230. If no answer is found, the control mechanism 400 alerts a staff member, who enters an answer remotely or through the information input mechanism 220 on the side opposite the user, and the control mechanism 400 displays the answer on the information output mechanism 230 to complete the interaction.
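All three interaction modes share the same control flow: look the query up in the database, and if no stored answer exists, alert a staff member and wait for a manually entered reply. A sketch of that shared logic, with function names and the remember-the-answer behaviour assumed for illustration:

```python
# Shared query-answer flow of the control mechanism 400 (names assumed).
def answer_query(query, database, ask_staff):
    """database: dict mapping known queries to answers.
    ask_staff: fallback callable used when no stored answer exists
    (in the robot, this issues the staff reminder and blocks until the
    staff member replies remotely or via the information input mechanism)."""
    answer = database.get(query)
    if answer is None:
        answer = ask_staff(query)     # staff replies remotely or via keyboard
        database[query] = answer      # cache for next time (assumed behaviour)
    return answer
```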
Beyond sorting, classifying, grasping and shelving books, the robot main body interacts with deaf-mute users through image recognition and the mechanical arms, and with blind users, ordinary users and staff through the information input and output mechanisms. It thus conveniently serves a wide range of user groups, improves the library's intelligent management and service capability, and lowers the barrier for deaf-mute, blind and other users to obtain the information they need.
Example 2
This embodiment describes a specific method for interaction between the library multifunctional robot and a deaf-mute user, comprising the following steps:
constructing a recognition model:
acquiring sign language image information, wherein the sign language image information comprises color image information, black-and-white image information and sign language action information;
according to the semantics corresponding to the sign language image information, marking the sign language image information to form training data;
inputting the training data into the neural network model for training to obtain optimal model parameters and construct the recognition model.
Sign language translation is carried out by utilizing a recognition model:
acquiring sign language image information;
inputting the sign language image information to the recognition model to obtain a semantic meaning corresponding to the sign language image information, and outputting the semantic meaning.
Sign language conversion is carried out by utilizing a recognition model:
obtaining semantics;
inputting the semantics into the recognition model to obtain the sign language image information corresponding to the semantics, and outputting the sign language action information of that image information.
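The three steps above (build a model from labelled sign-language samples, translate sign to semantics, convert semantics back to sign) can be sketched with a toy nearest-centroid classifier standing in for the neural network; the patent's real model is the CNN plus LSTM described in the following paragraphs, and the fixed-length feature vectors here are an assumption:

```python
# Toy stand-in for the recognition model: nearest-centroid over fixed-length
# feature vectors extracted from sign-language images (feature extraction is
# assumed to happen upstream; the patent's actual model is a CNN + LSTM).
class SignModel:
    def __init__(self):
        self.centroids = {}   # semantics -> mean feature vector

    def train(self, samples):
        """samples: list of (feature_vector, semantics) training pairs."""
        groups = {}
        for vec, label in samples:
            groups.setdefault(label, []).append(vec)
        for label, vecs in groups.items():
            n = len(vecs)
            self.centroids[label] = [sum(c) / n for c in zip(*vecs)]

    def translate(self, vec):
        """Sign -> semantics: label of the nearest centroid (squared distance)."""
        return min(self.centroids,
                   key=lambda lb: sum((a - b) ** 2
                                      for a, b in zip(vec, self.centroids[lb])))

    def convert(self, semantics):
        """Semantics -> sign: return the stored prototype action vector."""
        return self.centroids[semantics]
```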
Specifically, the case in which the neural network model is a deep convolutional neural network is described as an example.
A multilayer deep convolutional neural network extracts features from the sign language image information in several stages; a long short-term memory (LSTM) network then models the extracted features over time and outputs a time sequence of sign labels, completing the recognition of the sign language image information.
The first layer of the deep convolutional neural network is a convolutional layer with kernel size [96, 11, 11, 3]: the first dimension is the number of kernels, the second the kernel height, the third the kernel width, and the fourth the number of channels. The kernels slide along the x and y directions with a stride of 4 in each. The result is activated with the ReLU function, a piecewise-linear function that reduces the computational cost of both forward propagation and back-propagated gradients. The activated result is then pooled over [3, 3] regions using max pooling (the maximum value in each [3, 3] region becomes the new pixel and the other pixels are discarded) with a stride of 3; pooling leaves the number of channels unchanged while reducing the height and width, which suppresses overfitting. Finally, local response normalization is applied to the pooled result to further improve the model's generalization.
The second layer is a convolutional layer with kernel size [256, 7, 7, 3], sliding along the x and y directions with a stride of 2, followed by ReLU activation, max pooling over [3, 3] regions with a stride of 2, and local response normalization of the pooled result.
The third layer is a convolutional layer with kernel size [256, 5, 5, 3], sliding along the x and y directions with a stride of 1, followed by ReLU activation, max pooling over [2, 2] regions with a stride of 2, and local response normalization of the pooled result. After these three convolution-pooling layers come four purely convolutional layers, each with kernel size [384, 3, 3, 3] and ReLU activation.
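The feature-map sizes through the conv/pool stack follow the usual unpadded formula, floor((input - kernel) / stride) + 1 per spatial dimension. A quick calculator for the layer sizes described above; the input resolution is not stated in the patent, so a 227x227 input is assumed here purely for illustration:

```python
# Output side length of a square conv/pool layer without padding:
# out = floor((n - k) / s) + 1   (n: input side, k: kernel side, s: stride)
def out_size(n, k, s):
    return (n - k) // s + 1

def stack(n, layers):
    """layers: list of (kernel, stride) pairs applied in order.
    Returns the output side length after each layer."""
    sizes = []
    for k, s in layers:
        n = out_size(n, k, s)
        sizes.append(n)
    return sizes

# Layer 1 of the described network on an assumed 227x227 input:
# 11x11 conv, stride 4 -> 55; then 3x3 max pool, stride 3 -> 18.
```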
The feature maps produced by the convolution-pooling stack are fed into two fully connected layers, each followed by Dropout (randomly ignoring some units so that they do not take part in the next operation) with probability 0.5. Each fully connected layer contains 4096 neural units with the tanh activation function. The resulting 4096-dimensional vector is fed into the LSTM unit at time t. One output of the LSTM unit serves as P_t, the predicted value at time t; the other serves as input to the LSTM unit at time t+1, where it joins the CNN feature vector of time t+1 to make the next prediction. The output of the LSTM unit is a probability vector whose dimension equals the total number of sign labels; the sign corresponding to the largest value in the probability vector is taken as the predicted value at time t.
When the LSTM unit at time t has produced its output, the next image is fetched and the CNN operation is performed again.
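The final selection step at each time step is a plain argmax over the LSTM's probability vector; as a sketch:

```python
# Per-timestep prediction: pick the sign label whose probability is largest
# in the LSTM output vector (label ordering assumed to match the vector).
def predict_sign(prob_vector, labels):
    """prob_vector: LSTM output, one probability per sign label.
    labels: sign labels in the same order. Returns the predicted label."""
    best = max(range(len(prob_vector)), key=prob_vector.__getitem__)
    return labels[best]
```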
In addition, a judgment function is defined: when no gesture is detected in 10 consecutive sign language images, the sentence is judged to have ended.
A recurrent neural network model then composes the labels of the recognized sign language images, i.e. their corresponding semantics, into sentences, which are displayed by the information output mechanism.
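The sentence-end rule, a sentence is over once 10 consecutive frames contain no detected gesture, can be written directly as:

```python
# Sentence-end judgment: a sentence ends once 10 consecutive frames
# contain no gesture (frame flags: True = gesture detected in that frame).
END_GAP = 10

def sentence_ended(frames):
    run = 0
    for has_gesture in frames:
        run = 0 if has_gesture else run + 1
        if run >= END_GAP:
            return True
    return False
```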
Discrete words are formed from the semantics; given these input words and the existing Chinese corpus database, the template corpus with the greatest similarity is selected to generate an initial sentence;
the initial sentence is then processed by the recurrent neural network model and corrected iteratively, with similar-word substitution used during correction to improve the sentence's accuracy and fluency.
For the reverse direction, the sign language image database is searched using the discrete vocabulary obtained from the information input mechanism; once the search is complete, the retrieved sign language images are spliced in the original word order and demonstrated through the information output mechanism and the mechanical arms.
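Template selection picks, from the corpus, the sentence most similar to the input discrete vocabulary. A word-overlap sketch of that choice; the patent does not specify the actual similarity measure, so shared-word count is an assumption here:

```python
# Choose the template sentence with the greatest word overlap with the
# discrete input vocabulary (similarity measure assumed: shared-word count).
def pick_template(words, corpus):
    """words: discrete vocabulary derived from the semantics.
    corpus: list of candidate template sentences (whitespace-tokenised)."""
    wordset = set(words)
    return max(corpus, key=lambda s: len(wordset & set(s.split())))
```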
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A multi-function robot for a library, comprising:
mounting a bracket;
a robot main body provided at an upper portion of the mounting bracket;
a storage mechanism disposed above the mounting bracket and located at a side of the robot main body;
a control mechanism disposed between the robot main body and the mounting bracket;
the moving mechanism is arranged at the lower part of the mounting bracket;
wherein the robot main body includes:
the two mechanical arms are symmetrically arranged at the left part and the right part of the robot main body;
an information input mechanism provided at the front and rear of the robot main body;
the information output mechanism is arranged at the front part and the back part of the robot main body and is positioned at the upper part of the information input mechanism;
a first image acquisition mechanism rotatably mounted on a top of the robot main body;
a first information acquisition mechanism mounted to the mechanical arm near the storage mechanism;
the storage mechanism includes:
a storage platform;
the second image acquisition mechanism is arranged at the upper part of the accommodating platform;
a second information acquisition mechanism mounted to the storage platform.
2. The multi-function robot for libraries of claim 1, wherein the information input mechanism comprises:
the keyboard unit comprises a plurality of keys, and the upper surfaces of the keys comprise characters and Braille corresponding to the characters.
3. The multi-function robot for libraries of claim 2, wherein the information output mechanism comprises:
a display panel;
the Braille display panel is arranged at the lower part of the display panel.
4. The multi-function robot for libraries of claim 2, wherein the information input mechanism further comprises:
and the handwriting unit comprises a pen and a handwriting area, and the handwriting area is arranged on the side part of the keyboard unit or is integrated in the information output mechanism.
5. The multi-function robot for libraries of claim 1, wherein the storage platform further comprises:
a mobile carriage, the mobile carriage comprising:
the first end of the first sub-bracket is slidably arranged on the accommodating platform;
a second sub-mount having a first end slidably nested with a second end of the first sub-mount;
the first end of the third sub-bracket is fixedly connected with the second end of the second sub-bracket;
a fourth sub-mount, a first end of the fourth sub-mount slidably nested with a second end of the third sub-mount;
the second image acquisition mechanism is rotatably mounted at the second end of the fourth sub-bracket.
6. The multi-function robot for libraries of claim 1, wherein the robot arm comprises:
a first sub-arm in a first rotational connection with the robot body;
a second sub-arm in a second rotational connection with the first sub-arm;
the manipulator is in third rotating connection with the second sub-arm;
wherein the first rotational connection, the second rotational connection and the third rotational connection have different rotational directions.
7. The multi-function robot for libraries of claim 6, wherein the robot arm further comprises:
a third sub-arm in a fourth rotational connection with the first sub-arm and in a fifth rotational connection with the second sub-arm;
wherein the first rotational connection, the third rotational connection and the fifth rotational connection have different rotational directions; the fourth rotational connection has the same rotational direction as the first rotational connection or the fifth rotational connection.
8. The multi-function robot for libraries of claim 1, wherein the housing platform comprises a book return unclassified area, a book return classified area and a book borrowing area;
the storage platform further comprises:
the bookshelf is arranged in the book returning unclassified region, the book returning classified region and the book borrowing region, and the bookshelf comprises a plurality of accommodating cavities with upward openings and arranged regularly.
9. The multi-function robot for libraries of claim 8, wherein the receiving cavity comprises:
a first chamber located at an upper side of the accommodating chamber;
a second chamber located at a lower side of the receiving cavity;
a partition movably disposed between the first chamber and the second chamber;
and the push plate is arranged in the second chamber and reciprocates between the bottom of the second chamber and the partition plate.
10. The multi-function robot for libraries of claim 1, further comprising:
a lifting mechanism disposed on a lower portion of the mounting bracket, or disposed between the mounting bracket and the robot main body.
CN202010183484.3A 2020-03-16 2020-03-16 Library uses multi-functional robot Active CN111390921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010183484.3A CN111390921B (en) 2020-03-16 2020-03-16 Library uses multi-functional robot

Publications (2)

Publication Number Publication Date
CN111390921A true CN111390921A (en) 2020-07-10
CN111390921B CN111390921B (en) 2021-08-17

Family

ID=71424726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010183484.3A Active CN111390921B (en) 2020-03-16 2020-03-16 Library uses multi-functional robot

Country Status (1)

Country Link
CN (1) CN111390921B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160032532A (en) * 2014-09-16 2016-03-24 경상대학교산학협력단 Arrangement robot
CN105798914A (en) * 2014-12-30 2016-07-27 希姆通信息技术(上海)有限公司 Intelligent robot for automatically classifying books of library
CN106181985A (en) * 2016-08-21 2016-12-07 张玉华 A kind of books pick and place robot
CN206480098U (en) * 2017-02-08 2017-09-08 广州市华标科技发展有限公司 A kind of self-service take pictures with Braille keyboard accepts equipment
CN208351632U (en) * 2018-03-29 2019-01-08 张�诚 A kind of intelligent self-service borrows book-return device
CN209514551U (en) * 2019-03-16 2019-10-18 上海萃钛智能科技有限公司 A kind of intelligent AC robot and AC system
CN209936932U (en) * 2019-05-13 2020-01-14 江苏大学 Book management robot based on RFID technology
CN110722577A (en) * 2019-10-23 2020-01-24 桂林电子科技大学 Intelligent book fetching device based on image recognition technology and use method
CN210025304U (en) * 2018-11-07 2020-02-07 重庆文理学院 Intelligent management robot for library

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116861300A (en) * 2023-09-01 2023-10-10 中国人民解放军海军航空大学 Automatic auxiliary labeling method and device for complex maneuvering data set of military machine type
CN116861300B (en) * 2023-09-01 2024-01-09 中国人民解放军海军航空大学 Automatic auxiliary labeling method and device for complex maneuvering data set of military machine type

Also Published As

Publication number Publication date
CN111390921B (en) 2021-08-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231110

Address after: Room 606-609, Compound Office Complex Building, No. 757, Dongfeng East Road, Yuexiu District, Guangzhou, Guangdong Province, 510699

Patentee after: China Southern Power Grid Internet Service Co.,Ltd.

Address before: 200062 No. 3663, Putuo District, Shanghai, Zhongshan North Road

Patentee before: EAST CHINA NORMAL University