CN107993659A - Page turning method, robot page turning system and server applied to robot

Page turning method, robot page turning system and server applied to robot

Info

Publication number
CN107993659A
CN107993659A (application CN201711220103.9A)
Authority
CN
China
Prior art keywords
information
user
robot
voice
page turning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711220103.9A
Other languages
Chinese (zh)
Inventor
李承敏
王文斌
包振毅
余倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yude Technology Co Ltd
Original Assignee
Shanghai Yude Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yude Technology Co Ltd
Priority to CN201711220103.9A
Publication of CN107993659A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J 9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1679 Programme controls characterised by the tasks executed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B42 BOOKBINDING; ALBUMS; FILES; SPECIAL PRINTED MATTER
    • B42D BOOKS; BOOK COVERS; LOOSE LEAVES; PRINTED MATTER CHARACTERISED BY IDENTIFICATION OR SECURITY FEATURES; PRINTED MATTER OF SPECIAL FORMAT OR STYLE NOT OTHERWISE PROVIDED FOR; DEVICES FOR USE THEREWITH AND NOT OTHERWISE PROVIDED FOR; MOVABLE-STRIP WRITING OR READING APPARATUS
    • B42D 9/00 Bookmarkers; Spot indicators; Devices for holding books open; Leaf turners
    • B42D 9/04 Leaf turners
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Automation & Control Theory (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Toys (AREA)

Abstract

The embodiments of the present invention relate to the field of artificial intelligence and disclose a page turning method applied to a robot, a robot page turning system and a server. In the present invention, a page turning method applied to a robot includes: collecting voice information of a user during reading in real time, and determining, according to the collected voice information, the page number where the text information corresponding to the voice information is located; and performing a page turning operation when the collected voice information is detected to match the last sentence of text information on that page. This enables the robot to perform automatic page turning without manual operation, improving reading efficiency.

Description

Page turning method applied to robot, robot page turning system and server
Technical Field
The embodiment of the invention relates to the field of artificial intelligence, in particular to a page turning method applied to a robot, a robot page turning system and a server.
Background
With the development of artificial intelligence, various household robots, such as cleaning robots and entertainment robots, have come onto the market and into our lives. Among them, one type is popular with many families: the child robot. The child robot is an intelligent robot that can accompany a child and help the child learn to recognize things. Such robots generally provide video monitoring, remote call, video call and similar functions. The advent of child robots is a good choice for families that are too busy with work to spend much time and effort accompanying their children.
However, the inventors found that at least the following problem exists in the prior art: many robots on the market that accompany users (such as children) in reading books cannot turn pages automatically; manual page turning is needed, which greatly reduces reading efficiency.
Disclosure of Invention
The invention aims to provide a page turning method applied to a robot, a robot page turning system and a server, so that the robot can execute automatic page turning operation without manual operation, and the reading efficiency is improved.
In order to solve the above technical problem, an embodiment of the present invention provides a page turning method applied to a robot, including: collecting voice information of a user during reading in real time, and determining the page number of the text information corresponding to the voice information according to the collected voice information; and executing a page turning operation when the collected voice information is detected to match the last sentence of text information on that page.
An embodiment of the present invention also provides a robot page turning system, including a voice acquisition module, a matching module and a page turning module. The voice acquisition module is used for collecting voice information while a user reads a book and determining the page number of the text information corresponding to the voice information according to the collected voice information; the matching module is used for detecting whether the voice information matches the last sentence of text information on that page; and the page turning module is used for executing a page turning operation when the matching module detects that the voice information matches the last sentence of text information on that page.
An embodiment of the present invention further provides a server, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the above page turning method applied to the robot.
Compared with the prior art, an embodiment of the invention provides a page turning method applied to a robot: voice information of a user during reading is collected in real time, the page number where the text information corresponding to the voice information is located is determined according to the collected voice information, and a page turning operation is performed when the collected voice information is detected to match the last sentence of text information on that page. Because the text information of the book read by the user is stored in the robot page turning system, with each page number corresponding one-to-one to the text information of that page, the voice information collected while the user reads can be matched against the stored text information to determine the corresponding text information and thereby the page number where it is located. The last sentence of text information on that page is then obtained, and when the collected voice information matches it, the page turning operation is performed automatically. No manual operation is needed, and reading efficiency is improved.
In addition, after the page number where the text information corresponding to the voice information is located is determined, the method further comprises: recognizing action information of the user while reading; when text information specified by the user is recognized, sending out voice information corresponding to that text information; and when image information specified by the user is recognized, sending out voice information corresponding to that image information. In this way, voice help is provided for the user alongside the automatic page turning function, so that the user can understand the text information or image information more deeply.
In addition, in the process of collecting the voice information of the user during reading in real time, if instruction information sent by the user is received, a page turning operation is performed according to the instruction information; the instruction information is used to instruct the robot to turn pages to a specified position. By sending instruction information, the user can have the robot turn directly to the specified position, which improves reading efficiency.
In addition, before the voice information of the user during reading is collected in real time, the method further comprises: receiving book information entered into the robot by the user in a scanning manner. The user can enter book information into the robot as needed, so the robot is not limited to reading aloud and turning pages for a fixed set of preloaded books, which improves the usability of the robot.
In addition, after receiving the book information entered into the robot by the user in a scanning manner, the method further comprises: judging whether modification information of the entered book information is received from the user; and if the modification information is received, modifying the book information according to the modification information. Allowing the user to modify the book information entered into the robot helps avoid errors that occur while the robot recognizes the entered book information, and improves the accuracy of the book information stored in the robot.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not drawn to scale unless otherwise specified.
Fig. 1 is a flowchart of a page turning method applied to a robot according to a first embodiment of the present invention;
Fig. 2 is a flowchart of a page turning method applied to a robot according to a second embodiment of the present invention;
Fig. 3 is a flowchart of a page turning method applied to a robot according to a third embodiment of the present invention;
Fig. 4 is a schematic diagram of a robot page turning system according to a fourth embodiment of the present invention;
Fig. 5 is a schematic diagram of a robot page turning system according to a fifth embodiment of the present invention;
Fig. 6 is a schematic diagram of a robot page turning system according to a sixth embodiment of the present invention;
Fig. 7 is a block diagram of a server according to a seventh embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can also be implemented without these technical details, or with various changes and modifications based on the following embodiments.
A first embodiment of the present invention relates to a page turning method applied to a robot. The method is particularly suitable for providing a page turning function for children, so in the following example the user of the method is assumed to be a child. The specific flow is shown in Fig. 1 and includes:
Step 101, collecting voice information of a user during reading in real time.
Specifically, when a user needs to use the page turning function of the robot, the user (such as a child or a parent) may click a "start" button to enable the function, at which point the robot starts to collect voice information of the user reading in real time. The voice information of the user reading may be collected by directly obtaining existing audio, by capturing and intercepting sound with audio processing software, or by recording sound with a microphone; this is not specifically limited here. In addition, the voice information may be produced when reading an electronic book or when reading a paper book; the present disclosure is not limited in this respect.
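As an illustration only (the patent does not name a particular speech recognition engine), the real-time collection step could be sketched in Python roughly as follows, assuming the third-party SpeechRecognition package and a working microphone; the on_transcript callback is a hypothetical hook into the matching steps described below.

    # Minimal sketch of step 101: capture the reader's voice and hand each transcript
    # to the matching logic. Assumes the SpeechRecognition package and a microphone;
    # `on_transcript` is a hypothetical callback, not something defined by the patent.
    import speech_recognition as sr

    def collect_reading_voice(on_transcript, language="zh-CN"):
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)  # calibrate once before listening
            while True:
                audio = recognizer.listen(source, phrase_time_limit=5)  # one short utterance
                try:
                    text = recognizer.recognize_google(audio, language=language)
                except sr.UnknownValueError:
                    continue  # nothing intelligible was heard; keep listening
                on_transcript(text)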
Step 102, determining the page number of the text information corresponding to the voice information according to the collected voice information.
Specifically, suppose the book the user is reading is a first-grade language textbook, and the collected voice information is a sentence from page 6, for example "a group of birds in the field are arguing over an interesting problem". Since all the text information of the first-grade language textbook can be stored in the robot in advance, with each page number corresponding one-to-one to the text information of that page, the user's voice information can be matched against the stored text and determined to be content from page 6 of the first-grade language textbook.
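Assuming the book text has been stored page by page as described above, the page lookup of step 102 might be sketched as a fuzzy text match; the data layout, the Chinese full-stop sentence delimiter and the 0.6 similarity threshold are illustrative assumptions, not details fixed by the patent.

    # Minimal sketch of step 102: find which stored page the spoken sentence belongs to.
    # `book_pages` maps a page number to the full text of that page (one-to-one, as in
    # the description above); the similarity threshold is an illustrative assumption.
    from difflib import SequenceMatcher

    def find_page_number(transcript, book_pages, threshold=0.6):
        best_page, best_score = None, 0.0
        for page_no, page_text in book_pages.items():
            # Compare the transcript against every sentence of the page; keep the best hit.
            for sentence in page_text.split("。"):
                score = SequenceMatcher(None, transcript, sentence).ratio()
                if score > best_score:
                    best_page, best_score = page_no, score
        return best_page if best_score >= threshold else None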
Step 103, detecting whether the collected voice information matches the last sentence of text information on the page. If a match is detected, step 104 is entered; otherwise, the process ends.
That is, after the page number where the text information corresponding to the voice information is located is determined in step 102, the last sentence of text information on that page is obtained. Continuing the example of step 102, after the user's voice information is determined to be content from page 6 of the first-grade language textbook, the last sentence of text on page 6 is obtained, namely "spring rain falls on the willow tree, and the willow branches also turn green", and it is detected whether the collected voice information matches this last sentence. If a match is detected, step 104 is entered; otherwise, the process ends.
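Continuing the sketch above, step 103 can be expressed as a similarity check against the last sentence of the identified page; a fuzzy comparison is shown because speech recognition output rarely matches the stored text exactly, and the 0.8 threshold is again only an assumption.

    # Minimal sketch of step 103: does the latest transcript match the last sentence of
    # the current page? Sentences are split on the Chinese full stop for illustration.
    from difflib import SequenceMatcher

    def last_sentence(page_text):
        sentences = [s for s in page_text.split("。") if s.strip()]
        return sentences[-1] if sentences else ""

    def matches_last_sentence(transcript, page_text, threshold=0.8):
        target = last_sentence(page_text)
        return SequenceMatcher(None, transcript, target).ratio() >= threshold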
Step 104, executing the page turning operation.
That is, when the collected voice information is detected to match the last sentence of text information on the page, it indicates that the user has finished reading the current page and the page needs to be turned so that the user can continue reading the next page; a page turning operation is therefore performed, and the operation may be carried out by a mechanical arm of the robot.
Compared with the prior art, this embodiment provides a page turning method applied to a robot: voice information of a user during reading is collected in real time, the page number where the text information corresponding to the voice information is located is determined according to the collected voice information, and a page turning operation is performed when the collected voice information is detected to match the last sentence of text information on that page. Because the text information of the book read by the user is stored in the robot page turning system, with each page number corresponding one-to-one to the text information of that page, the voice information collected while the user reads can be matched against the stored text information to determine the corresponding text information and thereby the page number where it is located. The last sentence of text information on that page is then obtained, and when the collected voice information matches it, the page turning operation is performed automatically. No manual operation is needed, and reading efficiency is improved.
A second embodiment of the present invention relates to a page turning method applied to a robot. This embodiment is a further improvement on the first embodiment, and the improvement is as follows: after the page number where the text information corresponding to the voice information is located is determined, action information of the user while reading is also recognized and the operation corresponding to the action information is performed. When text information specified by the user is recognized, voice information corresponding to that text information is sent out; when image information specified by the user is recognized, voice information corresponding to that image information is sent out. In this way, voice help is provided for the user alongside the automatic page turning function, so that the user can understand the text information or image information more deeply. The specific flow is shown in Fig. 2 and includes:
step 201, collecting voice information of a user during reading in real time.
Step 201 in this embodiment is substantially the same as step 101 in the first embodiment, except that: in the process of collecting voice information of the user during reading in real time, if instruction information sent by the user is received, a page turning operation is performed according to the instruction information; the instruction information is used to instruct the robot to turn pages to a specified position.
For example, while reading page 6 of the first-grade language textbook, the user finds that page 6 has already been read many times and wants to see new content; the user can then directly send instruction information to the robot (for example, "turn to page 18"), and the robot will turn pages quickly until page 18 is reached. For another example, the instruction information sent by the user may be any identifying information in the book; for example, the user may issue an instruction to turn to the second section of chapter ten, and upon receiving this instruction the robot turns directly to the page where the second section of chapter ten is located.
It should be noted that if the target page number is not far from the current page number, for example the current page is page 4 and the target is page 10, the robot can quickly turn pages one by one. If the target page number is far from the current page number, for example the current page is page 4 and the target is page 54, the robot can turn several pages at a time according to the actual situation, reading the page number reached after each turn and automatically calculating how many pages remain until the target is reached. For example, 30 pages are turned the first time and the remaining difference of 20 pages is calculated automatically; the second turn uses a thinner batch than the first, and if 21 pages are turned the second time, one page is turned back to reach the target. This improves page turning efficiency when the target page number differs greatly from the current page number.
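The batch page-turning strategy just described can be viewed as a feedback loop: turn an estimated batch, read back the page number actually reached, and correct on the next pass. A minimal sketch follows; turn_pages and read_current_page_number stand in for the mechanical arm and the robot's page-number recognition and are hypothetical helpers, not interfaces defined by the patent.

    # Minimal sketch of turning to a distant target page with overshoot correction.
    # `turn_pages(n)` asks the arm to flip roughly n pages (negative n flips backwards,
    # and the arm may over- or undershoot); `read_current_page_number()` reads the page
    # actually reached. Both are hypothetical robot-side helpers.
    def turn_to_page(target, turn_pages, read_current_page_number, single_step_gap=10):
        current = read_current_page_number()
        while current != target:
            gap = target - current
            if abs(gap) <= single_step_gap:
                # Close to the target: flip one page at a time in the right direction.
                turn_pages(1 if gap > 0 else -1)
            else:
                # Far from the target: attempt a thick batch, then re-read the page number
                # reached so any overshoot or undershoot is corrected on the next pass.
                turn_pages(gap)
            current = read_current_page_number()
        return current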
Step 202, determining a page number of the text information corresponding to the voice information according to the collected voice information.
Since step 202 in this embodiment is substantially the same as step 102 in the first embodiment, it is not repeated here.
Step 203, judging whether action information of the user while reading is recognized. If action information of the user while reading is recognized, step 204 is entered; otherwise, step 205 is entered, that is, it is detected whether the collected voice information matches the last sentence of text information on the page.
It should be particularly noted that the recognition of the user's action information while reading is performed after the page number where the text information corresponding to the voice information is located is determined. To explain the solution more clearly, the recognition of the action information is placed before step 205, that is, before detecting whether the collected voice information matches the last sentence of text information on the page. However, those skilled in the art will understand that this recognition may also be performed after step 205 or simultaneously with step 205; the execution order in this embodiment is not limiting.
Step 204, executing the operation corresponding to the action information.
Specifically, when text information specified by the user is recognized, voice information corresponding to that text information is sent out; when image information specified by the user is recognized, voice information corresponding to that image information is sent out.
For example, when a child reads a book and only needs the robot to turn pages, the robot's read-aloud function does not need to stay on the whole time; instead, the read-aloud function is started only after the robot recognizes, from the child's action, the text information or image information the child is pointing to, and is executed according to that recognition. For example, when the child points at image information showing a monkey, the robot reads out "monkey"; and if a corresponding introduction to the monkey appears before or after the image information, it is read out as well. For another example, if the child points at a paragraph, the robot can read that paragraph aloud.
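As an illustration of how the recognized pointing action might be turned into read-aloud behaviour, a minimal sketch follows; the target dictionary produced by a gesture/vision module and the speak text-to-speech helper are assumptions made for the example, not components specified by the patent.

    # Minimal sketch of step 204: decide what to read aloud based on what the child points at.
    # `target` is assumed to look like {"kind": "image", "label": "monkey", "caption": "..."}
    # or {"kind": "text", "content": "..."}; `speak(text)` is a hypothetical TTS helper.
    def handle_pointing_action(target, speak):
        if target is None:
            return  # no pointing action recognized; keep waiting
        if target["kind"] == "text":
            speak(target["content"])      # read the pointed-at passage aloud
        elif target["kind"] == "image":
            speak(target["label"])        # e.g. say "monkey"
            if target.get("caption"):     # read the nearby introduction, if any
                speak(target["caption"])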
Step 205, detecting whether the collected voice information matches the last sentence of text information on the page. If a match is detected, step 206 is entered; otherwise, the process ends.
Step 206, a page turning operation is performed.
Since steps 205 to 206 in this embodiment are substantially the same as steps 103 to 104 in the first embodiment (a page turning operation is performed when the collected voice information is detected to match the last sentence of text information on the page), details are not repeated here.
Compared with the prior art, in the page turning method applied to the robot of this embodiment, after the page number where the text information corresponding to the voice information is located is determined, action information of the user while reading is also recognized and the operation corresponding to the action information is performed. When text information specified by the user is recognized, voice information corresponding to that text information is sent out; when image information specified by the user is recognized, voice information corresponding to that image information is sent out. Voice help is thus provided for the user alongside the automatic page turning function, so that the user can understand the text information or image information more deeply.
A third embodiment of the present invention relates to a page turning method applied to a robot. This embodiment is a further improvement on the first embodiment, and the improvement is as follows: before the voice information of the user during reading is collected in real time, book information entered into the robot by the user in a scanning manner is received, so that the user can enter book information into the robot as needed. The robot is therefore not limited to reading aloud and turning pages for a fixed set of preloaded books, which improves the usability of the robot. The specific flow is shown in Fig. 3.
Step 301, receiving book information entered into the robot by the user in a scanning manner.
Specifically, the user can enter any book information in advance; the book information is not limited to a specific book. The content of each page of the book can be scanned directly at the scanning unit of the robot, so that the text information and image information of each page of the scanned book are entered.
It is worth mentioning that, after the book information entered into the robot by scanning is received, it can be judged whether modification information of the entered book information is received from the user; if modification information is received, the book information is modified accordingly. For example, suppose a page contains an image of a monkey. After the user's action of pointing at the image information is recognized, the voice information sent out by the robot should match the pronunciation of "monkey"; if it does not, the book information can be modified. During the modification, the text information and image information on the page can be scanned in again, the correct pronunciation can be entered manually, or the book information can be modified directly by voice; this is not limited here.
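A minimal sketch of a book record that supports scanning-in and later correction is given below; the per-page layout, field names and method names are illustrative assumptions, since the patent does not fix a data format.

    # Minimal sketch of scanned-in book information plus the correction step described above.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class PageInfo:
        text: str = ""
        image_labels: List[str] = field(default_factory=list)  # e.g. ["monkey"]

    @dataclass
    class BookInfo:
        title: str
        pages: Dict[int, PageInfo] = field(default_factory=dict)

        def enter_scanned_page(self, page_no: int, text: str, image_labels: List[str]):
            # Step 301: store what the robot's scanner read for one page.
            self.pages[page_no] = PageInfo(text=text, image_labels=list(image_labels))

        def apply_modification(self, page_no: int, corrected_text: Optional[str] = None,
                               corrected_labels: Optional[List[str]] = None):
            # Correction step: re-record the text or fix a mislabeled image
            # (for example an entry that makes the robot mispronounce "monkey").
            page = self.pages.setdefault(page_no, PageInfo())
            if corrected_text is not None:
                page.text = corrected_text
            if corrected_labels is not None:
                page.image_labels = list(corrected_labels)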
Step 302, collecting voice information of the user during reading in real time.
Step 303, determining a page number of the text information corresponding to the voice information according to the collected voice information.
Step 304, detecting whether the collected voice information matches the last sentence of text information on the page. If a match is detected, step 305 is entered; otherwise, the process ends.
Step 305, a page turning operation is performed.
Since steps 302 to 305 in this embodiment are substantially the same as steps 101 to 104 in the first embodiment (voice information of the user during reading is collected in real time, the page number where the text information corresponding to the voice information is located is determined according to the collected voice information, and a page turning operation is performed when the collected voice information is detected to match the last sentence of text information on the page), details are not repeated here.
Compared with the prior art, the page turning method applied to the robot provided by this embodiment receives book information entered into the robot by the user in a scanning manner before the voice information of the user during reading is collected in real time, so that the user can enter book information into the robot as needed. The robot is therefore not limited to reading aloud and turning pages for a fixed set of preloaded books, which improves the usability of the robot.
The steps of the above methods are divided only for clarity of description. In implementation, they may be combined into one step, or a step may be split into multiple steps; as long as the same logical relationship is included, such variations fall within the protection scope of this patent. Adding insignificant modifications to the algorithms or processes, or introducing insignificant design changes, without changing the core design of the algorithms or processes also falls within the protection scope of this patent.
A fourth embodiment of the present invention relates to a robot page turning system. Specifically, as shown in Fig. 4, the page turning system includes: a voice acquisition module 401, a matching module 402 and a page turning module 403;
specifically, the voice acquisition module 401 is configured to collect voice information of the user during reading and to determine the page number of the text information corresponding to the voice information according to the collected voice information; the matching module 402 is configured to detect whether the voice information matches the last sentence of text information on the page; and the page turning module 403 is configured to perform a page turning operation when the matching module 402 detects that the voice information matches the last sentence of text information on the page.
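As an illustration of how the three modules described above might be composed, a minimal sketch follows; the class names mirror the module names, while the injected callables (page lookup, last-sentence check, mechanical-arm control) are hypothetical stand-ins rather than interfaces defined by the patent.

    # Minimal sketch of the fourth embodiment's structure: voice acquisition feeds matching,
    # which drives page turning. The concrete behaviour is injected as callables.
    class VoiceAcquisitionModule:
        def __init__(self, locate_page):
            self.locate_page = locate_page            # transcript -> page number or None

    class MatchingModule:
        def __init__(self, is_last_sentence):
            self.is_last_sentence = is_last_sentence  # (transcript, page_no) -> bool

    class PageTurningModule:
        def __init__(self, turn_one_page):
            self.turn_one_page = turn_one_page        # hypothetical mechanical-arm interface

    class RobotPageTurningSystem:
        def __init__(self, voice, matcher, turner):
            self.voice, self.matcher, self.turner = voice, matcher, turner

        def on_transcript(self, transcript):
            # Locate the page being read; turn the page once its last sentence is reached.
            page_no = self.voice.locate_page(transcript)
            if page_no is not None and self.matcher.is_last_sentence(transcript, page_no):
                self.turner.turn_one_page()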
It should be understood that this embodiment is a system example corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that each module referred to in this embodiment is a logical module; in practical applications, one logical unit may be one physical unit, a part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, elements that are not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other elements are present in this embodiment.
A fifth embodiment of the present invention relates to a robot page turning system. This embodiment is a further improvement on the fourth embodiment, and the improvement is as follows: after the page number where the text information corresponding to the voice information is located is determined, action information of the user while reading is recognized through an action recognition module and the operation corresponding to the action information is performed. When the action recognition module recognizes text information specified by the user, a voice output module sends out voice information corresponding to that text information; when the action recognition module recognizes image information specified by the user, the voice output module sends out voice information corresponding to that image information. Voice help is thus provided for the user alongside the automatic page turning function, so that the user can understand the text information or image information more deeply.
In this embodiment, the robot page turning system further includes an action recognition module 501 and a voice output module 502, as shown in Fig. 5.
The action recognition module 501 is configured to recognize action information of a user when reading a book after the voice acquisition module 401 determines a page number of text information corresponding to the voice information; a voice output module 502, configured to send out voice information corresponding to the text information specified by the user when the action recognition module 501 recognizes the text information specified by the user; the voice output module 502 is further configured to, when the action recognition module 501 recognizes the image information specified by the user, send out voice information corresponding to the image information specified by the user.
The voice acquisition module 401 is further configured to receive instruction information sent by the user in the process of acquiring voice information of the user during reading in real time; the page turning module 403 is further configured to execute a page turning operation according to instruction information sent by the user and received by the voice acquisition module 401; the instruction information is used for indicating the robot to turn pages to a specified position.
Since the second embodiment corresponds to the present embodiment, the present embodiment can be implemented in cooperation with the second embodiment. The related technical details mentioned in the second embodiment are still valid in this embodiment, and the technical effects that can be achieved in the second embodiment can also be achieved in this embodiment, and are not described herein again in order to reduce the repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the second embodiment.
A sixth embodiment of the present invention relates to a robot page turning system. This embodiment is a further improvement on the fourth embodiment, and the improvement is as follows: before the voice information of the user during reading is collected in real time, book information entered into the robot by the user in a scanning manner is received through an information input module, so that the user can enter book information into the robot as needed. The robot is therefore not limited to reading aloud and turning pages for a fixed set of preloaded books, which improves the usability of the robot.
In this embodiment, the robot page turning system further includes an information input module 601, as shown in Fig. 6.
The information input module 601 is configured to receive book information input to the robot by the user in a scanning manner before the voice acquisition module 401 acquires voice information of the user during reading in real time.
Since the third embodiment corresponds to this embodiment, this embodiment can be implemented in cooperation with the third embodiment. The related technical details mentioned in the third embodiment are still valid in this embodiment, and the technical effects that can be achieved in the third embodiment can also be achieved in this embodiment; they are not repeated here in order to reduce repetition. Accordingly, the related technical details mentioned in this embodiment can also be applied to the third embodiment.
The seventh embodiment of the present invention relates to a server, as shown in fig. 7, including at least one processor 701; and, a memory 702 communicatively coupled to the at least one processor 701; the memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701, so that the at least one processor 701 can execute the page turning method applied to the robot.
The memory 702 and the processor 701 are coupled by a bus, which may comprise any number of interconnected buses and bridges linking the various circuits of the processor 701 and the memory 702. The bus may also connect various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore not described further here. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 701 is transmitted over a wireless medium through an antenna, and the antenna also receives data and transmits it to the processor 701.
The processor 701 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory 702 may be used for storing data used by the processor 701 in performing operations.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A page turning method applied to a robot is characterized by comprising the following steps:
acquiring voice information of a user during reading in real time, and determining a page number of text information corresponding to the voice information according to the acquired voice information;
and executing a page turning operation when the collected voice information is detected to be matched with the last sentence of text information in the page number.
2. A page turning method applied to a robot according to claim 1, wherein after determining a page number where text information corresponding to the voice information is located, the method further comprises:
identifying action information of a user when reading a book;
when the text information specified by the user is identified, sending out voice information corresponding to the text information specified by the user;
and when the image information specified by the user is identified, sending out voice information corresponding to the image information specified by the user.
3. The page turning method applied to the robot according to claim 1, further comprising:
in the process of collecting the voice information of the user during reading in real time, if instruction information sent by the user is received, page turning operation is executed according to the instruction information; the instruction information is used for indicating the robot to turn pages to a specified position.
4. A page turning method applied to a robot as claimed in claim 1, further comprising, before the collecting voice information of a user reading a book in real time:
and receiving book information input to the robot by a user in a scanning mode.
5. A page turning method applied to a robot as claimed in claim 4, further comprising, after receiving book information entered into the robot by a user in a scanning manner:
judging whether modification information of the user on the input book information is received;
and if the modification information of the user is received, modifying the book information according to the modification information.
6. A robot page turning system, comprising: a voice acquisition module, a matching module and a page turning module;
the voice acquisition module is used for acquiring voice information when a user reads a book and determining a page number of text information corresponding to the voice information according to the acquired voice information;
the matching module is used for detecting whether the voice information is matched with the last sentence of text information in the page number;
and the page turning module is used for executing a page turning operation when the matching module detects that the voice information is matched with the last sentence of text information in the page number.
7. The robot page turning system of claim 6, further comprising an action recognition module and a voice output module;
the action recognition module is used for recognizing action information of a user during reading after the voice acquisition module determines the page number of the text information corresponding to the voice information;
the voice output module is used for sending out voice information corresponding to the text information specified by the user when the action recognition module recognizes the text information specified by the user;
the voice output module is further configured to send out voice information corresponding to the image information specified by the user when the action recognition module recognizes the image information specified by the user.
8. The robot page turning system of claim 6, wherein the voice acquisition module is further configured to receive instruction information sent by the user during the process of collecting voice information of the user reading a book in real time;
the page turning module is also used for executing page turning operation according to the instruction information sent by the user and received by the voice acquisition module; the instruction information is used for indicating the robot to turn pages to a specified position.
9. The robot page turning system of claim 6, further comprising: an information input module;
and the information input module is used for receiving book information input to the robot by a user in a scanning mode before the voice acquisition module acquires the voice information of the user during reading in real time.
10. A server, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a page turning method applied to a robot as claimed in any one of claims 1 to 5.
CN201711220103.9A 2017-11-28 2017-11-28 Page turning method, robot page turning system and server applied to robot Pending CN107993659A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711220103.9A CN107993659A (en) 2017-11-28 2017-11-28 Page turning method, robot page turning system and server applied to robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711220103.9A CN107993659A (en) 2017-11-28 2017-11-28 Page turning method, robot page turning system and server applied to robot

Publications (1)

Publication Number Publication Date
CN107993659A true CN107993659A (en) 2018-05-04

Family

ID=62033811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711220103.9A Pending CN107993659A (en) 2017-11-28 2017-11-28 Page turning method, robot page turning system and server applied to robot

Country Status (1)

Country Link
CN (1) CN107993659A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1562581A (en) * 2004-04-15 2005-01-12 上海交通大学 Reading robot
US20110191692A1 (en) * 2010-02-03 2011-08-04 Oto Technologies, Llc System and method for e-book contextual communication
CN102848396A (en) * 2012-09-29 2013-01-02 南京大五教育科技有限公司 Reading robot
CN203077886U (en) * 2012-12-27 2013-07-24 李婧娴 Voice automatic page-turning machine
CN104010014A (en) * 2013-02-27 2014-08-27 深圳好未来智能科技有限公司 Work learning application robot
US20150227263A1 (en) * 2014-02-12 2015-08-13 Kobo Inc. Processing a page-transition action using an acoustic signal input
CN105643634A (en) * 2016-04-05 2016-06-08 钦州萌娃机器人技术有限公司 Automatic reading robot and operation method thereof
CN205573392U (en) * 2016-04-29 2016-09-14 武汉大学 Acoustic control automatic page turning bookshelf
CN206560160U (en) * 2016-12-04 2017-10-17 巩文萱 Senior middle school's literal arts reading stand
CN106941001A (en) * 2017-04-18 2017-07-11 何婉榕 Automatic page turning method and device
CN107256647A (en) * 2017-08-17 2017-10-17 重庆华凤衣道文化创意有限公司 A kind of Collapsible mobile exempts to see automatic page turning reader

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109166572A (en) * 2018-09-11 2019-01-08 深圳市沃特沃德股份有限公司 The method and reading machine people that robot is read
CN112776505A (en) * 2020-12-18 2021-05-11 西安文理学院 Bed book turning frame based on voice control and control method thereof


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20180504)