CN107608618B - Interaction method and apparatus for a wearable device, and wearable device - Google Patents

Publication number
CN107608618B
CN107608618B (application CN201710841899.3A)
Authority
CN
China
Prior art keywords
information
wearable device
voice
characters
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710841899.3A
Other languages
Chinese (zh)
Other versions
CN107608618A (en)
Inventor
裴曾妍
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201710841899.3A priority Critical patent/CN107608618B/en
Publication of CN107608618A publication Critical patent/CN107608618A/en
Application granted granted Critical
Publication of CN107608618B publication Critical patent/CN107608618B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

An embodiment of the invention discloses an interaction method and apparatus for a wearable device, and the wearable device itself. The method includes: when a touch operation on a touch area set on the wearable device is detected, starting a voice recognition function and an image acquisition function of the wearable device; collecting target information in the current environment; if the target information is voice information, converting the voice information into first character information, and searching a first character library for first information matching a first character in the first character information; displaying the first information on a display screen of the wearable device or broadcasting it to the user by voice; if the target information is picture information, recognizing a second character in the picture information, and searching a second character library for second information matching the second character; and converting the second information into second voice information and broadcasting it to the user. Information such as the pinyin and strokes of characters or words that the user cannot read or write is thus presented on screen or by voice broadcast, which makes learning more engaging.

Description

Interaction method and apparatus for a wearable device, and wearable device
Technical Field
Embodiments of the present invention relate to the field of wearable devices, and in particular to an interaction method and apparatus for a wearable device, and to the wearable device itself.
Background
With the progress of science and technology, electronic devices have become ever more capable and more widely used, and their users now range from young adults to the elderly and to children. In daily life, young children can turn to an electronic device when they meet characters, words, or sentences they cannot read or write while reading a picture book or in other scenarios. Although many dictionary applications for learning characters and new words exist on mobile phones, tablets, and computers, they are inconvenient to operate and to carry in everyday situations.
Disclosure of Invention
Embodiments of the present invention provide an interaction method and apparatus for a wearable device, and the wearable device itself. Information such as the pinyin and strokes of characters or words that the user cannot read or write is presented to the user on a screen or by voice broadcast through the wearable device, which makes learning more engaging while keeping the operation simple.
In a first aspect, an embodiment of the present invention provides an interaction method for a wearable device, where the method includes:
when a touch operation on a touch area set on the wearable device is detected, starting a voice recognition function and an image acquisition function of the wearable device;
acquiring target information in the current environment, wherein the target information comprises voice information and/or picture information;
if the target information is voice information, converting the voice information into corresponding first character information, and searching first information matched with a first character in the first character information in a first character library, wherein the first information comprises pinyin and strokes matched with the first character, and a phrase and/or sentence formed by the first character;
displaying the first information on a display screen of the wearable device or broadcasting the first information to a user through voice;
if the target information is picture information, identifying second characters in the picture information, and searching second information matched with the second characters in a second character library, wherein the second information comprises the pronunciation and meaning of the second characters and phrases and/or sentences matched with the second characters;
and converting the second information into corresponding second voice information and broadcasting the second voice information to the user.
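As a non-normative illustration only, the dispatch between the voice branch and the picture branch described above can be sketched in Python as follows. All names and the toy character libraries are hypothetical stand-ins introduced for this sketch; the patent does not prescribe any particular implementation.

```python
# Toy stand-ins for the patent's "first character library" (characters
# heard as speech) and "second character library" (characters seen in
# pictures). A real implementation would hold full dictionary data.
FIRST_LIBRARY = {"厨": {"pinyin": "chú",
                        "strokes": ["horizontal", "left-falling"]}}
SECOND_LIBRARY = {"请勿践踏": {"pronunciation": "qing wu jian ta",
                               "meaning": "do not step on"}}

def handle_target(kind, text):
    """Dispatch target information after speech recognition / OCR has
    already produced `text`: voice goes to the first library, picture
    text goes to the second library."""
    if kind == "voice":
        # Voice branch: look up each recognized character.
        return {ch: FIRST_LIBRARY[ch] for ch in text if ch in FIRST_LIBRARY}
    if kind == "picture":
        # Picture branch: look up the recognized phrase as a whole.
        return SECOND_LIBRARY.get(text)
    raise ValueError(f"unknown target kind: {kind}")
```

A real device would feed the voice-branch result to the display or a TTS engine, and the picture-branch result to TTS only, as the later steps describe.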
Further, displaying the first information on a display screen of the wearable device or broadcasting the first information to the user by voice includes:
judging whether the first information can be displayed on a display screen of the wearable device in a set font size;
if so, displaying the first information on a display screen of the wearable device;
otherwise, the first information is broadcasted to the user in a voice mode.
Further, the displaying the first information on a display screen of the wearable device includes:
displaying the strokes matched with the first character in the first information on the display screen of the wearable device in stroke order.
Further, broadcasting the first information to the user by voice includes:
broadcasting the pinyin matched with the first character information in the first information to the user in the order of initial, final, and tone.
Further, the method also comprises the following steps:
acquiring picture information in the current environment through the image acquisition function of the wearable device; and/or
acquiring voice information in the current environment through the voice recognition function of the wearable device.
In a second aspect, an embodiment of the present invention provides an interaction apparatus for a wearable device, where the apparatus includes:
a detection module, configured to start a voice recognition function and an image acquisition function of the wearable device when a touch operation on a touch area set on the wearable device is detected;
a target information collection module, configured to collect target information in the current environment, where the target information includes voice information and/or picture information;
the first information searching module is used for converting the voice information into corresponding first character information when the target information is the voice information, and searching first information matched with first characters in the first character information in a first character library, wherein the first information comprises pinyin and strokes matched with the first characters, and phrases and/or sentences formed by the first characters;
the first information display module is used for displaying the first information on a display screen of the wearable device or broadcasting the first information to a user in a voice mode;
the second information searching module is used for identifying second characters in the picture information and searching second information matched with the second characters in a second character library when the target information is the picture information, wherein the second information comprises the pronunciation and meaning of the second characters and phrases and/or sentences matched with the second characters;
and the second information display module is used for converting the second information into corresponding second voice information and broadcasting the second voice information to the user.
Further, the first information display module comprises:
the first judgment submodule is used for judging whether the first information can be displayed on a display screen of the wearable device in a set font size;
the first information display sub-module is used for displaying the first information on the display screen of the wearable device if the first information can be displayed on the display screen of the wearable device in a set font size;
the first information broadcasting submodule is used for broadcasting the first information to a user when the first information cannot be displayed on a display screen of the wearable device in a set font size.
Further, the first information display sub-module is specifically configured to:
displaying the strokes matched with the first characters in the first information on a display screen of the wearable device according to the sequence.
Further, the first information broadcasting submodule is specifically used for:
and broadcasting the pinyin matched with the first character information in the first information to a user according to the sequence of initial consonants, vowels and tones.
Further, the apparatus also includes:
an information acquisition module, configured to acquire picture information in the current environment through the image acquisition function of the wearable device; and/or
to acquire voice information in the current environment through the voice recognition function of the wearable device.
In a third aspect, an embodiment of the present invention further provides a wearable device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the interaction method for the wearable device according to any one of the embodiments of the present invention.
In the embodiment of the invention, when the touch operation of a touch area set in the wearable device is detected, the voice recognition function and the image acquisition function of the wearable device are started, if the acquired target information in the current environment is voice information, the voice information is converted into first character information, the first information matched with first characters in the first character information is searched in a first character library, and the first information is displayed on a display screen of the wearable device or is broadcasted to a user in a voice mode; and if the collected target information in the current environment is picture information, identifying second characters in the picture information, searching second information matched with the second characters in a second character library, converting the second information into second voice information and broadcasting the second voice information to the user. Information such as pinyin and strokes of characters or words which cannot be read or written by a user is displayed to the user in a screen display or voice broadcast mode through the wearable device, interestingness is enhanced, and operation is simple.
Drawings
Fig. 1 is a flowchart of an interaction method for a wearable device according to a first embodiment of the present invention;
fig. 2 is a flowchart of an interaction method for a wearable device in the second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an interaction apparatus for a wearable device in a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a wearable device in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an interaction method for a wearable device according to an embodiment of the present invention. The present embodiment is applicable to situations in which the user of the wearable device encounters characters or words that he or she cannot read or write. The method may be performed by an interaction apparatus for a wearable device according to an embodiment of the present invention; the apparatus may be implemented in software and/or hardware, and may generally be integrated in a wearable device. Referring to fig. 1, the method may specifically include the following steps:
S110, when a touch operation on a touch area set on the wearable device is detected, starting a voice recognition function and an image acquisition function of the wearable device.
Specifically, when the user encounters a character or word that he or she cannot read or write, the user touches the touch area set on the wearable device. Optionally, the set touch area may be located on a side of the wearable device or in another area that is easy for the wearer to touch. When a touch operation on this area is detected, the voice recognition function and the image acquisition function of the wearable device are started. For example, the voice recognition function may be implemented by a voice recognition module, and the image acquisition function by an image acquisition module, each integrated in the set touch area. Optionally, the wearable device may be a smart watch or a smart bracelet.
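Step S110 can be sketched as a toy state machine; the class, region names, and attribute names below are illustrative assumptions, not part of the patent:

```python
class WearableDevice:
    """Toy model of the wearable: both capture functions start only when
    the designated touch area (e.g. on the device side) is touched."""

    def __init__(self, touch_region="side"):
        self.touch_region = touch_region
        self.speech_recognition_on = False
        self.image_capture_on = False

    def on_touch(self, region):
        if region == self.touch_region:
            self.speech_recognition_on = True   # start voice recognition
            self.image_capture_on = True        # start image acquisition
        return self.speech_recognition_on and self.image_capture_on
```

Touching any other part of the device leaves both functions off, which keeps the capture pipeline idle until the user explicitly asks for help.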
S120, collecting target information in the current environment, wherein the target information comprises voice information and/or picture information.
Specifically, after the voice recognition function and the image acquisition function of the wearable device are started, voice information and picture information in the current environment are collected. In one example scenario, a child is reading a picture book independently, playing or shopping while traveling with a parent, or wants to write a letter to express love to his or her mother. When the child encounters a character that he or she cannot write, the child may read the character aloud, and the target information collected by the wearable device is then voice information. When the child encounters characters that he or she cannot read, the wearable device can be used to photograph or scan them, acquiring picture information containing the unknown characters.
S130, judging whether the target information is voice information or picture information, if the target information is the voice information, executing S140, and if the target information is the picture information, executing S150.
After the target information in the current environment is acquired, whether the target information is voice information or picture information is judged, so that different subsequent operations can be performed on the voice information or the picture information.
S140, converting the voice information into corresponding first character information, and searching first information matched with a first character in the first character information in a first character library, wherein the first information comprises pinyin and strokes matched with the first character, and phrases and/or sentences formed by the first character.
Specifically, the acquired voice information is converted into corresponding first character information. For example, the voice message sent by the child may be: "How do you write the chú in kitchen (chúfáng)?" After the voice recognition module recognizes the voice information and converts it into the corresponding first character information, first information matching the first character in the first character information is searched for in a first character library. The first character library stores first characters and the first information matched with them, where the first information includes the pinyin and strokes matched with the first character, and phrases and/or sentences formed with the first character.
S141, displaying the first information on a display screen of the wearable device or broadcasting the first information to a user through voice.
Specifically, the first information is displayed on the display screen of the wearable device or broadcast to the user by voice. In a specific example, if a child asks how to write the chú in "kitchen", the pinyin "ch-u-chú" of the character is displayed on the display screen or broadcast to the user by voice, the character is displayed on the display screen in its stroke order ("horizontal, left-falling, horizontal, …"), or its strokes are broadcast in that order.
S150, identifying second characters in the picture information, and searching second information matched with the second characters in a second character library, wherein the second information comprises the pronunciation and meaning of the second characters and phrases and/or sentences matched with the second characters.
When the user encounters an unknown character or word, picture information is acquired using the photographing or scanning function, and the second character in the picture information is then identified by the recognition module in the wearable device. In a specific example, a mother takes her child to a park, where a warning board reading "do not step on" stands beside the lawn. When the child cannot read the characters on the board, the wearable device is used to photograph or scan them to obtain the corresponding picture information. After the wearable device identifies the second character in the picture information, second information matching the second character is searched for in a second character library, which stores second characters together with their pronunciations, meanings, and the phrases and/or sentences matched with them.
In this specific example, the wearable device finds second information matching "do not step on", the second information including: the pronunciation and meaning of 'do not step on' and related phrases or sentences.
S151, converting the second information into corresponding second voice information and broadcasting the second voice information to the user.
The second information is converted into corresponding voice information and broadcast to the user. In a specific example, if the text in the picture information is "do not step on", its pronunciation, meaning, and related phrases or sentences are reported to the user; the broadcast content may be, for example, "qǐng wù jiàn tà: please do not tread on the lawn", and the like.
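Assembling the content handed to text-to-speech in step S151 might look like the following sketch; the entry fields and the example data are illustrative assumptions:

```python
def build_broadcast(entry):
    """Concatenate the pronunciation, meaning, and example sentences of a
    second-library entry into one string for the TTS engine (S151)."""
    parts = [entry["pronunciation"], entry["meaning"]]
    parts.extend(entry.get("sentences", []))
    return "; ".join(parts)

# Hypothetical second-library entry for a lawn warning sign.
entry = {
    "pronunciation": "qing wu jian ta",   # tone marks omitted in this sketch
    "meaning": "do not step on",
    "sentences": ["please do not tread on the lawn"],
}
```

The resulting string would then be synthesized and played back, completing the picture branch.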
In the embodiment of the invention, when the touch operation of a touch area set in the wearable device is detected, the voice recognition function and the image acquisition function of the wearable device are started, if the acquired target information in the current environment is voice information, the voice information is converted into first character information, the first information matched with first characters in the first character information is searched in a first character library, and the first information is displayed on a display screen of the wearable device or is broadcasted to a user in a voice mode; and if the collected target information in the current environment is picture information, identifying second characters in the picture information, searching second information matched with the second characters in a second character library, converting the second information into second voice information and broadcasting the second voice information to the user. Information such as pinyin and strokes of characters or words which cannot be read or written by a user is displayed to the user in a screen display or voice broadcast mode through the wearable device, interestingness is enhanced, and operation is simple.
On the basis of the above technical solution, the interaction method for a wearable device in the embodiment of the present invention further includes: acquiring picture information in the current environment through an image acquisition function in the wearable device; and/or collect voice information in the current environment through a voice recognition function of the wearable device.
The image acquisition module integrated in the wearable device has an image acquisition function, through which picture information in the current environment is acquired; and/or the voice recognition module integrated in the wearable device has a voice recognition function, through which voice information in the current environment is collected. Acquisition of picture information and voice information is thereby realized.
Example two
Fig. 2 is a flowchart of an interaction method for a wearable device according to a second embodiment of the present invention. In this embodiment, "displaying the first information on a display screen of the wearable device or broadcasting the first information to the user by voice" is further refined on the basis of the above embodiment. Referring to fig. 2, the method may specifically include the following steps:
S210, when a touch operation on a touch area set on the wearable device is detected, starting a voice recognition function and an image acquisition function of the wearable device.
S220, collecting target information in the current environment, wherein the target information comprises voice information and/or picture information.
S230, determining whether the target information is voice information or picture information, if the target information is voice information, executing S240, and if the target information is picture information, executing S250.
S240, converting the voice information into corresponding first character information, and searching first information matched with a first character in the first character information in a first character library, wherein the first information comprises pinyin and strokes matched with the first character, and phrases and/or sentences formed by the first character.
S241, judging whether the first information can be displayed on the display screen of the wearable device in a set font size; if so, executing S242; otherwise, executing S243.
Specifically, it is judged whether the pinyin and strokes matched with the first character, and the phrase or sentence formed with it, can be displayed on the wearable device in the set font size, where the set font size may be a size comfortable for a child's eyes.
S242, displaying the first information on a display screen of the wearable device.
When the first information can be displayed on the display screen of the wearable device in the set font size, the first information is displayed on the display screen of the wearable device.
Optionally, the displaying the first information on the display screen of the wearable device includes: displaying the strokes matched with the first characters in the first information on a display screen of the wearable device according to the sequence.
The strokes matched with the first character in the first information are displayed on the display screen of the wearable device in stroke order. In a specific example, if the character chú ("kitchen") needs to be displayed, its strokes are drawn dynamically on the display screen in the order: horizontal, left-falling, horizontal, ….
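The stroke-by-stroke animation can be sketched as a generator of cumulative display frames; the stroke names are illustrative placeholders for real glyph data:

```python
def stroke_frames(strokes):
    """Yield cumulative frames so the character builds up one stroke at
    a time, in dictionary stroke order, for animation on the display."""
    shown = []
    for stroke in strokes:
        shown.append(stroke)
        yield tuple(shown)   # each frame contains all strokes so far
```

A rendering loop would draw each frame in turn, so the child sees the character being written in the correct order.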
S243, broadcasting the first information to the user by voice.
And when the first information can not be displayed on the display screen in the set font size, the first information is broadcasted to the user in a voice mode.
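The display-or-broadcast decision (S241 to S243) can be sketched as below. The fit test is a deliberate simplification: it treats every glyph as a square cell of the set font size, whereas a real implementation would measure rendered text; all names and default sizes are illustrative.

```python
def fits_on_screen(text, font_px, screen_w, screen_h):
    """Crude S241 check: wrap `text` at font_px-square cells and see
    whether the needed lines fit the screen height."""
    per_line = max(1, screen_w // font_px)
    lines_needed = -(-len(text) // per_line)     # ceiling division
    return lines_needed <= screen_h // font_px

def present_first_info(text, display, speak, font_px=32, w=240, h=240):
    if fits_on_screen(text, font_px, w, h):
        display(text)    # S242: show on the screen
    else:
        speak(text)      # S243: fall back to voice broadcast
```

The callbacks `display` and `speak` stand in for the screen driver and the TTS engine respectively.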
Optionally, the first information is broadcasted to the user by voice, including: and broadcasting the pinyin matched with the first character information in the first information to a user according to the sequence of initial consonants, vowels and tones.
Specifically, the pinyin matched with the first character information in the first information is broadcast to the user in the order of initial, final, and tone. In a specific example, if the first character information is "park" (gōngyuán), it is broadcast to the user in the order g-ong-gōng, y-uan-yuán.
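The initial-final-tone broadcast order can be sketched as follows. The initial list is simplified, and the toneless/toned syllable pairs are supplied by the caller; in a full system they would come from the character library:

```python
# Pinyin initials, multi-letter ones first so "zh" is not split as "z"+"h".
INITIALS = ["zh", "ch", "sh", "b", "p", "m", "f", "d", "t", "n", "l",
            "g", "k", "h", "j", "q", "x", "r", "z", "c", "s", "y", "w"]

def decompose(plain, toned):
    """Split one syllable into (initial, final, toned syllable), the
    order in which it is broadcast; `plain` is the toneless spelling."""
    for ini in INITIALS:
        if plain.startswith(ini):
            return (ini, plain[len(ini):], toned)
    return ("", plain, toned)   # zero-initial syllables such as "an"

def broadcast_order(syllables):
    parts = []
    for plain, toned in syllables:
        parts.extend(p for p in decompose(plain, toned) if p)
    return "-".join(parts)
```

For "park" this reproduces the g-ong-gōng, y-uan-yuán sequence given in the example above.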
S250, identifying second characters in the picture information, and searching second information matched with the second characters in a second character library, wherein the second information comprises the pronunciation and meaning of the second characters and phrases and/or sentences matched with the second characters.
S251, converting the second information into corresponding second voice information and broadcasting the second voice information to the user.
In the embodiment of the invention, when the first information can be displayed on the display screen of the wearable device in the set font size, the first information is displayed on the display screen of the wearable device, and when the first information cannot be displayed on the screen of the wearable device in the set font size, the first information is broadcasted to the user in a voice mode. The first information is displayed or broadcasted according to different conditions.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an interaction apparatus for a wearable device according to a third embodiment of the present invention; the apparatus is adapted to perform the interaction method for a wearable device provided by the embodiments of the present invention. As shown in fig. 3, the apparatus may specifically include:
the detection module 310 is configured to, when a touch operation of a touch area set in a wearable device is detected, start a voice recognition function and an image acquisition function of the wearable device;
a target information collecting module 320, configured to collect target information in a current environment, where the target information includes voice information and/or picture information;
the first information searching module 330 is configured to, when the target information is voice information, convert the voice information into corresponding first text information, and search, in a first text library, first information matched with a first text in the first text information, where the first information includes pinyin and strokes matched with the first text, and a phrase and/or sentence formed by the first text;
the first information display module 340 is configured to display the first information on a display screen of the wearable device or perform voice broadcast to a user;
a second information searching module 350, configured to, when the target information is picture information, identify a second character in the picture information, and search, in a second character library, second information matched with the second character, where the second information includes a pronunciation and a meaning of the second character, and a phrase and/or a sentence matched with the second character;
and a second information display module 360, configured to convert the second information into corresponding second voice information and broadcast the second voice information to a user.
Further, the first information display module 340 includes:
the first judgment submodule is used for judging whether the first information can be displayed on a display screen of the wearable device in a set font size;
the first information display sub-module is used for displaying the first information on the display screen of the wearable device if the first information can be displayed on the display screen of the wearable device in a set font size;
the first information broadcasting submodule is used for broadcasting the first information to a user when the first information cannot be displayed on a display screen of the wearable device in a set font size.
Further, the first information display sub-module is specifically configured to:
displaying the strokes matched with the first characters in the first information on a display screen of the wearable device according to the sequence.
Further, the first information broadcasting submodule is specifically used for:
and broadcasting the pinyin matched with the first character information in the first information to a user according to the sequence of initial consonants, vowels and tones.
Further, the apparatus also includes:
an information acquisition module, configured to acquire picture information in the current environment through the image acquisition function of the wearable device; and/or
to acquire voice information in the current environment through the voice recognition function of the wearable device.
The interaction device for the wearable equipment provided by the embodiment of the invention can execute the interaction method for the wearable equipment provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a wearable device according to a fourth embodiment of the present invention. Fig. 4 shows a block diagram of an exemplary wearable device 12 suitable for use in implementing embodiments of the present invention. The wearable device 12 shown in fig. 4 is only an example, and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in fig. 4, the wearable device 12 is in the form of a general purpose computing device. The components of the wearable device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Wearable device 12 typically includes a variety of computer system readable media. These media may be any available media that is accessible by wearable device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The wearable device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Wearable device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with wearable device 12, and/or with any devices (e.g., network card, modem, etc.) that enable wearable device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the wearable device 12 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through the network adapter 20. As shown, the network adapter 20 communicates with other modules of the wearable device 12 over the bus 18. It should be appreciated that although not shown in fig. 4, other hardware and/or software modules may be used in conjunction with the wearable device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the interaction method for the wearable device provided by the embodiment of the present invention:
That is, when executing the program, the processing unit implements: when a touch operation on a touch area set in the wearable device is detected, starting the voice recognition function and the image acquisition function of the wearable device; acquiring target information in the current environment, wherein the target information comprises voice information and/or picture information; if the target information is voice information, converting the voice information into corresponding first character information, and searching a first character library for first information matched with a first character in the first character information, wherein the first information comprises the pinyin and strokes matched with the first character, and a phrase and/or sentence formed with the first character; displaying the first information on the display screen of the wearable device or broadcasting the first information to the user by voice; if the target information is picture information, identifying a second character in the picture information, and searching a second character library for second information matched with the second character, wherein the second information comprises the pronunciation and meaning of the second character and a phrase and/or sentence matched with the second character; and converting the second information into corresponding second voice information and broadcasting the second voice information to the user.
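The dispatch logic above can be sketched minimally as follows. All names and library contents here are hypothetical stand-ins for illustration only; real speech recognition and OCR, which the patent assumes, are elided:

```python
# Hypothetical character libraries keyed by character (illustrative entries).
FIRST_LIB = {"山": {"pinyin": "shan1", "strokes": ["丨", "乚", "丨"],
                    "phrases": ["高山"]}}
SECOND_LIB = {"水": {"pronunciation": "shui3", "meaning": "water",
                     "phrases": ["水果"]}}

def handle_target(kind, payload, fits_on_screen=lambda info: True):
    """Dispatch per the claimed method: for voice input, look up the
    recognized characters and either display or speak the result,
    depending on whether it fits on screen in the set font size;
    for picture input, look up the OCR'd characters and speak them."""
    if kind == "voice":
        text = payload  # stand-in for speech-to-text output
        info = [FIRST_LIB[c] for c in text if c in FIRST_LIB]
        return ("display" if fits_on_screen(info) else "speak", info)
    if kind == "picture":
        chars = payload  # stand-in for OCR output
        info = [SECOND_LIB[c] for c in chars if c in SECOND_LIB]
        return ("speak", info)
    raise ValueError(f"unknown target kind: {kind}")
```

For voice input the screen-fit predicate decides the output channel, e.g. passing `fits_on_screen=lambda i: False` forces voice broadcast; picture input is always spoken, matching the claimed flow.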
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (6)

1. An interaction method for a wearable device, comprising:
when touch operation of a touch area set in wearable equipment is detected, starting a voice recognition function and an image acquisition function of the wearable equipment;
acquiring target information in the current environment, wherein the target information comprises voice information and/or picture information;
if the target information is voice information, converting the voice information into corresponding first character information, and searching first information matched with a first character in the first character information in a first character library, wherein the first information comprises pinyin and strokes matched with the first character, and a phrase and/or sentence formed by the first character;
displaying the first information on a display screen of the wearable device or broadcasting the first information to a user through voice;
if the target information is picture information, identifying second characters in the picture information, and searching second information matched with the second characters in a second character library, wherein the second information comprises the pronunciation and meaning of the second characters and phrases and/or sentences matched with the second characters;
converting the second information into corresponding second voice information and broadcasting the second voice information to a user;
wherein displaying the first information on the display screen of the wearable device or broadcasting the first information to the user by voice comprises:
judging whether the first information can be displayed on a display screen of the wearable device in a set font size;
if so, displaying the first information on a display screen of the wearable device;
otherwise, broadcasting the first information to the user by voice;
the displaying the first information on a display screen of the wearable device includes:
displaying the strokes matched with the first characters in the first information on the display screen of the wearable device in stroke order.
2. The method of claim 1, wherein the voice broadcasting the first information to a user comprises:
and broadcasting the pinyin matched with the first characters in the first information to the user in the order of initial consonant, final, and tone.
3. The method of claim 1, further comprising:
acquiring picture information in the current environment through the image acquisition function of the wearable device; and/or
acquiring voice information in the current environment through the voice recognition function of the wearable device.
4. An interaction apparatus for a wearable device, comprising:
the detection module is configured to start the voice recognition function and the image acquisition function of the wearable device when a touch operation on a touch area set in the wearable device is detected;
the target information acquisition module is configured to acquire target information in the current environment, wherein the target information comprises voice information and/or picture information;
the first information searching module is used for converting the voice information into corresponding first character information when the target information is the voice information, and searching first information matched with first characters in the first character information in a first character library, wherein the first information comprises pinyin and strokes matched with the first characters, and phrases and/or sentences formed by the first characters;
the first information display module is used for displaying the first information on a display screen of the wearable device or broadcasting the first information to a user in a voice mode;
the second information searching module is used for identifying second characters in the picture information and searching second information matched with the second characters in a second character library when the target information is the picture information, wherein the second information comprises the pronunciation and meaning of the second characters and phrases and/or sentences matched with the second characters;
the second information display module is used for converting the second information into corresponding second voice information and broadcasting the second voice information to a user;
wherein the first information presentation module comprises:
the first judgment submodule is used for judging whether the first information can be displayed on a display screen of the wearable device in a set font size;
the first information display sub-module is used for displaying first information on a display screen of the wearable device when the first information can be displayed on the display screen of the wearable device in a set font size;
the first information broadcasting submodule is used for broadcasting the first information to a user in a voice mode when the first information cannot be displayed on a display screen of the wearable device in a set font size;
the first information display sub-module is specifically configured to:
displaying the strokes matched with the first characters in the first information on the display screen of the wearable device in stroke order.
5. The device according to claim 4, wherein the first information broadcast submodule is specifically configured to:
and broadcasting the pinyin matched with the first characters in the first information to the user in the order of initial consonant, final, and tone.
6. A wearable device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method of any of claims 1-3.
CN201710841899.3A 2017-09-18 2017-09-18 Interaction method and device for wearable equipment and wearable equipment Active CN107608618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710841899.3A CN107608618B (en) 2017-09-18 2017-09-18 Interaction method and device for wearable equipment and wearable equipment


Publications (2)

Publication Number Publication Date
CN107608618A CN107608618A (en) 2018-01-19
CN107608618B true CN107608618B (en) 2020-10-09

Family

ID=61060758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710841899.3A Active CN107608618B (en) 2017-09-18 2017-09-18 Interaction method and device for wearable equipment and wearable equipment

Country Status (1)

Country Link
CN (1) CN107608618B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857326A (en) * 2019-02-01 2019-06-07 思特沃克软件技术(西安)有限公司 A kind of vehicular touch screen and its control method
CN110071866B (en) * 2019-04-29 2022-03-18 努比亚技术有限公司 Instant messaging application control method, wearable device and storage medium
CN110727854B (en) * 2019-08-21 2022-07-12 北京奇艺世纪科技有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN110705521A (en) * 2019-10-22 2020-01-17 深圳市本牛科技有限责任公司 Character-searching and stroke order teaching method and teaching interactive terminal
CN113467342A (en) * 2021-08-09 2021-10-01 重庆宗灿科技发展有限公司 Wisdom endowment system and intelligent watch based on internet of things
CN114415837A (en) * 2022-01-25 2022-04-29 中国农业银行股份有限公司 Operation auxiliary system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260396A (en) * 2015-09-16 2016-01-20 百度在线网络技术(北京)有限公司 Word retrieval method and apparatus
CN105975551A (en) * 2016-04-29 2016-09-28 广东小天才科技有限公司 Wearable device-based information search method and apparatus
CN107102797A (en) * 2017-04-24 2017-08-29 维沃移动通信有限公司 A kind of method and terminal that search operation is performed to selected contents of object


Also Published As

Publication number Publication date
CN107608618A (en) 2018-01-19

Similar Documents

Publication Publication Date Title
CN107608618B (en) Interaction method and device for wearable equipment and wearable equipment
US9805718B2 (en) Clarifying natural language input using targeted questions
KR101160597B1 (en) Content retrieval based on semantic association
US9082035B2 (en) Camera OCR with context information
US20180349781A1 (en) Method and device for judging news quality and storage medium
JP2006190006A5 (en)
CN110770694A (en) Obtaining response information from multiple corpora
CN109343696B (en) Electronic book commenting method and device and computer readable storage medium
TW200900967A (en) Multi-mode input method editor
CN109408829B (en) Method, device, equipment and medium for determining readability of article
JP7087987B2 (en) Information presentation device and information presentation method
CN110032734B (en) Training method and device for similar meaning word expansion and generation of confrontation network model
CN109783613B (en) Question searching method and system
CN111899576A (en) Control method and device for pronunciation test application, storage medium and electronic equipment
CN109657127B (en) Answer obtaining method, device, server and storage medium
CN107239209B (en) Photographing search method, device, terminal and storage medium
CN113038175B (en) Video processing method and device, electronic equipment and computer readable storage medium
Jeeva et al. Intelligent image text reader using easy ocr, nrclex & nltk
CN111542817A (en) Information processing device, video search method, generation method, and program
CN113626441A (en) Text management method, device and equipment based on scanning equipment and storage medium
US11704090B2 (en) Audio interactive display system and method of interacting with audio interactive display system
CN112802454B (en) Method and device for recommending awakening words, terminal equipment and storage medium
CN111949767A (en) Method, device, equipment and storage medium for searching text keywords
CN113254814A (en) Network course video labeling method and device, electronic equipment and medium
CN111047924A (en) Visualization method and system for memorizing English words

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant