CN113268981B - Information processing method and device and electronic equipment - Google Patents

Information processing method and device and electronic equipment

Info

Publication number
CN113268981B
CN113268981B
Authority
CN
China
Prior art keywords: text, characters, target, word, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110587658.7A
Other languages
Chinese (zh)
Other versions
CN113268981A (en)
Inventor
谢佳美
王滨宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Music Co Ltd and MIGU Culture Technology Co Ltd
Priority to CN202110587658.7A
Publication of CN113268981A
Application granted
Publication of CN113268981B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3343 Query execution using phonetics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Document Processing Apparatus (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information processing method, an information processing device and an electronic device, relates to the technical field of information processing, and aims to solve the problem that conventional text query operations are cumbersome and inefficient. The method comprises: acquiring a target voice, wherein the target voice comprises the pronunciation of first characters in first text displayed on the electronic device; determining, according to the target voice, characters to be recognized in the first text, wherein the characters to be recognized are different from the first characters; and acquiring target information of the characters to be recognized, wherein the target information comprises at least one of pronunciation and annotation. Thus, when a user reading on the electronic device encounters characters they do not recognize, the electronic device can be triggered by the user's pronunciation to locate the characters to be recognized and then obtain their pronunciation, annotation or other information.

Description

Information processing method and device and electronic equipment
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to an information processing method, an information processing device, and an electronic device.
Background
With the popularity of electronic devices, users often read various information on them. During reading, a user may encounter characters whose pronunciation is unknown, or rare words whose meaning is unknown. In the related art, the user usually has to operate a cursor to select and copy the unknown characters, jump to a related search page, and query their pronunciation or meaning. Because selection on an electronic device is often imprecisely positioned, the user frequently has to reselect the characters to be queried several times. The existing text query operation is therefore cumbersome and inefficient.
Disclosure of Invention
The embodiment of the invention provides an information processing method, an information processing device and an electronic device, to solve the problems of cumbersome operation and low efficiency in conventional text query.
In a first aspect, an embodiment of the present invention provides an information processing method, including:
acquiring a target voice, wherein the target voice comprises the pronunciation of first characters in first text displayed on an electronic device;
determining, according to the target voice, characters to be recognized in the first text, wherein the characters to be recognized are different from the first characters;
and acquiring target information of the characters to be recognized, wherein the target information comprises at least one of pronunciation and annotation.
Optionally, the target voice further comprises the pronunciation of a preset prompt word;
the determining, according to the target voice, the characters to be recognized in the first text comprises:
recognizing the target voice to obtain a second text corresponding to the target voice;
determining second characters in the second text other than the preset prompt word;
determining a target sentence in the first text that matches the second text;
and determining the characters to be recognized in the target sentence, wherein the characters to be recognized are different from the second characters.
Optionally, the determining the second characters in the second text other than the preset prompt word comprises:
splitting the second text according to the preset prompt word to obtain third characters before the preset prompt word and fourth characters after the preset prompt word in the second text, wherein the second characters comprise the third characters and the fourth characters;
the determining the target sentence in the first text that matches the second text comprises:
determining fifth characters in the first text that match the third characters and sixth characters that match the fourth characters;
and determining, from the first text, a target sentence comprising the fifth characters and the sixth characters.
Optionally, the determining, from the first text, the target sentence comprising the fifth characters and the sixth characters comprises:
in a case that the number of the fifth characters or of the sixth characters is greater than 1, determining target fifth characters and target sixth characters with the smallest position interval in the first text, wherein the target fifth characters precede the target sixth characters;
and determining the target sentence in the first text by taking the target fifth characters as starting characters and the target sixth characters as ending characters.
Optionally, the determining the characters to be recognized in the target sentence comprises:
determining the characters located between the target fifth characters and the target sixth characters in the target sentence as the characters to be recognized.
Optionally, the determining, according to the target voice, the characters to be recognized in the first text comprises:
recognizing the target voice to obtain a third text corresponding to the target voice;
determining seventh characters in the first text that match the third text;
receiving a first input of a user;
determining, in response to the first input, a number K of characters to be recognized, wherein K is a positive integer;
and determining the K characters located after the seventh characters in the first text as the characters to be recognized.
Optionally, the receiving the first input of the user comprises:
receiving a tap input of the user on a screen of the electronic device;
the determining, in response to the first input, the number K of characters to be recognized comprises:
determining the number of taps K of the tap input as the number of characters to be recognized.
Optionally, the number of the seventh characters is L, wherein L is an integer greater than 1;
the determining the K characters located after the seventh characters in the first text as the characters to be recognized comprises:
determining the K characters located after each of the seventh characters in the first text as candidate characters to obtain L groups of candidate characters;
receiving a second input of the user;
and determining, in response to the second input, target candidate characters from the L groups of candidate characters, and determining the target candidate characters as the characters to be recognized.
Optionally, before the receiving the second input of the user, the method further comprises:
marking the L groups of candidate characters;
the receiving the second input of the user comprises:
receiving a selection input of the user for the L groups of candidate characters.
In a second aspect, an embodiment of the present invention further provides an information processing apparatus, including:
the electronic equipment comprises a first acquisition module, a second acquisition module and a first processing module, wherein the first acquisition module is used for acquiring target voice, and the target voice comprises pronunciation of a first word in a first text displayed on the electronic equipment;
the determining module is used for determining characters to be recognized in the first text according to the target voice, wherein the characters to be recognized are different from the first text;
and the second acquisition module is used for acquiring target information of the text to be identified, wherein the target information comprises at least one of pronunciation and annotation.
Optionally, the target voice further comprises pronunciation of a preset prompt word;
the determining module includes:
the first recognition sub-module is used for recognizing the target voice and obtaining a second text corresponding to the target voice;
the first determining submodule is used for determining second characters except the preset prompt words in the second text;
A second determining submodule, configured to determine a target sentence in the first text that matches the second text;
and a third determining submodule, configured to determine a word to be identified in the target sentence, where the word to be identified is different from the second word.
Optionally, the first determining submodule is configured to segment the second text according to the preset prompting word to obtain a third word before the preset prompting word and a fourth word after the preset prompting word in the second text, where the second word includes the third word and the fourth word;
the second determination submodule includes:
a first determining unit, configured to determine a fifth text matching the third text and a sixth text matching the fourth text in the first text;
and a second determining unit configured to determine a target sentence including the fifth text and the sixth text from the first text.
Optionally, the second determining unit includes:
a first determining subunit, configured to determine, when the number of the fifth words or the sixth words is greater than 1, a target fifth word and a target sixth word with a minimum position interval in the first text, where the target fifth word precedes the target sixth word;
And the second determining subunit is used for determining target words and sentences in the first text by taking the target fifth word as a starting word and the target sixth word as an ending word.
Optionally, the third determining submodule is configured to determine, as the text to be identified, a text located between the target fifth text and the target sixth text in the target sentence.
Optionally, the determining module includes:
the second recognition sub-module is used for recognizing the target voice and obtaining a third text corresponding to the target voice;
a fourth determining submodule, configured to determine a seventh text in the first text that matches the third text;
a receiving sub-module for receiving a first input of a user;
a fifth determining submodule, configured to determine, in response to the first input, a number K of words to be recognized, where K is a positive integer;
and a sixth determining submodule, configured to determine that K characters located after the seventh character in the first text are the characters to be identified.
Optionally, the receiving submodule is used for receiving a knocking input of a user on a screen of the electronic equipment;
and the fifth determination submodule is used for determining the knocking times K of the knocking input as the number of characters to be recognized.
Optionally, the number of the seventh words is L, where L is an integer greater than 1;
the sixth determination submodule includes:
a third determining unit, configured to determine K characters located after each seventh character in the first text as candidate characters, to obtain L groups of candidate characters;
a receiving unit for receiving a second input of a user;
and the fourth determining unit is used for responding to the second input, determining target candidate characters from the L groups of candidate characters and determining the target candidate characters as the characters to be identified.
Optionally, the sixth determining submodule further includes:
the marking unit is used for marking the L groups of candidate characters;
the receiving unit is used for receiving the selection input of the L groups of candidate characters by a user.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a transceiver, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the information processing method described above when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the information processing method as described above.
In the embodiment of the invention, a target voice is acquired, wherein the target voice comprises the pronunciation of first characters in first text displayed on the electronic device; characters to be recognized in the first text are determined according to the target voice, wherein the characters to be recognized are different from the first characters; and target information of the characters to be recognized is acquired, wherein the target information comprises at least one of pronunciation and annotation. Thus, when a user reading on the electronic device encounters characters they do not recognize, the electronic device can be triggered by the user's pronunciation to locate the characters to be recognized and then obtain their pronunciation, annotation or other information.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
FIG. 1 is a flow chart of an information processing method provided by an embodiment of the present invention;
FIG. 2 is a first schematic diagram of an interactive interface for a user to query rarely used characters by voice according to an embodiment of the present invention;
FIG. 3 is a second schematic diagram of an interactive interface for a user to query rarely used characters by voice according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an interactive interface for a user to query rarely used characters through voice and tap operations according to an embodiment of the present invention;
fig. 5 is a block diagram of an information processing apparatus provided by an embodiment of the present invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of an information processing method provided in an embodiment of the present invention, as shown in fig. 1, including the following steps:
Step 101: acquiring a target voice, wherein the target voice comprises the pronunciation of first characters in first text displayed on the electronic device.
The embodiment of the invention can be applied to scenarios in which, while reading on an electronic device, a user quickly queries the pronunciation of unknown characters such as rare characters, quickly queries the meaning of words whose literal meaning is unclear, or quickly queries the pronunciation or meaning of unknown English words, and the like.
In the embodiment of the invention, when the electronic device is on a reading page, that is, when the first text is displayed, the user may encounter characters they do not recognize while reading. The user can then read aloud the characters before and after the unrecognized characters, so that the electronic device can capture the voice produced by the user, that is, acquire the target voice. Here, the first text may be the text currently displayed on the electronic device, the first characters may be the text segments before and after the characters to be recognized in the first text, and the target voice is the voice produced by the user reading the first characters aloud. It should be noted that the first text may be in different languages such as Chinese or English.
For example, suppose the text displayed on the electronic device contains a sentence in which the user does not recognize the pronunciation of a two-character rare word. The user can read aloud the characters before and after that word; for the unknown word itself, the user can either pause without reading it or substitute another prompt word for it. Alternatively, the user can read aloud only the characters that precede the unknown word.
Optionally, the step 101 includes:
acquiring the target voice in a case where a preset input of the user is received.
The preset input may be an input preset for triggering the electronic device to collect the user's voice. For example, as shown in fig. 2, the preset input may be touching a literacy function button 21 displayed on the interface of the electronic device 20, or waking up the voice pickup function of the electronic device by voice. That is, in this embodiment, the voice acquisition module may be turned on only when an input of the user for triggering the voice acquisition function is received, so that the target voice is obtained and the electronic device starts the voice acquisition function at an appropriate time.
Step 102: determining, according to the target voice, characters to be recognized in the first text, wherein the characters to be recognized are different from the first characters.
After the target voice is obtained, the passage currently being read by the user can be located in the first text according to the target voice, and the specific characters to be recognized can be determined according to a preset rule.
Specifically, speech recognition may first be performed on the target voice to convert it into text. The position of the converted text is then found in the first text, so that the user's reading position can be located. The characters at that position that the user did not speak, or a number of characters after that position, can then be determined as the characters to be recognized; the specific number of characters can be set by default by the system or determined based on a number further input by the user. In other words, the characters to be recognized are different from the first characters spoken by the user: they may be a certain number of characters lying between the spoken segments of the first characters, or a number of characters immediately after the first characters.
For example, when the text recognized from the target voice matches a sentence displayed on the electronic device except for two characters that the user did not read aloud, it can be determined that the user is reading that sentence, and the two characters the user did not read can be determined as the characters to be recognized. Alternatively, when the text recognized from the target voice matches only the part of the sentence before the unknown characters, it can be determined that the user has read up to that point, and, by default, the two characters after it can be determined as the characters to be recognized.
Optionally, the target voice further comprises the pronunciation of a preset prompt word;
the step 102 includes:
recognizing the target voice to obtain a second text corresponding to the target voice;
determining second characters in the second text other than the preset prompt word;
determining a target sentence in the first text that matches the second text;
and determining the characters to be recognized in the target sentence, wherein the characters to be recognized are different from the second characters.
In one embodiment, when the user reads aloud and reaches the unknown characters, the user may substitute a specific prompt word for the characters to be recognized. The specific prompt word may be a preset prompt word used in place of the characters to be recognized, such as "shape-sound-meaning", "literacy" or "recognize", so the target voice also contains the pronunciation of the preset prompt word, and based on the preset prompt word the electronic device can accurately determine that the characters at the corresponding position in the first text are the characters to be recognized.
In this embodiment, the electronic device may recognize the target voice to obtain a second text corresponding to the target voice, extract from the second text the second characters other than the preset prompt word, find in the first text the target sentence that matches the second characters, and determine the characters at the position corresponding to the preset prompt word in the target sentence as the characters to be recognized.
For example, as shown in fig. 2, suppose the user is reading a sentence displayed on the electronic device 20 and does not recognize a four-character word in it. The user can read the sentence aloud, replacing the unknown word with the preset prompt word "shape-sound-meaning", so that the electronic device acquires this segment of voice and converts it into text. By comparing the converted text containing the preset prompt word "shape-sound-meaning" with the displayed text, the electronic device can determine the matching target sentence, and can further determine that the four characters at the position corresponding to "shape-sound-meaning" in that sentence are the characters to be recognized. After obtaining the pronunciation or meaning of those four characters, the electronic device can mark the corresponding information at the position where the four characters are displayed.
Thus, with this embodiment, simply by reading aloud the passage containing the unknown characters and substituting the preset prompt word for them, the user can trigger the electronic device to accurately locate the characters to be recognized from the user's own pronunciation and obtain their target information in real time.
Optionally, the determining the second characters in the second text other than the preset prompt word includes:
splitting the second text according to the preset prompt word to obtain third characters before the preset prompt word and fourth characters after the preset prompt word in the second text, wherein the second characters comprise the third characters and the fourth characters;
the determining the target sentence in the first text that matches the second text includes:
determining fifth characters in the first text that match the third characters and sixth characters that match the fourth characters;
and determining, from the first text, a target sentence comprising the fifth characters and the sixth characters.
When determining the second characters other than the preset prompt word, the second text may specifically be split using the position of the preset prompt word in the second text as the boundary, so as to obtain the characters before the preset prompt word, namely the third characters, and the characters after the preset prompt word, namely the fourth characters; the second characters comprise the third characters and the fourth characters.
For example, taking "shape-sound-meaning" as the preset prompt word, the position of the prompt word in the converted text can be determined, and the prompt word can then be used as a separator to split the converted text into a front segment and a rear segment: the characters before the prompt word form the front segment, and the characters after it form the rear segment.
Then, the third characters and the fourth characters can each be used as matching keywords to find, in the first text, the fifth characters that match the third characters and the sixth characters that match the fourth characters. Finally, a target sentence comprising the fifth characters and the sixth characters is determined from the first text: the position of the fifth characters is found in the first text, the position of the sixth characters is then found, and the passage spanning the fifth characters and the sixth characters is taken as the target sentence.
Thus, with this implementation, the sentence containing the characters to be recognized can be located in the first text quickly and accurately; a minimal sketch of this splitting-and-matching step follows.
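A minimal sketch in Python of the splitting-and-matching step just described; the function names and structure are illustrative assumptions, not taken from the patent. The recognized speech text is split at the preset prompt word, and every occurrence of the front and rear fragments is located in the displayed text.

```python
def split_by_prompt(recognized: str, prompt: str):
    """Split the recognized speech text at the preset prompt word into the
    fragment before it (third characters) and after it (fourth characters)."""
    before, _, after = recognized.partition(prompt)
    return before, after


def find_all(page_text: str, fragment: str):
    """Start index of every occurrence of `fragment` in the displayed page text."""
    positions = []
    if not fragment:
        return positions
    start = 0
    while True:
        idx = page_text.find(fragment, start)
        if idx == -1:
            return positions
        positions.append(idx)
        start = idx + 1
```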
Optionally, the determining, from the first text, the target sentence comprising the fifth characters and the sixth characters includes:
in a case that the number of the fifth characters or of the sixth characters is greater than 1, determining target fifth characters and target sixth characters with the smallest position interval in the first text, wherein the target fifth characters precede the target sixth characters;
and determining the target sentence in the first text by taking the target fifth characters as starting characters and the target sixth characters as ending characters.
In this embodiment, when a plurality of fifth characters matching the third characters, or a plurality of sixth characters matching the fourth characters, are determined in the first text, the sentence actually read by the user, which contains the characters to be recognized, needs to be determined further.
Specifically, the target fifth characters and target sixth characters actually read by the user may be determined based on the positional relationship and position interval between each group of fifth characters and sixth characters. For example, the position intervals between the fifth characters and the sixth characters can be compared pair by pair, and the pair with the smallest position interval in which the fifth characters precede the sixth characters is kept; that pair is the target fifth characters and target sixth characters. The target sentence is then determined in the first text by taking the target fifth characters as the starting characters and the target sixth characters as the ending characters, that is, the target sentence is the passage from the target fifth characters to the target sixth characters.
Therefore, in a case where there are several matching characters in the first text, the sentence containing the characters to be recognized that the user actually read can be accurately located through this embodiment.
Optionally, the determining the characters to be recognized in the target sentence includes:
determining the characters located between the target fifth characters and the target sixth characters in the target sentence as the characters to be recognized.
Once the target sentence, which takes the target fifth characters as starting characters and the target sixth characters as ending characters, has been determined in the first text, the characters in the target sentence between the target fifth characters and the target sixth characters can be determined directly as the characters to be recognized. Of course, after the target fifth characters and the target sixth characters are determined, the characters between them in the first text may also be determined directly as the characters to be recognized; a sketch of this pairing-and-extraction step is given after this paragraph.
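Continuing the sketch above, a hedged rendering of the minimum-position-interval rule: among every candidate pairing of fifth and sixth characters, the pair with the smallest gap in which the fifth characters come first is kept, and the characters between them are returned as the characters to be recognized. All names are illustrative.

```python
def chars_between_closest_pair(page_text: str, before_frag: str,
                               before_positions: list, after_positions: list):
    """Keep the (fifth, sixth) pair with the smallest gap in which the fifth
    characters precede the sixth characters, and return the characters lying
    between them (the characters to be recognized)."""
    best = None  # (gap, start_of_unknown, end_of_unknown)
    for x in before_positions:
        start_unknown = x + len(before_frag)   # character right after the fifth characters
        for y in after_positions:
            if y < start_unknown:              # fifth characters must come first
                continue
            gap = y - start_unknown
            if best is None or gap < best[0]:
                best = (gap, start_unknown, y)
    if best is None:
        return None
    _, lo, hi = best
    return page_text[lo:hi]
```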
For example, referring to fig. 3, assume that P represents the text string of the page content displayed on the electronic device 20, V represents the string obtained after the user's reading of the page text together with the prompt word is converted into text, V1 represents the part of the converted text before the prompt word, and V2 represents the part of the converted text after the prompt word.
As shown in fig. 3, several fields matching V1 may be found in the page content displayed on the electronic device 20; assume these are X1, X2 and X3 in order of display position from front to back. Several fields matching V2 may also be found; assume these are Y1 and Y2 in order of display position from front to back.
The specific matching steps may be as follows:
1) After the user reads aloud the sentence containing the characters to be recognized, the electronic device waits a few seconds and converts the current reading voice into a character string V;
2) The preset prompt word contained in the string V is extracted, and the content before and after the prompt word is taken as V1 and V2 respectively;
3) V1 is searched from front to back in P, yielding X1, X2 and X3;
4) V2 is searched from back to front (or from front to back) in P, yielding Y1 and Y2;
5) Minimum position interval matching is performed: the positional relationships of X1, X2 and X3 to Y1 and to Y2 are judged, so that the pairing of X3 with Y2 can be excluded; the position intervals between X1 and Y1 and Y2, between X2 and Y1 and Y2, and between X3 and Y1 are calculated; the pair with the smallest position interval, X3 and Y1, is determined as the matching pair;
6) The character string between the matching pair is determined to be the characters to be recognized and is sent to a back-end interface to query its pinyin information.
Through this implementation, the characters the user really wants recognized can be located quickly and accurately.
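Putting the two sketches above together on a scenario like the one in fig. 3, reusing the helpers sketched earlier. The strings below are placeholders, since the original example text is not reproducible here.

```python
page = "...our Chinese characters are old. ... our Chinese characters ???? moved us. ..."  # P (placeholder)
recognized = "our Chinese characters PROMPT moved us"                                     # V (placeholder)
prompt = "PROMPT"                                                                         # preset prompt word

v1, v2 = split_by_prompt(recognized, prompt)
unknown = chars_between_closest_pair(
    page, v1, find_all(page, v1), find_all(page, v2)
)
# `unknown` now holds the characters to be recognized, ready for a pinyin query.
```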
Optionally, the determining, according to the target voice, the characters to be recognized in the first text includes:
recognizing the target voice to obtain a third text corresponding to the target voice;
determining seventh characters in the first text that match the third text;
receiving a first input of a user;
determining, in response to the first input, a number K of characters to be recognized, wherein K is a positive integer;
and determining the K characters located after the seventh characters in the first text as the characters to be recognized.
In another embodiment, the user may read aloud only a portion of the text before the unrecognized characters; the electronic device locates, in the first text, the characters matching the user's pronunciation, and determines, based on a user input, how many characters after the matched characters are selected as the characters to be recognized.
In this embodiment, the electronic device may recognize the target voice to obtain a third text corresponding to the target voice, and find in the first text the characters matching the third text, namely the seventh characters. The user then performs a first input for indicating the number of characters to be recognized, so that the electronic device can receive the first input, determine the number K of characters to be recognized based on it, and determine the K characters located after the seventh characters in the first text as the characters to be recognized.
The first input may be entering a specific number on the display interface of the electronic device, for example handwriting a number directly on the display interface, entering the number in a pop-up window for inputting the number of characters, clicking K times on a blank area of the display interface, tapping the screen K times, or speaking the number by voice.
Thus, with this embodiment, simply by reading aloud the passage before the unknown characters and indicating the number of characters to be recognized, the user can trigger the electronic device to accurately locate the characters to be recognized from the user's pronunciation and the input number; a sketch of this variant follows.
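A minimal sketch (illustrative names, Python) of this tap-counting variant: the spoken fragment is located in the displayed text and the K characters immediately after it are taken as the characters to be recognized.

```python
def chars_after_fragment(page_text: str, spoken_fragment: str, tap_count: int):
    """Return the K = tap_count characters that immediately follow the spoken fragment."""
    idx = page_text.find(spoken_fragment)
    if idx == -1:
        return None                     # the user's reading was not found in the page
    start = idx + len(spoken_fragment)
    return page_text[start:start + tap_count]
```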
Optionally, the receiving the first input of the user includes:
receiving a tap input of the user on a screen of the electronic device;
the determining, in response to the first input, the number K of characters to be recognized includes:
determining the number of taps K of the tap input as the number of characters to be recognized.
In one embodiment, the first input may be a tap input on the screen of the electronic device. After the user reads aloud the text segment before the characters to be recognized, the electronic device can locate the user's reading position based on the reading, and the user can then tap the screen of the electronic device K times in succession, prompting the electronic device to select the K characters after the currently located position as the characters to be recognized.
For example, suppose the displayed text contains a sentence followed by eight rare characters (such as 魃, 鬾 and other rare ghost-radical characters) that the user does not recognize. The user can read aloud the sentence before them, the electronic device can locate that position on its screen based on the reading, and the user can then tap the screen of the electronic device eight times in succession. The electronic device monitors the number of taps through its tap monitoring module and thereby selects the eight characters after the located sentence as the characters to be recognized.
Thus, simply by reading aloud a segment of text before the characters to be recognized and adding a tap operation, the user can trigger the electronic device to accurately locate the characters to be recognized and quickly obtain their target information.
Optionally, the number of the seventh characters is L, wherein L is an integer greater than 1;
the determining the K characters located after the seventh characters in the first text as the characters to be recognized includes:
determining the K characters located after each of the seventh characters in the first text as candidate characters to obtain L groups of candidate characters;
receiving a second input of the user;
and determining, in response to the second input, target candidate characters from the L groups of candidate characters, and determining the target candidate characters as the characters to be recognized.
That is, in one embodiment, when a plurality of seventh characters matching the user's pronunciation are determined in the first text, the position of the characters the user actually wants recognized needs to be determined further.
Specifically, in a case where L seventh characters are matched, for each of the seventh characters, the K characters located after it in the first text may be determined as candidate characters, yielding L groups of candidate characters. The user then performs a second input for indicating the target candidate characters, so that the electronic device can receive the second input, determine the target candidate characters from the L groups of candidate characters based on it, and determine the target candidate characters as the characters to be recognized.
The second input may be tapping the screen several times to indicate which group of candidate characters are the target candidate characters; for example, tapping 3 times determines the 3rd group of candidate characters as the target candidate characters. It may also be clicking a number of times on a blank area of the display interface, clicking the position where the target candidate characters are located, speaking a number by voice, and the like.
That is, when the system judges that the characters to be recognized meeting the condition are unique, the K characters after the seventh characters can be obtained directly as the target characters to be recognized; when the system judges that the characters to be recognized meeting the condition are not unique, it can monitor the number M of the user's second series of consecutive taps, and select the K characters after the M-th seventh characters as the target characters to be recognized.
Therefore, in a case where there are several groups of matching candidate characters in the first text, the position of the characters the user actually needs recognized can be located accurately and conveniently through this implementation; a sketch of this candidate-group selection is given below.
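A hedged sketch of the candidate-group logic just described, reusing the find_all helper from the earlier sketch: one K-character group is built per occurrence of the spoken fragment, and the user's second tap count M selects the group. Names are illustrative.

```python
def candidate_groups(page_text: str, spoken_fragment: str, k: int):
    """One candidate group: the K characters after each occurrence of the fragment."""
    groups = []
    for idx in find_all(page_text, spoken_fragment):
        start = idx + len(spoken_fragment)
        groups.append(page_text[start:start + k])
    return groups


def pick_group(groups: list, second_tap_count: int):
    """The M-th group, counted front to back (1-based), is taken as the characters to recognize."""
    if 1 <= second_tap_count <= len(groups):
        return groups[second_tap_count - 1]
    return None
```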
Optionally, before the receiving the second input of the user, the method further includes:
marking the L groups of candidate characters;
the receiving the second input of the user includes:
receiving a selection input of the user for the L groups of candidate characters.
In this embodiment, after the K characters located after each of the seventh characters in the first text have been determined as candidate characters and the L groups of candidate characters obtained, the L groups of candidate characters may be marked in the first text, for example highlighted or displayed in a specific colour, so as to visually indicate to the user where the candidate characters to be chosen from are located. The user may then perform a selection input on the marked L groups of candidate characters, for example clicking the group of candidate characters to be recognized, or tapping the screen of the electronic device a number of times corresponding to the position of the desired group counted from front to back, thereby triggering the electronic device to determine that group of candidate characters as the characters to be recognized.
Alternatively, after the L seventh characters have been determined in the first text, the L seventh characters may first be marked, and after the number of characters to be recognized has been determined based on the user input, the candidate characters after each of the seventh characters may be further marked.
Therefore, by marking the L groups of candidate characters, the user can be visually shown where the candidate characters are, helping the user select the target characters to be recognized accurately and conveniently.
For example, as shown in fig. 4, the electronic device 20 displays a passage in which the phrase "our Chinese characters" appears three times, and while reading the user encounters a phrase after one of these occurrences that they do not recognize. The user can read aloud "our Chinese characters" and then tap the screen 8 times in succession. The electronic device marks the display positions of the three occurrences of "our Chinese characters" and the 8 characters after each occurrence as candidate text. The user can then tap the screen 3 times in succession, and the electronic device determines that the characters to be recognized are the third group of candidate text, immediately queries its pinyin and, after obtaining it, marks the pronunciation on the text.
It should be noted that the above process of recognizing the target voice may be executed on the terminal side or on the server side. If executed on the terminal side, the acquisition and recognition of the target voice can be performed directly by the terminal; if executed on the server side, the target voice produced by the user can be collected by the terminal and sent to the server for recognition and speech-to-text conversion.
Step 103: acquiring target information of the characters to be recognized, wherein the target information comprises at least one of pronunciation and annotation.
After the characters to be recognized have been determined, their target information, such as pronunciation or meaning annotations, can be obtained directly to help the user recognize or understand them.
The obtaining the target information of the characters to be recognized may specifically be searching a database for information such as the pronunciation and meaning of the characters to be recognized, where the database may include data such as a Chinese dictionary or an English-Chinese dictionary; alternatively, a background internet search for the characters to be recognized may be performed, and information such as their pronunciation and meaning extracted from the search results. A minimal sketch of such a lookup is given below.
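A minimal, hedged sketch of the lookup step. The dictionary entries below are placeholders standing in for a Chinese dictionary database; the internet-search fallback mentioned above is only indicated by a comment.

```python
# Placeholder local dictionary; a real system would back this with a Chinese
# dictionary database or an online search, as described above.
LOCAL_DICT = {
    "魑魅": {"pinyin": "chī mèi", "annotation": "legendary mountain and forest demons"},
}

def lookup_target_info(unknown_text: str):
    """Return the pronunciation and annotation of the characters to be recognized."""
    entry = LOCAL_DICT.get(unknown_text)
    if entry is not None:
        return entry
    # Fallback: issue a background internet search and extract pronunciation
    # and meaning from the results (not shown).
    return None
```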
It should be noted that, after the target information of the characters to be recognized is obtained, the target information may be marked directly on the characters to be recognized in the first text, or it may be output by means of a voice prompt.
With the information processing method of the embodiment of the invention, a target voice is acquired, wherein the target voice comprises the pronunciation of first characters in first text displayed on the electronic device; characters to be recognized in the first text are determined according to the target voice, wherein the characters to be recognized are different from the first characters; and target information of the characters to be recognized is acquired, wherein the target information comprises at least one of pronunciation and annotation. Thus, when a user reading on the electronic device encounters characters they do not recognize, the electronic device can be triggered by the user's pronunciation to locate the characters to be recognized and then obtain their pronunciation, annotation or other information. In addition, the embodiment of the invention is also applicable to characters that are displayed on an interface but cannot be selected and copied.
The embodiment of the invention also provides an information processing device. Referring to fig. 5, fig. 5 is a block diagram of an information processing apparatus provided in an embodiment of the present invention. Since the principle of solving the problem of the information processing apparatus is similar to that of the information processing method in the embodiment of the present invention, the implementation of the information processing apparatus can refer to the implementation of the method, and the repetition is omitted.
As shown in fig. 5, the information processing apparatus 500 includes:
a first acquisition module 501, configured to acquire a target voice, wherein the target voice comprises the pronunciation of first characters in first text displayed on an electronic device;
a determining module 502, configured to determine, according to the target voice, characters to be recognized in the first text, wherein the characters to be recognized are different from the first characters;
a second acquisition module 503, configured to acquire target information of the characters to be recognized, wherein the target information comprises at least one of pronunciation and annotation.
Optionally, the target voice further comprises pronunciation of a preset prompt word;
the determining module 502 includes:
the first recognition sub-module is used for recognizing the target voice and obtaining a second text corresponding to the target voice;
the first determining submodule is used for determining second characters except the preset prompt words in the second text;
a second determining submodule, configured to determine a target sentence in the first text that matches the second text;
and a third determining submodule, configured to determine a word to be identified in the target sentence, where the word to be identified is different from the second word.
Optionally, the first determining submodule is configured to segment the second text according to the preset prompting word to obtain a third word before the preset prompting word and a fourth word after the preset prompting word in the second text, where the second word includes the third word and the fourth word;
the second determination submodule includes:
a first determining unit, configured to determine a fifth text matching the third text and a sixth text matching the fourth text in the first text;
and a second determining unit configured to determine a target sentence including the fifth text and the sixth text from the first text.
Optionally, the second determining unit includes:
a first determining subunit, configured to determine, when the number of the fifth words or the sixth words is greater than 1, a target fifth word and a target sixth word with a minimum position interval in the first text, where the target fifth word precedes the target sixth word;
and the second determining subunit is used for determining target words and sentences in the first text by taking the target fifth word as a starting word and the target sixth word as an ending word.
Optionally, the third determining submodule is configured to determine, as the text to be identified, a text located between the target fifth text and the target sixth text in the target sentence.
Optionally, the determining module 502 includes:
the second recognition sub-module is used for recognizing the target voice and obtaining a third text corresponding to the target voice;
a fourth determining submodule, configured to determine a seventh text in the first text that matches the third text;
a receiving sub-module for receiving a first input of a user;
a fifth determining submodule, configured to determine, in response to the first input, a number K of words to be recognized, where K is a positive integer;
and a sixth determining submodule, configured to determine that K characters located after the seventh character in the first text are the characters to be identified.
Optionally, the receiving submodule is used for receiving a knocking input of a user on a screen of the electronic equipment;
and the fifth determination submodule is used for determining the knocking times K of the knocking input as the number of characters to be recognized.
Optionally, the number of the seventh words is L, where L is an integer greater than 1;
the sixth determination submodule includes:
a third determining unit, configured to determine K characters located after each seventh character in the first text as candidate characters, to obtain L groups of candidate characters;
A receiving unit for receiving a second input of a user;
and the fourth determining unit is used for responding to the second input, determining target candidate characters from the L groups of candidate characters and determining the target candidate characters as the characters to be identified.
Optionally, the sixth determining submodule further includes:
the marking unit is used for marking the L groups of candidate characters;
the receiving unit is used for receiving the selection input of the L groups of candidate characters by a user.
The information processing device provided by the embodiment of the present invention may execute the above method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein.
The information processing apparatus 500 of the embodiment of the present invention acquires a target voice, wherein the target voice comprises the pronunciation of first characters in first text displayed on an electronic device; determines, according to the target voice, characters to be recognized in the first text, wherein the characters to be recognized are different from the first characters; and acquires target information of the characters to be recognized, wherein the target information comprises at least one of pronunciation and annotation. Thus, when a user reading on the electronic device encounters characters they do not recognize, the electronic device can be triggered by the user's pronunciation to locate the characters to be recognized and then obtain their pronunciation, annotation or other information.
The embodiment of the invention also provides an electronic device. Because the principle by which the electronic device solves the problem is similar to that of the information processing method in the embodiment of the invention, the implementation of the electronic device can refer to the implementation of the method, and repeated description is omitted. As shown in fig. 6, the electronic device according to an embodiment of the present invention includes a processor 600, configured to read a program in a memory 620 and perform the following processes:
acquiring a target voice, wherein the target voice comprises the pronunciation of first characters in first text displayed on the electronic device;
determining, according to the target voice, characters to be recognized in the first text, wherein the characters to be recognized are different from the first characters;
and acquiring target information of the characters to be recognized, wherein the target information comprises at least one of pronunciation and annotation.
A transceiver 610 for receiving and transmitting data under the control of the processor 600.
In fig. 6, the bus architecture may comprise any number of interconnected buses and bridges, linking together one or more processors represented by the processor 600 and various circuits of the memory represented by the memory 620. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators and power management circuits, which are well known in the art and are therefore not described further herein. The bus interface provides an interface. The transceiver 610 may be a plurality of elements, that is, include a transmitter and a receiver, providing a unit for communicating with various other apparatus over a transmission medium. For different user equipment, the user interface 630 may also be an interface capable of externally or internally connecting required devices, including but not limited to a keypad, a display, a speaker, a microphone, a joystick, and the like. The processor 600 is responsible for managing the bus architecture and general processing, and the memory 620 may store data used by the processor 600 when performing operations.
Optionally, the target voice further comprises pronunciation of a preset prompt word;
the processor 600 is further configured to read the program in the memory 620, and perform the following steps:
identifying the target voice to obtain a second text corresponding to the target voice;
determining second characters except the preset prompt words in the second text;
determining target words and sentences matched with the second text in the first text;
and determining the characters to be recognized in the target words and sentences, wherein the characters to be recognized are different from the second characters.
Optionally, the processor 600 is further configured to read the program in the memory 620, and perform the following steps:
dividing the second text according to the preset prompting words to obtain third characters before the preset prompting words and fourth characters after the preset prompting words in the second text, wherein the second characters comprise the third characters and the fourth characters;
determining a fifth word matched with the third word in the first text and a sixth word matched with the fourth word;
and determining a target word and sentence comprising the fifth word and the sixth word from the first text.
Optionally, the processor 600 is further configured to read the program in the memory 620, and perform the following steps:
determining a target fifth word and a target sixth word with the smallest position interval in the first text under the condition that the number of the fifth word or the sixth word is larger than 1, wherein the target fifth word is before the target sixth word;
and determining a target word and sentence in the first text by taking the target fifth word as a starting word and the target sixth word as an ending word.
Optionally, the processor 600 is further configured to read the program in the memory 620, and perform the following steps:
and determining the characters between the target fifth characters and the target sixth characters in the target words and sentences as characters to be identified.
Optionally, the processor 600 is further configured to read the program in the memory 620, and perform the following steps:
identifying the target voice to obtain a third text corresponding to the target voice;
determining a seventh text in the first text, which is matched with the third text;
receiving a first input of a user;
responding to the first input, determining the number K of the words to be recognized, wherein K is a positive integer;
and determining K characters positioned behind the seventh character in the first text as the characters to be identified.
Optionally, the processor 600 is further configured to read the program in the memory 620, and perform the following steps:
receiving a tap input of a user on a screen of the electronic device;
and determining the number of tap inputs as the number K of words to be recognized.
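As a rough illustration of mapping tap input to K, the sketch below counts the taps in the most recent burst; the one-second grouping window is an assumed value, not one given in the patent.

```python
# Hypothetical sketch of turning tap timestamps (as delivered by the touch
# framework) into K; the burst window is an assumption.
def taps_to_k(tap_times: list[float], window: float = 1.0) -> int:
    """Count the taps in the most recent burst; that count is K."""
    k, previous = 0, None
    for t in sorted(tap_times, reverse=True):    # newest tap first
        if previous is not None and previous - t > window:
            break                                 # gap too large: burst ended
        k, previous = k + 1, t
    return k

print(taps_to_k([10.0, 10.3, 10.6]))  # -> 3
```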
Optionally, the number of the seventh words is L, where L is an integer greater than 1;
the processor 600 is further configured to read the program in the memory 620, and perform the following steps:
determining K characters, which are respectively positioned behind each seventh character, in the first text as candidate characters to obtain L groups of candidate characters;
receiving a second input from the user;
and responding to the second input, determining a target candidate character from the L groups of candidate characters, and determining the target candidate character as the character to be identified.
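A hedged sketch of the multi-match case: every occurrence of the matched text contributes one group of K candidate characters, and the user's second input (modeled here simply as a group index) selects the target group.

```python
# Hypothetical sketch; the index-based selection stands in for whatever second
# input the device actually receives.
def candidate_groups(first_text: str, seventh: str, k: int) -> list[str]:
    groups, start = [], 0
    while (pos := first_text.find(seventh, start)) >= 0:
        begin = pos + len(seventh)
        groups.append(first_text[begin:begin + k])  # K characters after this match
        start = pos + 1
    return groups

def choose(groups: list[str], selected_index: int) -> str:
    return groups[selected_index]   # target candidate = characters to identify
```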
Optionally, the processor 600 is further configured to read the program in the memory 620, and perform the following steps:
identifying the L groups of candidate characters;
and receiving the selection input of the user for the L groups of candidate characters.
The electronic device provided by the embodiment of the present invention may execute the above method embodiments; its implementation principle and technical effects are similar and are not repeated here.
Furthermore, a computer-readable storage medium of an embodiment of the present invention stores a computer program executable by a processor to implement the steps of:
acquiring target voice, wherein the target voice comprises pronunciation of a first word in a first text displayed on electronic equipment;
determining characters to be recognized in the first text according to the target voice, wherein the characters to be recognized are different from the first word;
and acquiring target information of the text to be identified, wherein the target information comprises at least one of pronunciation and annotation.
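Putting the three stored-program steps together, a toy end-to-end sketch might look like the following; the ASR callable and the pronunciation/annotation dictionary are stand-ins for components the patent does not name.

```python
# Hypothetical end-to-end sketch of the three stored-program steps.
def process(first_text: str, recognize_speech, audio, dictionary: dict):
    spoken = recognize_speech(audio)                 # step 1: speech -> text
    pos = first_text.find(spoken)                    # step 2: locate reading position
    end = pos + len(spoken)
    char = first_text[end] if pos >= 0 and end < len(first_text) else ""
    return char, dictionary.get(char, {})            # step 3: pronunciation/annotation

demo_dict = {"曦": {"pinyin": "xī", "annotation": "morning sunlight"}}
print(process("晨曦微露", lambda _audio: "晨", None, demo_dict))  # -> ('曦', {...})
```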
Optionally, the target voice further comprises pronunciation of a preset prompt word;
the determining the text to be recognized in the first text according to the target voice comprises the following steps:
identifying the target voice to obtain a second text corresponding to the target voice;
determining second characters except the preset prompt words in the second text;
determining target words and sentences matched with the second text in the first text;
and determining the characters to be recognized in the target words and sentences, wherein the characters to be recognized are different from the second characters.
Optionally, the determining the second characters other than the preset prompt word in the second text includes:
dividing the second text according to the preset prompting words to obtain third characters before the preset prompting words and fourth characters after the preset prompting words in the second text, wherein the second characters comprise the third characters and the fourth characters;
The determining the target word and sentence matched with the second text in the first text comprises the following steps:
determining a fifth word matched with the third word in the first text and a sixth word matched with the fourth word;
and determining a target word and sentence comprising the fifth word and the sixth word from the first text.
Optionally, the determining, from the first text, a target sentence including the fifth text and the sixth text includes:
determining a target fifth word and a target sixth word with the smallest position interval in the first text under the condition that the number of the fifth word or the sixth word is larger than 1, wherein the target fifth word is before the target sixth word;
and determining a target word and sentence in the first text by taking the target fifth word as a starting word and the target sixth word as an ending word.
Optionally, the determining the text to be identified in the target word and sentence includes:
and determining the characters between the target fifth characters and the target sixth characters in the target words and sentences as characters to be identified.
Optionally, the determining, according to the target voice, the text to be recognized in the first text includes:
identifying the target voice to obtain a third text corresponding to the target voice;
determining a seventh text in the first text, which is matched with the third text;
receiving a first input of a user;
responding to the first input, determining the number K of the words to be recognized, wherein K is a positive integer;
and determining K characters positioned behind the seventh character in the first text as the characters to be identified.
Optionally, the receiving the first input of the user includes:
receiving a tap input of a user on a screen of the electronic device;
the determining, in response to the first input, a number of words K to be recognized, includes:
and determining the number of tap inputs as the number K of words to be recognized.
Optionally, the number of the seventh words is L, where L is an integer greater than 1;
the determining that K characters located after the seventh character in the first text are the characters to be identified includes:
determining K characters, which are respectively positioned behind each seventh character, in the first text as candidate characters to obtain L groups of candidate characters;
receiving a second input from the user;
and responding to the second input, determining a target candidate character from the L groups of candidate characters, and determining the target candidate character as the character to be identified.
Optionally, before the receiving the second input of the user, the method further includes:
identifying the L groups of candidate characters;
the receiving a second input from the user, comprising:
and receiving the selection input of the user for the L groups of candidate characters.
In the several embodiments provided in this application, it should be understood that the disclosed methods and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units described above, when implemented in the form of software functional units, may be stored in a computer-readable storage medium. The software functional units are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (12)

1. An information processing method, characterized by comprising:
acquiring target voice, wherein the target voice comprises pronunciation of a first word in a first text displayed on electronic equipment;
determining characters to be recognized in the first text according to the target voice, wherein the characters to be recognized are different from the first word; wherein the determining comprises: finding the position of the text converted from the target voice in the first text to locate the user's reading position, and determining the characters not yet read by the user at the reading position, or a plurality of characters after the reading position, as the characters to be identified;
and acquiring target information of the characters to be identified, wherein the target information comprises at least one of pronunciation and annotation.
2. The method of claim 1, wherein the target voice further comprises a pronunciation of a preset prompt word;
the determining the text to be recognized in the first text according to the target voice comprises the following steps:
identifying the target voice to obtain a second text corresponding to the target voice;
determining second characters except the preset prompt words in the second text;
determining target words and sentences matched with the second text in the first text;
and determining the characters to be recognized in the target words and sentences, wherein the characters to be recognized are different from the second characters.
3. The method of claim 2, wherein the determining the second characters in the second text other than the preset prompt word comprises:
dividing the second text according to the preset prompting words to obtain third characters before the preset prompting words and fourth characters after the preset prompting words in the second text, wherein the second characters comprise the third characters and the fourth characters;
The determining the target word and sentence matched with the second text in the first text comprises the following steps:
determining a fifth word matched with the third word in the first text and a sixth word matched with the fourth word;
and determining a target word and sentence comprising the fifth word and the sixth word from the first text.
4. The method of claim 3, wherein the determining, from the first text, a target sentence comprising the fifth word and the sixth word comprises:
determining a target fifth word and a target sixth word with the smallest position interval in the first text under the condition that the number of the fifth word or the sixth word is larger than 1, wherein the target fifth word is before the target sixth word;
and determining a target word and sentence in the first text by taking the target fifth word as a starting word and the target sixth word as an ending word.
5. The method of claim 4, wherein the determining the text to be recognized in the target sentence comprises:
and determining the characters between the target fifth characters and the target sixth characters in the target words and sentences as characters to be identified.
6. The method of claim 1, wherein the determining the text to be recognized in the first text from the target speech comprises:
identifying the target voice to obtain a third text corresponding to the target voice;
determining a seventh text in the first text, which is matched with the third text;
receiving a first input of a user;
responding to the first input, determining the number K of the words to be recognized, wherein K is a positive integer;
and determining K characters positioned behind the seventh character in the first text as the characters to be identified.
7. The method of claim 6, wherein the receiving a first input from a user comprises:
receiving a tap input of a user on a screen of the electronic device;
the determining, in response to the first input, a number of words K to be recognized, includes:
and determining the number of tap inputs as the number K of words to be recognized.
8. The method of claim 6, wherein the number of the seventh words is L, L being an integer greater than 1;
the determining that K characters located after the seventh character in the first text are the characters to be identified includes:
determining K characters, which are respectively positioned behind each seventh character, in the first text as candidate characters to obtain L groups of candidate characters;
receiving a second input from the user;
and responding to the second input, determining a target candidate character from the L groups of candidate characters, and determining the target candidate character as the character to be identified.
9. The method of claim 8, wherein prior to the receiving the second input from the user, the method further comprises:
identifying the L groups of candidate characters;
the receiving a second input from the user, comprising:
and receiving the selection input of the user for the L groups of candidate characters.
10. An information processing apparatus, characterized by comprising:
the electronic equipment comprises a first acquisition module, a second acquisition module and a first processing module, wherein the first acquisition module is used for acquiring target voice, and the target voice comprises pronunciation of a first word in a first text displayed on the electronic equipment;
the determining module is used for determining characters to be recognized in the first text according to the target voice, wherein the characters to be recognized are different from the first word; wherein the determining comprises: finding the position of the text converted from the target voice in the first text to locate the user's reading position, and determining the characters not yet read by the user at the reading position, or a plurality of characters after the reading position, as the characters to be identified;
and the second acquisition module is used for acquiring target information of the characters to be identified, wherein the target information comprises at least one of pronunciation and annotation.
11. An electronic device, comprising: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; characterized in that the processor is configured to read the program in the memory to implement the steps in the information processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps in the information processing method according to any one of claims 1 to 9.
CN202110587658.7A 2021-05-27 2021-05-27 Information processing method and device and electronic equipment Active CN113268981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110587658.7A CN113268981B (en) 2021-05-27 2021-05-27 Information processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110587658.7A CN113268981B (en) 2021-05-27 2021-05-27 Information processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113268981A CN113268981A (en) 2021-08-17
CN113268981B true CN113268981B (en) 2023-04-28

Family

ID=77233505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110587658.7A Active CN113268981B (en) 2021-05-27 2021-05-27 Information processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113268981B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114780793A (en) * 2022-04-29 2022-07-22 咪咕数字传媒有限公司 Information labeling method and device, terminal equipment and storage medium
CN116052671B (en) * 2022-11-21 2023-07-28 深圳市东象设计有限公司 Intelligent translator and translation method

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI508033B (en) * 2013-04-26 2015-11-11 Wistron Corp Method and device for learning language and computer readable recording medium
CN104346084B (en) * 2013-07-24 2019-02-22 腾讯科技(深圳)有限公司 A kind of words and phrases input method and device
KR101703214B1 (en) * 2014-08-06 2017-02-06 주식회사 엘지화학 Method for changing contents of character data into transmitter's voice and outputting the transmiter's voice
CN105657146B (en) * 2015-05-28 2019-07-30 宇龙计算机通信科技(深圳)有限公司 A kind of communication information prompt method and device
KR20180087942A (en) * 2017-01-26 2018-08-03 삼성전자주식회사 Method and apparatus for speech recognition
CN109671309A (en) * 2018-12-12 2019-04-23 广东小天才科技有限公司 A kind of mistake pronunciation recognition methods and electronic equipment
CN110782885B (en) * 2019-09-29 2021-11-26 深圳数联天下智能科技有限公司 Voice text correction method and device, computer equipment and computer storage medium
CN111128185B (en) * 2019-12-25 2022-10-21 北京声智科技有限公司 Method, device, terminal and storage medium for converting voice into characters
CN111128186B (en) * 2019-12-30 2022-06-17 云知声智能科技股份有限公司 Multi-phonetic-character phonetic transcription method and device
CN112309389A (en) * 2020-03-02 2021-02-02 北京字节跳动网络技术有限公司 Information interaction method and device
CN112735428A (en) * 2020-12-27 2021-04-30 科大讯飞(上海)科技有限公司 Hot word acquisition method, voice recognition method and related equipment
CN112818089B (en) * 2021-02-23 2022-06-03 掌阅科技股份有限公司 Text phonetic notation method, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003316384A (en) * 2002-04-24 2003-11-07 Nippon Hoso Kyokai <Nhk> Real time character correction device, method, program, and recording medium for the same
JP2013182256A (en) * 2012-03-05 2013-09-12 Toshiba Corp Voice synthesis system and voice conversion support device
CN110718226A (en) * 2019-09-19 2020-01-21 厦门快商通科技股份有限公司 Speech recognition result processing method and device, electronic equipment and medium

Also Published As

Publication number Publication date
CN113268981A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
US8504350B2 (en) User-interactive automatic translation device and method for mobile device
CN101133411B (en) Fault-tolerant romanized input method for non-roman characters
EP2823478B1 (en) Device for extracting information from a dialog
JP5997217B2 (en) A method to remove ambiguity of multiple readings in language conversion
US9484034B2 (en) Voice conversation support apparatus, voice conversation support method, and computer readable medium
CN1918578B (en) Handwriting and voice input with automatic correction
CN111523306A (en) Text error correction method, device and system
Tinwala et al. Eyes-free text entry with error correction on touchscreen mobile devices
KR101590724B1 (en) Method for modifying error of speech recognition and apparatus for performing the method
JP2015026057A (en) Interactive character based foreign language learning device and method
CN113268981B (en) Information processing method and device and electronic equipment
US8000964B2 (en) Method of constructing model of recognizing english pronunciation variation
KR20160029587A (en) Method and apparatus of Smart Text Reader for converting Web page through TTS
US8411958B2 (en) Apparatus and method for handwriting recognition
WO2021179703A1 (en) Sign language interpretation method and apparatus, computer device, and storage medium
CN107797676B (en) Single character input method and device
JP5097802B2 (en) Japanese automatic recommendation system and method using romaji conversion
CN113743102B (en) Method and device for recognizing characters and electronic equipment
CN110010131B (en) Voice information processing method and device
CN112395863A (en) Text processing method and device
CN110969021A (en) Named entity recognition method, device, equipment and medium in single-round conversation
JP2002279353A (en) Character recognition device, method therefor, and recording medium
CN113722447B (en) Voice search method based on multi-strategy matching
JPH11110379A (en) Method and device for retrieving information
CN108959238B (en) Input stream identification method, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant