CN111027353A - Search content extraction method and electronic equipment - Google Patents


Info

Publication number
CN111027353A
CN111027353A (application CN201910124095.0A)
Authority
CN
China
Prior art keywords
search
image
voice
user
determining
Prior art date
Legal status
Pending
Application number
CN201910124095.0A
Other languages
Chinese (zh)
Inventor
徐杨
杨昊民
黄东
Current Assignee
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd filed Critical Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201910124095.0A priority Critical patent/CN111027353A/en
Publication of CN111027353A publication Critical patent/CN111027353A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/635 - Overlay text, e.g. embedded captions in a TV program
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/32 - Digital ink
    • G06V30/36 - Matching; Classification
    • G06V30/387 - Matching; Classification using human interaction, e.g. selection of the best displayed recognition candidate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 - Document-oriented image-based pattern recognition
    • G06V30/41 - Analysis of document content
    • G06V30/413 - Classification of content, e.g. text, photographs or tables
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command

Abstract

The invention relates to the technical field of electronic equipment, and discloses a search content extraction method and electronic equipment, wherein the search content extraction method comprises the following steps: when a search voice input by a user is detected, shooting a search image, wherein the search image comprises a finger image of the user; adjusting the color of the finger image to be different from the background color of the search image, and determining the click position of the fingertip of the user according to the finger image; identifying a search type contained in the search voice, and determining a search area in the search image according to the search type and the click position; and extracting the text information in the search area and determining the text information as the search content. By implementing the embodiment of the invention, the accuracy of the answer searched by the electronic equipment according to the extracted search content can be improved.

Description

Search content extraction method and electronic equipment
Technical Field
The invention relates to the technical field of electronic equipment, in particular to a search content extraction method and electronic equipment.
Background
With the rapid development of learning devices such as home tutoring machines and learning tablets, students often use electronic devices to search for questions, and before searching, the device must extract the content the student wants to search for. Currently, an electronic device typically extracts search content as follows: the device takes a picture to obtain an image containing the content to be searched, extracts the text information contained in the image, and determines that text information as the search content. In practice, however, the device extracts all of the text information contained in the image, and since not all of that text is content to be searched, this method extracts much content irrelevant to what the user wants to search for, so the search result the device obtains from the extracted search content is not accurate enough.
Disclosure of Invention
The embodiment of the invention discloses a search content extraction method and electronic equipment, which can improve the accuracy of electronic equipment search.
The first aspect of the embodiments of the present invention discloses a method for extracting search content, where the method includes:
when search voice input by a user is detected, shooting a search image, wherein the search image comprises a finger image of the user;
adjusting the color of the finger image to be a color different from the background color of the search image, and determining the click position of the fingertip of the user according to the finger image;
identifying a search type contained in the search voice, and determining a search area in the search image according to the search type and the click position;
and extracting the text information in the search area, and determining the text information as search content.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before capturing the search image when the search voice input by the user is detected, the method further includes:
when detecting that user voice exists in the environment where the electronic equipment is located, identifying corresponding voice content in the user voice;
judging whether the voice content contains a search intention;
if so, determining the user voice as a search voice, and determining that the search voice containing the search intention is detected.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the adjusting the color of the finger image to a color different from a background color of the search image, and determining the click position of the fingertip of the user according to the finger image includes:
identifying a background color of the search image and obtaining a complementary color of the background color, the complementary color being different from the background color;
adjusting the finger image to the complementary color, and identifying a fingertip image in the finger image based on the complementary color;
acquiring the position coordinates of the fingertip image in the image coordinate system of the search image;
and determining the position coordinate as the click position of the fingertip of the user.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the identifying a search type included in the search speech, and determining a search area in the search image according to the search type and the click position includes:
determining a search type of the search voice according to the search intention contained in the search voice, wherein the search type at least comprises a character search type, a word search type and a question search type;
determining a search area size matching the search type;
and determining, in the search image according to the click position, a search area matching the size.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after extracting the text information in the search area and determining the text information as search content, the method further includes:
uploading the search content and the search intention to a service device to enable the service device to search answer information matched with the search content and the search intention and feed back the answer information to the electronic device;
and when the electronic equipment receives the answer information, outputting and displaying the answer information through a display of the electronic equipment.
A second aspect of an embodiment of the present invention discloses an electronic device, including:
the device comprises a shooting unit, a voice recognition unit and a voice recognition unit, wherein the shooting unit is used for shooting a search image when a search voice input by a user is detected, and the search image comprises a finger image of the user;
the adjusting unit is used for adjusting the color of the finger image into a color different from the background color of the search image and determining the click position of the fingertip of the user according to the finger image;
the first identification unit is used for identifying a search type contained in the search voice and determining a search area in the search image according to the search type and the click position;
and the extraction unit is used for extracting the character information in the search area and determining the character information as search content.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the second identification unit is used for identifying corresponding voice content in the user voice when the shooting unit detects the search voice input by the user, and before shooting the search image, and when detecting that the user voice exists in the environment where the electronic equipment is located;
the judging unit is used for judging whether the voice content contains a search intention;
a determining unit configured to, when the result of the judgment by the judging unit is yes, determine the user voice as a search voice and determine that the search voice containing the search intention is detected.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the adjusting unit includes:
a recognition subunit, configured to recognize a background color of the search image, and obtain a complementary color of the background color, where the complementary color is different from the background color;
an adjusting subunit, configured to adjust the finger image to the complementary color, and identify a fingertip image in the finger image based on the complementary color;
an acquiring subunit, configured to acquire position coordinates of the fingertip image in an image coordinate system of the search image;
and the first determining subunit is used for determining the position coordinates as the click positions of the fingertips of the user.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the first identifying unit includes:
the reading subunit is used for determining the search type of the search voice according to the search intention contained in the search voice, wherein the search type at least comprises a character search type, a word search type and a question search type;
a second determining subunit, configured to determine a search area size matching the search type;
and a third determining subunit, configured to determine, in the search image according to the click position, a search area matching the size.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the uploading unit is used for uploading the search content and the search intention to service equipment after the extraction unit extracts the text information in the search area and determines the text information as the search content, so that the service equipment searches answer information matched with the search content and the search intention and feeds the answer information back to the electronic equipment;
and the output unit is used for outputting and displaying the answer information through a display of the electronic equipment when the electronic equipment receives the answer information.
A third aspect of the embodiments of the present invention discloses another electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a program code, where the program code includes instructions for performing part or all of the steps of any one of the methods of the first aspect.
A fifth aspect of embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the embodiments of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, and the computer program product, when running on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, when the search voice input by a user is detected, a search image is shot, wherein the search image comprises a finger image of the user; adjusting the color of the finger image to be different from the background color of the search image, and determining the click position of the fingertip of the user according to the finger image; identifying a search type contained in the search voice, and determining a search area in the search image according to the search type and the click position; and extracting the text information in the search area and determining the text information as the search content. Therefore, by implementing the embodiment of the invention, the finger image in the shot image can be adjusted to the color different from the background color, so that the interference of the finger image on the character content identification is reduced, in addition, the search area can be determined according to the search voice, so that the content needing to be searched and extracted by the electronic equipment is more accurate, and the accuracy of the answer searched by the electronic equipment according to the extracted search content is further improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a method for extracting search content according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another method for extracting search contents according to the embodiment of the present invention;
FIG. 3 is a schematic flow chart of another method for extracting search contents according to the embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure;
FIG. 5 is a schematic structural diagram of another electronic device disclosed in the embodiments of the present invention;
FIG. 6 is a schematic structural diagram of another electronic device disclosed in the embodiments of the present invention;
fig. 7 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a search content extraction method and electronic equipment, which can determine a search area according to search voice, so that the content to be searched extracted by the electronic equipment is more accurate, and the accuracy of answers searched by the electronic equipment according to the extracted search content is further improved. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a method for extracting search content according to an embodiment of the present invention. As shown in fig. 1, the method for extracting search content may include the steps of:
101. when a search voice input by a user is detected, the electronic device captures a search image including a finger image of the user.
In the embodiment of the present invention, the electronic device may be a home tutoring machine, a learning tablet, a notebook computer, a desktop computer, or the like, which is not limited in the embodiment of the present invention. The electronic device may collect sound in its environment through a microphone provided on it, extract the human speech from the collected sound to obtain the user voice, and then determine the extracted user voice as the search voice. The electronic device may capture the search image through an image acquisition device provided on it, such as a camera module or a video camera.
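The patent does not specify how the device decides that speech is present in the collected sound. A minimal sketch of one common approach, short-time energy thresholding, is shown below; the frame length and threshold are illustrative assumptions, and a real device would more likely use a trained voice-activity-detection model.

```python
import numpy as np

def detect_voice(samples, frame_len=400, energy_threshold=0.01):
    """Return True if any frame's mean energy exceeds the threshold.

    A minimal short-time-energy voice activity check; the fixed
    threshold is an illustrative assumption, not the patent's method.
    """
    n_frames = len(samples) // frame_len
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        if np.mean(frame.astype(np.float64) ** 2) > energy_threshold:
            return True
    return False

silence = np.zeros(8000)                                   # no sound
speech = 0.5 * np.sin(np.linspace(0, 200 * np.pi, 8000))   # synthetic tone
print(detect_voice(silence), detect_voice(speech))          # False True
```

Once a frame is flagged as speech, the device would pass the audio on to the speech recognizer described in the following steps.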
In the embodiment of the invention, when the user inputs the search voice, the user can simultaneously use the fingers to indicate the content to be searched, so that the electronic equipment can determine the area needing to extract the text information according to the position indicated by the fingers of the user, and the accuracy of the text information extracted by the electronic equipment is improved.
As an alternative embodiment, when the search voice input by the user is detected, the mode of shooting the search image by the electronic device may include the following steps:
when search voice input by a user is detected, the electronic equipment identifies the position of the hand of the user through an infrared sensor;
a camera of the electronic equipment shoots a target image at the position of a hand of a user;
the electronic equipment determines the target image as a search image, and the search image comprises a finger image of the user.
By the implementation of the implementation mode, when the search voice input by the user is detected, the position of the hand of the user can be identified through the infrared sensor, so that the electronic equipment can successfully shoot the search image containing the finger of the user, and the electronic equipment can be ensured to successfully extract the search content from the search image.
102. The electronic equipment adjusts the color of the finger image into a color different from the background color of the search image, and determines the click position of the fingertip of the user according to the finger image.
In the embodiment of the invention, if the color of the finger of the user is similar to the background color of the search image, the electronic device may not be capable of accurately identifying the finger image of the user from the search image, and therefore, the electronic device may adjust the finger image of the user in the search image to a color different from the background color, so that the electronic device may accurately identify the finger image of the user from the search image. After the electronic device adjusts the color of the finger image of the user, the part representing the fingertip in the finger image can be identified, and the click position corresponding to the part representing the fingertip is determined.
As an alternative embodiment, the manner in which the electronic device determines the click position of the user fingertip according to the finger image may include the following steps:
the electronic equipment identifies the finger image through an image identification technology to obtain a fingertip image;
the electronic equipment calculates to obtain the central point of the fingertip image;
the electronic equipment determines an image coordinate system of a search image and determines position coordinates corresponding to the central point from the image coordinate system;
the electronic device determines the coordinates as the click position of the user's fingertip.
By implementing the embodiment, the fingertip area in the finger image can be identified through the image identification technology, and the coordinate of the center point of the fingertip area is determined under the image coordinate system of the search image, so that the determination mode of the click position of the user fingertip is more accurate.
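The steps above, finding the center point of the fingertip image and reading off its coordinates in the search image's coordinate system, can be sketched as a centroid computation over a binary fingertip mask. The mask here is synthetic; in the patent's method it would come from segmenting the recolored finger image.

```python
import numpy as np

def fingertip_click_position(mask):
    """Centroid (x, y) of the nonzero pixels in a binary fingertip mask.

    Coordinates are in the image's pixel frame with the origin at the
    top-left corner (one of the origin choices the text allows).
    """
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # no fingertip found in the image
    return (int(round(xs.mean())), int(round(ys.mean())))

mask = np.zeros((10, 10), dtype=np.uint8)
mask[4:7, 5:8] = 1  # a 3x3 fingertip blob
print(fingertip_click_position(mask))  # (6, 5)
```

The returned coordinate pair is what the later steps treat as the click position of the user's fingertip.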
103. The electronic device recognizes a search type included in the search voice, and determines a search area in the search image according to the search type and the click position.
In the embodiment of the invention, the electronic device can recognize the meaning of the search voice through a neural network, and from the recognized meaning judge whether the user currently wants to search for a character, a word, a question, or the like. When the search type is a search for a character, the size of the corresponding search area matches the area occupied by one character in the search image; when the search type is a search for a word, it matches the area occupied by one word; when the search type is a search for a question, it matches the area occupied by one question. The electronic device can then determine the search area according to the determined area size and the click position.
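The mapping from search type to a region around the click position can be sketched as below. The pixel sizes per type are illustrative assumptions; the patent only says each region matches the area a character, word, or question typically occupies.

```python
def search_region(click, search_type, img_w, img_h):
    """Axis-aligned search box centered on the fingertip click position.

    The per-type box sizes are hypothetical values for illustration;
    the box is clamped to the image bounds.
    """
    sizes = {
        "character": (40, 40),
        "word": (120, 40),
        "question": (600, 160),
    }
    w, h = sizes[search_type]
    x, y = click
    left = max(0, x - w // 2)
    top = max(0, y - h // 2)
    right = min(img_w, left + w)
    bottom = min(img_h, top + h)
    return (left, top, right, bottom)

print(search_region((300, 200), "word", 640, 480))  # (240, 180, 360, 220)
```

A word search near the center of a 640x480 image yields a small box around the click; a question search near an image edge is clamped so the box stays inside the frame.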
104. The electronic equipment extracts the text information in the search area and determines the text information as search content.
In the embodiment of the invention, the electronic device can recognize the text information contained in the search area through a character recognition technology (such as optical character recognition) and arrange the extracted text. When some characters in the search image are occluded, the electronic device may be unable to recognize all the characters in the search area; in that case it can complete the occluded characters from the recognized text, or mark the positions of the occluded characters within the recognized text, thereby improving the accuracy of the search the electronic device performs from the text information.
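The occlusion-marking behavior can be sketched as below, assuming the OCR engine returns a confidence value per character (as typical engines do); the confidence cutoff and the placeholder symbol are illustrative assumptions.

```python
def assemble_text(chars):
    """Join per-character OCR results, marking occluded characters.

    `chars` is a list of (character, confidence) pairs. Characters
    below the cutoff are replaced with a placeholder so the position
    of each occluded character is preserved, as the text describes.
    """
    cutoff = 0.5  # hypothetical confidence threshold
    return "".join(c if conf >= cutoff else "□" for c, conf in chars)

result = assemble_text([("c", 0.9), ("a", 0.95), ("t", 0.2)])
print(result)  # "ca□"
```

The marked string is then used as the search content, letting the server-side search tolerate the missing character rather than matching against a misrecognized one.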
In the method described in fig. 1, the search area can be determined according to the search speech, so that the content to be searched extracted by the electronic device is more accurate, and the accuracy of the answer searched by the electronic device according to the extracted search content is improved. In addition, the method described in fig. 1 is implemented to ensure that the electronic device can successfully extract the search content from the search image. In addition, the method described in fig. 1 can be implemented to make the determination of the click position of the user fingertip more accurate.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating another method for extracting search content according to an embodiment of the present invention. As shown in fig. 2, the method for extracting search content may include the steps of:
201. when the user voice in the environment where the electronic equipment is located is detected, the electronic equipment identifies the corresponding voice content in the user voice.
In the embodiment of the present invention, various sounds usually exist in an environment where the electronic device is located, and therefore, the electronic device needs to process the received sounds, and the electronic device can recognize the voice of the user and extract the voice of the user from the received sounds to obtain the voice of the user. Furthermore, the electronic device can recognize the extracted user voice, convert the user voice into target characters corresponding to the voice, and combine the converted target characters to obtain the voice content corresponding to the user voice.
202. The electronic device judges whether the voice content contains a search intention; if so, steps 203 to 210 are executed; if not, the flow ends.
In the embodiment of the invention, the search intention is the user's intention, expressed through the user voice, for the electronic device to search for corresponding content according to the search content. The search intention can be indicated by question words, such as "what"; it can also be indicated by negative words, such as "not", "unknown" and "difficult". The electronic device can detect whether the voice content contains a question word or a negative word. If it does, the device can consider that the user has currently encountered content that needs to be searched, and therefore that the user voice contains a search intention; if it does not, the device can consider that the user has not encountered content that needs to be searched, and therefore that the user voice contains no search intention.
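The keyword check described above can be sketched as follows. The word sets extend the text's examples ("what", "not", "unknown", "difficult") with a few assumed entries; a production system would use a larger lexicon or an intent classifier.

```python
QUESTION_WORDS = {"what", "how", "why", "which"}
NEGATIVE_WORDS = {"not", "unknown", "difficult", "don't"}

def has_search_intent(speech_text):
    """True if the recognized speech contains a question or negative word.

    Keyword sets are illustrative; the patent names only a few examples.
    """
    words = speech_text.lower().split()
    return any(w in QUESTION_WORDS or w in NEGATIVE_WORDS for w in words)

print(has_search_intent("what does this word mean"))  # True
print(has_search_intent("please turn the page"))      # False
```

Only speech that passes this check is promoted to a search voice, which is what lets the device skip recognition work on unrelated chatter.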
203. The electronic device determines a user voice as a search voice, and determines that a search voice containing a search intention is detected.
In the embodiment of the present invention, by implementing steps 201 to 203, the user voice in the environment of the electronic device can be acquired, and the user voice is determined as a search voice only when it is detected to contain a search intention. This reduces the recognition work the electronic device spends on voice without a search intention and improves the search efficiency of the electronic device.
204. When a search voice input by a user is detected, the electronic equipment shoots a search image, and the search image contains a finger image of the user.
205. The electronic device identifies a background color of the search image and obtains a complementary color of the background color, the complementary color being different from the background color.
In the embodiment of the present invention, the manner of identifying the background color of the search image by the electronic device may be: the electronic equipment identifies each color contained in the search image; the electronic equipment can calculate the proportion of each color in the search image; the electronic device may determine the color with the largest proportion as the background color of the search image.
In the embodiment of the present invention, the light color of the background color and the light color of its complementary color blend to produce white light, so the difference between a background color and its complementary color is usually large. Adjusting the finger image to the complementary color of the background color therefore distinguishes the finger image from the background, so that the electronic device can recognize the finger image more accurately.
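The two rules described here, taking the most frequent color as the background and the RGB complement as the recoloring target, can be sketched as below; the pixel list is synthetic and the "RGB complement" reading (each channel subtracted from 255, so the two colors mix additively to white) is the standard interpretation of the white-light property the text states.

```python
from collections import Counter

def background_color(pixels):
    """Most frequent RGB triple in the image, per the proportion rule."""
    return Counter(pixels).most_common(1)[0][0]

def complementary(rgb):
    """RGB complement: the color that mixes additively with rgb to white."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

# synthetic image: mostly paper-white pixels plus a few finger pixels
pixels = [(250, 250, 240)] * 90 + [(200, 150, 120)] * 10
bg = background_color(pixels)
print(bg, complementary(bg))  # (250, 250, 240) (5, 5, 15)
```

A near-white page background thus yields a near-black recoloring target, which is why the recolored finger stands out for fingertip segmentation in the next step.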
206. The electronic device adjusts the finger image to complementary colors and recognizes a fingertip image in the finger image based on the complementary colors.
207. The electronic device acquires position coordinates of the fingertip image in an image coordinate system of the search image.
In the embodiment of the present invention, the electronic device may establish an image coordinate system according to the search image, the image coordinate system may be established with any one pixel point (e.g., a center point of the search image, pixel points corresponding to four corners of the search image, etc.) on the search image as an origin, and the fingertip image may include a plurality of pixel points, so that the center point of the fingertip image may be determined, and a coordinate corresponding to the center point may be determined in the image coordinate system, and the coordinate may be considered as a position coordinate of the fingertip image in the image coordinate system of the search image.
208. The electronic device determines the position coordinates as the click position of the user's fingertip.
In the embodiment of the present invention, by implementing the above steps 205 to 208, the color of the finger image can be adjusted to a color different from the background color of the search image, so that the electronic device can more accurately recognize the position indicated by the fingertip of the user finger, and the recognition accuracy of the fingertip of the user finger is improved.
209. The electronic device recognizes a search type included in the search voice, and determines a search area in the search image according to the search type and the click position.
210. The electronic equipment extracts the text information in the search area and determines the text information as search content.
In the method described in fig. 2, the search area can be determined according to the search voice, so that the content to be searched extracted by the electronic device is more accurate, and the accuracy of the answer searched by the electronic device according to the extracted search content is improved. In addition, implementing the method described in fig. 2 improves the search efficiency of the terminal device and the recognition accuracy of the user's fingertip.
Example Three
Referring to fig. 3, fig. 3 is a schematic flow chart of another method for extracting search content according to the embodiment of the present invention. As shown in fig. 3, the method for extracting search content may include the steps of:
Steps 301 to 304 are the same as steps 201 to 204 and are not described again here.
305. The electronic equipment adjusts the color of the finger image into a color different from the background color of the search image, and determines the click position of the fingertip of the user according to the finger image.
306. The electronic device determines the search type of the search voice according to the search intention contained in the search voice, wherein the search type at least comprises a character search type, a word search type, and a question search type.
In the embodiment of the invention, if the search voice is recognized to ask about the pronunciation, meaning, and the like of a certain character, the search intention of the current search voice can be considered a search for that character, namely the character search type. If the search voice is recognized to ask about the pronunciation, meaning, usage, and the like of a word, the search intention of the current search voice can be considered a search for that word, namely the word search type. If the search voice is recognized to ask about solving a certain question, translating a certain passage, making a sentence with a word, and the like, the search intention of the current search voice can be considered a search for a question, namely the question search type.
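A minimal keyword-based sketch of this intent-to-type mapping is shown below; the cue phrases and the matching priority are illustrative assumptions, since the patent does not specify how the recognized voice content is matched:

```python
# Illustrative cue phrases only -- the patent does not define the matching rules.
QUESTION_CUES = ("solve", "translate", "make a sentence")
WORD_CUES = ("pronunciation of the word", "meaning of the word", "usage of the word")
CHARACTER_CUES = ("pronunciation of the character", "meaning of the character")

def classify_search_type(voice_text):
    """Map recognized voice content to a search type (question > word > character)."""
    text = voice_text.lower()
    if any(cue in text for cue in QUESTION_CUES):
        return "question"
    if any(cue in text for cue in WORD_CUES):
        return "word"
    if any(cue in text for cue in CHARACTER_CUES):
        return "character"
    return "unknown"
```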
307. The electronic device determines a search area that matches the search type.
In the embodiment of the invention, the search content corresponding to the character search type, the word search type, and the question search type usually occupies different areas in the search image. If the electronic device is to accurately acquire the content the user wants to search, the search area needs to be limited so that it contains, as far as possible, only the content to be searched, thereby improving the search accuracy of the electronic device. Therefore, the electronic device can determine different search areas according to different search types, and may also determine different search areas based on the size of the characters in the captured search image.
As an alternative embodiment, the manner in which the electronic device determines the search area matching the search type may include the steps of:
the electronic device calculates the unit area occupied by a single character in the search image;
when the search type is detected to be the character search type, the electronic device determines that the search area matched with the character search type is one unit area;
when the search type is detected to be the word search type, the electronic device determines that the search area matched with the word search type is four unit areas, that is, a word to be searched is assumed to contain at most four characters;
when the search type is detected to be the question search type, the electronic device determines that the search area matched with the question search type is a preset question area, which may be an area calculated by the electronic device according to the average area occupied by each question in the search image.
By implementing this embodiment, the search areas corresponding to different search types can be determined according to the unit area occupied by one character in the search image, so that the area searched by the electronic device is related to the size of the characters in the search image, and the accuracy of extracting the search content is improved.
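The area rules above can be sketched as a simple lookup; the multipliers come from the description, while the function name and units are assumptions:

```python
def matched_search_area(search_type, unit_area, preset_question_area):
    """Return the search area matched to the search type.

    unit_area: area occupied by a single character in the search image.
    preset_question_area: e.g. the average area occupied by one question.
    """
    if search_type == "character":
        return unit_area               # one character -> one unit area
    if search_type == "word":
        return 4 * unit_area           # a word is assumed to hold at most 4 characters
    if search_type == "question":
        return preset_question_area    # preset area derived from the image
    raise ValueError(f"unknown search type: {search_type}")
```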
308. The electronic device determines, in the search image and according to the click position, a search region whose area matches the search area determined in step 307.
In the embodiment of the present invention, the electronic device may set the shape of the search region to any shape such as a rectangle, a circle, or a diamond, and the area of the region should equal the determined search area. The region may be positioned according to the click position; for example, the click position may serve as the center point of the region.
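For the rectangular case, centering a region of the determined area on the click position can be sketched as follows; the aspect-ratio parameter and the clamping to image bounds are illustrative assumptions:

```python
import math

def region_from_click(click_xy, area, aspect=1.0, image_size=None):
    """Build a rectangle of the given area centered on the click position.

    aspect is width / height; image_size = (width, height), when provided,
    clamps the rectangle to the image bounds. Returns (x0, y0, x1, y1).
    """
    cx, cy = click_xy
    height = math.sqrt(area / aspect)
    width = aspect * height
    x0, y0 = cx - width / 2, cy - height / 2
    x1, y1 = cx + width / 2, cy + height / 2
    if image_size is not None:                      # keep region inside the image
        iw, ih = image_size
        x0, y0 = max(0.0, x0), max(0.0, y0)
        x1, y1 = min(float(iw), x1), min(float(ih), y1)
    return (x0, y0, x1, y1)
```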
In the embodiment of the present invention, by implementing the above steps 306 to 308, a search area may be determined according to the search intention contained in the search voice, and a search region having that area is then selected according to the click position of the user's fingertip, so that the electronic device extracts the text information in the selected region, thereby improving the accuracy of extracting the text information to be searched.
309. The electronic equipment extracts the text information in the search area and determines the text information as search content.
310. The electronic device uploads the search content and the search intention to the service device so that the service device searches answer information matched with the search content and the search intention, and feeds the answer information back to the electronic device.
In the embodiment of the invention, the electronic device can be connected with the service device in advance, and the service device can execute the search function, thereby reducing the load on the electronic device and improving the efficiency with which the electronic device extracts the search content. After the service device searches out the answer information, it can store the answer information in association with the search content and the search intention. In this way, when another electronic device connected with the service device uploads the same search content and search intention, the service device can directly send the corresponding answer information to that device without performing the search operation again, which improves the answer-searching efficiency of the service device.
311. When the electronic equipment receives the answer information, the electronic equipment outputs and displays the answer information through a display of the electronic equipment.
In the embodiment of the present invention, by implementing the above steps 310 to 311, the search content and the search intention can be simultaneously fed back to the service device, so that the service device searches for an answer that is simultaneously matched with the search content and the search intention, thereby improving the accuracy of the searched answer.
As an alternative implementation manner, after the electronic device performs step 311, the following steps may also be performed:
the electronic equipment stores the answer information and the search content and the search intention in an associated manner;
when it is determined that the electronic device identifies the current search intention and the current search content, the electronic device detects whether target answer information matched with the current search intention and the current search content at the same time is stored;
if yes, the electronic equipment outputs and displays target answer information through the display;
if not, the electronic device performs step 310.
By implementing this embodiment, the searched answer information can be stored in association with the search content and the search intention, so that when the same search content and search intention are subsequently identified, the electronic device can directly output the corresponding answer information from the memory, which improves the search speed of the electronic device.
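The associative storage described above amounts to a cache keyed by the (search content, search intention) pair; a minimal sketch, with the service call abstracted as a plain callable (an assumption for illustration), is:

```python
class AnswerCache:
    """Cache answer information keyed by (search content, search intention),
    so repeated queries skip the round trip to the service device."""

    def __init__(self, search_service):
        self._search = search_service  # callable: (content, intent) -> answer
        self._store = {}

    def lookup(self, content, intent):
        key = (content, intent)
        if key not in self._store:     # cache miss: query the service device
            self._store[key] = self._search(content, intent)
        return self._store[key]        # cache hit: answer straight from memory
```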
In the method described in fig. 3, the search area can be determined according to the search voice, so that the content to be searched extracted by the electronic device is more accurate, and the accuracy of the answer searched by the electronic device according to the extracted search content is improved. In addition, implementing the method described in fig. 3 improves the accuracy of search content extraction, the accuracy of extracting the text information to be searched, the accuracy of the searched answer, and the search speed of the electronic device.
Example Four
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 4, the electronic device may include:
a shooting unit 401 for shooting a search image including an image of a finger of a user when a search voice input by the user is detected.
As an alternative embodiment, the manner of photographing the search image by the photographing unit 401 when the search voice input by the user is detected may include the following steps:
when search voice input by a user is detected, recognizing the position of a hand of the user through an infrared sensor;
the camera shoots a target image at the position of the hand of the user;
the target image is determined as a search image, and the search image includes a finger image of the user.
By implementing this embodiment, when the search voice input by the user is detected, the position of the user's hand can be identified through the infrared sensor, so that the electronic device can successfully shoot a search image containing the user's finger, which ensures that the electronic device can successfully extract the search content from the search image.
An adjusting unit 402, configured to adjust the color of the finger image captured by the capturing unit 401 to a color different from the background color of the search image, and determine the click position of the user fingertip according to the finger image.
As an alternative embodiment, the manner of determining the click position of the user fingertip according to the finger image by the adjusting unit 402 may specifically be:
identifying the finger image by an image identification technology to obtain a fingertip image;
calculating to obtain the central point of the fingertip image;
determining an image coordinate system of a search image, and determining a position coordinate corresponding to the central point from the image coordinate system;
the coordinates are determined as the click position of the user's fingertip.
By implementing this embodiment, the fingertip area in the finger image can be identified through an image recognition technology, and the coordinates of the center point of the fingertip area are determined in the image coordinate system of the search image, so that the click position of the user's fingertip is determined more accurately.
A first recognition unit 403, configured to recognize a search type included in the search speech detected by the shooting unit 401, and determine a search area in the search image according to the search type and the click position determined by the adjustment unit 402.
An extracting unit 404 configured to extract the text information in the search area identified by the first identifying unit 403 and determine the text information as the search content.
Therefore, by implementing the electronic device described in fig. 4, the search area can be determined according to the search voice, so that the content to be searched extracted by the electronic device is more accurate, and the accuracy of the answer searched by the electronic device according to the extracted search content is improved. In addition, implementing the electronic device described in fig. 4 ensures that the electronic device can successfully extract the search content from the search image, and makes the determination of the click position of the user's fingertip more accurate.
Example Five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 5 is optimized from the electronic device shown in fig. 4. Compared to the electronic device shown in fig. 4, the electronic device shown in fig. 5 may further include:
a second recognition unit 405, configured to recognize corresponding voice content in the user voice when the shooting unit 401 detects the search voice input by the user, and before shooting the search image, and when detecting that the user voice exists in the environment where the electronic device is located.
A judging unit 406, configured to judge whether the voice content recognized by the second recognition unit 405 contains a search intention.
A determination unit 407 configured to determine the user voice as the search voice and determine that the search voice containing the search intention is detected, when the result of the determination by the determination unit 406 is yes.
In the embodiment of the invention, the user voice in the environment where the electronic device is located can be acquired, and the user voice is determined as the search voice only when it is detected that the user voice contains a search intention, thereby reducing the processing of voice without a search intention by the electronic device and improving the search efficiency of the terminal device.
As an alternative implementation, the adjusting unit 402 of the electronic device shown in fig. 5 may include:
the identifying subunit 4021 is configured to identify a background color of the search image, and acquire a complementary color of the background color, where the complementary color is different from the background color;
an adjusting subunit 4022 configured to adjust the finger image to a complementary color, and recognize a fingertip image in the finger image based on the complementary color acquired by the identifying subunit 4021;
an acquiring subunit 4023, configured to acquire the position coordinates of the fingertip image determined by the adjusting subunit 4022 in the image coordinate system of the search image;
a first determining sub-unit 4024, configured to determine the position coordinates acquired by the acquiring sub-unit 4023 as a click position of a fingertip of the user.
By implementing this embodiment, the color of the finger image can be adjusted to a color different from the background color of the search image, so that the electronic device can more accurately identify the position indicated by the user's fingertip, and the recognition accuracy of the user's fingertip is improved.
Therefore, by implementing the electronic device described in fig. 5, the search area can be determined according to the search voice, so that the content to be searched extracted by the electronic device is more accurate, and the accuracy of the answer searched by the electronic device according to the extracted search content is improved. In addition, implementing the electronic device described in fig. 5 improves the search efficiency of the terminal device and the recognition accuracy of the user's fingertip.
Example Six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 6 is optimized from the electronic device shown in fig. 5. Compared to the electronic device shown in fig. 5, the first identification unit 403 of the electronic device shown in fig. 6 may include:
a reading subunit 4031, configured to determine a search type of the search speech according to the search intention included in the search speech, where the search type at least includes a word search type, and a title search type.
A second determining subunit 4032 configured to determine a search area matching the search type read by the reading subunit 4031.
As an optional implementation manner, the manner of determining the search area matching the search type by the second determining subunit 4032 may specifically be:
calculating the unit area occupied by a single character in the search image;
when the search type is detected to be the character search type, determining that the search area matched with the character search type is one unit area;
when the search type is detected to be the word search type, determining that the search area matched with the word search type is four unit areas, that is, a word to be searched is assumed to contain at most four characters;
when the search type is detected to be the question search type, determining that the search area matched with the question search type is a preset question area, which may be an area calculated by the electronic device according to the average area occupied by each question in the search image.
By implementing this embodiment, the search areas corresponding to different search types can be determined according to the unit area occupied by one character in the search image, so that the area searched by the electronic device is related to the size of the characters in the search image, and the accuracy of extracting the search content is improved.
A third determining subunit 4033, configured to determine, in the search image, a search area that matches the search area determined by the second determining subunit 4032 based on the click position.
In the embodiment of the invention, the search area can be determined according to the search intention contained in the search voice, and the search area with the same area as the search area is selected according to the click position of the finger tip of the user, so that the electronic equipment can extract the text information in the search area, and the accuracy of extracting the text information to be searched is improved.
As an alternative implementation, the electronic device shown in fig. 5 may further include:
an uploading unit 408 for uploading the search content and the search intention to the service apparatus after the extracting unit 404 extracts the text information in the search area and determines the text information as the search content, so that the service apparatus searches for answer information matching the search content and the search intention and feeds back the answer information to the electronic apparatus;
and an output unit 409, configured to output and display the answer information through a display of the electronic device when the electronic device receives the answer information.
By implementing the embodiment, the search content and the search intention can be simultaneously fed back to the service device, so that the service device searches for answers matched with the search content and the search intention simultaneously, and the accuracy of the searched answers is improved.
As an optional implementation, the output unit 409 may be further configured to:
storing answer information and search content and search intention in an associated manner;
when the current search intention and the current search content are determined to be identified, detecting whether target answer information matched with the current search intention and the current search content at the same time is stored;
if yes, outputting and displaying target answer information through a display;
and if not, uploading the search content and the search intention to the service device, so that the service device searches answer information matched with the search content and the search intention, and feeding back the answer information to the electronic device.
By implementing this embodiment, the searched answer information can be stored in association with the search content and the search intention, so that when the same search content and search intention are subsequently identified, the electronic device can directly output the corresponding answer information from the memory, which improves the search speed of the electronic device.
Therefore, by implementing the electronic device described in fig. 6, the search area can be determined according to the search voice, so that the content to be searched extracted by the electronic device is more accurate, and the accuracy of the answer searched by the electronic device according to the extracted search content is improved. In addition, implementing the electronic device described in fig. 6 improves the accuracy of search content extraction, the accuracy of extracting the text information to be searched, the accuracy of the searched answer, and the search speed of the electronic device.
Example Seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. As shown in fig. 7, the electronic device may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
wherein, the processor 702 calls the executable program code stored in the memory 701 to execute part or all of the steps of the method in the above method embodiments.
The embodiment of the invention also discloses a computer readable storage medium, wherein the computer readable storage medium stores program codes, wherein the program codes comprise instructions for executing part or all of the steps of the method in the above method embodiments.
Embodiments of the present invention also disclose a computer program product, wherein, when the computer program product is run on a computer, the computer is caused to execute part or all of the steps of the method as in the above method embodiments.
The embodiment of the present invention also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "an embodiment of the present invention" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in embodiments of the invention" appearing in various places throughout the specification are not necessarily all referring to the same embodiments. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary and alternative embodiments, and that the acts and modules illustrated are not required in order to practice the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In addition, the terms "system" and "network" are often used interchangeably herein. It should be understood that the term "and/or" herein is merely one type of association relationship describing an associated object, meaning that three relationships may exist, for example, a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
In the embodiments provided herein, it should be understood that "B corresponding to a" means that B is associated with a from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other medium which can be used to carry or store data and which can be read by a computer.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the above-described methods of the embodiments of the present invention.
The method for extracting search content and the electronic device disclosed by the embodiment of the invention are described in detail, a specific example is applied in the text to explain the principle and the implementation of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (12)

1. An extraction method of search content, the method comprising:
when search voice input by a user is detected, shooting a search image, wherein the search image comprises a finger image of the user;
adjusting the color of the finger image to be a color different from the background color of the search image, and determining the click position of the fingertip of the user according to the finger image;
identifying a search type contained in the search voice, and determining a search area in the search image according to the search type and the click position;
and extracting the text information in the search area, and determining the text information as search content.
2. The method according to claim 1, wherein before capturing a search image when the search voice input by the user is detected, the method further comprises:
when detecting that user voice exists in the environment where the electronic equipment is located, identifying corresponding voice content in the user voice;
judging whether the voice content contains a search intention;
if so, determining the user voice as a search voice, and determining that the search voice containing the search intention is detected.
3. The method of claim 2, wherein the adjusting the color of the finger image to a color different from a background color of the search image and determining the click position of the user fingertip from the finger image comprises:
identifying a background color of the search image and obtaining a complementary color of the background color, the complementary color being different from the background color;
adjusting the finger image to the complementary color, and identifying a fingertip image in the finger image based on the complementary color;
acquiring the position coordinates of the fingertip image in the image coordinate system of the search image;
and determining the position coordinate as the click position of the fingertip of the user.
4. The method according to claim 2 or 3, wherein the identifying a search type included in the search speech and determining a search area in the search image according to the search type and the click position includes:
determining a search type of the search voice according to the search intention contained in the search voice, wherein the search type at least comprises a character search type, a word search type and a question search type;
determining a search area matching the search type;
and determining a search area matched with the search area in the search image according to the click position.
5. The method according to any one of claims 2 to 4, wherein after extracting the text information in the search area and determining the text information as search content, the method further comprises:
uploading the search content and the search intention to a service device to enable the service device to search answer information matched with the search content and the search intention and feed back the answer information to the electronic device;
and when the electronic equipment receives the answer information, outputting and displaying the answer information through a display of the electronic equipment.
6. An electronic device, comprising:
a shooting unit, configured to shoot a search image when a search voice input by a user is detected, wherein the search image comprises a finger image of the user;
an adjusting unit, configured to adjust the color of the finger image to a color different from the background color of the search image, and to determine the click position of the user's fingertip according to the finger image;
a first identification unit, configured to identify a search type contained in the search voice and to determine a search area in the search image according to the search type and the click position;
and an extraction unit, configured to extract the text information in the search area and to determine the text information as the search content.
7. The electronic device of claim 6, further comprising:
a second identification unit, configured to, before the shooting unit detects the search voice input by the user and shoots the search image, identify the corresponding voice content in a user voice when the user voice is detected in the environment where the electronic device is located;
a judging unit, configured to judge whether the voice content contains a search intention;
and a determination unit, configured to, when the judging unit determines that it does, determine the user voice to be a search voice and determine that a search voice containing a search intention is detected.
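The detect-then-judge flow of claim 7 (recognize the voice content, check it for a search intention, and only then treat the utterance as a search voice) can be sketched as keyword spotting. The trigger phrases below are pure assumptions; the patent does not enumerate what counts as a search intention.

```python
# Assumed trigger phrases marking a search intention in recognized speech;
# a real system would use an intent classifier rather than a fixed list.
INTENT_KEYWORDS = ("what is", "what does", "how do", "look up", "search")

def extract_search_intention(voice_content):
    """Return the matched intention phrase, or None for ordinary speech."""
    text = voice_content.lower()
    for kw in INTENT_KEYWORDS:
        if kw in text:
            return kw
    return None

def is_search_voice(voice_content):
    # The user voice counts as a search voice only if it carries an
    # intention; everyday chatter should not trigger the camera.
    return extract_search_intention(voice_content) is not None
```

Gating the shooting unit on this check is what keeps the device from photographing the page every time someone speaks nearby.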
8. The electronic device according to claim 7, wherein the adjusting unit includes:
a recognition subunit, configured to recognize a background color of the search image, and obtain a complementary color of the background color, where the complementary color is different from the background color;
an adjusting subunit, configured to adjust the finger image to the complementary color, and identify a fingertip image in the finger image based on the complementary color;
an acquiring subunit, configured to acquire position coordinates of the fingertip image in an image coordinate system of the search image;
and a first determining subunit, configured to determine the position coordinates as the click position of the user's fingertip.
9. The electronic device according to claim 7 or 8, wherein the first identification unit includes:
a reading subunit, configured to determine the search type of the search voice according to the search intention contained in the search voice, wherein the search type at least comprises a character search type, a word search type and a question search type;
a second determining subunit, configured to determine a search range matching the search type;
and a third determining subunit, configured to determine, according to the click position, a search area in the search image that matches the search range.
10. The electronic device according to any one of claims 7 to 9, further comprising:
an uploading unit, configured to upload the search content and the search intention to a service device after the extraction unit extracts the text information in the search area and determines the text information as the search content, so that the service device searches for answer information matching the search content and the search intention and feeds the answer information back to the electronic device;
and an output unit, configured to output and display the answer information through a display of the electronic device when the electronic device receives the answer information.
11. An electronic device, comprising:
a memory storing executable program code;
a processor coupled to the memory;
wherein the processor calls the executable program code stored in the memory to perform the search content extraction method according to any one of claims 1 to 5.
12. A computer-readable storage medium storing a computer program that causes a computer to perform the search content extraction method according to any one of claims 1 to 5.
CN201910124095.0A 2019-02-18 2019-02-18 Search content extraction method and electronic equipment Pending CN111027353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910124095.0A CN111027353A (en) 2019-02-18 2019-02-18 Search content extraction method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111027353A true CN111027353A (en) 2020-04-17

Family

ID=70203454

Country Status (1)

Country Link
CN (1) CN111027353A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858855A (en) * 2020-07-20 2020-10-30 百度在线网络技术(北京)有限公司 Information query method, device, system, electronic equipment and storage medium
CN113590864A (en) * 2020-04-30 2021-11-02 百度在线网络技术(北京)有限公司 Method and device for obtaining search result, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050213039A1 (en) * 2004-03-10 2005-09-29 Fuji Xerox Co., Ltd. Color vision characteristic detection apparatus
US20080316223A1 (en) * 2007-06-19 2008-12-25 Canon Kabushiki Kaisha Image generation method
CN105844242A (en) * 2016-03-23 2016-08-10 湖北知本信息科技有限公司 Method for detecting skin color in image
CN106610761A (en) * 2015-10-21 2017-05-03 中兴通讯股份有限公司 Icon color adjusting method and device
CN109192204A (en) * 2018-08-31 2019-01-11 广东小天才科技有限公司 A kind of sound control method and smart machine based on smart machine camera

Similar Documents

Publication Publication Date Title
CN109635772B (en) Dictation content correcting method and electronic equipment
CN111078083A (en) Method for determining click-to-read content and electronic equipment
CN109597943B (en) Learning content recommendation method based on scene and learning equipment
CN111026949A (en) Question searching method and system based on electronic equipment
CN110941992B (en) Smile expression detection method and device, computer equipment and storage medium
CN111079494A (en) Learning content pushing method and electronic equipment
CN110955818A (en) Searching method, searching device, terminal equipment and storage medium
CN111026924A (en) Method for acquiring content to be searched and electronic equipment
CN111680177A (en) Data searching method, electronic device and computer-readable storage medium
CN111027353A (en) Search content extraction method and electronic equipment
CN107992872B (en) Method for carrying out text recognition on picture and mobile terminal
CN111078983B (en) Method for determining page to be identified and learning equipment
CN111079726B (en) Image processing method and electronic equipment
CN111753168A (en) Method and device for searching questions, electronic equipment and storage medium
CN111091034A (en) Multi-finger recognition-based question searching method and family education equipment
CN111077997A (en) Point reading control method in point reading mode and electronic equipment
CN111079489A (en) Content identification method and electronic equipment
CN111711758B (en) Multi-pointing test question shooting method and device, electronic equipment and storage medium
CN111079503B (en) Character recognition method and electronic equipment
CN111753715A (en) Method and device for shooting test questions in click-to-read scene, electronic equipment and storage medium
CN111079498B (en) Learning function switching method based on mouth shape recognition and electronic equipment
CN111159433B (en) Content positioning method and electronic equipment
CN109084750B (en) Navigation method and electronic equipment
CN111078080B (en) Point reading control method and electronic equipment
CN113449652A (en) Positioning method and device based on biological feature recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination