CN109766413B - Searching method applied to family education equipment and family education equipment - Google Patents

Searching method applied to family education equipment and family education equipment

Info

Publication number
CN109766413B
Authority
CN
China
Prior art keywords: pop, learning, camera, search result, family education
Legal status
Active
Application number
CN201910041713.5A
Other languages
Chinese (zh)
Other versions
CN109766413A (en)
Inventor
Xu Yang (徐杨)
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201910041713.5A
Publication of CN109766413A
Application granted
Publication of CN109766413B

Abstract

The embodiment of the invention relates to the technical field of electronic equipment, and discloses a searching method applied to family education equipment and the family education equipment, wherein a pop-up camera arranged on the family education equipment can be freely popped out of the front surface of the housing or retracted into the front surface of the housing, and the method comprises the following steps: controlling the pop-up camera to pop up from the front surface of the housing according to an input voice search instruction, wherein after the pop-up camera is popped out of the front surface of the housing, a preset included angle is formed between the central axis of the lens of the pop-up camera and the front surface of the housing; controlling the pop-up camera to detect, within a learning scene located below the pop-up camera and bounded by the included angle, a learning area pointed to by a user; controlling the pop-up camera to shoot the learning area so as to capture a learning image; and searching for a search result matching the learning image and outputting the search result. By implementing the embodiment of the invention, the convenience and intelligence of searching can be improved, and the learning efficiency of the user can thus be effectively improved.

Description

Searching method applied to family education equipment and family education equipment
Technical Field
The invention relates to the technical field of education, and in particular to a searching method applied to family education equipment and the family education equipment.
Background
In the learning process, a user (such as a student) often uses a family education device (such as a family education machine) to shoot a learning image corresponding to the learning content to be searched (such as a certain practice problem), and controls the family education device to search for a result (such as the problem-solving idea and answer for that practice problem) matching the learning image. In practice, however, this searching method depends on the user manually controlling the family education device to shoot the learning image and requires the user to operate the device many times, so the operation is cumbersome and the degree of intelligence is low.
Disclosure of Invention
The embodiment of the invention discloses a searching method applied to family education equipment and the family education equipment, which can improve the convenience and intelligence of searching.
The first aspect of the embodiment of the present invention discloses a search method applied to a family education device, wherein a pop-up camera is embedded in a front surface of a housing of the family education device, the front surface of the housing faces a user, and the pop-up camera can freely pop out of the front surface of the housing or be retracted into the front surface of the housing, and the method includes:
controlling the pop-up camera to pop up from the front surface of the housing according to an input voice search instruction, wherein after the pop-up camera is popped out of the front surface of the housing, a preset included angle is formed between the central axis of the lens of the pop-up camera and the front surface of the housing;
controlling the pop-up camera to detect, within a learning scene located below the pop-up camera and bounded by the included angle, a learning area pointed to by a user;
controlling the pop-up camera to shoot the learning area so as to capture a learning image;
searching for a search result matching the learning image, and outputting the search result.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the controlling, according to the input voice search instruction, the pop-up camera to pop up from the front surface of the housing includes:
judging whether the input voice search instruction contains a keyword for controlling the pop-up camera to pop up;
if the voice search instruction contains a keyword for controlling the pop-up camera to pop up, judging whether the voice features corresponding to the input voice search instruction match legal voice features preset by the family education device;
and if they match, controlling the pop-up camera to pop up from the front surface of the housing.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the controlling the pop-up camera to shoot the learning area to capture a learning image includes:
detecting pointing position information of a user from the learning region;
determining a target search range in the learning area according to the pointing position information;
and controlling the pop-up camera to shoot the target search range so as to capture a learning image.
As an alternative implementation, in the first aspect of the embodiments of the present invention, the searching for the search result matching the learning image and outputting the search result includes:
searching out at least one search result matched with the learning image;
determining a search result with the highest matching degree from the at least one search result matched with the learning image as a target search result;
sending information containing the target search result, the learning image and the prompt field to guardian equipment associated with the family education equipment; the prompt field is used for prompting the guardian equipment to send learning guide information to the family education equipment, and the learning guide information is used for guiding the user to obtain the target search result according to the learning image;
acquiring the learning guide information sent by the guardian equipment;
outputting the learning guidance information and starting timing;
and outputting the target search result when the timed duration reaches the designated duration.
As an alternative implementation, in the first aspect of the embodiments of the present invention, the searching for the search result matching the learning image and outputting the search result includes:
searching out at least one search result matched with the learning image;
reporting the learning image and the at least one search result matched with the learning image to a teacher terminal associated with the family education equipment;
recording a certain search result selected by the teacher terminal from the at least one search result matched with the learning image as a target search result;
acquiring learning guide information sent by the teacher terminal, wherein the learning guide information is used for guiding the user to obtain the target search result according to the learning image;
outputting the learning guidance information and starting timing;
and outputting the target search result when the timed duration reaches the designated duration.
A second aspect of the embodiments of the present invention discloses a family education device, in which a pop-up camera is embedded in a front surface of a housing of the family education device, the front surface of the housing facing a user, and the pop-up camera can be freely popped out of or retracted into the front surface of the housing, the family education device including:
a first control unit, configured to control the pop-up camera to pop up from the front surface of the housing according to an input voice search instruction, wherein after the pop-up camera is popped out of the front surface of the housing, a preset included angle is formed between the central axis of the lens of the pop-up camera and the front surface of the housing;
a second control unit, configured to control the pop-up camera to detect, within a learning scene located below the pop-up camera and bounded by the included angle, a learning area pointed to by a user;
a third control unit for controlling the pop-up camera to shoot the learning area so as to capture a learning image;
and the searching unit is used for searching the searching result matched with the learning image and outputting the searching result.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the first control unit includes:
the first judgment subunit is used for judging whether the input voice search instruction contains a keyword for controlling the pop-up camera to pop up;
the second judgment subunit is configured to, when the voice search instruction includes a keyword for controlling the pop-up camera to pop up, judge whether a voice feature corresponding to the input voice search instruction matches a legal voice feature preset by the family education device;
and the first control subunit is used for controlling the pop-up camera to pop up from the front surface of the housing when the judgment result of the second judgment subunit is a match.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the third control unit includes:
a detection subunit configured to detect pointing position information of a user from the learning region;
the first determining subunit is used for determining a target search range in the learning area according to the pointing position information;
and the second control subunit is used for controlling the pop-up camera to shoot the target search range so as to capture a learning image.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the search unit includes:
the first searching subunit is used for searching out at least one searching result matched with the learning image;
a second determining subunit, configured to determine, as a target search result, a search result with a highest matching degree from the at least one search result matched with the learning image;
the first interaction subunit is used for sending information containing the target search result, the learning image and the prompt field to guardian equipment associated with the family education equipment; the prompt field is used for prompting the guardian equipment to send learning guide information to the family education equipment, and the learning guide information is used for guiding the user to obtain the target search result according to the learning image; and acquiring the learning guidance information sent by the guardian equipment;
a first output control subunit, configured to output the learning guidance information and start timing; and outputting the target search result when the timed duration reaches the specified duration.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the search unit includes:
the second searching subunit is used for searching out at least one searching result matched with the learning image;
the second interaction subunit is used for reporting the learning image and the at least one search result matched with the learning image to a teacher terminal associated with the family education equipment;
the recording subunit is used for recording a certain search result selected by the teacher terminal from the at least one search result matched with the learning image as a target search result;
the second interaction subunit is further configured to acquire learning guidance information sent by the teacher terminal, where the learning guidance information is used to guide the user to obtain the target search result according to the learning image;
a second output control subunit, configured to output the learning guidance information and start timing; and outputting the target search result when the timed duration reaches the specified duration.
The third aspect of the embodiments of the present invention discloses a family education device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory for executing the search method applied to the family education device according to the first aspect.
A fourth aspect of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute the search method applied to a family education device according to the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, the family education device controls the pop-up camera to pop up from the front surface of the housing according to the input voice search instruction; after the pop-up camera has popped up, a preset included angle (for example, an included angle smaller than 90 degrees) is formed between the central axis of the lens of the pop-up camera and the front surface of the housing; the family education device controls the pop-up camera to detect, within a learning scene located below the pop-up camera and bounded by the included angle, a learning area pointed to by the user, and controls the pop-up camera to shoot the learning area so as to capture a learning image; finally, a search result matching the learning image is found and output. With this method, the pop-up camera pops up and shoots the learning area when the user issues a voice search instruction and points at the learning area; the learning image is captured from the learning area, the search result matching the learning image is found, and the search result is displayed and played, so that the user can get an answer simply by speaking and pointing at a knowledge point or question in a book or exercise book. The convenience and intelligence of searching can thus be improved, and the learning efficiency of the user can be effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart illustrating a searching method applied to a family education device according to an embodiment of the present invention;
FIG. 2 is a schematic view of a usage scenario of a family education device according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating another searching method applied to a family education device according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a searching method applied to a family education device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a family education device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another family education device disclosed in embodiments of the present invention;
FIG. 7 is a schematic diagram of another family education device disclosed in the embodiments of the present invention;
FIG. 8 is a schematic diagram of yet another family education device disclosed in embodiments of the present invention;
FIG. 9 is a schematic diagram of still another family education device disclosed in the embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a searching method applied to family education equipment and the family education equipment, which can improve the searching convenience and the intelligent degree. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a searching method applied to a family education device according to an embodiment of the present invention. The searching method applied to the family education device described in fig. 1 can be applied to the family education devices such as a learning tablet, a learning machine, a family education machine, a point reading machine, and the like, and the embodiment of the present invention is not limited thereto. As shown in fig. 1, the searching method applied to the family education device may include the steps of:
101. The family education device controls the pop-up camera to pop up from the front surface of the housing according to the input voice search instruction; after the pop-up camera is popped out of the front surface of the housing, a preset included angle is formed between the central axis of the lens of the pop-up camera and the front surface of the housing.
Referring to fig. 2, fig. 2 is a schematic view of a usage scenario of a family education device according to an embodiment of the present invention. As shown in fig. 2, a pop-up camera 20 may be embedded in the front surface 11 of the housing of the family education device 10, the front surface 11 facing the user. The pop-up camera 20 can be freely popped out of the front surface 11 of the housing, and after being popped out, the central axis of the lens of the pop-up camera 20 forms a preset included angle a with the front surface 11 of the housing, where the included angle a may satisfy 0° < a < 90°, which ensures that the pop-up camera 20 can photograph a learning scene 30 located below it. The learning scene 30 may include any page of a book. In the embodiment of the present invention, the pop-up camera 20 can also be freely retracted into the front surface 11 of the housing (not shown in fig. 2); for example, the family education device controls the pop-up camera 20 to retract into the front surface 11 of the housing according to voice information input by the user, or the user may manually retract the pop-up camera 20 into the front surface 11 of the housing, which is not limited in the embodiment of the present invention.
102. The family education device controls the pop-up camera to detect, within a learning scene located below the pop-up camera and bounded by the included angle, a learning area pointed to by the user.
In the embodiment of the invention, after the pop-up camera has popped up, it can start to detect in real time, within the learning scene located below it and bounded by the included angle, the learning area pointed to by the user.
In this learning scene, the colors that appear at least include: the background color of the book or exercise book, which is approximately white or light; the color of the font on the book or exercise book; and the color of the user's finger, pointing pen, or pointing laser point that points at the font. The gray value corresponding to white is 255 and the gray value corresponding to black is 0. Assume, for example, that in this learning scene the gray values of the font on the book or exercise book lie in the range 0-30 and the gray values of the page background lie in the range 150-255.
As an alternative implementation, the process of detecting the learning scenario may be:
a. and selecting the gray value of the color of the book or the exercise book and the gray value of the font on the book or the exercise book as a plurality of areas 1 by adopting a threshold segmentation method.
The threshold segmentation method comprises threshold segmentation and connected domain segmentation, the home education device screens out the region 0 with the gray values of 0-30 and 150-255 in the learning scene through the threshold segmentation, and then the region 0 is segmented into a plurality of independent regions 1 through the connected domain segmentation. This results in a region 1 with both background and text. The threshold segmentation method realizes a preprocessing method for dividing the image by using the gray value, compresses the data volume, and simplifies the analysis and processing steps.
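The patent does not name an image-processing library; as an illustrative sketch only, the threshold segmentation and connected-domain segmentation of this step could be written with OpenCV and NumPy as follows (the gray ranges follow the example values above, and the minimum blob size is an assumed parameter):

```python
import cv2
import numpy as np

def segment_candidate_regions(gray_frame: np.ndarray) -> list:
    """Threshold the frame into region 0, then split it into independent regions 1."""
    font_mask = cv2.inRange(gray_frame, 0, 30)        # font-colored pixels
    page_mask = cv2.inRange(gray_frame, 150, 255)     # light page-background pixels
    region0 = cv2.bitwise_or(font_mask, page_mask)

    # Connected-domain segmentation: each label becomes an independent region 1.
    num_labels, labels = cv2.connectedComponents(region0)
    regions1 = []
    for label in range(1, num_labels):                # label 0 is the unselected remainder
        mask = np.uint8(labels == label) * 255
        if cv2.countNonZero(mask) > 100:              # drop tiny noise blobs (assumed size)
            regions1.append(mask)
    return regions1
```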
b. The plurality of regions 1 are edge-extracted into a plurality of regions 2 by using an edge extraction method.
The edge extraction method includes edge extraction and template-size selection: the family education device extracts the edges within the plurality of regions 1 by means of an edge extraction operator, and then segments, from the areas enclosed by those edges, the areas whose size and shape are similar to the size and shape of the template, forming a plurality of regions 2. With this algorithm, the preprocessed regions 1 are turned into regions 2 that are close to the template, which makes the subsequent template matching more convenient.
c. And comparing and matching the plurality of areas 2 with a preset template by using a template matching method, and selecting an area 2 with the highest matching degree as a learning area.
The family education device extracts the prefabricated template, obtains through a finite number of tests and calculations the matching parameters that give the highest matching degree between the templates and the regions 2, inputs these matching parameters into a template matching operator suited to this scene, performs template matching between the plurality of regions 2 and the prefabricated template, and selects the region 2 with the highest matching degree as the learning area. Using this template matching method reduces the amount of calculation of the family education device and allows the learning area to be found quickly.
It should be noted that, in this step 102, the predetermined template and the plurality of regions 2 to be compared are all images with uncorrected lens distortion.
With this image detection processing, even in the presence of interference the family education device can distinguish the learning area pointed to by the user from other interfering areas by template matching, truly obtain the area pointed to by the user, and select it as the learning area.
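A minimal sketch of this template-matching selection, again assuming OpenCV (the prefabricated template and the resizing step are illustrative simplifications):

```python
import cv2
import numpy as np

def pick_learning_region(regions2: list, template: np.ndarray) -> np.ndarray:
    """Return the candidate region 2 whose template-match score is highest."""
    best_region, best_score = None, -1.0
    for region in regions2:
        # Resize each candidate to the template size so the match result is a single score.
        patch = cv2.resize(region, (template.shape[1], template.shape[0]))
        score = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED)[0][0]
        if score > best_score:
            best_region, best_score = region, score
    return best_region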
103. The family education device controls the pop-up camera to photograph the learning area to capture a learning image.
In the embodiment of the invention, after the learning area pointed by the user is detected, the family education equipment needs to shoot and process the learning image so as to obtain the knowledge points and the problems in the learning image.
As an alternative embodiment, this step may include:
a: shooting the learning area to obtain an area 3
b: the distortion of the region 3 is corrected to be a region 4.
Since the shooting angle of the pop-up camera is not perpendicular to the shot area, the shot image has distortion caused by the non-perpendicular shooting angle. The mapping relation between the distorted image and the undistorted image is derived by the distortion model, and then the undistorted image is calculated to be the area 4.
By adopting the image processing mode, character distortion formed by the parameter setting of the pop-up camera and the included angle between the camera and the learning area can be removed, so that character recognition can be accurately carried out on the learning area.
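As a sketch of this distortion (keystone) correction, assuming OpenCV, the mapping from the obliquely photographed region 3 to the fronto-parallel region 4 can be expressed as a perspective warp; the four source corners would come from the detected page outline and are placeholders here:

```python
import cv2
import numpy as np

def correct_tilt(region3: np.ndarray, corners, out_w: int = 800, out_h: int = 600) -> np.ndarray:
    """Warp the obliquely photographed region 3 into a fronto-parallel region 4."""
    # corners: the four page corners in region 3, ordered top-left, top-right,
    # bottom-right, bottom-left (an assumed input from earlier detection).
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    homography = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(region3, homography, (out_w, out_h))
```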
c: the region 4 is subjected to an expansion operation and/or an erosion operation to form a learning image.
The dilation operation expands the highlight part in the image, enlarging its neighborhood, so that after the dilation operation the highlight area is larger than in the original image;
the erosion operation erodes the highlight part in the image, shrinking its area, so that after the erosion operation the highlight area is smaller than in the original image. Dilation and/or erosion operations are used to eliminate noise in region 4 and to separate or connect adjacent pixels, making the image easier to recognize.
104. The family education device searches for a search result matching the learning image and outputs the search result.
In the embodiment of the invention, this step includes: performing OCR (Optical Character Recognition) processing on the learning image;
and searching on the characters selected after the OCR processing and outputting the search result. The selected characters can be searched via a search engine or in a database carried by the family education device itself, and the search result is output.
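As a sketch only (the patent names neither an OCR engine nor a search backend), Tesseract via pytesseract and a placeholder search call are assumed below:

```python
import pytesseract

def recognize_and_search(learning_image, local_db=None):
    """OCR the learning image and look the text up locally or via a search engine."""
    text = pytesseract.image_to_string(learning_image, lang="chi_sim+eng").strip()
    if local_db is not None and text in local_db:      # on-device database first
        return local_db[text]
    return search_engine_query(text)                   # hypothetical remote search call

def search_engine_query(text: str):
    """Placeholder for the remote search-engine query; not specified by the patent."""
    raise NotImplementedError
```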
By implementing the method described in fig. 1, the family education device can automatically pop up the pop-up camera according to the voice search instruction sent by the user, shoot the learning area pointed by the user, retrieve the characters in the shot learning image, and finally output the result, so that the search convenience and the intelligence degree can be improved, and the learning efficiency of the user can be further effectively improved.
Example two
Referring to fig. 3, fig. 3 is a flowchart illustrating another searching method applied to a family education device according to a second embodiment of the present invention. In the method described in fig. 3, a pop-up camera is embedded in the front surface of the housing of the family education device, the front surface faces the user, and the pop-up camera can be freely popped out of or retracted into the front surface of the housing. As shown in fig. 3, the search method applied to the family education device may include the following steps:
301. the family education equipment judges whether the input voice search instruction contains keywords for controlling the pop-up camera to pop up or not, if yes, step 302 is executed; if not, the process is ended.
In the embodiment of the present invention, a microphone device may be built into the family education device. The microphone device collects the input voice search instruction, and the device analyzes, through Natural Language Processing (NLP), whether the input voice search instruction contains a keyword; for example, the keyword may be "on", "pop", "shoot", "photo", or the like. If the analysis finds that the input voice search instruction contains such a keyword, the pop-up camera is controlled to pop up.
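A minimal sketch of the keyword check, assuming the voice search instruction has already been transcribed to text; the keyword list follows the examples above, and the real device would use a fuller NLP analysis rather than simple substring matching:

```python
# Keyword examples taken from the text above (English renderings of the original terms).
WAKE_KEYWORDS = ("on", "pop", "shoot", "photo")

def contains_pop_up_keyword(transcript: str) -> bool:
    """Return True when the transcribed instruction asks for the pop-up camera."""
    transcript = transcript.lower()
    return any(keyword in transcript for keyword in WAKE_KEYWORDS)
```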
302. The family education equipment judges whether the voice characteristics corresponding to the input voice search instruction are matched with legal voice characteristics preset by the family education equipment, if so, the step 303 is executed; if not, the flow is ended.
The preset legal voice features are voice features stored in advance, before the family education device is used. For example, before use, the children, parents, and other persons who are allowed to control the family education device can each record the keyword voice. After detecting that the voice search instruction contains the keyword, the device judges whether the instruction matches the pre-recorded voice features. This prevents the pop-up camera from being popped up by mistake when it is triggered by the voice of unrelated persons while the family education device is in use.
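A crude sketch of the voice-feature match follows; librosa MFCC features averaged over time and a cosine-distance threshold are assumptions, and a production speaker check would use dedicated speaker-verification embeddings rather than this simplification:

```python
import librosa
import numpy as np

def is_legal_speaker(wav_path: str, enrolled_mfcc: np.ndarray, threshold: float = 0.25) -> bool:
    """Compare the command's averaged MFCC vector with the pre-enrolled one."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20).mean(axis=1)
    cos = np.dot(mfcc, enrolled_mfcc) / (np.linalg.norm(mfcc) * np.linalg.norm(enrolled_mfcc))
    return (1.0 - cos) < threshold      # small cosine distance -> treated as the legal speaker
```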
303. The family education device controls the pop-up camera to pop up from the front surface of the housing.
304. The family education device controls the pop-up camera to detect, within a learning scene located below the pop-up camera and bounded by the included angle, a learning area pointed to by the user.
Step 304 is the same as step 102 in the first embodiment, and is not described herein again.
305. The family education apparatus detects pointing position information of the user from the learning area.
As an optional implementation manner, a non-learning area opposite the learning area is selected within the learning scene, and a pointing-position area of the user's finger, pointing pen, or pointing laser point, whose gray value differs greatly from the background gray value of the learning scene, is selected from the non-learning area; the pointing position information is the coordinate information associated with this pointing-position area.
The selection method may use a threshold segmentation method, a shape matching method, or the like.
As another alternative, when the gray values of the pointing-position area of the user's finger, pointing pen, or pointing laser point are too close to the gray value of the background, it is difficult to distinguish the pointing-position area from the background. In that case a dynamic detection method is used: the jitter of the user's finger, pointing pen, or laser point during pointing causes the occluded area of the background to keep changing, so the background pictures of different frames differ. The family education device detects the occluded area of the background in the background pictures of different frames, calculates the average size and average center position of that occluded area across the frames, takes the occluded area as the template to be matched, and then selects the pointing-position area of the user's finger, pointing pen, or pointing laser point using a template matching method.
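A sketch of this dynamic detection, assuming OpenCV and consecutive grayscale frames, computes the average center and size of the changing (occluded) area like this:

```python
import cv2
import numpy as np

def detect_pointer_region(frames: list, diff_thresh: int = 25):
    """Return the average center and average size of the occluded area across frames."""
    centers, sizes = [], []
    for prev, cur in zip(frames, frames[1:]):
        diff = cv2.absdiff(prev, cur)                               # pixels changed by the jitter
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        moments = cv2.moments(mask)
        if moments["m00"] > 0:
            centers.append((moments["m10"] / moments["m00"],
                            moments["m01"] / moments["m00"]))
            sizes.append(cv2.countNonZero(mask))
    if not centers:
        return None
    return np.mean(centers, axis=0), float(np.mean(sizes))
```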
306. And the family education equipment determines a target search range in the learning area according to the pointing position information.
As an alternative embodiment, the family education device may use the region growing operator to determine the size of the region growing and grow the target search range according to the pointing position information. That is, if the pointing position region is small, the region growth is large, and if the pointing position region is large, the region growth is small.
The stopping of the region growth is determined from the pixel conditions at the growth position: a pixel growing to the right stops growing when the gray value to its right and the gray value at that point no longer change within a certain area. For example, the region growing starts from the center point of the question text that the user wants to ask about; this center point is usually a colored font, so the region grows outward from the font, and during the growth the surrounding gray values alternate, from the center point to the right, between the gray value of the font color and the gray value of the background area, until all the surrounding gray values are the gray value of the background area. At that point the target search range to be covered has been fully covered, the growth stops, and the target search range is obtained. The target search range is the range grown from the pointing point, such as the pointing-position area of the user's finger, pointing pen, or pointing laser point; by growing from the area the user points at, the target search range the user wants to query can be found more accurately, which resolves the user's difficulty.
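A minimal region-growing sketch follows (pure NumPy; the background gray value, seed pixel, and tolerance are assumed inputs, and unlike the description above this simplified version stops each branch at the first background pixel instead of tolerating small gaps between characters):

```python
from collections import deque
import numpy as np

def grow_target_range(gray: np.ndarray, seed: tuple, bg_gray: int, tol: int = 40) -> np.ndarray:
    """Grow from the pointed seed pixel; each branch stops when it reaches background gray."""
    h, w = gray.shape
    visited = np.zeros((h, w), dtype=bool)
    region = np.zeros((h, w), dtype=bool)
    queue = deque([seed])                               # seed = (row, col) of the pointed center
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or visited[y, x]:
            continue
        visited[y, x] = True
        if abs(int(gray[y, x]) - bg_gray) <= tol:
            continue                                    # background gray value: stop this branch
        region[y, x] = True                             # font-colored pixel: part of the range
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            queue.append((y + dy, x + dx))
    return region                                       # mask of the grown target search range
```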
As an alternative implementation, after the target search range is determined, query information is output to ask whether the determined target search range is the range the user needs to search; the query information may be, for example: "Is the target search range the range you need to search?" If the user answers with an affirmative word or makes a corresponding affirmative gesture, the target search range is considered to be the search range the user needs. An affirmative word may be "right", "yes", "OK", or the like, and an affirmative gesture may be extending the thumb with the other four fingers making a fist, an "OK" gesture, or the like.
If the user answers with a negative word or makes a corresponding negative gesture, the target search range is not considered to be the search range the user needs. A negative word may be "not right", "no", "wrong", or the like, and a negative gesture may be a thumbs-down with the other four fingers making a fist, waving the hand, or the like. When the family education device detects that the user answers with a negative word or makes a corresponding negative gesture, it returns to step 305 to detect the user's pointing position information again.
307. The family education apparatus controls the pop-up camera to photograph the target search range to capture the learning image.
In the embodiment of the invention, after the learning area pointed by the user is detected, the learning image needs to be shot and processed to obtain the knowledge points and problems in the learning image.
As an alternative embodiment, this step may include:
a: shooting the learning area to obtain an area 3
b: the distortion of the region 3 is corrected to be a region 4.
Since the pop-up camera is not a region photographed by the vertical photographing, a photographed image may have distortion caused by a photographing angle being not vertical. The mapping relation between the distorted image and the undistorted image is derived by the distortion model, and then the undistorted image is calculated to be the area 4.
c: the region 4 is subjected to an expansion operation and/or an erosion operation to form a learning image.
The dilation operation expands the highlight part in the image, enlarging its neighborhood, so that after the dilation operation the highlight area is larger than in the original image; the erosion operation erodes the highlight part in the image, shrinking its area, so that after the erosion operation the highlight area is smaller than in the original image.
The dilation, erosion, opening, or closing operation is used to eliminate noise in region 4 and to separate or connect adjacent pixels, making the image easier to recognize.
308. The family education device searches out at least one search result matching the learning image.
In the embodiment of the invention, the family education equipment performs OCR processing on the learning image. And placing the characters selected after the OCR processing into a database of a search engine or the family education equipment for searching, and searching out at least one search result matched with the learning image.
309. And the family education equipment determines a search result with the highest matching degree from at least one search result matched with the learning image as a target search result.
The target search result can be answer to a question, thought to solve a question, knowledge point solution, learning formula, and the like.
In the embodiment of the invention, the search result with the highest matching degree to the characters selected by OCR is determined from the at least one search result and output as the target search result. The highest matching degree means that the characters selected by OCR appear in the same order as, or have a high repetition rate with, the characters in the search engine or in the database of the family education device itself.
For example, if the OCR-selected text is "language text", and the search engine or the database of the family education device itself contains "language book", "text", and "text … book", then "language book" has the highest matching degree and is taken as the target search result.
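As an illustration of picking the result with the highest matching degree, a character-level similarity score (Python's difflib, an assumption not named in the patent) could be used:

```python
from difflib import SequenceMatcher

def pick_target_result(ocr_text: str, results: list) -> str:
    """Return the search result with the highest matching degree to the OCR-selected text."""
    return max(results, key=lambda r: SequenceMatcher(None, ocr_text, r).ratio())

# e.g. pick_target_result("language text", ["language book", "text", "text ... book"])
# returns the candidate whose wording is closest to the OCR text.
```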
310. The family education equipment sends information containing the target search result, the learning image and the prompt field to guardian equipment associated with the family education equipment; the prompt field is used for prompting the guardian equipment to send learning guide information to the family education equipment, and the learning guide information is used for guiding a user to obtain a target search result according to the learning image.
In the embodiment of the present invention, the prompt field may be similar to "your child encounters difficulty in learning, please give suggested learning guidance information based on the learning image and the target search result". And sending information containing the target search result, the learning image and the prompt field to the guardian equipment associated with the family education equipment together, wherein the aim is to enable the guardian to synchronously know the learning process of the child and give learning guide information of the subject, and the learning guide information can be a solution thought, a subject principle or a knowledge point page number of a book and the like.
As an optional implementation manner, in an embodiment of the present invention, the family education device may analyze a voice feature corresponding to the voice search instruction input in step 301, and recognize, according to the voice feature, a device identifier of the guardian device associated with the voice feature, as the device identifier of the guardian device associated with the family education device at the current time, and further, the family education device may send information including the target search result, the learning image, and the prompt field to the guardian device associated with the family education device.
Further, before the family education device sends the information including the target search result, the learning image and the prompt field to the guardian device associated with the family education device, the family education device may further obtain attribute information configured by the guardian device associated with the family education device, where the attribute information may include a permitted reported information time period (e.g., a non-examination time period) and a reported information area range (e.g., a non-examination point area range) of the guardian device associated with the family education device; correspondingly, the family education device can detect whether the current time belongs to the permitted reported information time period (such as a non-examination time period) of the guardian device associated with the family education device, if so, the family education device can detect whether the current position is located in the permitted reported information area range (such as a non-examination point area range) of the guardian device associated with the family education device, and if so, the family education device sends the information containing the target search result, the learning image and the prompt field to the guardian device associated with the family education device.
311. The family education equipment acquires the learning guide information sent by the guardian equipment.
In the embodiment of the invention, after the guardian device feeds back the learning guidance information, the family education device acquires the information for later use.
312. The family education device outputs the learning guidance information and starts timing.
In the embodiment of the invention, the family education device outputs the learning guidance information first, so that the user (student) can work out the question, or understand the knowledge point concerned, independently with the help of the learning guidance information, instead of copying an answer directly as soon as the target search result is obtained from the family education device.
313. When the timed duration reaches the designated duration, the family education device outputs the target search result.
In the embodiment of the invention, the target search result is output when the timed duration of the family education device reaches the designated duration. This prevents the user from wasting a large amount of time puzzling over a question that they cannot solve. At the same time, by using timing so that the target search result is not output immediately, the user does not simply copy the answer but thinks the question through once before answering it with reference to the answer.
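A sketch of this "guidance first, answer later" timing follows; threading.Timer and the delay value are assumptions, since the patent only requires outputting the target search result once the timed duration reaches the designated duration:

```python
import threading

def output_with_delay(guidance: str, target_result: str,
                      designated_seconds: float = 600.0) -> threading.Timer:
    """Show the learning guidance immediately, then the target search result after the delay."""
    print(guidance)                                     # output the guidance and start timing
    timer = threading.Timer(designated_seconds, lambda: print(target_result))
    timer.start()
    return timer
```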
By implementing the method described in fig. 3, after the family education device finds the target search result, it can send information containing the target search result, the learning image, and the prompt field to the guardian device associated with the family education device, and, by receiving the learning guidance information sent by the guardian device, the user first views the learning guidance information and only obtains the target search result after a period of time. Because the user cannot copy the answer in the target search result immediately and must think for a while, the convenience and intelligence of searching can be improved, and the learning efficiency of the user can be further effectively improved.
EXAMPLE III
Referring to fig. 4, fig. 4 is a flowchart illustrating a searching method applied to a family education device according to a third embodiment of the present invention. In the method described in fig. 4, a pop-up camera is embedded in the front surface of the housing of the family education device, the front surface faces the user, and the pop-up camera can be freely popped out of or retracted into the front surface of the housing. As shown in fig. 4, the search method applied to the family education device may include the following steps:
401-408: steps 401-408 are the same as steps 301-308 in the second embodiment and are not described herein again.
409. And the family education equipment reports the learning image and at least one search result matched with the learning image to a teacher terminal associated with the family education equipment.
In the embodiment of the invention, the at least one search result obtained by searching is reported to the teacher terminal associated with the family education device, so that the teacher can judge which search result is suitable for the user; that is, problem-solving skills or problem-solving ideas beyond the user's learning scope are excluded, and the teacher selects the search result that is more suitable for the user and more easily accepted by the user.
410. The family education device records a certain search result selected by the teacher terminal from at least one search result matched with the learning image as a target search result.
411. The family education equipment acquires the learning guidance information sent by the teacher terminal.
412-413: steps 412-413 are the same as steps 312-313 in the second embodiment and are not described herein again.
By implementing the third embodiment, the learning image and the at least one search result matched with the learning image are reported to the teacher terminal associated with the family education device, and the search result selected by the teacher terminal is recorded as the target search result. The user thus obtains a target search result and learning guidance information suited to the user's learning progress, and can answer the question according to the learning guidance information and the target search result, so that the search matches the user's learning progress.
Example four
Referring to fig. 5, fig. 5 is a schematic diagram of a family education device according to a fourth embodiment of the present invention. In the family education device shown in fig. 5, a pop-up camera is embedded in the front surface of the housing of the family education device, the front surface faces the user, and the pop-up camera can freely pop out of the front surface of the housing or be retracted into the front surface of the housing (not shown in fig. 5). As shown in fig. 5, the family education device may include:
a first control unit 501, configured to control the pop-up camera to pop up from the front surface of the housing according to an input voice search instruction, wherein after the pop-up camera is popped out of the front surface of the housing, a preset included angle is formed between the central axis of the lens of the pop-up camera and the front surface of the housing;
a second control unit 502, configured to control the pop-up camera to detect, within a learning scene located below the pop-up camera and bounded by the included angle, a learning area pointed to by a user;
a third control unit 503 for controlling the pop-up camera to photograph the learning region to capture a learning image;
a search unit 504 for searching for a search result matching the learning image and outputting the search result.
As an alternative implementation, the second control unit 502 may also detect the learning scene, for example as follows: a. Using a threshold segmentation method, the areas other than the user's finger, the pointing pen, and the pointing laser point, whose gray values differ little from the background, are selected as a plurality of regions 1.
The threshold segmentation method includes threshold segmentation and connected domain segmentation, the second control unit 502 screens out the region 0 with the gray value meeting the requirement in the learning scene through threshold segmentation, and then the region 0 is segmented into a plurality of independent regions 1 through connected domain segmentation. The threshold segmentation method realizes a preprocessing method for dividing the image by using the gray value, compresses the data volume, and simplifies the analysis and processing steps.
b. The plurality of regions 1 are edge-extracted into a plurality of regions 2 by using an edge extraction method.
The edge extraction method includes edge extraction and template size selection, the second control unit 502 extracts edges in the plurality of regions 1 through an edge extraction operator, and then divides the plurality of regions similar to the template size and shape from the plurality of regions included in the edges in the plurality of regions 1 into a plurality of regions 2. By adopting the algorithm, the areas 1 subjected to image preprocessing are changed into the areas 2 close to the template, so that the subsequent template matching is more convenient.
c. And comparing and matching the plurality of areas 2 with a preset template by using a template matching method, and selecting an area 2 with the highest matching degree as a learning area.
The second control unit 502 extracts the prefabricated template, obtains a matching parameter with the highest matching degree between the plurality of templates and the plurality of regions 2 through finite tests and calculation, inputs the matching parameter into a template matching operator more suitable for the scene, performs template matching on the plurality of regions 2 and the prefabricated template, and selects one region 2 with the highest matching degree as a learning region. By adopting the template matching method, the calculation amount of the family education equipment can be reduced, and the learning area can be quickly found out.
In the above embodiment, when there is interference, the second control unit 502 may distinguish the learning region pointed by the user from other interference regions by template matching, really obtain the region pointed by the user, and select the learning region.
As an alternative embodiment, the third control unit 503 may further photograph the learning region to obtain a region 3, and then correct the distortion of the region 3 into a region 4.
Since the shooting angle of the pop-up camera is not perpendicular to the shot area, the shot image has distortion caused by the non-perpendicular shooting angle. The mapping relation between the distorted image and the undistorted image is derived by the distortion model, and then the undistorted image is calculated to be the area 4.
By adopting the image processing method, the third control unit 503 can remove the character distortion formed by the parameter setting of the pop-up camera and the included angle between the camera and the learning area, thereby accurately performing character recognition on the learning area.
The third control unit 503 finally performs the dilation operation and/or erosion operation on the region 4 to become a learning image.
The expansion operation is used for expanding the highlight part in the image, expanding the field of the highlight part, and enabling the highlight part to have a highlight area larger than the original image after the expansion operation; the erosion operation is to erode the highlight portion in the image, to reduce the area of the highlight portion, and to have a highlight area smaller than the original image after the erosion operation.
The third control unit 503 uses dilation and/or erosion operations to eliminate noise in region 4 and to separate or connect adjacent pixels, making the image easier to recognize.
The search unit 504 may also perform OCR (Optical Character Recognition) processing on the learning image.
In this implementation, the search unit 504 may search on the characters selected after the OCR processing and output the search result. The selected characters can be searched via a search engine or in a database carried by the family education device itself, and the search result is output.
The family education device as described in fig. 5 can automatically pop up a pop-up camera according to a voice search instruction sent by a user, shoot a learning area pointed by the user, retrieve characters in the shot learning image, and finally output a result, so that the convenience and the intelligent degree of search can be improved, and the learning efficiency of the user can be further effectively improved.
EXAMPLE five
Referring to fig. 6, fig. 6 is a schematic diagram of another family education device disclosed in the fifth embodiment of the present invention. The family education device shown in fig. 6 is obtained by optimizing the family education device shown in fig. 5. Compared with the family education device shown in fig. 5, in the family education device shown in fig. 6:
the first control unit 501 may include:
the first judgment subunit 5011 is configured to judge whether the input voice search instruction includes a keyword for controlling pop-up of the pop-up camera.
As an alternative embodiment, the family education device may have a built-in microphone device (not shown) for collecting the input voice search command, the first judging subunit 5011 analyzes whether the input voice search command includes a keyword through Natural Language Processing (NLP), for example, the keyword may be "on", "pop", "shoot", "photo", or the like, and if the input voice search command includes a keyword, the pop-up camera is controlled to pop up.
The second judging subunit 5012 is configured to, when the voice search instruction includes a keyword for controlling the pop-up camera to pop up, judge whether the voice feature corresponding to the input voice search instruction matches a legal voice feature preset by the family education device.
As an alternative embodiment, before use, the children, parents, and other persons who are allowed to control the family education device can each record the keyword voice. After detecting that the voice search instruction contains the keyword, the second judging subunit 5012 judges whether the instruction matches the pre-recorded voice features. This prevents the pop-up camera from being popped up by mistake when it is triggered by the voice of unrelated persons while the family education device is in use.
The first control subunit 5013 is configured to control the pop-up camera to pop up from the front surface of the housing when the determination result of the second determination subunit 5012 is a match.
As an alternative embodiment, in the family education device shown in fig. 6:
the third control unit 503 may include:
a detecting sub-unit 5031 configured to detect pointing position information of the user from the learning area.
As an alternative embodiment, the detecting sub-unit 5031 selects a non-learning area opposite the learning area within the learning scene, and selects from it a pointing-position area of the user's finger, pointing pen, or pointing laser point whose gray value differs greatly from the background gray value of the learning scene; the pointing position information is the coordinate information associated with this pointing-position area. The selection method may use a threshold segmentation method, a shape matching method, or the like.
A first determining sub-unit 5032, configured to determine the target search range in the learning area according to the pointing position information.
As an alternative embodiment, the first determining sub-unit 5032 may use a region growing operator to determine the size of the region growing and grow the target search range according to the pointing position information. That is, if the pointing position region is small, the region growth is large, and if the pointing position region is large, the region growth is small.
It can be known that the stop of the region growth is determined according to the pixel condition at the growth position, and the growth is stopped if the gray value at the right side of one pixel growing to the right and the gray value at the point do not change in a certain region. For example, the region growing starts from the central point of the question text that the user wants to ask the question, the central point is often a colored font, then the region grows from the font, and the gray values around the region in the growing process are sequentially from the central point to the right: and if the gray value of the font color is from the gray value of the background area to the gray value of the font color is from the gray value of the font color to the gray value of the background area … … until the gray values are all the gray values of the background area, the target search range to be covered is completely covered, the growth is stopped, and the target search range is obtained. The target search range is a range in which the area grows from the pointing point such as the finger of the user, the pointing pen, the pointing position area pointing to the laser point, and the like, and the target search range which the user wants to inquire can be found out more accurately through the area pointed by the user, so that the difficulty of the user is solved.
A second control sub-unit 5033 for controlling the pop-up camera to photograph the target search range to capture the learning image.
As an alternative embodiment, the second control sub-unit 5033 first photographs the learning region to obtain region 3, and then corrects the distortion of region 3 to obtain region 4.
Since the pop-up camera does not photograph the region vertically from above, the captured image may be distorted because the photographing angle is oblique. The mapping relationship between the distorted image and the undistorted image is derived from a distortion model, and the undistorted image is then computed as region 4.
Finally, a dilation operation and/or an erosion operation is performed on region 4 to form the learning image.
The dilation, erosion, opening, or closing operation is used to eliminate noise in the region and to separate or connect adjacent pixels, making the image easier to recognize.
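A simplified sketch of the region 3 → region 4 → learning image pipeline is given below; approximating the distortion model with a perspective transform and using a fixed 3×3 kernel are assumptions made only for illustration.

```python
import cv2
import numpy as np

def correct_and_clean(region3_bgr, src_corners, out_size=(800, 600)):
    """region 3 -> region 4: warp the obliquely photographed quadrilateral to a
    front view; then morphology to suppress noise and join broken strokes."""
    w, h = out_size
    dst_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(src_corners), dst_corners)
    region4 = cv2.warpPerspective(region3_bgr, M, (w, h))

    gray = cv2.cvtColor(region4, cv2.COLOR_BGR2GRAY)
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)   # connect adjacent strokes
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, kernel)  # remove small noise
    return cleaned                                               # the learning image
```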
The family education device described in fig. 6 can determine whether the voice feature of the voice search instruction is legal and whether the instruction contains the keyword; if so, it automatically pops up the pop-up camera according to the voice search instruction issued by the user, photographs the learning area pointed at by the user, retrieves results for the characters in the captured learning image, and finally outputs the result, so that search convenience and the degree of intelligence can be improved, and the learning efficiency of the user can be further effectively improved.
EXAMPLE six
Referring to fig. 7, fig. 7 is a schematic diagram of another family education apparatus according to a sixth embodiment of the present invention. The family education device shown in fig. 7 is obtained by optimizing the family education device shown in fig. 6. Compared with the family education device shown in fig. 6, in the family education device shown in fig. 7, the search unit 504 may include:
a first search sub-unit 5041, a second determining sub-unit 5042, a first interaction sub-unit 5043, and a first output control sub-unit 5044. Wherein:
a first searching sub-unit 5041, configured to search out at least one search result matching the learning image.
As an alternative embodiment, the first search sub-unit 5041 may further perform OCR processing on the learning image, submit the characters extracted by the OCR processing to a search engine or to the database of the family education device itself, and search out at least one search result matching the learning image.
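A hedged sketch of this OCR-then-search step follows; pytesseract as the OCR engine and the search_database callback are assumptions, since the embodiment does not name a specific engine or backend.

```python
import pytesseract  # assumed OCR engine; the patent does not specify one

def search_by_learning_image(learning_image, search_database):
    """OCR the learning image, then look the recognized text up in a search
    engine / the device's local question bank; return candidate results."""
    query_text = pytesseract.image_to_string(learning_image, lang="chi_sim+eng")
    query_text = " ".join(query_text.split())        # normalize whitespace
    return search_database(query_text)               # at least one matching result
```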
A second determining sub-unit 5042, configured to determine, as the target search result, a search result that matches the learning image to the highest degree among the at least one search result that matches the learning image.
The target search result may be an answer to a question, a problem-solving idea, an explanation of a knowledge point, a learning formula, or the like.
The second determining sub-unit 5042 may further determine, as the target search result to be output, the search result among the at least one search result that most highly matches the characters extracted by the OCR processing. The most highly matching result may be one whose words appear in the same order as, or have a high repetition rate with, the words stored in the database of the search engine or of the family education device itself.
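One simple way to score such order-and-repetition matching is sketched below; the SequenceMatcher ratio and the question_text field of each candidate are assumptions used only to illustrate picking the best-matching result.

```python
from difflib import SequenceMatcher

def pick_target_result(ocr_text, candidate_results):
    """Choose the candidate whose stored question text best matches the OCR text;
    SequenceMatcher rewards both shared words and matching word order."""
    def score(candidate):
        return SequenceMatcher(None, ocr_text, candidate["question_text"]).ratio()
    return max(candidate_results, key=score)
```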
A first interaction subunit 5043, configured to send information including the target search result, the learning image, and the prompt field to a guardian device associated with the family education device; the prompting field is used for prompting the guardian equipment to send learning guide information to the family education equipment, and the learning guide information is used for guiding a user to obtain a target search result according to a learning image; and acquiring learning guide information sent by the guardian equipment.
As an alternative embodiment, the first interaction subunit 5043 may also send the information including the target search result, the learning image, and the prompt field to the guardian device associated with the family education device, so that the guardian can synchronously follow the child's learning progress and provide learning guidance information for the question, which may be a problem-solving idea, the principle behind the question, the page number of the relevant knowledge point in a textbook, or the like. After the guardian device feeds back the learning guidance information, that information is acquired and held for later use.
A first output control subunit 5044 configured to output learning guidance information and start timing; and outputting the target search result when the timed duration reaches the specified duration.
As an alternative embodiment, the first output control subunit 5044 may further output the target search result when the counted time length reaches a specified time length.
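A minimal sketch of this guidance-first, answer-later behaviour might look as follows; the show callback and the ten-minute default delay are assumptions.

```python
import threading

def output_with_delay(guidance, target_result, show, delay_seconds=600):
    """Show the learning guidance immediately, start timing, and only reveal
    the target search result once the specified duration has elapsed."""
    show(guidance)                                   # output learning guidance, start timing
    timer = threading.Timer(delay_seconds, show, args=(target_result,))
    timer.start()
    return timer                                     # caller may cancel the reveal if needed
```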
As an optional implementation manner, in an embodiment of the present invention, the first interaction sub-unit 5043 may analyze the voice feature corresponding to the input voice search instruction and, according to that voice feature, identify the device identifier of the guardian device associated with the voice feature as the device identifier of the guardian device associated with the family education device at the current time; the first interaction sub-unit 5043 may then send the information including the target search result, the learning image, and the prompt field to that guardian device.
Further, before the first interaction subunit 5043 sends the information including the target search result, the learning image, and the prompt field to the guardian device associated with the family education device, it may acquire attribute information configured by that guardian device. The attribute information may include the time periods during which reporting is permitted (e.g., non-examination time periods) and the area range within which reporting is permitted (e.g., the range outside examination sites). Accordingly, the first interaction subunit 5043 may detect whether the current time falls within a permitted reporting time period; if so, it may further detect whether the current location lies within the permitted reporting area range; and if so, the family education device sends the information including the target search result, the learning image, and the prompt field to the guardian device associated with the family education device.
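The permitted-time-period and permitted-area checks could be approximated as in the sketch below; the rectangular area model and the helper name are assumptions for illustration.

```python
from datetime import datetime, time

def may_report_to_guardian(now, location, allowed_periods, allowed_area):
    """Report to the guardian device only when the current time falls in a permitted
    reporting period (e.g. non-examination time) and the current location lies
    inside the permitted reporting area (e.g. outside examination sites)."""
    in_period = any(start <= now.time() <= end for start, end in allowed_periods)
    (lat_min, lat_max), (lon_min, lon_max) = allowed_area
    in_area = lat_min <= location[0] <= lat_max and lon_min <= location[1] <= lon_max
    return in_period and in_area

# Example: reporting allowed 18:00-21:00 within an approximate rectangle.
# ok = may_report_to_guardian(datetime.now(), (23.02, 113.12),
#                             [(time(18, 0), time(21, 0))],
#                             ((22.9, 23.1), (113.0, 113.3)))
```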
In addition to implementing the functions of the family education device described in fig. 6, the family education device described in fig. 7 can, after the target search result is found, send information including the target search result, the learning image, and the prompt field to the guardian device associated with the family education device, receive the learning guidance information sent by the guardian device, and let the user view the learning guidance information for a period of time before obtaining the target search result. The user therefore cannot simply copy the answer in the target search result right away and is made to think for a while, which improves the user's learning initiative.
EXAMPLE seven
Referring to fig. 8, fig. 8 is a schematic view of another family education apparatus according to a seventh embodiment of the present invention. The family education device shown in fig. 8 is obtained by optimizing the family education device shown in fig. 6. Compared with the family education device shown in fig. 6, in the family education device shown in fig. 8, the search unit 504 may include:
a second searching sub-unit 5045, configured to search out at least one search result matching the learning image;
As an optional implementation manner, the second search subunit 5045 may also report the at least one search result obtained by the search to a teacher terminal associated with the family education device, leaving it to the teacher to judge which search result is suitable for the user. That is, problem-solving skills or problem-solving ideas that go beyond the user's current learning scope are eliminated, so that the teacher can select the search result that is more suitable for, and more easily accepted by, the user.
The second interaction subunit 5046 is configured to report the learning image and at least one search result matched with the learning image to a teacher terminal associated with the family education device;
a recording sub-unit 5047, configured to record, as a target search result, a search result selected by the teacher terminal from among the at least one search result matching the learning image;
the second interaction subunit 5046 is further configured to obtain learning guidance information sent by the teacher terminal, where the learning guidance information is used to guide the user to obtain a target search result according to the learning image;
a second output control subunit 5048, configured to output learning guidance information and start timing; and outputting the target search result when the timed duration reaches the specified duration.
In addition to implementing the functions of the family education device described in fig. 6, the family education device described in fig. 8 may report the learning image and at least one search result matching the learning image to the teacher terminal associated with the family education device, and record the search result selected by the teacher terminal as the target search result. The user thus obtains a target search result and learning guidance information suited to his or her own learning progress, and can answer the question based on that learning guidance information and on a target search result that matches what the user has already learned.
Example eight
As shown in fig. 9, an eighth embodiment of the present invention discloses yet another family education device, including:
a memory 901 in which executable program code is stored;
a processor 902 coupled to a memory;
the processor 902 calls the executable program code stored in the memory 901 to execute any one of the search methods applied to the family education device in figs. 1 to 4.
Example nine
The ninth embodiment of the invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute any one of the search methods applied to a family education device in figs. 1 to 4.
The searching method applied to the family education device and the family education device disclosed by the embodiments of the invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the invention, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the invention.

Claims (6)

1. A searching method applied to a family education device, wherein a pop-up camera is embedded in a front surface of a housing of the family education device for facing a user, and the pop-up camera can freely pop up or retract from the front surface of the housing, the method comprising:
controlling the pop-up camera to pop up the front surface of the shell according to an input voice searching instruction; after the pop-up camera is popped out of the front surface of the shell, a preset included angle is formed between the central axis of the lens of the pop-up camera and the front surface of the shell;
controlling the pop-up camera to detect a learning area which is limited by the included angle and is positioned below the pop-up camera and pointed by a user in a learning scene;
controlling the pop-up camera to shoot the learning area so as to capture a learning image;
searching a search result matched with the learning image, and outputting the search result;
the controlling the pop-up camera to photograph the learning area to capture a learning image includes:
controlling the pop-up camera to shoot the learning area to obtain an area 3; correcting the distortion of the area 3 into an area 4; performing expansion operation and/or corrosion operation on the region 4 to form a learning image;
the searching for a search result matching the learning image and outputting the search result includes:
searching out at least one search result matched with the learning image;
determining a search result with the highest matching degree from the at least one search result matched with the learning image as a target search result;
sending information containing the target search result, the learning image and the prompt field to guardian equipment associated with the family education equipment; the prompt field is used for prompting the guardian equipment to send learning guide information to the family education equipment, and the learning guide information is used for guiding the user to obtain the target search result according to the learning image; the process of sending the information containing the target search result, the learning image and the prompt field to the guardian device associated with the family education device comprises the following steps: analyzing the voice characteristics corresponding to the input voice search instruction, identifying the equipment identifier of the guardian equipment associated with the voice characteristics as the equipment identifier of the guardian equipment associated with the family education equipment at the current time, and sending information containing the target search result, the learning image and the prompt field to the guardian equipment associated with the family education equipment at the current time;
acquiring the learning guide information sent by the guardian equipment;
outputting the learning guidance information and starting timing;
and outputting the target search result when the timed duration reaches the designated duration.
2. The method according to claim 1, wherein the controlling the pop-up camera to pop up the front surface of the housing according to the inputted voice search instruction comprises:
judging whether the input voice search instruction contains a keyword for controlling the pop-up camera to pop up;
if the voice search instruction contains keywords for controlling the pop-up camera to pop up, judging whether the voice features corresponding to the input voice search instruction are matched with legal voice features preset by the family education equipment;
and if the voice features match, controlling the pop-up camera to pop up the front surface of the shell.
3. A home teaching device, wherein a pop-up camera is embedded in a front surface of a housing of the home teaching device for facing a user, and the pop-up camera can freely pop up or retract from the front surface of the housing, the home teaching device comprising:
the first control unit is used for controlling the pop-up camera to pop up the front surface of the shell according to an input voice search instruction; after the pop-up camera is popped out of the front surface of the shell, a preset included angle is formed between the central axis of the lens of the pop-up camera and the front surface of the shell;
the second control unit is used for controlling the pop-up camera to detect a learning area which is limited by the included angle and is positioned below the pop-up camera and pointed by a user in a learning scene;
a third control unit for controlling the pop-up camera to shoot the learning area so as to capture a learning image;
a search unit for searching for a search result matching the learning image and outputting the search result;
the controlling the pop-up camera to photograph the learning area to capture a learning image includes:
controlling the pop-up camera to shoot the learning area to obtain an area 3; correcting the distortion of the area 3 into an area 4; performing expansion operation and/or corrosion operation on the region 4 to form a learning image;
the search unit includes:
the first searching subunit is used for searching out at least one searching result matched with the learning image;
a second determining subunit, configured to determine, as a target search result, a search result with a highest matching degree from the at least one search result matched with the learning image;
the first interaction subunit is used for sending information containing the target search result, the learning image and the prompt field to guardian equipment associated with the family education equipment; the prompt field is used for prompting the guardian equipment to send learning guide information to the family education equipment, and the learning guide information is used for guiding the user to obtain the target search result according to the learning image; and acquiring the learning guidance information sent by the guardian equipment; the process of sending the information containing the target search result, the learning image and the prompt field to the guardian device associated with the family education device comprises the following steps: analyzing the voice characteristics corresponding to the input voice search instruction, identifying the equipment identifier of the guardian equipment associated with the voice characteristics as the equipment identifier of the guardian equipment associated with the family education equipment at the current time, and sending information containing the target search result, the learning image and the prompt field to the guardian equipment associated with the family education equipment at the current time;
a first output control subunit, configured to output the learning guidance information and start timing; and outputting the target search result when the timed duration reaches the specified duration.
4. The family education device of claim 3 wherein the first control unit includes:
the first judgment subunit is used for judging whether the input voice search instruction contains a keyword for controlling the pop-up camera to pop up;
the second judgment subunit is configured to, when the voice search instruction includes a keyword for controlling the pop-up camera to pop up, judge whether a voice feature corresponding to the input voice search instruction matches a legal voice feature preset by the family education device;
and the first control subunit is used for controlling the pop-up camera to pop up the front surface of the shell when the judgment result of the second judgment subunit is matching.
5. A family education device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory for executing the search method of any one of claims 1-2 applied to a family education device.
6. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the search method applied to a family education device according to any one of claims 1 to 2.
CN201910041713.5A 2019-01-16 2019-01-16 Searching method applied to family education equipment and family education equipment Active CN109766413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910041713.5A CN109766413B (en) 2019-01-16 2019-01-16 Searching method applied to family education equipment and family education equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910041713.5A CN109766413B (en) 2019-01-16 2019-01-16 Searching method applied to family education equipment and family education equipment

Publications (2)

Publication Number Publication Date
CN109766413A CN109766413A (en) 2019-05-17
CN109766413B true CN109766413B (en) 2021-04-30

Family

ID=66454076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910041713.5A Active CN109766413B (en) 2019-01-16 2019-01-16 Searching method applied to family education equipment and family education equipment

Country Status (1)

Country Link
CN (1) CN109766413B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111176433B (en) * 2019-10-22 2024-02-23 广东小天才科技有限公司 Search result display method based on intelligent sound box and intelligent sound box

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020184A (en) * 2012-11-29 2013-04-03 北京百度网讯科技有限公司 Method and system utilizing shot images to obtain search results
CN109003478A (en) * 2018-08-07 2018-12-14 广东小天才科技有限公司 A kind of learning interaction method and facility for study

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150033488A (en) * 2013-09-24 2015-04-01 주홍찬 Apparatus and method for learning kids foreign language by using augmented reality.
CN104217197B (en) * 2014-08-27 2018-04-13 华南理工大学 A kind of reading method and device of view-based access control model gesture
US9918006B2 (en) * 2016-05-20 2018-03-13 International Business Machines Corporation Device, system and method for cognitive image capture
CN108471475A (en) * 2018-06-27 2018-08-31 维沃移动通信有限公司 A kind of concealed pick-up head control method and terminal device
CN109063583A (en) * 2018-07-10 2018-12-21 广东小天才科技有限公司 A kind of learning method and electronic equipment based on read operation
CN108961887A (en) * 2018-07-24 2018-12-07 广东小天才科技有限公司 A kind of phonetic search control method and private tutor's equipment
CN109033418A (en) * 2018-08-07 2018-12-18 广东小天才科技有限公司 A kind of the intelligent recommendation method and facility for study of learning Content
CN109192204B (en) * 2018-08-31 2021-05-11 广东小天才科技有限公司 Voice control method based on intelligent equipment camera and intelligent equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020184A (en) * 2012-11-29 2013-04-03 北京百度网讯科技有限公司 Method and system utilizing shot images to obtain search results
CN109003478A (en) * 2018-08-07 2018-12-14 广东小天才科技有限公司 A kind of learning interaction method and facility for study

Also Published As

Publication number Publication date
CN109766413A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN109192204B (en) Voice control method based on intelligent equipment camera and intelligent equipment
CN109635772A (en) A kind of dictation content corrects method and electronic equipment
CN110956138B (en) Auxiliary learning method based on home education equipment and home education equipment
CN109656465B (en) Content acquisition method applied to family education equipment and family education equipment
CN110085068A (en) A kind of study coach method and device based on image recognition
CN109637286A (en) A kind of Oral Training method and private tutor's equipment based on image recognition
CN109710750A (en) One kind searching topic method and facility for study
CN111353501A (en) Book point-reading method and system based on deep learning
CN111026949A (en) Question searching method and system based on electronic equipment
CN116561276A (en) Knowledge question-answering method, device, equipment and storage medium
CN108090424B (en) Online teaching investigation method and equipment
CN109766413B (en) Searching method applied to family education equipment and family education equipment
CN103744971B (en) A kind of method and apparatus of active push information
CN109147002B (en) Image processing method and device
CN111026786A (en) Dictation list generation method and family education equipment
CN111079777B (en) Page positioning-based click-to-read method and electronic equipment
CN110795918B (en) Method, device and equipment for determining reading position
CN111711758B (en) Multi-pointing test question shooting method and device, electronic equipment and storage medium
CN111582281B (en) Picture display optimization method and device, electronic equipment and storage medium
CN110415688B (en) Information interaction method and robot
JP6396813B2 (en) Program, apparatus and method for estimating learning items spent on learning from learning video
CN111078080B (en) Point reading control method and electronic equipment
CN111176430B (en) Interaction method of intelligent terminal, intelligent terminal and storage medium
CN111967328A (en) Method for tutoring operation, intelligent drawing robot and server
CN111028558A (en) Dictation detection method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant