CN111078101A - Recognition method of reading content and electronic equipment

Info

Publication number
CN111078101A
CN111078101A
Authority
CN
China
Prior art keywords: content, user, target image, reading, cursor
Legal status
Granted
Application number
CN201910485276.6A
Other languages
Chinese (zh)
Other versions
CN111078101B (en)
Inventor
彭婕
Current Assignee
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201910485276.6A
Publication of CN111078101A
Application granted
Publication of CN111078101B
Status: Active

Classifications

    • G06F3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F3/04842: Selection of displayed objects or displayed text elements
    • G09B5/04: Electrically-operated educational appliances with audible presentation of the material to be studied


Abstract

The embodiment of the invention relates to the technical field of education, and discloses a recognition method of reading content and electronic equipment. The method comprises the following steps: when a point reading instruction input by a user is detected, judging whether the type of the content in a target area indicated by the user's finger is a picture type, wherein the target area is an area on a paper point-reading carrier; if yes, outputting a target image on a display screen of the electronic equipment, wherein the content in the target image is the content in the target area; recognizing a gesture operation of the user; performing corresponding processing on the target image according to the operation intention indicated by the gesture operation; and selecting the reading content from the content of the processed target image according to a content selection operation executed by the user on the processed target image. By implementing the embodiment of the invention, the recognition accuracy of the electronic equipment can be improved.

Description

Recognition method of reading content and electronic equipment
Technical Field
The invention relates to the technical field of education, in particular to a recognition method of reading contents and electronic equipment.
Background
At present, many electronic devices (such as a family education machine) are widely used in the field of point reading: such an electronic device reads content aloud by recognizing what the user scans on a point-reading carrier (such as paper) with a reading pen. In practice, however, it has been found that a point-reading carrier is usually accompanied by pictures that assist the user in learning, and these pictures are generally small in size. When the user wants a certain part of the content in a picture to be read, the limited size of the picture often makes it difficult for the reading pen to accurately scan the content the user intends to specify, so the electronic device cannot accurately identify the reading content and often reads the wrong content. Therefore, how to effectively improve the accuracy with which the electronic device identifies the reading content is one of the issues that urgently need to be addressed.
Disclosure of Invention
The embodiment of the invention discloses a recognition method of reading contents and electronic equipment, which can improve the recognition accuracy of the electronic equipment.
The first aspect of the embodiments of the present invention discloses a method for identifying a reading content, including:
when a point reading instruction input by a user is detected, judging whether the type of the content in the target area indicated by the finger of the user is a picture type; the target area is an area on the paper point-reading carrier;
if the type of the content in the target area is a picture type, outputting a target image on a display screen of the electronic equipment; the content in the target image is the content in the target area;
recognizing gesture operation of the user;
according to the operation intention indicated by the gesture operation, performing corresponding processing on the target image;
and selecting reading contents from the processed contents of the target image according to the content selection operation executed by the user aiming at the processed target image.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before determining whether a type of content in a target area indicated by a finger of a user is a picture type when a click-to-read instruction input by the user is detected, the method further includes:
detecting whether the electronic equipment is provided with a reflector or not;
if so, adjusting an included angle of the reflector relative to a lens surface of a shooting module of the electronic equipment so that a mirror image exists in the reflector; wherein the mirror image is an image of the content in the target area in the mirror;
and when a click-to-read instruction input by a user is detected, judging whether the type of the content in the target area indicated by the finger of the user is a picture type or not, wherein the judging step comprises the following steps:
when a point reading instruction input by a user is detected, controlling the shooting module to shoot the mirror image in the reflector so as to obtain the target image;
and judging whether the type of the content in the target area indicated by the finger of the user is a picture type or not according to the target image.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the performing, according to the operation intention indicated by the gesture operation, corresponding processing on the target image includes:
if the operation intention indicated by the gesture operation is amplification, performing amplification processing on the target image; or if the operation intention indicated by the gesture operation is reduction, carrying out reduction processing on the target image; or if the operation intention indicated by the gesture operation is translation, acquiring a translation direction during translation, and performing translation processing on the target image according to the translation direction.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the selecting, according to a content selection operation performed by the user on the processed target image, reading content from the content of the processed target image, the method further includes:
acquiring decibels of sound in the surrounding environment of the electronic equipment;
judging whether the decibel number of the sound is smaller than a preset decibel number or not;
if yes, outputting prompt information to prompt the user to wear a specified wearable device;
and when it is detected that the user wears the wearable device, sending the reading content to the wearable device, so that the wearable device delivers the reading content to the user through bone conduction.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the selecting, according to a content selection operation performed by the user on the processed target image, reading content from the content of the processed target image includes:
when it is detected that the finger of the user presses the display screen, displaying a first cursor and a second cursor on the processed target image; the position of the first cursor is the position at which the user's finger presses the display screen;
fixedly displaying the first cursor, detecting the dragging operation of the finger of the user for the second cursor, and labeling the content between the first cursor and the second cursor in the dragging process;
and when the end of the dragging operation is detected, taking the marked content as the reading content selected from the processed content of the target image.
A second aspect of an embodiment of the present invention discloses an electronic device, including:
the device comprises a first judging unit, a second judging unit and a third judging unit, wherein the first judging unit is used for judging whether the type of the content in the target area indicated by the finger of the user is a picture type or not when a point reading instruction input by the user is detected; the target area is an area on the paper point-reading carrier;
the output unit is used for outputting a target image on a display screen of the electronic equipment when the first judging unit judges that the type of the content in the target area is the picture type; the content in the target image is the content in the target area;
the recognition unit is used for recognizing the gesture operation of the user;
the processing unit is used for carrying out corresponding processing on the target image according to the operation intention indicated by the gesture operation;
and the selecting unit is used for selecting the reading content from the processed contents of the target image according to the content selecting operation executed by the user aiming at the processed target image.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the detection unit is used for detecting whether the electronic equipment is provided with a reflector or not before the first judgment unit judges whether the type of the content in the target area indicated by the finger of the user is a picture type or not when the point reading instruction input by the user is detected;
the adjusting unit is used for adjusting an included angle of the reflector relative to a lens surface of a shooting module of the electronic equipment when the detecting unit detects that the reflector is installed on the electronic equipment, so that a mirror image exists in the reflector; wherein the mirror image is an image of the content in the target area in the mirror;
and the first judging unit includes:
the shooting subunit is used for controlling the shooting module to shoot the mirror image in the reflector when a point reading instruction input by a user is detected so as to obtain the target image;
and the judging subunit is used for judging whether the type of the content in the target area indicated by the finger of the user is a picture type according to the target image.
As an alternative implementation, in the second aspect of the embodiment of the present invention:
the processing unit is specifically configured to perform amplification processing on the target image if the operation intention indicated by the gesture operation is amplification; or if the operation intention indicated by the gesture operation is reduction, carrying out reduction processing on the target image; or if the operation intention indicated by the gesture operation is translation, acquiring a translation direction during translation, and performing translation processing on the target image according to the translation direction.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
an obtaining unit, configured to obtain decibels of sound in a surrounding environment of the electronic device after the selecting unit selects, according to a content selection operation performed by the user for the processed target image, a reading content from the content of the processed target image;
the second judging unit is used for judging whether the decibel number of the sound is smaller than a preset decibel number or not;
the prompting unit is used for outputting prompting information to prompt the user to wear the specified wearable device when the second judging unit judges that the decibel number of the sound is smaller than the preset decibel number;
a sending unit, configured to send the reading content to the wearable device when it is detected that the user wears the wearable device, so that the wearable device delivers the reading content to the user through bone conduction.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the selecting unit includes:
the display subunit is used for displaying a first cursor and a second cursor on the processed target image when it is detected that the user's finger presses the display screen; the position of the first cursor is the position at which the user's finger presses the display screen;
the labeling subunit is configured to fixedly display the first cursor, detect a dragging operation of a finger of the user for the second cursor, and label content between the first cursor and the second cursor in a dragging process;
and the selecting subunit is used for taking the marked content as the reading content selected from the processed contents of the target image when the end of the dragging operation is detected.
A third aspect of an embodiment of the present invention discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the method for identifying the read-back content disclosed by the first aspect of the embodiment of the invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium, which stores a computer program, where the computer program enables a computer to execute the method for identifying a read-back content disclosed in the first aspect of the embodiments of the present invention.
A fifth aspect of embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the present embodiment discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product is configured to, when running on a computer, cause the computer to perform part or all of the steps of any one of the methods in the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, when a click-to-read instruction input by a user is detected, the electronic equipment judges whether the type of content in a target area indicated by a finger of the user is a picture type, the target area is an area on a paper click-to-read carrier, if so, a target image is output on a display screen of the electronic equipment, the content in the target image is the content in the target area, then gesture operation of the user is identified, corresponding processing is carried out on the target image according to an operation intention indicated by the gesture operation, and then reading content is selected from the content of the processed target image according to content selection operation executed by the user aiming at the processed target image. Therefore, by implementing the embodiment of the invention, when the type of the content in the area indicated by the finger of the user is the picture type, the picture (namely the target image) is displayed through the display screen, then the target image is correspondingly processed according to the operation intention indicated by the gesture operation of the user, and then the reading content is selected from the content of the target image according to the content selection operation executed by the user aiming at the processed target image, so that the identification accuracy of the electronic equipment can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart illustrating a method for identifying reading contents according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating another method for identifying a reading content according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for identifying a reading content according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure;
FIG. 5 is a schematic structural diagram of another electronic device disclosed in the embodiments of the present invention;
FIG. 6 is a schematic structural diagram of another electronic device disclosed in the embodiments of the present invention;
fig. 7 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "first" and "second" and the like in the description and the claims of the present invention are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the present invention, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "center", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate an orientation or positional relationship based on the orientation or positional relationship shown in the drawings. These terms are used primarily to better describe the invention and its embodiments and are not intended to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meanings of these terms in the present invention can be understood by those skilled in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "connected" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to specific situations.
The embodiment of the invention discloses a recognition method of reading contents and electronic equipment, which can improve the recognition accuracy of the electronic equipment. The following detailed description is made with reference to the accompanying drawings.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a method for identifying reading contents according to an embodiment of the present invention. As shown in fig. 1, the method may include the following steps.
101. When a point reading instruction input by a user is detected, the electronic equipment judges whether the type of the content in the target area indicated by the finger of the user is a picture type; if yes, executing step 102-step 105; otherwise, the flow is ended.
In the embodiment of the invention, the electronic equipment has a point reading function and can read aloud the content indicated by the user's finger; during point reading, the electronic equipment may stand upright at a certain angle to the desktop, which is not limited here. In the embodiment of the invention, the target area is an area on the paper point-reading carrier. The paper point-reading carrier may be a textbook or a storybook, or may be another book, which is not limited in the embodiment of the present invention.
In the embodiment of the present invention, the content on the paper point-reading carrier may be a text content or a picture content, when a point-reading instruction input by a user is detected, the electronic device may determine whether the type of the content in the target area on the paper point-reading carrier indicated by the finger of the user is a picture type, and if so, the electronic device continues to execute steps 102 to 105.
In this embodiment of the present invention, the electronic device may be any of various devices used by a user, such as a learning tablet, a learning machine, a learning mobile phone, a point-reading machine, a teaching machine, a mobile phone, a mobile tablet, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), or a television, which is not limited in the embodiment of the present invention.
In the embodiment of the present invention, the network technologies that the electronic device may support include, but are not limited to: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), CDMA2000, IMT Single Carrier (IMT-SC), Enhanced Data Rates for GSM Evolution (EDGE), Long-Term Evolution (LTE), LTE-Advanced, Time-Division Long-Term Evolution (TD-LTE), High-Performance Radio Local Area Network (HiperLAN), high-performance radio wide area network, Local Multipoint Distribution Service (LMDS), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, orthogonal frequency-division multiplexing (OFDM), high-capacity spatial division multiple access (HC-SDMA), Universal Mobile Telecommunications System (UMTS), UMTS Time-Division Duplex (UMTS-TDD), Evolved High Speed Packet Access (HSPA+), Time-Division Synchronous Code Division Multiple Access (TD-SCDMA), Evolution-Data Optimized (EV-DO), Digital Enhanced Cordless Telecommunications (DECT), and others.
In this embodiment of the present invention, the electronic device shown below may be any one of the electronic devices described above, which is not limited in the embodiment of the present invention. The operating system of the electronic device may include, but is not limited to, an Android operating system, an iOS operating system, a Symbian operating system, a BlackBerry operating system, a Windows Phone 8 operating system, and the like.
102. The electronic equipment outputs a target image on a display screen of the electronic equipment, wherein the content in the target image is the content in the target area.
It is understood that, in some embodiments, the display screen may also be a display screen of another device bound to the electronic device, and the embodiments of the present invention are not limited. For convenience of understanding, the embodiment of the present invention is described taking a display screen provided in the electronic apparatus itself as an example, and in the embodiment of the present invention, the electronic apparatus outputs the target image on the display screen provided in the electronic apparatus itself.
As an alternative embodiment, the step 102 of outputting, by the electronic device, the target image on a display screen of the electronic device includes:
detecting a target page number corresponding to a current page of a paper point-reading carrier indicated by a finger of a user;
acquiring the content of the current page of the paper point-reading carrier according to the target page number;
detecting the position coordinates of the fingers of the user on the current page of the paper point-reading carrier;
determining a target image corresponding to the content in the target area indicated by the finger of the user from the content of the current page of the paper point-reading carrier according to the position coordinate;
outputting the target image on a display screen of the electronic device.
By implementing the optional implementation mode, a method for determining a target image is provided, the content of the current page is obtained according to the target page number corresponding to the current page of the paper point-reading carrier, and then the target image is further determined according to the position coordinates of the fingers of the user on the current page, so that the efficiency of obtaining the target image can be improved.
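Purely by way of illustration and not limitation, the lookup described in this optional implementation manner can be sketched as follows in Python; the helper name select_target_image, the page_regions data structure and the example values are assumptions introduced for this sketch and are not part of the disclosure.

```python
# Hypothetical sketch of the optional implementation above: resolve the
# target image from the detected page number and the user's finger
# coordinates. All names and data shapes here are assumptions made only
# for illustration.

def select_target_image(page_regions, page_number, finger_xy):
    """page_regions: {page_number: [((x_min, y_min, x_max, y_max), image), ...]}."""
    x, y = finger_xy
    for (x_min, y_min, x_max, y_max), image in page_regions.get(page_number, []):
        # The target area is the part of the current page under the finger,
        # so return the picture region containing the finger coordinates.
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return image
    return None  # no picture region under the finger


# Example usage with made-up page data.
pages = {12: [((10, 10, 200, 150), "puppy_picture"),
              ((10, 160, 200, 300), "text_block")]}
print(select_target_image(pages, 12, (50, 60)))  # -> puppy_picture
```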
103. The electronic equipment identifies gesture operation of a user.
In the embodiment of the present invention, the electronic device may set up a gesture database in advance, where the gesture database may include a gesture operation corresponding to zooming in, a gesture operation corresponding to zooming out, and a gesture operation corresponding to translating.
Therefore, as an optional implementation manner, the step 103 of recognizing, by the electronic device, the gesture operation of the user includes:
acquiring gesture operation of a user aiming at a target image;
searching a target gesture operation matched with the gesture operation from a gesture database;
and acquiring an operation intention corresponding to the target gesture operation.
By implementing the optional implementation mode, the method for recognizing the gesture operation of the user is improved, the gesture operation of the user is matched with the gesture operation in the pre-built gesture database, the target gesture operation is determined, the operation intention corresponding to the target gesture operation is further obtained, and the accuracy of recognizing the gesture operation of the user can be improved.
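As a non-limiting sketch of how such a pre-built gesture database might be consulted (the disclosure does not specify a matching algorithm), the following Python fragment matches an observed gesture against stored templates; the feature vectors, the distance measure and the threshold are all assumptions made for this illustration.

```python
# Hypothetical sketch: match a captured gesture against a pre-built gesture
# database and return the associated operation intention. Gesture features
# are simplified to fixed-length vectors for illustration only.
import math

GESTURE_DATABASE = {
    "zoom_in":   [1.0, 0.0, 0.8],   # assumed feature templates
    "zoom_out":  [-1.0, 0.0, 0.8],
    "translate": [0.0, 1.0, 0.2],
}
INTENTIONS = {"zoom_in": "enlarge", "zoom_out": "reduce", "translate": "translate"}


def match_gesture(observed, threshold=0.5):
    """Return the operation intention of the closest stored gesture, if close enough."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best_name, best_dist = None, float("inf")
    for name, template in GESTURE_DATABASE.items():
        d = distance(observed, template)
        if d < best_dist:
            best_name, best_dist = name, d
    # Only accept a match that is sufficiently close to a stored template.
    return INTENTIONS[best_name] if best_dist <= threshold else None


print(match_gesture([0.9, 0.1, 0.7]))  # -> enlarge
```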
104. And the electronic equipment performs corresponding processing on the target image according to the operation intention indicated by the gesture operation.
In the embodiment of the present invention, the operation intention indicated by the gesture operation may include zooming in, zooming out, or translating, and the like, and the embodiment of the present invention is not limited thereto. For example, when the user feels that the target image output on the display screen is too small, a gesture operation corresponding to the enlargement of the target image may be performed, and the electronic device may recognize the gesture operation and then enlarge the target image.
105. And the electronic equipment selects the reading content from the contents of the processed target image according to the content selection operation executed by the user aiming at the processed target image.
In the embodiment of the invention, after the electronic equipment correspondingly processes the target image according to the operation intention indicated by the gesture operation, the user may perform a content selection operation on the processed target image, and the electronic equipment then selects the reading content from the content of the processed target image according to that operation. For example, assume that the content of the target image includes an image of a puppy and a piece of text, where the text is "The dog is a good friend of human. It is a mammal with many breeds, a sensitive sense of smell and hearing, and a long, thin tongue that helps it dissipate heat; its hair may be yellow, white or black. It was among the earliest animals domesticated by humans; some dogs can be trained as police dogs, and some can help with hunting, herding and the like." If the user only wants the electronic equipment to read the sentence "The dog is a good friend of human", the user can select that sentence, and the electronic equipment takes "The dog is a good friend of human" as the reading content according to the user's content selection operation.
It can be seen that, when the method described in fig. 1 is implemented, when the type of the content in the area indicated by the finger of the user is the picture type, the picture (i.e., the target image) is displayed through the display screen, then the target image is correspondingly processed according to the operation intention indicated by the gesture operation of the user, and then the reading content is selected from the content of the target image according to the content selection operation executed by the user on the processed target image, so that the recognition accuracy of the electronic device can be improved.
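Purely for illustration, the overall flow of steps 101 to 105 can be summarized by the following non-limiting Python sketch; the StubDevice class and every method on it are stand-ins invented for this sketch and do not appear in the disclosure.

```python
# Hypothetical end-to-end sketch of steps 101-105. The device object and all
# of its methods are assumptions; the disclosure describes the behaviour but
# not any concrete API.
from dataclasses import dataclass


@dataclass
class StubDevice:
    pointed_content_type: str = "picture"
    target_image: str = "puppy_picture"

    def detect_point_read(self):          # step 101: point reading instruction detected
        return True

    def show_on_screen(self, image):      # step 102: output the target image
        print(f"showing {image}")

    def recognise_gesture(self):          # step 103: recognise the gesture operation
        return "enlarge"

    def apply_intention(self, image, intention):   # step 104: process the image
        return f"{image} ({intention}d)"

    def pick_reading_content(self, processed):     # step 105: content selection
        return f"content selected from {processed}"


def handle_point_read(device):
    if not device.detect_point_read():
        return None
    if device.pointed_content_type != "picture":
        return None  # only picture-type content triggers this flow
    device.show_on_screen(device.target_image)
    intention = device.recognise_gesture()
    processed = device.apply_intention(device.target_image, intention)
    return device.pick_reading_content(processed)


print(handle_point_read(StubDevice()))
```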
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating another identification method for reading contents according to an embodiment of the present invention. As shown in fig. 2, the method may include the following steps.
201. The electronic equipment detects whether the electronic equipment is provided with a reflector or not; if yes, executing step 202-step 204; otherwise, the flow is ended.
As an alternative embodiment, the reflective mirror may be magnetic. When the reflective mirror is mounted on the electronic device, the electronic device may sense the strength of the magnetism; if the strength reaches a preset strength, the electronic device determines that the reflective mirror is mounted on the electronic device, and if it does not, the electronic device determines that the reflective mirror is not mounted. By implementing this embodiment, the electronic device judges whether the reflective mirror is mounted by sensing the magnetism of the mirror, which can improve the accuracy of detecting whether the reflective mirror is mounted on the electronic device.
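A minimal, non-limiting sketch of this magnetism-based check is given below; the sensor interface and the preset strength value are assumptions, since the disclosure only states that a preset strength is compared against.

```python
# Hypothetical sketch of the magnetic-strength check described above. The
# sensor API (read_magnetic_strength) and the threshold are assumptions.

PRESET_STRENGTH = 30.0  # assumed threshold, in arbitrary sensor units


def mirror_installed(read_magnetic_strength):
    """read_magnetic_strength: callable returning the sensed field strength."""
    strength = read_magnetic_strength()
    # The mirror is considered installed only when the sensed magnetism
    # reaches the preset strength.
    return strength >= PRESET_STRENGTH


# Example usage with stubbed sensor readings.
print(mirror_installed(lambda: 42.5))  # -> True
print(mirror_installed(lambda: 3.1))   # -> False
```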
202. The electronic equipment adjusts the included angle of the reflector relative to the lens surface of the shooting module of the electronic equipment, so that the mirror image exists in the reflector.
In the embodiment of the invention, the mirror image is the image of the content in the target area in the reflector.
In the embodiment of the present invention, the adjustment of the reflective mirror may also be manually adjusted, and the embodiment of the present invention is not limited.
In the embodiment of the invention, the content in the target area on the paper point-reading carrier can be projected in the reflector to form a mirror image, and the electronic equipment can adjust the included angle of the reflector relative to the lens surface of the shooting module of the electronic equipment, so that the content in the target area is projected in the reflector.
203. When a point reading instruction input by a user is detected, the electronic equipment controls the shooting module to shoot the mirror image in the reflector so as to obtain a target image.
In the embodiment of the invention, the content in the target image is the content in the target area.
In practice, when the electronic device controls the shooting module to shoot the mirror image in the reflective mirror, the content of the target area in the shot image may be blocked by the user's finger. Therefore, as an optional implementation manner, while the electronic device adjusts the included angle of the reflective mirror relative to the lens surface of the shooting module so that the mirror image exists in the reflective mirror, the electronic device may control the shooting module to continuously shoot the mirror image in the reflective mirror to obtain multiple frames of images, and when a point reading instruction input by the user is detected, the electronic device may determine, from the multiple frames, a target image that does not contain the user's finger. By implementing this optional implementation manner, loss of content in the target image caused by the user's finger blocking the content in the target area can be avoided.
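The frame-selection idea in this optional implementation manner can be sketched, purely for illustration, as follows; the finger detector and the frame representation are assumptions not taken from the disclosure.

```python
# Hypothetical sketch: pick a target image without the user's finger from a
# buffer of continuously captured frames. The finger detector is assumed.

def pick_finger_free_frame(frames, contains_finger):
    """frames: newest-first list of captured frames.
    contains_finger: callable returning True if a finger occludes the frame."""
    for frame in frames:
        if not contains_finger(frame):
            return frame  # first (most recent) frame without occlusion
    return None  # every buffered frame is occluded


# Example usage with toy frames labelled by whether a finger is visible.
frames = [{"id": 3, "finger": True}, {"id": 2, "finger": False}, {"id": 1, "finger": False}]
print(pick_finger_free_frame(frames, lambda f: f["finger"]))  # -> {'id': 2, 'finger': False}
```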
204. The electronic equipment judges whether the type of the content in the target area indicated by the finger of the user is a picture type or not according to the target image; if yes, executing step 205-step 208; otherwise, the flow is ended.
In this embodiment of the present invention, optionally, the electronic device may perform recognition according to the target image to extract the target feature in the target image, and if the target feature is the picture feature, it indicates that the type of the content in the target area indicated by the finger of the user is the picture type, and otherwise, indicates that the type of the content in the target area indicated by the finger of the user is not the picture type. Implementing this alternative embodiment can improve the accuracy of identifying the type of content in the target area.
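As a rough, non-limiting illustration of such a type check (the disclosure does not define the extracted target feature), the following sketch approximates the decision with an assumed text-pixel ratio.

```python
# Hypothetical sketch: decide whether the pointed-at content is a picture.
# The disclosure only says a target feature is extracted and compared with a
# picture feature; approximating it by the share of text-like pixels is an
# assumption made purely for illustration.

def is_picture_type(text_pixel_ratio, text_ratio_threshold=0.6):
    """text_pixel_ratio: fraction of the region recognised as text (0..1)."""
    # Regions dominated by recognised text are treated as text type;
    # everything else is treated as picture type.
    return text_pixel_ratio < text_ratio_threshold


print(is_picture_type(0.1))  # mostly graphics -> True (picture type)
print(is_picture_type(0.9))  # mostly text     -> False
```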
205. The electronic device outputs a target image on a display screen of the electronic device.
206. The electronic equipment identifies gesture operation of a user.
As an alternative implementation, the electronic device may further perform the following steps:
when the two fingers of the user are detected to be pulled apart within the shooting range of the electronic equipment until the angle formed by the two fingers is larger than a first preset angle, determining that the operation intention indicated by the gesture operation of the user is amplification;
when it is detected, within the shooting range of the electronic equipment, that the user's two fingers are brought together from a second preset angle until their fingertips touch, determining that the operation intention indicated by the gesture operation of the user is zooming out;
when the two fingers of the user are detected to move towards the target direction relative to the display screen within the shooting range of the electronic equipment, the operation intention indicated by the gesture operation of the user is determined to be translation, and the translation direction is consistent with the target direction.
By implementing the optional embodiment, a method for determining the operation intention indicated by the gesture operation of the user is provided, and the recognition accuracy of the gesture operation can be improved.
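The three rules above can be summarized, purely for illustration, by the following non-limiting sketch; the preset angle values and the representation of the finger movement are assumptions introduced for this example.

```python
# Hypothetical sketch of mapping the two-finger observations above to an
# operation intention. The preset angles and the movement representation
# are assumptions; the disclosure only names the comparisons.

FIRST_PRESET_ANGLE = 60.0    # assumed, degrees
SECOND_PRESET_ANGLE = 30.0   # assumed, degrees


def classify_intention(start_angle, end_angle, moved_direction=None):
    """start_angle/end_angle: angle formed by the two fingers (degrees).
    moved_direction: e.g. 'left'/'right'/'up'/'down' when both fingers move."""
    if moved_direction is not None:
        # Both fingers move together relative to the screen: translation,
        # with the translation direction matching the movement direction.
        return ("translate", moved_direction)
    if end_angle > FIRST_PRESET_ANGLE and end_angle > start_angle:
        return ("enlarge", None)      # fingers pulled apart
    if start_angle >= SECOND_PRESET_ANGLE and end_angle <= 0.0:
        return ("reduce", None)       # fingers closed until fingertips touch
    return (None, None)


print(classify_intention(20.0, 75.0))          # -> ('enlarge', None)
print(classify_intention(35.0, 0.0))           # -> ('reduce', None)
print(classify_intention(40.0, 40.0, "left"))  # -> ('translate', 'left')
```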
207. And the electronic equipment performs corresponding processing on the target image according to the operation intention indicated by the gesture operation.
As an optional implementation manner, in step 207, the electronic device performs corresponding processing on the target image according to the operation intention indicated by the gesture operation, including:
if the operation intention indicated by the gesture operation is amplification, the electronic equipment amplifies the target image; or if the operation intention indicated by the gesture operation is reduction, the electronic equipment performs reduction processing on the target image; or, if the operation intention indicated by the gesture operation is translation, the electronic device acquires a translation direction during translation, and performs translation processing on the target image according to the translation direction.
By implementing the optional embodiment, a method for performing corresponding processing on the target image according to the operation intention indicated by the gesture operation is provided, and the target image can be subjected to zooming and translating processing according to the intention of the user, so that the content in the target image can be clearly displayed to the user.
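A minimal, non-limiting sketch of step 207 is given below; the view-state representation, the scale factor and the translation step are assumptions chosen only for illustration.

```python
# Hypothetical sketch of step 207: apply the recognised intention to the
# displayed target image. The view state and the numeric values are assumed.

def process_target_image(view, intention, direction=None, step=40):
    """view: dict with 'scale' and 'offset' describing how the image is shown."""
    if intention == "enlarge":
        view["scale"] *= 1.25            # enlarge the displayed image
    elif intention == "reduce":
        view["scale"] /= 1.25            # reduce the displayed image
    elif intention == "translate":
        dx, dy = {"left": (-step, 0), "right": (step, 0),
                  "up": (0, -step), "down": (0, step)}[direction]
        view["offset"] = (view["offset"][0] + dx, view["offset"][1] + dy)
    return view


view = {"scale": 1.0, "offset": (0, 0)}
print(process_target_image(view, "enlarge"))             # scale becomes 1.25
print(process_target_image(view, "translate", "right"))  # offset becomes (40, 0)
```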
208. And the electronic equipment selects the reading content from the contents of the processed target image according to the content selection operation executed by the user aiming at the processed target image.
It can be seen that, with the implementation of the method described in fig. 2, when the type of the content in the area indicated by the finger of the user is the picture type, the picture (i.e., the target image) is displayed through the display screen, then the target image is correspondingly processed according to the operation intention indicated by the gesture operation of the user, and then the reading content is selected from the content of the target image according to the content selection operation executed by the user on the processed target image, so that the recognition accuracy of the electronic device can be improved. In addition, the method described in fig. 2 is implemented, the shooting function of the electronic device is awakened by installing the reflective mirror, the shooting module of the electronic device shoots an image of the target area in the reflective mirror by adjusting the included angle between the reflective mirror and the lens surface of the shooting module, and then the shooting module is controlled to shoot the mirror image in the reflective mirror to obtain the target image, so that the shooting accuracy can be improved.
Example three
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a method for identifying a reading content according to another embodiment of the present invention. As shown in fig. 3, the method may include the following steps.
301-307; wherein, steps 301 to 307 are the same as steps 201 to 207 in the second embodiment, and are not described herein again.
308. When it is detected that the user's finger presses the display screen, the electronic device displays a first cursor and a second cursor on the processed target image.
In the embodiment of the invention, the position of the first cursor is the position at which the user's finger presses the display screen, and the position of the second cursor is not limited.
As an alternative embodiment, when it is detected that the user's finger presses the display screen, the electronic device may further perform the following steps: detecting whether the pressing duration is greater than a preset duration; if so, displaying the first cursor and the second cursor on the processed target image; if not, regarding the press as an accidental touch of the display screen and performing no operation. By implementing this optional implementation manner, the first cursor and the second cursor are prevented from being displayed on the processed target image when the user merely touches the display screen by mistake, which improves the intelligence of the electronic device.
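The long-press check in this alternative embodiment can be sketched, purely for illustration, as follows; the preset duration value is an assumption, since the disclosure does not specify it.

```python
# Hypothetical sketch of the long-press check: the two cursors are only shown
# when the press lasts longer than a preset duration, so an accidental touch
# does nothing. The duration value is an assumption.
import time

PRESET_PRESS_SECONDS = 0.5  # assumed threshold


def handle_press(press_start, press_end, press_position):
    """Return the initial cursor positions, or None for an accidental touch."""
    if press_end - press_start <= PRESET_PRESS_SECONDS:
        return None  # treated as a false touch; no cursors are shown
    first_cursor = press_position    # fixed at the press position
    second_cursor = press_position   # draggable; starts at the same point
    return first_cursor, second_cursor


now = time.time()
print(handle_press(now, now + 0.2, (120, 300)))  # -> None (too short)
print(handle_press(now, now + 0.8, (120, 300)))  # -> ((120, 300), (120, 300))
```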
309. The electronic equipment fixedly displays the first cursor, detects the dragging operation of a finger of a user for the second cursor, and marks the content between the first cursor and the second cursor in the dragging process.
In the embodiment of the invention, when the electronic equipment detects that the user's finger presses the display screen, the first cursor and the second cursor are displayed on the processed target image. The first cursor is then fixedly displayed, that is, it cannot be moved, while the dragging operation of the user's finger on the second cursor is detected; in other words, the user can drag the second cursor with a finger to move it, and during the movement the content between the first cursor and the second cursor is labelled, so that the efficiency of selecting the reading content can be improved.
310. And when the end of the dragging operation is detected, the electronic equipment takes the marked content as the reading content selected from the processed contents of the target image.
In the embodiment of the present invention, the electronic device labels the content between the first cursor and the second cursor, because the second cursor is moved by the finger of the user, so to speak, the content between the first cursor and the second cursor is the content that the user wants to read, and therefore, the electronic device can use the labeled content as the read content selected from the content of the processed target image.
In the embodiment of the present invention, steps 308 to 310 are implemented to provide a method for selecting a reading content according to a content selection operation performed by a user on a processed target image, where by fixing a first cursor and moving a second cursor, a content between the first cursor and the second cursor is labeled to obtain the reading content, and thus, the accuracy of selecting the reading content can be improved.
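Purely by way of illustration, the cursor-based selection of steps 308 to 310 can be sketched as follows; modelling the content as a flat character sequence indexed by cursor positions is an assumption made only for this example.

```python
# Hypothetical sketch of steps 308-310: the first cursor stays fixed, the
# second cursor is dragged, and the characters between the two cursors are
# labelled; when the drag ends, the labelled span is the reading content.

def select_reading_content(text, first_index, drag_indices):
    """drag_indices: successive positions of the second cursor while dragging."""
    labelled = ""
    for second_index in drag_indices:
        lo, hi = sorted((first_index, second_index))
        labelled = text[lo:hi]  # content currently labelled between the cursors
    # When the drag operation ends, the labelled content becomes the reading content.
    return labelled


text = "The dog is a good friend of human. It is a mammal."
print(select_reading_content(text, 0, [10, 20, 34]))  # -> The dog is a good friend of human.
```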
311. The electronic device obtains decibels of sound in an ambient environment of the electronic device.
In an embodiment of the present invention, a sound pickup module (e.g., a microphone) may be built in the electronic device, and accordingly, the electronic device may collect sound in a surrounding environment of the electronic device through the built-in sound pickup module, further obtain a decibel number of the sound, and then execute step 312 to determine whether the decibel number of the sound is smaller than a preset decibel number.
312. The electronic equipment judges whether the decibel number of sound in the surrounding environment of the electronic equipment is smaller than a preset decibel number or not; if so, go to step 313-step 314; if not, step 315 is performed.
In the embodiment of the invention, the electronic device judges whether the decibel level of the sound in its surrounding environment is lower than a preset decibel level. If the decibel level is lower than the preset decibel level, the environment in which the electronic device is located is quiet (for example, a library or a study room); in this case, steps 313 to 314 may be performed so that the reading content is transmitted to the user through bone conduction by the designated wearable device, avoiding disturbing other people by playing the reading content out loud. If the decibel level is not lower than the preset decibel level, the sound in the environment of the electronic device is relatively high and the place is not quiet, so the electronic device can directly play the reading content to the user.
313. The electronic device outputs a prompt message to prompt the user to wear the designated wearable device.
314. When it is detected that the user wears a specified wearable device, the electronic device sends the reading content to the specified wearable device, so that the specified wearable device delivers the reading content to the user through bone conduction.
In the embodiment of the present invention, the electronic device may send the reading content to the designated wearable device through a wireless transmission mode such as bluetooth or Wi-Fi, and may also send the reading content to the designated wearable device through a wired transmission mode such as a data line, which is not limited herein.
In the embodiment of the present invention, steps 311 to 314 are implemented by determining whether the decibel level of the sound in the surrounding environment of the electronic device is lower than the preset decibel level; if it is, the environment in which the electronic device is located is a quiet place, and the electronic device sends the reading content to the designated wearable device, which transmits the reading content to the user through bone conduction, so that other people are not disturbed by the reading content being played out loud.
315. The electronic equipment plays the reading content to the user.
In the embodiment of the invention, the electronic equipment can be internally provided with a power amplifier module (such as a loudspeaker), and correspondingly, the electronic equipment can play the read contents to the user through the built-in power amplifier module.
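The routing decision of steps 311 to 315 can be sketched, purely for illustration, as follows; the preset decibel value, the callback functions and the fallback to the speaker when the wearable device is not worn are assumptions introduced for this sketch.

```python
# Non-limiting sketch of steps 311-315: measure the ambient sound level and
# either send the reading content to the designated wearable device (quiet
# environment) or play it through the built-in speaker. The threshold, the
# callbacks and the "not worn" fallback are assumptions for illustration.

PRESET_DECIBELS = 40.0  # assumed value for a "quiet place"


def deliver_reading_content(ambient_db, content, send_to_wearable, play_aloud,
                            prompt_user, wearable_worn):
    if ambient_db < PRESET_DECIBELS:
        # Quiet environment (e.g. a library or study room): avoid disturbing
        # others by routing the content to the wearable device.
        prompt_user("Please wear the designated wearable device.")
        if wearable_worn():
            send_to_wearable(content)  # delivered via bone conduction
            return "wearable"
    # Noisy environment, or the wearable device is not worn (assumed fallback):
    # play the reading content out loud through the power amplifier module.
    play_aloud(content)
    return "speaker"


# Example usage with stubbed callbacks.
route = deliver_reading_content(
    35.0, "The dog is a good friend of human.",
    send_to_wearable=lambda c: None, play_aloud=lambda c: None,
    prompt_user=lambda msg: None, wearable_worn=lambda: True)
print(route)  # -> wearable
```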
It can be seen that, by implementing the method described in fig. 3, when the type of the content in the area indicated by the user's finger is the picture type, the picture (i.e., the target image) is displayed through the display screen, the target image is then correspondingly processed according to the operation intention indicated by the user's gesture operation, and the reading content is selected from the content of the target image according to the content selection operation executed by the user on the processed target image, so that the recognition accuracy of the electronic device can be improved. In addition, by implementing the method described in fig. 3, the shooting function of the electronic device is awakened by installing the reflective mirror, the shooting module of the electronic device captures an image of the target area in the reflective mirror by adjusting the included angle between the reflective mirror and the lens surface of the shooting module, and the shooting module is then controlled to shoot the mirror image in the reflective mirror to obtain the target image, so that the shooting accuracy can be improved. In addition, the method described in fig. 3 provides a way of selecting the reading content according to the content selection operation executed by the user on the processed target image, so that the accuracy of selecting the reading content can be improved. Furthermore, by judging whether the decibel level of the sound in the surrounding environment of the electronic device is lower than the preset decibel level, and, when it is lower, concluding that the environment in which the electronic device is located is a quiet place, the electronic device sends the reading content to the designated wearable device, which transmits the reading content to the user through bone conduction, so that other people are not disturbed by the reading content being played out loud.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 4, the electronic device may include:
a first judging unit 401, configured to, when a click-to-read instruction input by a user is detected, judge whether a type of content in a target area indicated by a finger of the user is a picture type.
In the embodiment of the invention, the target area is an area on a paper point-reading carrier. The paper point-reading carrier may be a textbook or a storybook, or may be another book, and the embodiment of the present invention is not limited.
An output unit 402, configured to output a target image on a display screen of the electronic device when the first determination unit 401 determines that the type of the content in the target area is a picture type, where the content in the target image is the content in the target area.
It is understood that, in some embodiments, the display screen may also be a display screen of another device bound to the electronic device, and the embodiments of the present invention are not limited. For convenience of understanding, the embodiment of the present invention is described taking a display screen provided by the electronic device itself as an example, and in the embodiment of the present invention, when the first judgment unit 401 judges that the type of the content in the target area is the picture type, the output unit 402 outputs the target image on the display screen of the electronic device.
As an alternative implementation, the output unit 402 may include the following sub-units not shown:
the first detection subunit is used for detecting a target page number corresponding to a current page of the paper point-reading carrier indicated by the finger of the user;
the first acquisition subunit is used for acquiring the content of the current page of the paper point-reading carrier according to the target page number;
the second detection subunit is used for detecting the position coordinates of the fingers of the user on the current page of the paper point-reading carrier;
the determining subunit is used for determining a target image corresponding to the content in the target area indicated by the finger of the user from the content of the current page of the paper point-reading carrier according to the position coordinates;
and the output subunit is used for outputting the target image on the display screen of the electronic equipment.
By implementing the optional implementation mode, a method for determining a target image is provided, the content of the current page is obtained according to the target page number corresponding to the current page of the paper point-reading carrier, and then the target image is further determined according to the position coordinates of the fingers of the user on the current page, so that the efficiency of obtaining the target image can be improved.
The recognition unit 403 is configured to recognize a gesture operation of a user.
In this embodiment of the present invention, the recognition unit 403 may set up a gesture database in advance, where the gesture database may include a gesture operation corresponding to zooming in, a gesture operation corresponding to zooming out, and a gesture operation corresponding to translating.
Therefore, as an alternative implementation, the identification unit 403 may include the following sub-units not shown in the figure:
the acquisition subunit is used for acquiring gesture operation performed by a user aiming at the target image;
the searching subunit is used for searching a target gesture operation matched with the gesture operation from a gesture database;
and the second acquisition subunit is used for acquiring the operation intention corresponding to the target gesture operation.
By implementing the optional implementation mode, the method for recognizing the gesture operation of the user is improved, the gesture operation of the user is matched with the gesture operation in the pre-built gesture database, the target gesture operation is determined, the operation intention corresponding to the target gesture operation is further obtained, and the accuracy of recognizing the gesture operation of the user can be improved.
And the processing unit 404 is configured to perform corresponding processing on the target image according to the operation intention indicated by the gesture operation.
In this embodiment of the present invention, the operation intention indicated by the gesture operation may include zooming in, zooming out, or translating, and the like, and the embodiment of the present invention is not limited thereto, and accordingly, the processing unit 404 may perform zooming in, zooming out, or translating processing on the target image according to the operation intention indicated by the gesture operation.
A selecting unit 405, configured to select, according to a content selecting operation performed by a user on the processed target image, a reading content from the content of the processed target image.
As can be seen, with the electronic device described in fig. 4, when the type of the content in the area indicated by the finger of the user is the picture type, the picture (i.e., the target image) is displayed through the display screen, then the target image is correspondingly processed according to the operation intention indicated by the gesture operation of the user, and then the reading content is selected from the content of the target image according to the content selection operation performed by the user on the processed target image, so that the recognition accuracy of the electronic device can be improved.
EXAMPLE five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 5 is further optimized from the electronic device shown in fig. 4. Compared to the electronic device shown in fig. 4, the electronic device shown in fig. 5 may further include:
a detecting unit 406, configured to detect whether the electronic device is equipped with a reflector before the first judgment unit 401 determines, when a click-to-read instruction input by the user is detected, whether the type of the content in the target area indicated by the user's finger is a picture type.
As an optional embodiment, the reflector may be magnetic. When the reflector is mounted on the electronic device, the detection unit 406 may sense the magnetic strength; if the magnetic strength reaches a preset strength, the detection unit 406 determines that the reflector is mounted on the electronic device, and otherwise determines that it is not. By implementing this embodiment, whether the reflector is mounted on the electronic device is judged by sensing the magnetism of the reflector, which can improve the accuracy of detecting whether the reflector is mounted.
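A hedged sketch of the magnetism check performed by detection unit 406; the threshold value and its units are assumptions for illustration only.

```python
PRESET_STRENGTH = 50.0  # assumed preset strength, in arbitrary sensor units

def reflector_is_mounted(sensed_magnetic_strength: float) -> bool:
    """Detection unit 406's rule: the reflector counts as mounted only when the
    sensed magnetic strength reaches the preset strength."""
    return sensed_magnetic_strength >= PRESET_STRENGTH
```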
And an adjusting unit 407, configured to adjust an included angle of the reflector with respect to a lens surface of a shooting module of the electronic device when the detecting unit 406 detects that the reflector is mounted on the electronic device, so that a mirror image exists in the reflector.
In the embodiment of the invention, the mirror image is the image of the content in the target area in the reflector.
In the embodiment of the present invention, the angle of the reflector may also be adjusted manually, which is not limited in the embodiment of the present invention.
The first judgment unit 401 includes:
the shooting sub-unit 4011 is configured to control the shooting module to shoot the mirror image in the reflector when a click-to-read instruction input by a user is detected, so as to obtain a target image;
a judging sub-unit 4012 configured to judge, according to the target image, whether the type of the content in the target area indicated by the finger of the user is a picture type.
In the embodiment of the invention, the content in the target image is the content in the target area.
In practical applications, when the capturing sub-unit 4011 controls the shooting module to capture the mirror image in the reflector, the content in the target area in the captured image may be blocked by the user's finger. Therefore, as an optional embodiment, when the adjusting unit 407 adjusts the included angle of the reflector with respect to the lens surface of the shooting module so that the mirror image exists in the reflector, the capturing sub-unit 4011 may control the shooting module to continuously capture the mirror image in the reflector to obtain a multi-frame image; when a click-to-read instruction input by the user is detected, the capturing sub-unit 4011 may determine, from the multi-frame image, a target image in which the user's finger does not appear. By implementing this optional implementation, the loss of content in the target image caused by the user's finger blocking the content in the target area can be avoided.
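A minimal sketch of picking a finger-free target image from the continuously captured frames; contains_finger() is an assumed helper (for example a skin-colour or fingertip detector), not an API named in the patent.

```python
from typing import Callable, List, Optional
import numpy as np

def select_target_image(frames: List[np.ndarray],
                        contains_finger: Callable[[np.ndarray], bool]
                        ) -> Optional[np.ndarray]:
    """Return the most recent frame in which the user's finger does not block
    the content of the target area, or None if every frame is blocked."""
    for frame in reversed(frames):           # prefer the newest unblocked frame
        if not contains_finger(frame):
            return frame
    return None
```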
In this embodiment of the present invention, optionally, the judging sub-unit 4012 may perform recognition on the target image to extract the target feature in the target image; if the target feature is a picture feature, it indicates that the type of the content in the target area indicated by the user's finger is the picture type, and otherwise it indicates that the type is not the picture type. Implementing this optional embodiment can improve the accuracy of identifying the type of content in the target area.
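The patent does not spell out the feature extraction; the sketch below is only one rough, assumed heuristic (a histogram-shape test distinguishing ink-on-paper text from a picture), included to make the picture-type judgement concrete. The bin split and threshold are guesses.

```python
import numpy as np

def looks_like_picture(gray_image: np.ndarray, threshold: float = 0.35) -> bool:
    """Text on paper tends toward a near-bimodal histogram (dark ink, bright paper);
    a picture spreads intensity over many levels. Treat a large fraction of
    mid-range pixels as evidence that the target area contains a picture."""
    hist, _ = np.histogram(gray_image, bins=16, range=(0, 255))
    hist = hist / max(hist.sum(), 1)
    mid_mass = hist[4:12].sum()          # mass away from the dark/bright extremes
    return mid_mass > threshold
```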
As an optional implementation manner, the processing unit 404 is configured to perform corresponding processing on the target image according to the operation intention indicated by the gesture operation, specifically:
a processing unit 404, configured to perform an enlargement process on the target image if the operation intention indicated by the gesture operation is enlargement; or if the operation intention indicated by the gesture operation is reduction, carrying out reduction processing on the target image; or, if the operation intention indicated by the gesture operation is translation, acquiring a translation direction during translation, and performing translation processing on the target image according to the translation direction.
By implementing the optional embodiment, a method for performing corresponding processing on the target image according to the operation intention indicated by the gesture operation is provided, and the target image can be subjected to zooming and translating processing according to the intention of the user, so that the content in the target image can be clearly displayed to the user.
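A sketch of how processing unit 404's dispatch on the operation intention could look; the zoom factors, the OpenCV-based resizing and the translation step are illustrative choices rather than parameters given in the patent.

```python
import cv2
import numpy as np

def process_target_image(image: np.ndarray, intention: str,
                         direction: tuple = (0, 0)) -> np.ndarray:
    """Enlarge, reduce or translate the target image according to the
    operation intention indicated by the gesture operation."""
    if intention == "zoom_in":
        return cv2.resize(image, None, fx=1.5, fy=1.5)     # assumed enlargement factor
    if intention == "zoom_out":
        return cv2.resize(image, None, fx=0.75, fy=0.75)   # assumed reduction factor
    if intention == "translate":
        dx, dy = direction                                  # translation direction
        m = np.float32([[1, 0, dx], [0, 1, dy]])
        h, w = image.shape[:2]
        return cv2.warpAffine(image, m, (w, h))
    return image                                            # unknown intention: unchanged
```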
Further, as an optional implementation manner, the identifying unit 403 may further perform the following steps:
when it is detected, within the shooting range of the electronic device, that the two fingers of the user are pulled apart until the angle formed by the two fingers is larger than a first preset angle, determining that the operation intention indicated by the user's gesture operation is enlargement;
when it is detected, within the shooting range of the electronic device, that the two fingers of the user are brought together from a second preset angle until the fingertips of the two fingers touch, determining that the operation intention indicated by the user's gesture operation is reduction;
when it is detected, within the shooting range of the electronic device, that the two fingers of the user move in a target direction relative to the display screen, determining that the operation intention indicated by the user's gesture operation is translation, where the translation direction is consistent with the target direction.
By implementing the optional embodiment, a method for determining the operation intention indicated by the gesture operation of the user is provided, and the recognition accuracy of the gesture operation can be improved.
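A rough sketch of classifying the two-finger gesture; it approximates the "angle between the fingers" test in the text with fingertip distance and centroid motion, and every threshold below is an assumption.

```python
import math

def classify_two_finger_gesture(start, end,
                                spread_thresh=60.0, move_thresh=40.0):
    """start/end: ((x1, y1), (x2, y2)) fingertip positions at the beginning and
    end of the gesture. Returns 'zoom_in', 'zoom_out', 'translate' or None."""
    d0 = math.dist(start[0], start[1])
    d1 = math.dist(end[0], end[1])
    if d1 - d0 > spread_thresh:
        return "zoom_in"              # fingers pulled apart
    if d0 - d1 > spread_thresh:
        return "zoom_out"             # fingers brought together
    c0 = ((start[0][0] + start[1][0]) / 2, (start[0][1] + start[1][1]) / 2)
    c1 = ((end[0][0] + end[1][0]) / 2, (end[0][1] + end[1][1]) / 2)
    if math.dist(c0, c1) > move_thresh:
        return "translate"            # both fingers moved in the target direction
    return None
```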
It can be seen that, with the electronic device described in fig. 5, when the type of the content in the area indicated by the user's finger is the picture type, the picture (i.e., the target image) is displayed on the display screen; the target image is then processed according to the operation intention indicated by the user's gesture operation, and the reading content is selected from the content of the target image according to the content selection operation performed by the user on the processed target image, so that the recognition accuracy of the electronic device can be improved. In addition, with the electronic device described in fig. 5, the shooting function of the electronic device is awakened by mounting the reflector; by adjusting the included angle between the reflector and the lens surface of the shooting module, the shooting module can capture the image of the target area in the reflector, and the shooting module is then controlled to shoot the mirror image in the reflector to obtain the target image, so that the shooting accuracy can be improved.
EXAMPLE six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 6 is further optimized from the electronic device shown in fig. 5. Compared to the electronic device shown in fig. 5, the electronic device shown in fig. 6 may further include:
an obtaining unit 408, configured to obtain the decibel number of the sound in the surrounding environment of the electronic device after the selecting unit 405 selects the reading content from the content of the processed target image according to the content selecting operation performed by the user on the processed target image.
In an embodiment of the present invention, the obtaining unit 408 may have a built-in sound pickup module (e.g., a microphone), and accordingly the obtaining unit 408 may collect sound in the surrounding environment of the electronic device through the built-in sound pickup module, and further obtain the decibel number of the sound.
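A minimal sketch of turning one frame of microphone samples into a decibel figure, as obtaining unit 408 might; the reference level and the assumption that samples arrive as a NumPy array are illustrative, not specified by the patent.

```python
import math
import numpy as np

def sound_level_db(samples: np.ndarray, reference: float = 1.0) -> float:
    """Compute a decibel value from the RMS amplitude of one audio frame
    captured by the built-in sound pickup module."""
    rms = float(np.sqrt(np.mean(np.square(samples.astype(np.float64)))))
    if rms <= 0.0:
        return 0.0                      # silence: avoid log of zero
    return 20.0 * math.log10(rms / reference)
```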
The second determining unit 409 is configured to determine whether a decibel number of sound in the surrounding environment of the electronic device is less than a preset decibel number.
The prompting unit 410 is configured to output a prompting message to prompt the user to wear the specified wearable device when the second determining unit 409 determines that the decibel number of the sound is smaller than the preset decibel number.
A sending unit 411, configured to send the reading content to a specified wearable device when it is detected that the user wears the specified wearable device, so that the specified wearable device delivers the reading content to the user through a bone medium.
In this embodiment of the present invention, the sending unit 411 may send the reading content to the designated wearable device through a wireless transmission manner such as bluetooth or Wi-Fi, or may send the reading content to the designated wearable device through a wired transmission manner such as a data line, which is not limited herein.
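An end-to-end sketch of the quiet-environment flow in this embodiment; the preset decibel value and the callback parameters are illustrative assumptions, and the actual Bluetooth/Wi-Fi or wired transfer is abstracted behind send_to_wearable.

```python
PRESET_DB = 40.0  # assumed "quiet place" threshold

def deliver_reading_content(reading_content: bytes, ambient_db: float,
                            prompt_user, wearable_is_worn, send_to_wearable) -> bool:
    """If the surroundings are quieter than the preset decibel number, prompt the
    user to wear the specified wearable device and, once it is worn, hand the
    reading content to it for bone-conduction playback."""
    if ambient_db >= PRESET_DB:
        return False                      # not a quiet place; play aloud instead
    prompt_user("Please wear the specified wearable device")
    if wearable_is_worn():
        send_to_wearable(reading_content)
        return True
    return False
```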
The selecting unit 405 includes:
a display sub-unit 4051 configured to display a first cursor and a second cursor on the processed target image when it is detected that the user's finger presses the display screen;
a labeling subunit 4052, configured to fixedly display a first cursor, detect a dragging operation of a finger of a user for a second cursor, and label content between the first cursor and the second cursor in a dragging process;
the selecting sub-unit 4053 is configured to, when it is detected that the drag operation is ended, use the annotated content as a reading content selected from the processed contents of the target image.
In the embodiment of the invention, the position of the first cursor is the position where the user's finger presses the display screen, and the position of the second cursor is not limited.
As an alternative embodiment, the selecting unit 405 may further include a third detecting subunit, not shown in the drawings, and when it is detected that the user's finger presses the display screen, the following steps may be further performed:
the third detection subunit is used for detecting whether the pressing time length is greater than the preset time length;
the display subunit 4051 is specifically configured to display the first cursor and the second cursor on the processed target image when the third detecting subunit detects that the pressing duration is longer than the preset duration.
By implementing this optional implementation, an accidental touch of the display screen by the user's finger can be prevented from being treated as a press that displays the first cursor and the second cursor on the processed target image, which improves the intelligence of the electronic device.
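A sketch of the long-press guard plus two-cursor selection described by sub-units 4051-4053; the duration threshold and the string-index model of the displayed content are simplifying assumptions for illustration.

```python
PRESET_PRESS_SECONDS = 0.5  # assumed long-press threshold

def select_reading_content(content: str, press_seconds: float,
                           first_cursor: int, second_cursor_end: int) -> str:
    """Only a press longer than the preset duration shows the cursors; the first
    cursor stays fixed at the press position, and the reading content is whatever
    lies between it and the position where the dragged second cursor ends."""
    if press_seconds <= PRESET_PRESS_SECONDS:
        return ""                          # treated as an accidental touch
    lo, hi = sorted((first_cursor, second_cursor_end))
    return content[lo:hi]                  # annotated content between the cursors
```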
As can be seen, with the electronic device described in fig. 6, when the type of the content in the area indicated by the user's finger is the picture type, the picture (i.e., the target image) is displayed on the display screen; the target image is then processed according to the operation intention indicated by the user's gesture operation, and the reading content is selected from the content of the target image according to the content selection operation performed by the user on the processed target image, so that the recognition accuracy of the electronic device can be improved. In addition, with the electronic device described in fig. 6, the shooting function of the electronic device is awakened by mounting the reflector; by adjusting the included angle between the reflector and the lens surface of the shooting module, the shooting module can capture the image of the target area in the reflector, and the shooting module is then controlled to shoot the mirror image in the reflector to obtain the target image, so that the shooting accuracy can be improved. In addition, the electronic device described in fig. 6 provides a method for selecting the reading content according to the content selection operation performed by the user on the processed target image, so that the accuracy of selecting the reading content can be improved. Furthermore, with the electronic device described in fig. 6, it is judged whether the decibel number of the sound in the surrounding environment of the electronic device is smaller than the preset decibel number; if it is smaller than the preset decibel number, the environment where the electronic device is located is a quiet place, so the electronic device sends the reading content to the specified wearable device, and the specified wearable device delivers the reading content to the user through a bone medium, thereby avoiding disturbing other people by playing the reading content aloud.
EXAMPLE seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. As shown in fig. 7, the electronic device may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute any one of the methods for identifying the read contents in fig. 1 to 3.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute any one identification method of reading contents in figures 1-3.
An embodiment of the present invention discloses a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to make a computer execute any one of the identification methods of the read contents in fig. 1 to 3.
The embodiment of the present invention also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary and alternative embodiments, and that the acts and modules illustrated are not required in order to practice the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the method described in each embodiment of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A, and B can be determined from A. It should also be understood, however, that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information.
In various embodiments of the present invention, it should be understood that "A and/or B" means that A exists alone, B exists alone, or both A and B are included.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other medium that can be used to carry or store data and that can be read by a computer.
The method for identifying the reading contents and the electronic device disclosed by the embodiment of the invention are described in detail, a specific example is applied in the description to explain the principle and the implementation of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A recognition method of reading contents is characterized by comprising the following steps:
when a point reading instruction input by a user is detected, judging whether the type of the content in the target area indicated by the finger of the user is a picture type; the target area is an area on the paper point-reading carrier;
if the type of the content in the target area is a picture type, outputting a target image on a display screen of the electronic equipment; the content in the target image is the content in the target area;
recognizing gesture operation of the user;
according to the operation intention indicated by the gesture operation, performing corresponding processing on the target image;
and selecting reading contents from the processed contents of the target image according to the content selection operation executed by the user aiming at the processed target image.
2. The method according to claim 1, wherein before the determining whether the type of the content in the target area indicated by the finger of the user is a picture type when the point reading instruction input by the user is detected, the method further comprises:
detecting whether the electronic equipment is provided with a reflector or not;
if so, adjusting an included angle of the reflector relative to a lens surface of a shooting module of the electronic equipment so that a mirror image exists in the reflector; wherein the mirror image is an image of the content in the target area in the mirror;
and when a point reading instruction input by a user is detected, judging whether the type of the content in the target area indicated by the finger of the user is a picture type or not, wherein the judging step comprises the following steps:
when a point reading instruction input by a user is detected, controlling the shooting module to shoot the mirror image in the reflector so as to obtain the target image;
and judging whether the type of the content in the target area indicated by the finger of the user is a picture type or not according to the target image.
3. The method according to claim 2, wherein the processing the target image according to the operation intention indicated by the gesture operation comprises:
if the operation intention indicated by the gesture operation is amplification, performing amplification processing on the target image; or if the operation intention indicated by the gesture operation is reduction, carrying out reduction processing on the target image; or if the operation intention indicated by the gesture operation is translation, acquiring a translation direction during translation, and performing translation processing on the target image according to the translation direction.
4. The method according to any one of claims 1 to 3, wherein after the selecting, according to the content selection operation performed by the user for the processed target image, reading content from the content of the processed target image, the method further comprises:
acquiring decibels of sound in the surrounding environment of the electronic equipment;
judging whether the decibel number of the sound is smaller than a preset decibel number or not;
if yes, outputting prompt information to prompt the user to wear a specified wearable device;
and when it is detected that the user wears the specified wearable device, sending the reading content to the specified wearable device, so that the specified wearable device delivers the reading content to the user through a bone medium.
5. The method according to claim 4, wherein the selecting, according to a content selection operation performed by the user on the processed target image, reading content from the content of the processed target image includes:
when it is detected that the finger of the user presses the display screen, displaying a first cursor and a second cursor on the processed target image; wherein the position of the first cursor is the position where the finger of the user presses the display screen;
fixedly displaying the first cursor, detecting the dragging operation of the finger of the user for the second cursor, and labeling the content between the first cursor and the second cursor in the dragging process;
and when the end of the dragging operation is detected, taking the marked content as the reading content selected from the processed content of the target image.
6. An electronic device, comprising:
the device comprises a first judging unit, a second judging unit and a third judging unit, wherein the first judging unit is used for judging whether the type of the content in the target area indicated by the finger of the user is a picture type or not when a point reading instruction input by the user is detected; the target area is an area on the paper point-reading carrier;
the output unit is used for outputting a target image on a display screen of the electronic equipment when the first judging unit judges that the type of the content in the target area is the picture type; the content in the target image is the content in the target area;
the recognition unit is used for recognizing the gesture operation of the user;
the processing unit is used for carrying out corresponding processing on the target image according to the operation intention indicated by the gesture operation;
and the selecting unit is used for selecting the reading content from the processed contents of the target image according to the content selecting operation executed by the user aiming at the processed target image.
7. The electronic device of claim 6, further comprising:
the detection unit is used for detecting whether the electronic equipment is provided with a reflector or not before the first judgment unit judges whether the type of the content in the target area indicated by the finger of the user is a picture type or not when the point reading instruction input by the user is detected;
the adjusting unit is used for adjusting an included angle of the reflector relative to a lens surface of a shooting module of the electronic equipment when the detecting unit detects that the reflector is installed on the electronic equipment, so that a mirror image exists in the reflector; wherein the mirror image is an image of the content in the target area in the mirror;
and the first judging unit includes:
the shooting subunit is used for controlling the shooting module to shoot the mirror image in the reflector when a point reading instruction input by a user is detected so as to obtain the target image;
and the judging subunit is used for judging whether the type of the content in the target area indicated by the finger of the user is a picture type according to the target image.
8. The electronic device of claim 7, wherein:
the processing unit is specifically configured to perform amplification processing on the target image if the operation intention indicated by the gesture operation is amplification; or if the operation intention indicated by the gesture operation is reduction, carrying out reduction processing on the target image; or if the operation intention indicated by the gesture operation is translation, acquiring a translation direction during translation, and performing translation processing on the target image according to the translation direction.
9. The electronic device of any of claims 6-8, further comprising:
an obtaining unit, configured to obtain decibels of sound in a surrounding environment of the electronic device after the selecting unit selects, according to a content selection operation performed by the user for the processed target image, a reading content from the content of the processed target image;
the second judging unit is used for judging whether the decibel number of the sound is smaller than a preset decibel number or not;
the prompting unit is used for outputting prompting information to prompt the user to wear the specified wearable device when the second judging unit judges that the decibel number of the sound is smaller than the preset decibel number;
a sending unit, configured to send the reading content to the specified wearable device when it is detected that the user wears the specified wearable device, so that the specified wearable device delivers the reading content to the user through a bone medium.
10. The electronic device of claim 9, wherein the selecting unit comprises:
the display subunit is used for displaying a first cursor and a second cursor on the processed target image when it is detected that the finger of the user presses the display screen; the position of the first cursor is the position where the finger of the user presses the display screen;
the labeling subunit is configured to fixedly display the first cursor, detect a dragging operation of a finger of the user for the second cursor, and label content between the first cursor and the second cursor in a dragging process;
and the selecting subunit is used for taking the marked content as the reading content selected from the processed contents of the target image when the end of the dragging operation is detected.
CN201910485276.6A 2019-06-03 2019-06-03 Recognition method of reading content and electronic equipment Active CN111078101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910485276.6A CN111078101B (en) 2019-06-03 2019-06-03 Recognition method of reading content and electronic equipment


Publications (2)

Publication Number Publication Date
CN111078101A true CN111078101A (en) 2020-04-28
CN111078101B CN111078101B (en) 2021-08-20

Family

ID=70310374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910485276.6A Active CN111078101B (en) 2019-06-03 2019-06-03 Recognition method of reading content and electronic equipment

Country Status (1)

Country Link
CN (1) CN111078101B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204029175U (en) * 2014-07-02 2014-12-17 日照海帝电器有限公司 A kind of point-of-reading system and talking pen with audio frequency and video playing function
CN104658351A (en) * 2015-03-17 2015-05-27 智慧流(福建)网络科技有限公司 Point-reading method, point-reading system and client
CN206741428U (en) * 2016-11-30 2017-12-12 世优(北京)科技有限公司 Support with reflective mirror
CN107748642A (en) * 2017-11-07 2018-03-02 广东欧珀移动通信有限公司 Adjust method, apparatus, storage medium and the electronic equipment of picture
CN109240582A (en) * 2018-08-30 2019-01-18 广东小天才科技有限公司 A kind of put reads control method and smart machine
CN109360454A (en) * 2018-09-30 2019-02-19 与德科技有限公司 A kind of reading method
CN109726333A (en) * 2019-01-23 2019-05-07 广东小天才科技有限公司 It is a kind of that topic method and private tutor's equipment are searched based on image
CN109756676A (en) * 2019-01-16 2019-05-14 广东小天才科技有限公司 A kind of image processing method and electronic equipment

Also Published As

Publication number Publication date
CN111078101B (en) 2021-08-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant