CN110990107A - Reading assistance method and device and electronic equipment


Info

Publication number
CN110990107A
Authority
CN
China
Prior art keywords
character string
target
target area
reading
image data
Prior art date
Legal status
Pending
Application number
CN201911305405.5A
Other languages
Chinese (zh)
Inventor
钟波
肖适
王鑫
余金清
Current Assignee
Chengdu Jimi Technology Co Ltd
Original Assignee
Chengdu Jimi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Jimi Technology Co Ltd filed Critical Chengdu Jimi Technology Co Ltd
Priority to CN201911305405.5A priority Critical patent/CN110990107A/en
Priority to PCT/CN2020/079181 priority patent/WO2021120420A1/en
Publication of CN110990107A publication Critical patent/CN110990107A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a reading assistance method, a reading assistance device, and an electronic device, wherein the method includes: acquiring first image data in a target area, wherein the first image data includes an indication action image of a target object, and reading content is displayed in the target area; identifying the indication action image to determine a character string to be recognized that corresponds to the indication action image; performing designated processing on the character string to be recognized to determine a target character string corresponding to it; and displaying the target character string in the target area.

Description

Reading assistance method and device and electronic equipment
Technical Field
The application relates to the technical field of image processing, in particular to a reading assisting method and device and electronic equipment.
Background
Current reading generally takes one of two forms: 1) a physical book carries the reading content; 2) an electronic device displays the content to be read. In either mode, when unfamiliar content is encountered, the reader must look up the related content on an electronic device such as a mobile phone. The reading efficiency of both approaches is therefore relatively low.
Disclosure of Invention
In view of the above, an object of the present application is to provide a reading assistance method and apparatus, and an electronic device, which can assist a user in processing reading material and thereby improve the reading experience.
In a first aspect, an embodiment provides a reading assistance method, including:
acquiring first image data in a target area, wherein the first image data comprises an indication action image of a target object, and reading content is displayed in the target area;
identifying the indication action image to determine a character string to be identified corresponding to the indication action image;
performing designated processing on the character string to be identified to determine a target character string corresponding to the character string to be identified;
and displaying the target character string in the target area.
In an optional embodiment, the step of displaying the target character string in the target area includes:
acquiring second image data of the target area;
determining whether the target area contains a blank area by identifying the second image data;
if the target area includes a blank area, projecting the target character string to the blank area of the target area for display.
According to the reading assistance method provided by this embodiment of the application, the target character string can be displayed in a blank area, which prevents the target character string from blocking content the user may still need to read, degrading the display of the reading material, and harming the user's experience.
In an alternative embodiment, the method further comprises:
acquiring third image data of the target area according to a preset period;
when the third image data is acquired, identifying whether the third image data contains updated content relative to the image data acquired at the preceding acquisition time;
and if the updated content exists, storing the updated content.
The reading assistance method provided by this embodiment of the application can also store updated content when it exists, so that the user can conveniently review content generated during reading at a later time.
In an optional embodiment, the step of storing the updated content if the updated content exists includes:
and if the projected electronic reading materials are detected to be displayed in the target area, storing the updated content in association with the electronic reading materials.
According to the reading assistance method provided by this embodiment of the application, storing the updated content in association with the electronic reading material makes it convenient for the user to retrieve the updated content when later consulting that reading material.
In an optional embodiment, the step of displaying the target character string in the target area includes:
if it is detected that a physical reading material is placed in the target area, acquiring fourth image data of the target area;
determining a display surface corresponding to the physical reading material according to the fourth image data;
and projecting the target character string onto the display surface for display.
According to the reading assistance method provided by this embodiment of the application, a physical reading material is unlikely to present a perfectly flat surface, so rendering the target character string as if onto a plane could misalign or distort the characters. By first determining the actual display surface and then displaying the target character string on that surface, the displayed result better matches the visual effect expected by the human eye.
In an optional implementation manner, the step of performing a designation process on the character string to be recognized to determine a target character string corresponding to the character string to be recognized includes:
translating the character string to be recognized to obtain a target character string of the character string to be recognized in a target language; and/or,
and retrieving an explanation document corresponding to the character string to be identified, and taking the explanation document as a target character string corresponding to the character string to be identified.
The reading assistance method provided by this embodiment of the application can also translate or explain the character string to be recognized, which reduces the lookups a user must perform while reading and improves the reading experience.
In an alternative embodiment, the method further comprises:
and projecting the electronic reading material to the target area, wherein the character string to be identified is the character string in the electronic reading material.
The reading assistance method provided by this embodiment of the application can also directly project the electronic reading material to be read, giving the user convenient access to more content.
In a second aspect, embodiments provide a reading aid comprising:
the acquisition module is used for acquiring first image data in a target area, wherein the first image data includes an indication action image of a target object, and reading content is displayed in the target area;
the identification module is used for identifying the indication action image so as to determine a character string to be identified corresponding to the indication action image;
the processing module is used for carrying out designated processing on the character string to be identified so as to determine a target character string corresponding to the character string to be identified;
and the first projection module is used for displaying the target character string in the target area.
In a third aspect, an embodiment provides an electronic device, including: a processor, and a memory storing machine-readable instructions executable by the processor; when the electronic device runs, the machine-readable instructions, when executed by the processor, perform the steps of the method of any of the preceding embodiments.
In a fourth aspect, embodiments provide a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps of the method according to any of the previous embodiments.
With the reading assistance method and device, the electronic equipment, and the computer-readable storage medium provided by the embodiments of the application, designated processing of a character string to be recognized can be triggered by an indication action, so that a user conveniently obtains the target character string corresponding to the character string to be recognized while reading; this assists understanding of the character string during reading.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope. Those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a reading assistance method according to an embodiment of the present application.
Fig. 3 is a detailed flowchart of one embodiment of step 204 of the reading assistance method according to an embodiment of the present application.
Fig. 4 is a detailed flowchart of another embodiment of step 204 of the reading assistance method according to an embodiment of the present application.
Fig. 5 is a partial flowchart of a reading assistance method according to an embodiment of the present application.
Fig. 6 is a functional block diagram of a reading assistance device according to an embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example one
To facilitate understanding of the embodiment, first, an electronic device for performing a reading assistance method disclosed in the embodiment of the present application will be described in detail.
Fig. 1 shows a block diagram of an electronic device. The electronic device 100 may include a memory 111, a memory controller 112, a processor 113, a peripheral interface 114, an input/output unit 115, an acquisition unit 116, a projector 117, and a radio frequency unit 118. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely exemplary and does not limit the structure of the electronic device 100. For example, the electronic device 100 may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The above-mentioned memory 111, the memory controller 112, the processor 113, the peripheral interface 114, the input/output unit 115 and the acquisition unit 116 are electrically connected to each other directly or indirectly, so as to implement data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The processor 113 is used to execute the executable modules stored in the memory.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 111 is configured to store a program; the processor 113 executes the program after receiving an execution instruction, and the method performed by the electronic device 100, as defined by any process disclosed in the embodiments of the present application, may be applied to or implemented by the processor 113.
The processor 113 may be an integrated circuit chip having signal processing capability. The processor 113 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, capable of implementing or performing the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or any conventional processor.
The peripheral interface 114 couples various input/output devices to the processor 113 and the memory 111. In some embodiments, the peripheral interface 114, the processor 113, and the memory controller 112 may be implemented in a single chip. In other embodiments, they may each be implemented on a separate chip.
The input/output unit 115 is used to obtain input data from the user. The input/output unit 115 may be, but is not limited to, a mouse, a keyboard, and the like.
The acquisition unit 116 is used to capture images (e.g., photographs, videos, etc.) and store them for use by other components. Optionally, the acquisition unit 116 may be an RGB-D (Red Green Blue-Depth) camera, which can be used to capture depth images.
Optionally, the electronic device 100 in this embodiment may further include a projector 117, where the projector 117 includes a light source, a projection optical system, and other projection elements, and is used to implement projection of a picture.
A Radio Frequency (RF) unit 118 is used for receiving and transmitting electromagnetic waves and converting between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The radio frequency unit 118 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The radio frequency unit 118 may communicate with various networks such as the Internet, an intranet, or a wireless network, or with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network, and may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other protocols for e-mail, instant messaging, and short message service, any other suitable communication protocol, and even protocols not yet developed. In this embodiment, the radio frequency unit 118 may be used to implement communication between the electronic device 100 and an external device.
The electronic device 100 in this embodiment may be configured to perform each step in each method provided in this embodiment. The implementation of the reading aid method is described in detail below by means of several embodiments.
Example two
Please refer to fig. 2, which is a flowchart illustrating a reading assistance method according to an embodiment of the present application. The specific process shown in fig. 2 will be described in detail below.
Step 201, first image data in a target area is acquired.
The first image data includes an indication action image of the target object. The target object may be any object capable of pointing at content in the target area; illustratively, a pen, a pointer, or a user's finger. The indication action image may show the target object touching a character string, or positioned below a character string, and so on.
In this embodiment, reading content is displayed in the target area. The reading content may be content in a projected electronic reading material, or printed content in a physical reading material placed in the target area.
Alternatively, the target area may be a desktop on which a physical reading material is placed. Illustratively, the physical reading material may be a novel, a foreign-language book, learning material, and the like.
Alternatively, an electronic reading material may be displayed in the target area. The content of the electronic reading material may be learning content, excerpts from a novel, and the like. Optionally, the surface carrying the electronic reading material may be a physical wall, a sheet of white paper, or any surface that can display a projected picture.
In an embodiment, if the target area displays a projection, the method in this embodiment further includes: projecting an electronic reading into the target area.
Illustratively, the first image data may include a character string in the electronic reading material.
Step 202, identifying the indication action image to determine a character string to be identified corresponding to the indication action image.
In an alternative embodiment, step 202 may include: identifying the position of the target object, and determining the target position pointed by the target object; and performing character recognition on the content around the target position to extract a character string to be recognized.
For example, the following describes detecting the position of the target object by taking the target object as the user's finger as an example.
Alternatively, edge detection may be applied to the indication action image to determine the edges of the user's finger. Illustratively, a specified position on the detected edge of the user's finger may be taken as the target position; for example, the detected upper edge may be used as the target position.
Optionally, a classification model implemented with a neural network may be used to classify the content of the indication action image and thereby locate the user's finger and the area it occupies. Illustratively, a specified position of the area occupied by the user's finger may be taken as the target position; for example, the upper-left edge of that area.
Optionally, performing character recognition on the content around the target position to extract the character string to be recognized may be implemented as: and identifying the content around the target position by using a neural network model so as to extract a character string to be identified.
Optionally, performing character recognition on the content around the target position to extract the character string to be recognized may be implemented as: and recognizing the content around the target position by using an OCR (Optical Character Recognition) model to extract a Character string to be recognized.
Optionally, the text recognition may be performed on an area around the target position that is not covered by the target object. Alternatively, only one line of character strings closest to the target position may be identified. Alternatively, it is also possible to perform character recognition on only one word or one sentence closest to the target position.
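Illustratively, the fingertip-as-target-position variant above can be sketched in a few lines of Python. The sketch below is only a minimal illustration, assuming OpenCV for the contour analysis and pytesseract as a stand-in OCR engine; the skin-color range and the size of the text window around the target position are illustrative assumptions, not part of the method.

```python
import cv2
import pytesseract

def extract_string_near_finger(image_bgr):
    # Coarse skin-color mask as a stand-in for the finger detector (assumption).
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Take the largest contour as the finger; its topmost point is the target position.
    finger = max(contours, key=cv2.contourArea)
    x, y = finger[finger[:, :, 1].argmin()][0]
    # OCR a small window just above the fingertip, i.e. the line being pointed at.
    top = max(0, y - 60)
    left, right = max(0, x - 200), min(image_bgr.shape[1], x + 200)
    roi = image_bgr[top:y, left:right]
    if roi.size == 0:
        return None
    text = pytesseract.image_to_string(roi).strip()
    return text or None
```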
In an embodiment, if the target area can display an electronic reading material, the character string to be recognized is a character string in the electronic reading material.
In an embodiment, if the target area is provided with a physical reading material, the character string to be recognized is a character string extracted from the physical reading material.
Step 203, performing designation processing on the character string to be recognized to determine a target character string corresponding to the character string to be recognized.
In one embodiment, step 203 may comprise: and translating the character string to be recognized to obtain a target character string of the character string to be recognized in a target language.
For example, the character string to be recognized may be a word in a first language, and the target character string may be the corresponding word in a second language; the specified processing is then translation of the character string to be recognized. Alternatively, the character string to be recognized may be text in English, French, Italian, or the like, and the target character string may be Chinese text. For example, the character string to be recognized may be the English word "patent", and the target character string may be its Chinese translation. It can be appreciated that these languages are merely exemplary; the embodiments of the present application do not limit the languages of the character string to be recognized or of the target character string.
Optionally, a translation application may be installed in the electronic device, which may also store a language database. For example, in a non-networked environment, translating the character string to be recognized to obtain its target character string in the target language may include: performing offline translation of the character string through the translation application. For example, if the electronic device is networked, it may instead include: performing online translation of the character string through the translation application.
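Illustratively, the offline/online dispatch described above can be sketched as follows. The connectivity probe and the offline_translate / online_translate helpers are hypothetical placeholders; the embodiment only requires that offline translation be used when no network is available and online translation otherwise.

```python
import socket

def has_network(host="8.8.8.8", port=53, timeout=1.5):
    # Probe a well-known DNS endpoint; host, port and timeout are illustrative.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def translate(string_to_recognize, target_language="zh"):
    # online_translate / offline_translate are hypothetical helpers standing in
    # for the translation application's networked and local back ends.
    if has_network():
        return online_translate(string_to_recognize, target_language)
    return offline_translate(string_to_recognize, target_language)
```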
In another embodiment, step 203 may comprise: and retrieving an explanation document corresponding to the character string to be identified, and taking the explanation document as a target character string corresponding to the character string to be identified.
For example, the character string to be recognized may be the abbreviation of a proper noun, and the target character string may be the full name corresponding to the abbreviation. For example, the string to be recognized may be "IP", and the corresponding target string may be "Internet Protocol (IP)". Illustratively, if an abbreviation corresponds to multiple full names, all of them may be displayed. For example, the character string to be recognized may be "CNN", and the corresponding target character string may be "Cable News Network (CNN)", "Convolutional Neural Network (CNN)", and so on.
For example, the character string to be recognized may be an idiom, and the target character string may be the idiom's meaning, origin, or literary source. For example, for an idiom such as "a voice like a great bell", describing someone who speaks or sings in a loud, resonant voice, the target character string may be that explanation. Optionally, the target character string may also include the idiom's source, for example a citation of the corresponding passage in Feng Menglong's Records of the Eastern Zhou States (Ming dynasty).
Illustratively, the character string to be recognized may be a technical term, and the target character string may be the explanation corresponding to that term. For example, the character string to be recognized may be "lever principle" from the field of physics, and the target character string may be its explanation, also known as the "lever balance condition": levers are classified as effort-saving levers, effort-increasing levers, and equal-arm levers, and for a lever to balance, the two moments (force multiplied by moment arm) acting on it must be equal in magnitude.
Optionally, a search engine may be installed in the electronic device. For example, if the electronic device is networked, retrieving the explanation document corresponding to the character string to be identified may include: retrieving the explanation document with the search engine, and taking the explanation document as the target character string corresponding to the character string to be identified.
Optionally, a local database may also be stored in the electronic device. For example, retrieving the explanation document corresponding to the character string to be identified and taking it as the corresponding target character string may include: querying the local database for the explanation document corresponding to the character string to be identified, and taking that explanation document as the target character string.
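Illustratively, the two retrieval paths can be combined into one lookup routine, reusing the has_network probe sketched above. The sqlite glossary table, its schema, and the search_engine_lookup helper are illustrative assumptions.

```python
import sqlite3

def retrieve_explanation(string_to_identify, db_path="glossary.db"):
    # Try the local database first (offline path).
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT explanation FROM glossary WHERE term = ?",
            (string_to_identify,),
        ).fetchone()
    if row is not None:
        return row[0]
    # Fall back to a search engine when networked; the helper is hypothetical.
    if has_network():
        return search_engine_lookup(string_to_identify)
    return None
```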
And 204, displaying the target character string in the target area.
Alternatively, the target character string may be displayed in an area where it does not obscure the text of the electronic reading material displayed in the target area, or the text of a physical reading material. Alternatively, the target character string may be displayed over an already-read region; for example, above the position of the character string to be recognized.
Alternatively, the target character string may be projected to the target area for display by means of projection. Alternatively, the target area may be an electronic display, and the target string may be displayed in the electronic display.
Optionally, as shown in fig. 3, step 204 may include the following steps.
Step 2041, second image data of the target area is acquired.
Step 2042, identifying the second image data to determine whether the target area contains a blank area.
For example, the blank area may be an area of the target area onto which no character or image is projected.
Alternatively, the blank area of the target area may be determined by identifying the color of each pixel in the second image data. Illustratively, the target area may have a designated color, such as white, green, or black. A pixel whose color value falls within the value range of the designated color is treated as having the designated color; if every pixel in a region containing a designated number of pixels has the designated color, that region is determined to be a designated-color region, i.e., a blank area.
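Illustratively, steps 2041-2042 can be sketched as a scan for a window of pixels whose colors fall within the value range of the designated color. The color tolerance, window size, and stride below are illustrative assumptions.

```python
import numpy as np

def find_blank_region(image_bgr, bg_color=(255, 255, 255), tol=30, win=(120, 240)):
    # Mark pixels whose color is within `tol` of the designated color.
    near_bg = np.abs(image_bgr.astype(int) - bg_color).max(axis=2) <= tol
    h, w = near_bg.shape
    wh, ww = win
    # Slide a window with half-window stride; the first all-background
    # window is returned as the blank area to project into.
    for y in range(0, h - wh + 1, max(1, wh // 2)):
        for x in range(0, w - ww + 1, max(1, ww // 2)):
            if near_bg[y:y + wh, x:x + ww].all():
                return (x, y, ww, wh)
    return None  # no blank area: fall back to step 2044
```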
Step 2043, if the target area includes a blank area, projecting the target character string to the blank area of the target area for display.
Step 2044, if the target area does not contain a blank area, projecting the target character string onto the position of the text a designated number of rows before the character string to be recognized in the target area (i.e., over rows the user has already read).
Optionally, the projection optical system of the projector of the electronic device may be adjusted to change the projection angle of the target character string, so that the target character string is projected to the blank area of the target area, or onto the designated row before the character string to be recognized, for display.
By the display mode, the problem that the displayed target character string shields the area which needs to be read by the user can be avoided.
Alternatively, the rendering of the target character string may be matched to the surface bearing it. For example, when the surface bearing the target character string is a plane, the target character string may be displayed on it according to the display standard for a plane. For another example, when the surface bearing the target character string is curved, the target character string may be mapped onto the curved surface and displayed according to the shape of that surface.
Alternatively, if a physical reading material is placed in the target area and lies open, the surface available for projecting the picture may be non-planar; in that case, the projected image may be corrected according to the non-planar surface presented by the physical reading material. As shown in fig. 4, step 204 may include the following steps.
Step 2045, if it is detected that a physical reading material is placed in the target area, acquiring fourth image data of the target area.
Alternatively, the fourth image data may be a depth image. Each pixel in the fourth image data represents a distance of the acquisition device from an object in the fourth image data.
Alternatively, whether a physical reading material is placed in the target area may be determined by capturing a depth image of the target area. For example, the acquisition unit may acquire a depth image of the target area and determine whether the pixel values corresponding to the target area are equal. If the pixel values are not equal, a physical reading material is placed in the target area; if the pixel values are equal, no object is placed in the target area, and an electronic reading material may be projected onto it.
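Illustratively, the depth test above amounts to checking whether the depth values over the target area are (nearly) constant. A minimal sketch, assuming the depth image is in millimeters and using an illustrative flatness tolerance:

```python
import numpy as np

def physical_reading_present(depth_image, region, tol_mm=5.0):
    x, y, w, h = region
    depths = depth_image[y:y + h, x:x + w].astype(float)
    depths = depths[depths > 0]  # drop invalid (zero) depth readings
    if depths.size == 0:
        return False
    # Equal depth values mean a bare plane; spread beyond the tolerance
    # means an object (a physical reading material) is placed there.
    return (depths.max() - depths.min()) > tol_mm
```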
Step 2046, determining a display surface corresponding to the entity reading object according to the fourth image data.
Optionally, the position of the physical reading material and the surface formed by its pages are determined from the value of each pixel in the depth image.
Step 2047, projecting the target character string to the display surface for displaying.
Illustratively, the display surface can be subdivided into a triangular mesh, a pre-distortion matrix of the projector solved, and geometric correction performed in real time.
Illustratively, for each triangle of the mesh on the display surface, its three vertices and at least two adjacent nodes are used to find a corresponding pre-warping matrix M by the least-squares method; finally, the corresponding region of the buffer is pre-warped before projection, in a manner similar to projection onto a flat screen.
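Illustratively, the per-triangle least-squares fit can be sketched as below: M maps where the projector places a triangle's mesh points to where they should appear on the page. Modeling M as a 2x3 affine matrix fitted over the three vertices plus at least two adjacent nodes (k >= 5 point pairs) is an illustrative choice; the method does not fix the exact form of M.

```python
import numpy as np

def prewarp_matrix(projected_pts, desired_pts):
    # projected_pts, desired_pts: (k, 2) arrays for a triangle's three
    # vertices plus at least two adjacent nodes (k >= 5 per the text).
    A = np.hstack([projected_pts, np.ones((len(projected_pts), 1))])  # (k, 3)
    M, *_ = np.linalg.lstsq(A, desired_pts, rcond=None)               # (3, 2)
    return M.T  # 2x3 affine pre-warping matrix

def apply_prewarp(point, M):
    # Pre-warp one buffer coordinate before projection.
    x, y = point
    return M @ np.array([x, y, 1.0])
```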
Optionally, as shown in fig. 5, the reading assistance method in this embodiment may further include the following steps.
Step 205, acquiring third image data of the target area according to a preset period.
And step 206, when the third image data is acquired, identifying whether the third image data contains updated content relative to the image data acquired at the preceding acquisition time.
In one embodiment, the third image data may be compared pixel by pixel with the image data acquired at the preceding time to determine whether the two differ; if they do, it may be determined that the third image data contains updated content.
In another embodiment, a first image feature may be extracted from the third image data, and a second image feature from the image data acquired at the preceding time; the Euclidean distance between the first image feature and the second image feature is then calculated, and if the distance exceeds a preset value, it is determined that the third image data contains updated content.
For example, if the third image data is identical to the image data acquired at the preceding time, the calculated Euclidean distance is zero. In practice, however, lighting varies between captures, so two images of an unchanged scene may still differ slightly. The preset value may therefore be a positive number greater than zero; it can be set as required, and this embodiment does not limit its value.
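Illustratively, both comparison variants (pixel-wise difference and feature-space Euclidean distance) can be sketched as follows; the coarse grayscale-thumbnail feature and all threshold values are illustrative assumptions chosen only to absorb lighting noise.

```python
import cv2
import numpy as np

def frame_feature(image_bgr, size=(32, 32)):
    # A coarse grayscale thumbnail as an illustrative image feature.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).astype(float).ravel()

def has_update_pixelwise(current, previous, pixel_tol=25, count_tol=500):
    # Variant 1: count pixels that changed by more than pixel_tol.
    diff = cv2.absdiff(current, previous).max(axis=2)
    return int((diff > pixel_tol).sum()) > count_tol

def has_update_feature(current, previous, preset_value=200.0):
    # Variant 2: Euclidean distance between image features; the strictly
    # positive preset value tolerates lighting error between captures.
    dist = np.linalg.norm(frame_feature(current) - frame_feature(previous))
    return dist > preset_value
```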
Illustratively, the updated content may be notes taken in the target area. Illustratively, the update content described above may not include the target object mentioned in step 201.
And step 207, if the updated content exists, storing the updated content.
Optionally, step 207 may comprise: and if the projected electronic reading materials are detected to be displayed in the target area, storing the updated content in association with the electronic reading materials.
Alternatively, the generation time of the update content may also be stored in association with the update content.
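Illustratively, step 207's association of the updated content with the electronic reading material and its generation time can be sketched as below; the on-disk layout (an image of the note plus a JSON sidecar holding the association metadata) is an illustrative assumption.

```python
import json
import os
import time
import cv2

def store_update(update_crop, reading_id, out_dir="notes"):
    # Persist the digitized note together with its association metadata.
    os.makedirs(out_dir, exist_ok=True)
    ts = time.strftime("%Y%m%d-%H%M%S")
    img_path = os.path.join(out_dir, f"{reading_id}-{ts}.png")
    cv2.imwrite(img_path, update_crop)  # the updated content itself
    with open(img_path + ".json", "w") as f:
        json.dump({"reading": reading_id, "created": ts}, f)
    return img_path
```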
Optionally, step 207 may also be implemented as: and if the updated content exists and the specified action is detected in the third image data, storing the updated content.
Alternatively, the specified action may be pointing at the updated content.
Alternatively, the specified action may be pointing at the updated content for a specified duration. For example, the third image data may include multiple pictures; when a specified number of consecutively acquired images all show the target object pointing at the updated content, the pointing may be determined to have lasted the specified duration. Alternatively, the third image data may be a video; when the target object points at the updated content throughout a video segment of the specified duration, the pointing may likewise be determined to have lasted the specified duration.
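Illustratively, the consecutive-frame variant of the duration test can be sketched as follows; points_at_update is a hypothetical per-frame detector (for example, the fingertip locator sketched earlier combined with the updated region's bounding box), and the required frame count is illustrative.

```python
def pointed_long_enough(frames, update_region, required_consecutive=15):
    # The pointing must persist across `required_consecutive` frames to
    # count as lasting the specified duration.
    streak = 0
    for frame in frames:
        streak = streak + 1 if points_at_update(frame, update_region) else 0
        if streak >= required_consecutive:
            return True
    return False
```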
By digitizing updated content written on paper and storing the digitized result, the user can conveniently review content generated during reading.
The detailed procedure of the method in the present embodiment is described below by two practical application scenarios.
In one example, the method provided by the embodiment of the application can be used for teaching; the target area may be a teaching blackboard or whiteboard onto which teaching material is currently projected. When a pointer points at any piece of content, the annotation corresponding to that content can be displayed. For example, if an English lesson is displayed in the current target area and the pointer points at an English word, the Chinese meaning of that word can be displayed in a blank area of the blackboard or whiteboard. Further, if the teacher writes teaching notes on the blackboard, the notes can be stored in association with the teaching material currently being projected.
In another example, the method provided by the embodiment of the application can be used for personal reading; a book is placed in the target area, and when the user points at certain words or sentences in the book, their meanings can be looked up on search websites and the retrieved content displayed in a blank area of the book. Further, if the user writes notes in the blank area of the book, the notes may be saved to the storage space pointed to by a specified account; for example, an account of a cloud storage space.
With the method in this embodiment, designated processing of a character string to be recognized can be triggered by an indication action, so that the user conveniently obtains the target character string corresponding to the character string to be recognized while reading, improving the assistance provided during reading. Furthermore, new information generated during reading or teaching can be stored, so that useful information generated in the process can later be reviewed by the user at any time.
EXAMPLE III
Based on the same inventive concept, an embodiment of the present application further provides a reading assistance device corresponding to the reading assistance method. Since the device in this embodiment solves the problem on a principle similar to that of the reading assistance method embodiment above, its implementation may refer to the description of the method embodiment; repeated details are not restated.
Please refer to fig. 6, which is a schematic diagram of the functional modules of a reading assistance device according to an embodiment of the present application. The reading assistance device in this embodiment is configured to perform the steps of the method embodiments above. The reading assistance device includes: an acquisition module 301, an identification module 302, a processing module 303, and a first projection module 304; wherein:
the acquisition module 301 is configured to acquire first image data in a target area, where the first image data includes an indication action image of a target object, and reading content is displayed in the target area;
the identification module 302 is configured to identify the indication action image to determine a to-be-identified character string corresponding to the indication action image;
the processing module 303 is configured to perform designated processing on the character string to be identified to determine a target character string corresponding to the character string to be identified;
a first projection module 304, configured to display the target character string in the target area.
In one possible implementation, the first projection module 304 is configured to:
acquiring second image data of the target area;
determining whether the target area contains a blank area by identifying the second image data;
if the target area comprises a blank area, the target character string is projected to the blank area of the target area for displaying.
In one possible embodiment, the reading aid may further include a storage module 305 for:
acquiring third image data of the target area according to a preset period;
when the third image data is acquired, identifying whether the third image data contains updated content relative to the image data acquired at the preceding acquisition time;
and if the updated content exists, storing the updated content.
In a possible implementation, the storage module 305 is further configured to:
and if the projected electronic reading materials are detected to be displayed in the target area, storing the updated content in association with the electronic reading materials.
In one possible implementation, the first projection module 304 is configured to:
if it is detected that a physical reading material is placed in the target area, acquiring fourth image data of the target area;
determining a display surface corresponding to the physical reading material according to the fourth image data;
and projecting the target character string to the display surface for displaying.
In a possible implementation, the processing module 303 is configured to:
translating the character string to be recognized to obtain a target character string of the character string to be recognized in a target language; and/or,
and retrieving an explanation document corresponding to the character string to be identified, and taking the explanation document as a target character string corresponding to the character string to be identified.
In one possible embodiment, the reading aid may further comprise:
the second projection module 306 is configured to project the electronic reading material into the target area, where the character string to be recognized is a character string in the electronic reading material.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the reading assistance method in the foregoing method embodiment.
The computer program product of the reading assistance method provided in the embodiments of the present application includes a computer-readable storage medium storing program code. The instructions in the program code may be used to execute the steps of the reading assistance method in the method embodiment above; for details, refer to that embodiment, which are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A reading assistance method, comprising:
acquiring first image data in a target area, wherein the first image data comprises an indication action image of a target object, and reading content is displayed in the target area;
identifying the indication action image to determine a character string to be identified corresponding to the indication action image;
performing designated processing on the character string to be identified to determine a target character string corresponding to the character string to be identified;
and displaying the target character string in the target area.
2. The method of claim 1, wherein the step of displaying the target character string in the target area comprises:
acquiring second image data of the target area;
determining whether the target area contains a blank area by identifying the second image data;
if the target area comprises a blank area, the target character string is projected to the blank area of the target area for displaying.
3. The method of claim 1, further comprising:
acquiring third image data of the target area according to a preset period;
when the third image data is acquired, identifying whether the third image data contains updated content relative to the image data acquired at the preceding acquisition time;
and if the updated content exists, storing the updated content.
4. The method of claim 3, wherein the step of storing the updated content if the updated content exists comprises:
and if the projected electronic reading materials are detected to be displayed in the target area, storing the updated content in association with the electronic reading materials.
5. The method of claim 1, wherein the step of displaying the target character string in the target area comprises:
if it is detected that a physical reading material is placed in the target area, acquiring fourth image data of the target area;
determining a display surface corresponding to the physical reading material according to the fourth image data;
and projecting the target character string to the display surface for displaying.
6. The method according to claim 1, wherein the step of performing a designation process on the character string to be recognized to determine a target character string corresponding to the character string to be recognized comprises:
translating the character string to be recognized to obtain a target character string of the character string to be recognized in a target language; and/or,
and retrieving an explanation document corresponding to the character string to be identified, and taking the explanation document as a target character string corresponding to the character string to be identified.
7. The method of claim 1, further comprising:
and projecting the electronic reading material to the target area, wherein the character string to be identified is the character string in the electronic reading material.
8. A reading aid, comprising:
the acquisition module is used for acquiring first image data in a target area, wherein the first image data comprises an indication action image of a target object, and reading content is displayed in the target area;
the identification module is used for identifying the indication action image so as to determine a character string to be identified corresponding to the indication action image;
the processing module is used for carrying out designated processing on the character string to be identified so as to determine a target character string corresponding to the character string to be identified;
and the first projection module is used for displaying the target character string in the target area.
9. An electronic device, comprising: a processor, and a memory storing machine-readable instructions executable by the processor, wherein when the electronic device runs, the machine-readable instructions, when executed by the processor, perform the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1 to 7.
CN201911305405.5A 2019-12-16 2019-12-16 Reading assistance method and device and electronic equipment Pending CN110990107A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911305405.5A CN110990107A (en) 2019-12-16 2019-12-16 Reading assistance method and device and electronic equipment
PCT/CN2020/079181 WO2021120420A1 (en) 2019-12-16 2020-03-13 Reading assistance method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911305405.5A CN110990107A (en) 2019-12-16 2019-12-16 Reading assistance method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN110990107A true CN110990107A (en) 2020-04-10

Family

ID=70094885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911305405.5A Pending CN110990107A (en) 2019-12-16 2019-12-16 Reading assistance method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN110990107A (en)
WO (1) WO2021120420A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030152293A1 (en) * 2002-01-24 2003-08-14 Joel Bresler Method and system for locating position in printed texts and delivering multimedia information
CN105027562A (en) * 2012-12-28 2015-11-04 Metaio有限公司 Method of and system for projecting digital information on a real object in a real environment
CN108665742A (en) * 2018-05-11 2018-10-16 亮风台(上海)信息科技有限公司 A kind of method and apparatus read by arrangement for reading

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015158869A (en) * 2014-02-25 2015-09-03 カシオ計算機株式会社 projection display device and program
EP3191918B1 (en) * 2014-09-12 2020-03-18 Hewlett-Packard Development Company, L.P. Developing contextual information from an image
CN108681393A (en) * 2018-04-16 2018-10-19 优视科技有限公司 Translation display methods, device, computing device and medium based on augmented reality

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030152293A1 (en) * 2002-01-24 2003-08-14 Joel Bresler Method and system for locating position in printed texts and delivering multimedia information
CN105027562A (en) * 2012-12-28 2015-11-04 Metaio有限公司 Method of and system for projecting digital information on a real object in a real environment
CN108665742A (en) * 2018-05-11 2018-10-16 亮风台(上海)信息科技有限公司 A kind of method and apparatus read by arrangement for reading

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Baidu Jingyan: "https://jingyan.baidu.com/article/c910274be07265cd361d2de9.html", "How to use the camera word-capture feature of Youdao Dictionary" *

Also Published As

Publication number Publication date
WO2021120420A1 (en) 2021-06-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination