CN115248650A - Screen reading method and device - Google Patents

Screen reading method and device

Info

Publication number
CN115248650A
CN115248650A
Authority
CN
China
Prior art keywords
processed
popup
area
display interface
reading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210733636.1A
Other languages
Chinese (zh)
Other versions
CN115248650B (en)
Inventor
王何皇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weibo Software Technology Co ltd
Original Assignee
Nanjing Weibo Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weibo Software Technology Co ltd
Priority to CN202210733636.1A
Publication of CN115248650A
Application granted
Publication of CN115248650B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 - Selection of displayed objects or displayed text elements
    • G06F3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a screen reading method and device, and belongs to the technical field of communication. The method comprises the following steps: receiving a first input of a user to an electronic device; in response to the first input, determining at least one to-be-processed area on a display interface of the electronic device; receiving a second input of the user to the electronic device; and in response to the second input, determining a first target area from the to-be-processed area and reading aloud the text content in the first target area.

Description

Screen reading method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a screen reading method and device.
Background
At present, most electronic devices have a screen reading function, but many application interfaces do not support the traditional screen reading function, and the user cannot tell which application interfaces support screen reading and which do not, so the function is very inconvenient to use. In addition, the existing screen reading function automatically reads aloud all the text information on the current display interface, which may produce a large number of distracting items; the user cannot select only the content that he or she wants read aloud, and the experience is poor.
Disclosure of Invention
The embodiments of the application aim to provide a screen reading method and device, which can solve the problem that a user of the existing screen reading function cannot independently select the screen content to be read aloud, resulting in a poor user experience.
In a first aspect, an embodiment of the present application provides a screen reading method, where the method includes: receiving a first input of a user to an electronic device; in response to the first input, determining at least one to-be-processed area on a display interface of the electronic device; receiving a second input of the user to the electronic device; and in response to the second input, determining a first target area from the to-be-processed area and reading aloud the text content in the first target area.
Optionally, the determining at least one to-be-processed area on the display interface of the electronic device includes: and under the condition that the display interface supports screen reading, determining at least one region to be processed on the display interface according to the interface container node of the display interface.
Optionally, the determining at least one to-be-processed area on the display interface of the electronic device includes: and under the condition that the display interface does not support screen reading, intercepting an image of the display interface, determining at least one region to be processed in the image of the display interface, and identifying text content in the region to be processed.
Optionally, after the at least one to-be-processed area has been determined on the display interface of the electronic device, the method further includes: adding a covering layer on the display interface; selecting the at least one to-be-processed area with a wire frame and highlighting the wire frame; and displaying the name of the current area within the wire frame of each to-be-processed area.
Optionally, the method further includes: determining, when the text content in the first target area is being read aloud, popup category information of a popup appearing on the display interface; pausing the reading of the text content in the first target area in a case that the popup category information is call-type popup information; and resuming the reading of the text content in the first target area after the user finishes the call.
Optionally, the method further includes: determining the area of the popup as a popup area to be processed in a case that the popup category information is non-call-type popup information; and in response to a third input of the user to the electronic device, pausing the reading of the text content in the first target area, determining a second target area from the popup area to be processed, and reading aloud the text content in the second target area.
In a second aspect, an embodiment of the present application provides a screen reading device, including a first receiving module, a first response module, a second receiving module and a second response module, wherein the first receiving module is configured to receive a first input of a user to an electronic device; the first response module is configured to determine, in response to the first input, at least one region to be processed on a display interface of the electronic device; the second receiving module is configured to receive a second input of the user to the electronic device; and the second response module is configured to determine, in response to the second input, a first target region from the region to be processed and read aloud the text content in the first target region.
Optionally, the first response module is further configured to determine, according to an interface container node of the display interface, at least one region to be processed on the display interface when the display interface supports screen reading.
Optionally, the first response module is further configured to: and under the condition that the display interface does not support screen reading, intercepting the image of the display interface, determining at least one region to be processed in the image of the display interface, and identifying text content in the region to be processed.
Optionally, after the display interface of the electronic device determines at least one region to be processed, the first response module is further configured to: and adding a covering layer on the display interface, selecting the at least one to-be-processed area by using a wire frame, highlighting the wire frame, and displaying the name of the current area in the wire frame of each to-be-processed area.
Optionally, the apparatus further comprises a popup classification module, a first popup processing module and a recovery module, wherein the popup classification module is configured to determine, when the text content in the first target area is being read aloud, popup category information of a popup appearing on the display interface; the first popup processing module is configured to pause the reading of the text content in the first target area in a case that the popup category information is call-type popup information; and the recovery module is configured to resume the reading of the text content in the first target area after the user finishes the call.
Optionally, the apparatus further comprises a second popup processing module and a third response module, wherein the second popup processing module is configured to determine the area of the popup as a popup area to be processed in a case that the popup category information is non-call-type popup information; and the third response module is configured to, in response to a third input of the user to the electronic device, pause the reading of the text content in the first target area, determine a second target area from the popup area to be processed, and read aloud the text content in the second target area.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, after receiving a first input of a user, the electronic device enters a screen reading mode; at this time, at least one region to be processed can be determined on the interface displayed by the electronic device, a first target region is determined from the regions to be processed through a second input of the user to the electronic device, and the text content in the first target region is read aloud. In this way, the user can select exactly the content in the interface that needs to be read aloud on the screen while unwanted distracting items are screened out, any interface can be read aloud with this method, and the user experience is improved.
Drawings
FIG. 1 is a flowchart of a screen reading method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an interface displayed by an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an interface displayed by another electronic device according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a screen reading device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the application may be practiced in sequences other than those illustrated or described herein, and that the terms "first," "second," and the like are generally used herein in a generic sense and do not limit the number of terms, e.g., the first term can be one or more than one. In addition, "and/or" in the specification and claims means at least one of connected objects, a character "/" generally means that a preceding and succeeding related objects are in an "or" relationship.
The screen reading method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Please refer to fig. 1, which is a flowchart illustrating a screen reading method according to an embodiment of the present application. The method can be applied to electronic equipment, and the electronic equipment can be a mobile phone, a tablet computer, a notebook computer and the like. As shown in fig. 1, the method may include steps S11 to S14, which will be described in detail below.
Step S11: receiving a first input of a user to the electronic device.
Step S12: in response to the first input, determining at least one region to be processed on a display interface of the electronic device.
In one example of this embodiment, the first input of the user to the electronic device may be an operation by which the user chooses to enter a screen reading mode. For example, the first input may be a voice instruction spoken by the user for entering the screen reading mode; after the screen reading mode has been entered through the voice instruction, the user may be prompted, through the interface or by voice, to continue the screen reading interaction by voice or by touch. Alternatively, an operation corresponding to entering the screen reading mode may be predefined; for example, for an electronic device controlled through a touch display screen, a touch gesture corresponding to entering the screen reading mode may be predefined, and once the touch gesture made by the user on the electronic device is detected to match the predefined touch gesture, the electronic device can be considered to have entered the screen reading mode. Predefining the operation that triggers the screen reading mode according to a specific application scenario or user requirement can satisfy the personalized requirements of the user.
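The following Kotlin sketch shows one possible way such a predefined touch gesture could be detected on Android; the double-tap gesture and the enterScreenReadingMode() helper are illustrative assumptions and not part of the patented method.

```kotlin
import android.app.Activity
import android.os.Bundle
import android.view.GestureDetector
import android.view.MotionEvent

// Minimal sketch: a double tap is used as the predefined "first input"
// that switches the activity into the screen reading mode.
class ReadingModeActivity : Activity() {
    private lateinit var detector: GestureDetector

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        detector = GestureDetector(this, object : GestureDetector.SimpleOnGestureListener() {
            override fun onDoubleTap(e: MotionEvent): Boolean {
                enterScreenReadingMode()   // first input detected
                return true
            }
        })
    }

    override fun onTouchEvent(event: MotionEvent): Boolean =
        detector.onTouchEvent(event) || super.onTouchEvent(event)

    private fun enterScreenReadingMode() {
        // here the regions to be processed would be determined and the overlay shown
    }
}
```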
In one example of the embodiment, the interface displayed by the electronic device may be any interface, including an interface supporting the screen reading function and an interface not supporting the screen reading function. The number of the areas to be processed can be one or more, and can be determined according to the current interface.
In an example of this embodiment, determining, at a display interface of an electronic device, at least one region to be processed includes: and under the condition that the display interface supports the screen reading function, determining at least one region to be processed on the display interface according to the interface container node of the display interface.
In one example of this embodiment, the interface displayed by the electronic device may be an application interface, a web page interface, or the like. For an application interface, the interface container nodes in the interface can be obtained by traversing the View tree of the current interface. For example, in the interface shown in fig. 2, there are two primary interface container nodes, namely a linear layout node 21 (LinearLayout) and a frame layout node 25 (FrameLayout), and the linear layout node 21 contains three secondary interface container nodes, namely a text box node 22 (TextView), a text box node 23 and a text box node 24. The regions to be processed may be determined at the level of the primary nodes, or down to the secondary interface container nodes, in which case the current interface is divided into four regions to be processed, namely the text box node 22, the text box node 23, the text box node 24 and the frame layout node 25. For a web page interface, similarly to the application interface, the regions may also be determined from the individual nodes obtained after the web page is parsed.
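As a rough illustration of this step, the following Kotlin sketch walks the View tree of the current interface and collects each TextView as a candidate region; the Region data class and the naming scheme are assumptions introduced only for the example, and a real implementation could equally keep container nodes such as FrameLayout as regions.

```kotlin
import android.graphics.Rect
import android.view.View
import android.view.ViewGroup
import android.widget.TextView

// Illustrative region descriptor: name shown to the user, on-screen bounds,
// and the text that would be read aloud.
data class Region(val name: String, val bounds: Rect, val text: CharSequence?)

// Depth-first traversal of the View tree; every TextView becomes a region.
fun collectRegions(root: View, out: MutableList<Region> = mutableListOf()): List<Region> {
    when (root) {
        is TextView -> {
            val bounds = Rect()
            root.getGlobalVisibleRect(bounds)
            out.add(Region("Region ${out.size + 1}", bounds, root.text))
        }
        is ViewGroup -> for (i in 0 until root.childCount) {
            collectRegions(root.getChildAt(i), out)
        }
    }
    return out
}
```

For the interface of fig. 2, calling collectRegions on the root of the View tree would, under these assumptions, yield the text box nodes 22, 23 and 24 as separate regions.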
It should be noted that, although the example describes a specific example of determining the area to be processed according to the interface container node, those skilled in the art can understand that the type or number of the specific interface container node can be flexibly set according to the actual situation.
In an example of this embodiment, determining at least one to-be-processed area on a display interface of an electronic device includes: and under the condition that the display interface does not support the screen reading, intercepting the image of the display interface, determining at least one region to be processed in the image of the display interface, and identifying the text content in the region to be processed.
In one example of this embodiment, if the current display interface does not support the screen reading function, an image of the current interface may be captured in the background, and at least one region to be processed may be determined in that image through image recognition. For example, the portions of the interface image that contain characters may be recognized through an image recognition technology, and the different character-containing portions may each be determined as a region to be processed. After the regions to be processed have been determined, the specific text content in each region can be further recognized for reading aloud.
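One possible realization of this step (an assumption for illustration, not a recognizer prescribed by the application) is sketched below: a screenshot bitmap is passed to ML Kit's on-device text recognizer and each recognized text block becomes a Region as defined in the previous sketch. Capturing the screenshot itself (for example via MediaProjection) is assumed to have happened already, and a Chinese-script recognizer would be substituted for the Latin one where appropriate.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Turns each recognized block of text in the screenshot into a region to be
// processed; the callback receives an empty list if recognition fails.
fun recognizeRegions(screenshot: Bitmap, onDone: (List<Region>) -> Unit) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    recognizer.process(InputImage.fromBitmap(screenshot, 0))
        .addOnSuccessListener { result ->
            val regions = result.textBlocks.mapIndexed { i, block ->
                Region("Region ${i + 1}", block.boundingBox ?: Rect(), block.text)
            }
            onDone(regions)
        }
        .addOnFailureListener { onDone(emptyList()) }
}
```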
It should be noted that, although the example describes a specific way of determining the regions to be processed through image recognition, those skilled in the art can understand that the specific recognition method can be flexibly set according to the actual situation.
In one example of this embodiment, after the at least one region to be processed has been determined on the display interface of the electronic device, the method further includes: adding a covering layer on the display interface; selecting the at least one region to be processed with a wire frame and highlighting the wire frame; and displaying the name of the current region within the wire frame of each region to be processed.
In this example, as shown in the interface of the electronic device in fig. 3, the wire frame is the outline frame 26 displayed on the display interface; after a region to be processed has been determined, a wire frame outlining that region can be added on the display interface. In one example of this embodiment, after the regions to be processed have been determined on the interface, a covering layer may first be added on the current interface, each region to be processed is then enclosed by a wire frame, and the wire frames are highlighted, so that the selectable regions to be processed stand out and are easy for the user to select. Moreover, the name of the current region may be displayed within the wire frame of each region to be processed. The name of the current region may be the name of the interface container node, or a name assigned according to a naming rule; for example, as shown in fig. 3, the regions to be processed may be named in order from top to bottom, the uppermost region being region 1 and the lowermost region being region 3, and each region to be processed in the figure is enclosed by a wire frame. In another example, wire frames of different colors may be used to distinguish different regions.
In this embodiment, after the regions to be processed have been determined, their positions and contents can be highlighted by adding a covering layer and enclosing each region with a highlighted wire frame, so that the user can select the region to be read aloud through the second input, which improves the user experience.
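A minimal sketch of such a covering layer follows: a custom View that dims the interface, draws a highlighted wire frame around each region to be processed and writes the region's name inside the frame. How the view is attached (for example through WindowManager or on top of the activity's decor view) and the colors used are illustrative assumptions.

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.view.View

class RegionOverlayView(context: Context, private val regions: List<Region>) : View(context) {
    private val mask = Paint().apply { color = Color.argb(96, 0, 0, 0) }        // covering layer
    private val frame = Paint().apply {
        style = Paint.Style.STROKE; strokeWidth = 6f; color = Color.YELLOW      // highlighted wire frame
    }
    private val label = Paint().apply { color = Color.YELLOW; textSize = 42f }  // region name

    override fun onDraw(canvas: Canvas) {
        canvas.drawRect(0f, 0f, width.toFloat(), height.toFloat(), mask)
        regions.forEach { r ->
            canvas.drawRect(r.bounds, frame)
            canvas.drawText(r.name, r.bounds.left + 12f, r.bounds.top + 48f, label)
        }
    }
}
```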
Step S13: receiving a second input of the user to the electronic device.
Step S14: in response to the second input, determining a first target area from the areas to be processed, and reading aloud the text content in the first target area.
In this embodiment, the first target area is an area whose content the user wants read aloud from the screen, and there may be one or more first target areas. When the text content in the first target areas is read aloud, the areas may be read in sequence according to the order of their names; for example, reading starts from region 1.
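The reading itself could be performed with the platform TextToSpeech engine, as in the hedged sketch below; the RegionReader name, the name-order sorting and the Locale choice are assumptions introduced for illustration.

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Reads the selected target regions aloud in name order and can pause,
// e.g. when a call-type popup appears.
class RegionReader(context: Context) {
    private lateinit var tts: TextToSpeech

    init {
        tts = TextToSpeech(context) { status ->
            if (status == TextToSpeech.SUCCESS) tts.setLanguage(Locale.CHINESE)
        }
    }

    fun read(targets: List<Region>) {
        targets.sortedBy { it.name }.forEach { r ->
            tts.speak(r.text ?: "", TextToSpeech.QUEUE_ADD, null, r.name)
        }
    }

    fun pause() {
        tts.stop()
    }
}
```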
In one example of this embodiment, the text content in the interface can be obtained from the node text of the interface container nodes, or by recognizing the characters in the interface image through image recognition.
In one example of this embodiment, the second input includes a voice instruction, spoken by the user, that contains the name of a region.
In one example of this embodiment, the second input of the user to the electronic device is the operation by which the user determines the first target area from the areas to be processed. The second input may be a voice instruction spoken by the user for selecting the first target area; for example, the user issues the voice instruction "select region 1" to the electronic device, and the electronic device, in response to the voice instruction, selects region 1 from the areas to be processed as the first target area.
In another example, the second input of the user to the electronic device may also be an operation of clicking on the screen of the electronic device to determine the first target area. For example, the user clicks the position of region 1, and the electronic device, in response to the click, selects region 1 from the areas to be processed as the first target area.
It should be noted that although the example describes a specific example of the second input of the user, those skilled in the art can understand that the specific form of the second input can be flexibly set according to the actual situation.
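Purely as an illustration of resolving the second input, the following sketch maps either a recognized voice command containing a region name or a tap position to the first target region; the command phrasing and the helper name are assumed for the example.

```kotlin
// Resolves the second input: a voice command such as "select region 1" wins
// if it names a region, otherwise the tapped position is matched against the
// region bounds. Returns null if nothing matches.
fun resolveTarget(regions: List<Region>, command: String?, tapX: Int, tapY: Int): Region? {
    command?.let { c ->
        regions.firstOrNull { c.contains(it.name, ignoreCase = true) }?.let { return it }
    }
    return regions.firstOrNull { it.bounds.contains(tapX, tapY) }
}
```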
In the embodiment of the application, after receiving a first input of a user, the electronic device enters a screen reading mode; at this time, at least one area to be processed can be determined on the interface displayed by the electronic device, a first target area is determined from the areas to be processed through a second input of the user to the electronic device, and the text content in the first target area is read aloud. In this way, the user can select exactly the content in the interface that needs to be read aloud while unwanted distracting items are screened out, any interface can be read aloud with this method, and the user experience is improved.
In one example of this embodiment, when the text content in the first target area is being read aloud, popup category information of a popup appearing on the display interface is determined; in a case that the popup category information is call-type popup information, the reading of the text content in the first target area is paused, and after the user ends the call, the reading of the text content in the first target area is resumed.
In one example of this embodiment, after the user selects the first target area, the electronic device reads the content in the first target area aloud. If the electronic device receives a popup at this time, for example a telephone popup, a voice call popup or an application message popup, the type of the popup may be determined first. If the popup category information is call-type popup information, such as a telephone popup or a voice call popup, the reading of the text content in the first target area may be paused first and resumed after the user ends the call. Ending the call may mean that the user accepts and completes the call, or directly rejects it.
In this example, when the user is using the screen reading function and a call-type popup appears, the electronic device can pause the reading of the target area first and resume it after the call ends, which improves the user experience.
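One way to notice such call-type interruptions (an assumption; the application does not prescribe a specific API) is to listen for telephony state changes and pause or resume the reading accordingly, as sketched below with the pre-API-31 PhoneStateListener; the READ_PHONE_STATE permission is assumed to be granted.

```kotlin
import android.content.Context
import android.telephony.PhoneStateListener
import android.telephony.TelephonyManager

// Pauses the reading when a call arrives or is in progress and resumes it
// once the call ends (accepted and finished, or rejected).
fun watchCalls(context: Context, reader: RegionReader, resume: () -> Unit) {
    val telephony = context.getSystemService(Context.TELEPHONY_SERVICE) as TelephonyManager
    telephony.listen(object : PhoneStateListener() {
        override fun onCallStateChanged(state: Int, phoneNumber: String?) {
            when (state) {
                TelephonyManager.CALL_STATE_RINGING,
                TelephonyManager.CALL_STATE_OFFHOOK -> reader.pause()
                TelephonyManager.CALL_STATE_IDLE -> resume()
            }
        }
    }, PhoneStateListener.LISTEN_CALL_STATE)
}
```

On API 31 and above, TelephonyCallback.CallStateListener would replace the deprecated PhoneStateListener; the control flow stays the same.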
In one example of this embodiment, in a case that the popup category information is non-call-type popup information, the area of the popup is determined as a popup area to be processed; in response to a third input of the user to the electronic device, the reading of the text content in the first target area is paused, a second target area is determined from the popup area to be processed, and the text content in the second target area is read aloud.
In one example of this embodiment, there may be one or more pieces of non-call-type popup information. In this case, the area where each popup appears can be determined as a popup area to be processed by the methods described above, for example according to the container node of the popup or through image recognition.
The third input of the user is the input by which the user selects, from the popup areas to be processed, the second target area to be read aloud; the specific form of the third input is similar to that of the first input and the second input. When the user selects the second target area from the popup areas to be processed through the third input, the reading of the text content in the first target area is paused, and the text content of the second target area, that is, of the popup area selected by the user, is read aloud first. In addition, the number of second target areas selected by the user may be one or more.
In this example, when the user is using the screen reading function and a non-call-type popup appears, the electronic device can pause the reading of the target area, determine the area where the popup is located as an area to be processed, and read the text content of the popup aloud in response to the user's operation, which improves the user experience.
In one example of this embodiment, in a case that the popup category information is non-call-type popup information, determining the area where the electronic device displays the popup as a popup area to be processed includes: in a case that there are multiple pieces of non-call-type popup information, determining the level of each piece of non-call-type popup information, where a popup that can be completely displayed on the electronic device is a top-level popup, and determining the area in which the electronic device displays a top-level popup as a popup area to be processed.
In one example of this embodiment, if there are multiple pieces of non-call-type popup information, the popups may block one another. In this case, the popup level of each piece of popup information may be determined first; for example, the interface on which the popups currently appear may be captured, and through image recognition the popups that are displayed completely are identified as top-level popups, while a popup blocked by a top-level popup belongs to the next level. After all top-level popups have been identified, the areas where the top-level popups are located are determined as popup areas to be processed for the user to select from. The number of popup areas to be processed may be the same as the number of top-level popups.
In this example, when the user is using the screen reading function and multiple pieces of non-call-type popup information appear, the popups may block one another; the popup levels can then be identified through image recognition, and only the top-level popups that can be completely displayed are determined for the user's subsequent selection. In this way, the problem of inaccurately determined areas caused by popups blocking one another can be solved, the accuracy of the screen-read content is further improved, interference from other areas is eliminated, and the user experience is improved.
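The following sketch illustrates keeping only the top-level popups under the simplifying assumption that each popup's bounds are already known, so that a geometric overlap test stands in for the image recognition step described above; the popup list is assumed to be ordered from bottom to top.

```kotlin
import android.graphics.Rect

// A popup region is top-level if no later (higher) popup overlaps it;
// only those regions are offered to the user as popup areas to be processed.
fun topLevelPopups(popups: List<Region>): List<Region> =
    popups.filterIndexed { i, p ->
        popups.drop(i + 1).none { Rect.intersects(p.bounds, it.bounds) }
    }
```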
In one example of this embodiment, after the text content in the second target area has been read aloud, the top-level popup is closed and the step of determining the level of each non-call-type popup is performed again.
In one example of this embodiment, after the text content in the second target area has been read aloud, that is, after the content of the top-level popup that the user wanted read aloud has been read, the top-level popup may be closed and the step of determining the popup levels may be performed again, so that the areas of the popups previously blocked by that top-level popup can be determined and their content can in turn be read with the screen reading function. This improves the user experience.
Corresponding to the above embodiments, referring to fig. 4, an embodiment of the present application further provides a screen reading device 100, including a first receiving module 101, a first response module 102, a second receiving module 103 and a second response module 104, wherein the first receiving module 101 is configured to receive a first input of a user to an electronic device; the first response module 102 is configured to determine, in response to the first input, at least one to-be-processed area on a display interface of the electronic device; the second receiving module 103 is configured to receive a second input of the user to the electronic device; and the second response module 104 is configured to determine, in response to the second input, a first target area from the to-be-processed area and read aloud the text content in the first target area.
Optionally, the first response module is further configured to determine, according to an interface container node of the display interface, at least one to-be-processed area on the display interface when the display interface supports screen reading.
Optionally, the first response module is further configured to: and under the condition that the display interface does not support screen reading, intercepting the image of the display interface, determining at least one to-be-processed area in the image of the display interface, and identifying text contents in the to-be-processed area.
Optionally, after the display interface of the electronic device determines at least one to-be-processed area, the first response module is further configured to: adding a covering layer on a display interface, selecting at least one to-be-processed area by using a wire frame, highlighting the wire frame, and displaying the name of the current area in the wire frame of each to-be-processed area.
Optionally, the apparatus further comprises: the device comprises a popup classification module, a first popup processing module and a recovery module, wherein the popup classification module is used for determining popup category information of a popup appearing in a display interface when text content in a first target area is read aloud, the first popup processing module is used for suspending reading the text content in the first target area when the popup category information is call type popup information, and the recovery module is used for recovering reading the text content in the first target area after a user finishes a call.
Optionally, the apparatus further comprises a second popup processing module and a third response module, wherein the second popup processing module is configured to determine the area of the popup as a popup area to be processed in a case that the popup category information is non-call-type popup information; and the third response module is configured to, in response to a third input of the user to the electronic device, pause the reading of the text content in the first target area, determine a second target area from the popup area to be processed, and read aloud the text content in the second target area.
In this example, a device is provided that enables the electronic device to enter a screen reading mode after receiving a first input of a user; at this time, at least one area to be processed can be determined on the interface displayed by the electronic device, a first target area is determined from the areas to be processed through a second input of the user to the electronic device, and the text content in the first target area is read aloud. In this way, the user can select the content in the interface that needs to be read aloud while unwanted distracting items are screened out, any interface can be read aloud with this method, and the user experience is improved.
The screen reading device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiment of the present application is not particularly limited.
The screen reading device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiment of the present application.
The screen reading device provided by the embodiment of the application can realize each process realized by the method embodiment, and is not repeated here to avoid repetition.
Corresponding to the foregoing embodiments, optionally, as shown in fig. 5, an embodiment of the present application further provides an electronic device 800, including a processor 910, a memory 909, and a program or instructions stored in the memory 909 and executable on the processor 910, where the program or instructions, when executed by the processor 910, implement each process of the foregoing screen reading method embodiments and can achieve the same technical effects, which are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which will not be described in detail here.
The processor 910 is configured to determine at least one to-be-processed region on a display interface of the electronic device in response to a first input, receive a second input to the electronic device from a user, determine a first target region from the to-be-processed region in response to the second input, and read text content in the first target region.
Optionally, the processor 910 is configured to determine, according to an interface container node of the display interface, at least one to-be-processed area on the display interface when the display interface supports screen reading.
Optionally, the processor 910 is configured to, when the display interface does not support screen reading, intercept an image of the display interface, determine at least one to-be-processed area in the image of the display interface, and identify text content in the to-be-processed area.
Optionally, the processor 910 is configured to, after the at least one to-be-processed region is determined on the display interface of the electronic device, add a masking layer on the display interface, select the at least one to-be-processed region by using a wire frame, highlight the wire frame, and display a name of the current region in the wire frame of each to-be-processed region.
Optionally, the processor 910 is configured to determine, when the text content in the first target area is read aloud, pop-up type information of a pop-up window appearing on the display interface, suspend reading aloud of the text content in the first target area when the pop-up type information is call type pop-up window information, and resume reading aloud of the text content in the first target area after the user ends the call.
Optionally, the processor 910 is configured to determine the popup area as a popup area to be processed when the popup category information is non-call type popup information, suspend reading text content in the first target area in response to a third input of the user to the electronic device, determine a second target area from the popup area to be processed, and read text content in the second target area.
In this example, an electronic device is provided, where a processor controls the electronic device to enter a screen reading mode after receiving a first input from a user, where at least one region to be processed may be determined on an interface displayed by the electronic device, a first target region is determined from the regions to be processed through a second input from the user to the electronic device, and text content in the first target region is read aloud. By the method, the user can select the content needing to be read aloud on the screen in the interface, the unwanted interference items are shielded, all the interfaces can be read aloud by the method, and the use experience of the user is improved.
It should be understood that, in the embodiment of the present application, the input unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042, and the Graphics Processing Unit 9041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071 is also referred to as a touch screen and may include two parts, a touch detection device and a touch controller. The other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 909 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 910 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor may not be integrated into the processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing screen reading method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above-mentioned screen reading method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions recited, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the above embodiment method can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better embodiment. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), and including instructions for enabling a terminal (e.g., mobile phone, computer, server, or network device) to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the scope of the invention as defined by the appended claims.

Claims (12)

1. A method for reading a screen is applied to an electronic device, and is characterized in that the method comprises the following steps:
receiving a first input of a user to the electronic equipment;
in response to the first input, determining at least one to-be-processed area on a display interface of the electronic equipment;
receiving a second input of the electronic equipment from a user;
and responding to the second input, determining a first target area from the to-be-processed area, and reading text content in the first target area.
2. The method of claim 1, wherein the determining at least one area to be processed on the display interface of the electronic device comprises:
and under the condition that the display interface supports screen reading, determining at least one region to be processed on the display interface according to the interface container node of the display interface.
3. The method of claim 1, wherein the determining at least one area to be processed on the display interface of the electronic device comprises:
intercepting an image of the display interface under the condition that the display interface does not support screen reading;
and determining at least one region to be processed in the image of the display interface, and identifying text content in the region to be processed.
4. The method according to claim 1, wherein after the display interface of the electronic device determines the at least one region to be processed, the method further comprises:
adding a covering layer on the display interface;
selecting the at least one region to be processed by using a wire frame, and highlighting the wire frame;
and displaying the name of the current area in the wire frame of each area to be processed.
5. The method of claim 1, further comprising:
when the text content in the first target area is read aloud, determining popup category information of popup occurring in the display interface;
under the condition that the popup type information is the call type popup information, pausing reading of text content in the first target area;
and after the user finishes the call, resuming to read the text content in the first target area.
6. The method of claim 5, further comprising:
determining the area of the popup as a popup area to be processed under the condition that the popup type information is non-conversation type popup information;
in response to a third input of the electronic device by the user, pausing the reading of the text content in the first target area, determining a second target area from the to-be-processed popup area, and reading the text content in the second target area.
7. A device for reading aloud screen, applied to electronic equipment, the device comprising:
the first receiving module is used for receiving a first input of a user to the electronic equipment;
the first response module is used for responding to the first input and determining at least one area to be processed on a display interface of the electronic equipment;
the second receiving module is used for receiving a second input of the user to the electronic equipment;
and the second response module is used for responding to the second input, determining a first target area from the to-be-processed area and reading text content in the first target area.
8. The apparatus of claim 7, wherein the first response module is further configured to determine at least one to-be-processed area in the display interface according to an interface container node of the display interface if the display interface supports screen reading.
9. The apparatus of claim 7, wherein the first response module is further configured to: intercepting an image of the display interface under the condition that the display interface does not support screen reading;
and determining at least one region to be processed in the image of the display interface, and identifying text content in the region to be processed.
10. The apparatus of claim 7, wherein after the display interface of the electronic device determines the at least one region to be processed, the first response module is further configured to:
adding a covering layer on the display interface;
selecting the at least one region to be processed by using a wire frame, and highlighting the wire frame;
and displaying the name of the current area in the wire frame of each area to be processed.
11. The apparatus of claim 7, further comprising:
the popup classification module is used for determining popup category information of popup appearing on the display interface when the text content in the first target area is read aloud;
the first popup processing module is used for pausing reading of the text content in the first target area under the condition that the popup category information is the call popup information;
and the restoring module is used for restoring the text content in the first target area after the user finishes the call.
12. The apparatus of claim 11, further comprising:
the second popup processing module is used for determining the area of the popup as a popup area to be processed under the condition that the popup type information is non-conversation type popup information;
and the third response module is used for responding to a third input of the user to the electronic equipment, pausing the reading of the text content in the first target area, determining a second target area from the to-be-processed popup area, and reading the text content in the second target area.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210733636.1A CN115248650B (en) 2022-06-24 2022-06-24 Screen reading method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210733636.1A CN115248650B (en) 2022-06-24 2022-06-24 Screen reading method and device

Publications (2)

Publication Number Publication Date
CN115248650A true CN115248650A (en) 2022-10-28
CN115248650B CN115248650B (en) 2024-05-24

Family

ID=83699640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210733636.1A Active CN115248650B (en) 2022-06-24 2022-06-24 Screen reading method and device

Country Status (1)

Country Link
CN (1) CN115248650B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5945973A (en) * 1995-05-12 1999-08-31 Hitachi, Ltd. Position reader
JP2001265566A (en) * 2000-03-15 2001-09-28 Casio Comput Co Ltd Electronic book device and sound reproduction system
CN102099778A (en) * 2008-07-18 2011-06-15 夏普株式会社 Content display device, content display method, program, and recording medium
CN101377797A (en) * 2008-09-28 2009-03-04 腾讯科技(深圳)有限公司 Method for controlling game system by voice
CN103957447A (en) * 2014-05-08 2014-07-30 济南四叶草信息技术有限公司 Multi-window floating playing system
CN104049847A (en) * 2014-06-30 2014-09-17 宇龙计算机通信科技(深圳)有限公司 Information prompt method and system of mobile terminal
CN104409076A (en) * 2014-12-02 2015-03-11 上海语知义信息技术有限公司 Voice control system and voice control method for chess and card games
CN105513594A (en) * 2015-11-26 2016-04-20 许传平 Voice control system
US20170237791A1 (en) * 2016-02-17 2017-08-17 Quickbiz Holdings Limited, Apia User interface content state synchronization across devices
CN107436748A (en) * 2017-07-13 2017-12-05 普联技术有限公司 Handle method, apparatus, terminal device and the computer-readable recording medium of third-party application message
CN109150692A (en) * 2018-07-28 2019-01-04 北京旺马科技有限公司 Message automatic broadcasting method, system, car-mounted terminal and handheld device
CN109407946A (en) * 2018-09-11 2019-03-01 昆明理工大学 Graphical interfaces target selecting method based on speech recognition
CN109803050A (en) * 2019-01-14 2019-05-24 南京点明软件科技有限公司 A kind of full frame guidance click method suitable for operation by blind mobile phone
CN112399237A (en) * 2020-10-22 2021-02-23 维沃移动通信(杭州)有限公司 Screen display control method and device and electronic equipment
CN114461170A (en) * 2022-01-27 2022-05-10 山东省城市商业银行合作联盟有限公司 Page reading method and system for mobile banking application program

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"基于Android平台的盲人手机系统设计与开发" *
KUEI-CHUN LIU 等: "Voice Helper: A Mobile Assistive System for Visually Impaired Persons", 《2015 IEEE 》 *
大名张无忌: "【无障碍】自动朗读的弹窗和浮层实现_aria-modal-CSDN博客", 《HTTPS://BLOG.CSDN.NET/QQ_40029828/ARTICLE/DETAILS/121428539 》, pages 1 *
敢心工程师: "华为手机双指识屏朗读怎么选择区域", 《HTTPS://WEN.BAIDU.COM/QUESTION/1648370629804001820.HTML》, pages 1 *

Also Published As

Publication number Publication date
CN115248650B (en) 2024-05-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant