CN112905078A - Page element processing method and device and electronic equipment - Google Patents


Info

Publication number
CN112905078A
CN112905078A
Authority
CN
China
Prior art keywords
page
target
focus
target element
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110492470.4A
Other languages
Chinese (zh)
Inventor
江雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Koubei Network Technology Co Ltd
Original Assignee
Zhejiang Koubei Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Koubei Network Technology Co Ltd filed Critical Zhejiang Koubei Network Technology Co Ltd
Priority to CN202110492470.4A priority Critical patent/CN112905078A/en
Publication of CN112905078A publication Critical patent/CN112905078A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

The embodiments of the present application provide two page element processing methods and apparatuses, an electronic device, and a computer storage medium. In the first page element processing method, when a target application is in a visually impaired user operation mode and a trigger operation for a target element displayed in a first page of the target application is detected, it is determined whether the target element is a focus element; if the target element is determined to be a focus element, the target element is presented in the first page in the form of a focus element. Thus, in the visually impaired user operation mode, the target element can be presented as a focus element based on the visually impaired user's trigger operation on it, and the target element can then be recognized in subsequent processing. This solves the problem of how to accurately recognize page elements such as images, icons, text, and buttons in a takeaway application page.

Description

Page element processing method and device and electronic equipment
Technical Field
The present application relates to the field of computer technology, and in particular to two page element processing methods. The present application also relates to two page element processing apparatuses, an electronic device, and a computer storage medium.
Background
With the rapid development of science and technology, a wide variety of terminal applications have emerged, bringing many conveniences to people's lives. Taking a takeaway application as an example, when a visually impaired user orders food with the takeaway application, the terminal's voice-over (screen reader) mode can be enabled to assist the user. Specifically, in this mode, when the visually impaired user taps an image, icon, text, or button on a page of the takeaway application, the terminal plays the recognition result of the tapped image, icon, text, or button by voice, helping the user understand the information on the page.
However, some images, icons, text, or buttons in a takeaway application page may not be independently touchable; for example, elements in a nested relationship in the takeaway application page usually cannot be touched independently. As a result, such elements cannot be recognized separately in the takeaway application, so the information on the takeaway application page is not recognized accurately in voice-over mode. How to accurately recognize page elements such as images, icons, text, and buttons in a takeaway application page has therefore become a problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a page element processing method to solve the problem of how to accurately recognize page elements such as images, icons, text, and buttons in a takeaway application page. The embodiments also provide another page element processing method, two page element processing apparatuses, an electronic device, and a computer storage medium.
The embodiment of the application provides a page element processing method, which comprises the following steps:
when a target application is in a visually impaired user operation mode, in response to detecting a trigger operation for a target element displayed in a first page of the target application, determining whether the target element is a focus element;
if the target element is a focus element, presenting the target element in the first page in the form of a focus element;
wherein the target element is an element that cannot be independently touched in a second page of the target application in a non-visually-impaired user operation mode, and the page content of the second page corresponds to the page content of the first page.
Optionally, the determining whether the target element is a focus element includes: determining whether the target element is a focus element based on a focus element information list, where the focus element information list is a list for storing target element information.
Optionally, the determining whether the target element is a focus element based on the focus element information list includes:
determining whether target element information corresponding to the target element can be found in the focus element information list.
Optionally, the page element processing method further includes: obtaining the focus element information list.
Optionally, the page element processing method further includes: sending, to a server, a request message for requesting the focus element information list;
the obtaining the focus element information list includes:
obtaining the focus element information list provided by the server in response to the request message.
Optionally, the obtaining the focus element information list includes:
obtaining elements that cannot be independently touched in the second page;
and adding the elements that cannot be independently touched to a pre-created focus element information list to obtain an updated focus element information list.
Optionally, the page element processing method further includes: determining whether the elements in the second page can be independently touched.
Optionally, the element that cannot be independently touched includes at least one of the following elements:
elements in a combination of elements having a side-by-side relationship;
elements in a combination of elements having a nested relationship.
Optionally, the page element processing method further includes: playing target information by voice, where the target information is obtained from information displayed in the first page after the target element is triggered while the target application is in the visually impaired user operation mode.
Optionally, the target element is an element for providing a delivery service for the visually impaired user;
the information displayed in the first page after the target element is triggered includes: information, displayed in the first page after the target element is triggered, for providing a delivery service for the visually impaired user.
An embodiment of the present application provides another page element processing method, including:
obtaining target element information of a target element that cannot be independently touched in a second page of a target application in a non-visually-impaired user operation mode;
and adding the target element information to a focus element information list corresponding to a first page of the target application in a visually impaired user operation mode, where the first page includes the target element and the page content of the first page corresponds to the page content of the second page.
Optionally, the target element that cannot be independently touched includes at least one of the following elements:
elements in a combination of elements having a side-by-side relationship;
elements in a combination of elements having a nested relationship.
Optionally, the page element processing method further includes: in response to detecting a trigger operation for the target element while the target application is in the visually impaired user operation mode, presenting the target element in the first page in the form of a focus element;
and playing target information by voice, where the target information is obtained from information displayed in the first page after the target element is triggered while the target application is in the visually impaired user operation mode.
Optionally, the target element is an element for providing a delivery service for the visually impaired user;
the information displayed in the first page after the target element is triggered includes: information, displayed in the first page after the target element is triggered, for providing a delivery service for the visually impaired user.
Correspondingly, an embodiment of the present application provides a page element processing apparatus, including:
a focus element determining unit, configured to, when a target application is in a visually impaired user operation mode, in response to detecting a trigger operation for a target element displayed in a first page of the target application, determine whether the target element is a focus element;
a presenting unit, configured to present the target element in the first page in the form of a focus element if the target element is a focus element;
wherein the target element is an element that cannot be independently touched in a second page of the target application in a non-visually-impaired user operation mode, and the page content of the second page corresponds to the page content of the first page.
Correspondingly, an embodiment of the present application provides another page element processing apparatus, including:
a target element information obtaining unit, configured to obtain target element information of a target element that cannot be independently touched in a second page of a target application in a non-visually-impaired user operation mode;
and an adding unit, configured to add the target element information to a focus element information list corresponding to a first page of the target application in a visually impaired user operation mode, where the first page includes the target element and the page content of the first page corresponds to the page content of the second page.
Correspondingly, an embodiment of the present application provides an electronic device, including:
a processor;
a memory for storing a computer program which, when executed by the processor, performs either of the two page element processing methods described above.
Correspondingly, an embodiment of the present application provides a computer storage medium storing a computer program which, when run by a processor, performs either of the two page element processing methods described above.
Compared with the prior art, the embodiment of the application has the following advantages:
the embodiment of the application provides a page element processing method, which comprises the following steps: when the target application is in a vision-impairment user operation mode, in response to detecting a trigger operation for a target element displayed in a first page of the target application, determining whether the target element is a focus element; if the target element is the focus element, displaying the target element in the form of the focus element in the first page; the target element is an element which cannot be independently touched in a second page of the target application in the non-vision-impairment user operation mode, wherein the page content of the second page corresponds to the page content of the first page. In the embodiment, when the target application is in the visually impaired user operation mode and the trigger operation for the target element displayed in the first page of the target application is detected, whether the target element is the focus element is judged; and when the target element is judged to be the focus element, displaying the target element in the form of the focus element in the first page. Therefore, under the operation mode of the vision disorder user, the target element can be displayed in the form of the focus element based on the trigger operation of the vision disorder user on the target element, and then the target element can be identified in the subsequent process. The problem of how to accurately identify page elements such as images, icons, characters or buttons and the like in the takeaway application page is solved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them.
Fig. 1 is a first schematic diagram of an application scenario of a page element processing method provided in the present application.
Fig. 2 is a second schematic diagram of an application scenario of the page element processing method provided in the present application.
Fig. 3 is a third schematic view of an application scenario of the page element processing method provided in the present application.
Fig. 4 is a flowchart of a page element processing method according to a first embodiment of the present application.
Fig. 5 is a flowchart of a page element processing method according to a second embodiment of the present application.
Fig. 6 is a schematic diagram of a page element processing apparatus according to a third embodiment of the present application.
Fig. 7 is a schematic diagram of a page element processing apparatus according to a fourth embodiment of the present application.
Fig. 8 is a schematic view of an electronic device according to a fifth embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The present application provides two page element processing methods, and some embodiments of these methods can be applied to terminal applications. Taking as examples a takeaway application for ordering food and a shopping application for ordering goods, when such a terminal application is in the visually impaired user operation mode, it can play voice output based on the element the visually impaired user triggers on a page, helping the user understand the content of the currently displayed page.
When the terminal application is in the visually impaired user operation mode, the page element processing method of this embodiment plays voice for the element triggered by the visually impaired user in the following manner.
First, when the terminal application is in the visually impaired user operation mode, in response to detecting a trigger operation for a target element displayed in a first page of the terminal application, it is determined whether the target element is a focus element.
Specifically, the first page is the page currently displayed when the terminal application is in the visually impaired user operation mode. As shown in fig. 1, fig. 1 is a first schematic diagram of an application scenario of the page element processing method provided in the present application. The page illustrated in fig. 1 is the first page: an ordering page for the meals provided by merchant A, through which the visually impaired user can place an order.
When the terminal application is in the visually impaired user operation mode and the visually impaired user taps an element in the first page, the tapped element is presented in the form of a focus element. Specifically, the focus element form may be represented by the black box shown in fig. 1; for example, 'braised eggplant' in fig. 1 is shown in the form of a focus element by means of a black box.
In the prior art, only some page elements in a page can be touched independently when the terminal application is in the visually impaired user operation mode. Taking fig. 1 as an example, elements such as 'order', 'reviews', 'merchant', 'recommended dishes', 'new items', 'must-order items', 'fresh fruit', and 'lunch' can be independently touched. For an independently touchable element, the terminal application can play by voice the element information corresponding to the element touched by the visually impaired user.
For example, if the visually impaired user touches 'merchant', the terminal application can play the merchant's information by voice, such as when merchant A was established or how it came into being.
The elements 'order', 'reviews', 'merchant', 'recommended dishes', 'new items', 'must-order items', 'fresh fruit', and 'lunch' can be independently touched when the terminal application is in the visually impaired user operation mode; accordingly, these elements can also be independently touched when the terminal application is in the non-visually-impaired user operation mode.
Take the ordering page displayed when the terminal application is in the non-visually-impaired user operation mode as the second page, and the ordering page displayed when the terminal application is in the visually impaired user operation mode as the first page. The second page is the page shown in fig. 2, which is a second schematic diagram of an application scenario of the page element processing method provided by the present application. In this embodiment, the content displayed in the first page corresponds to that in the second page; in fact, the two pages may display the same content. The difference is only that in the first page, i.e. when the terminal application is in the visually impaired user operation mode, the triggered element is presented in the form of a focus element.
However, some elements of the second page, i.e. when the terminal application is in the non-visually-impaired user operation mode, cannot be touched independently by the user. For example, the elements shown in fig. 2 such as 'braised eggplant', 'No. 1 best-selling braised eggplant in area A', 'monthly sales 10,000+', '99.99% positive rating', and the price element '28' cannot be touched independently, because they are in a nested relationship. When the terminal application is in the non-visually-impaired user operation mode, the user can still view this content through the information displayed on the page.
However, when the terminal application is in the visually impaired user operation mode, the visually impaired user cannot view the content from the information displayed on the page; instead, the user touches an independently touchable element in the first page so that the terminal application plays the corresponding element information by voice.
Since elements in a nested relationship cannot be touched independently in the prior art, when the terminal application is switched from the non-visually-impaired user operation mode to the visually impaired user operation mode, the elements that cannot be independently touched in the second page still cannot be independently touched in the first page. Naturally, the element information of such elements cannot be played by voice in the prior art.
To solve this problem, in this embodiment the element information of the elements that cannot be independently touched in the second page is added to a pre-created focus element information list. When such an element is triggered, whether it is a focus element is determined based on the focus element information list: if the target element information corresponding to the element can be found in the list, the element is a focus element, and the triggered element is presented in the first page in the form of a focus element.
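The list-based bookkeeping described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the `Element` model, the `independently_touchable` flag, and all helper names are assumptions made for the sketch:

```python
# Minimal sketch of the focus element information list described above.
# The element model and all names here are illustrative assumptions.

class Element:
    def __init__(self, element_id, label, independently_touchable):
        self.element_id = element_id
        self.label = label
        self.independently_touchable = independently_touchable

class FocusElementList:
    """Pre-created list holding info for elements that cannot be touched independently."""
    def __init__(self):
        self._info = {}  # element_id -> element label (the "target element information")

    def add(self, element):
        # Only elements that cannot be independently touched (e.g. nested
        # elements) are added to the focus element information list.
        if not element.independently_touchable:
            self._info[element.element_id] = element.label

    def is_focus_element(self, element_id):
        # An element is treated as a focus element if its info is in the list.
        return element_id in self._info

# Build the list from the second page (non-visually-impaired mode), then use it
# when the same element is triggered in the first page (visually impaired mode).
second_page = [
    Element("dish-1", "Braised eggplant", independently_touchable=False),
    Element("tab-order", "Order", independently_touchable=True),
]
focus_list = FocusElementList()
for el in second_page:
    focus_list.add(el)

print(focus_list.is_focus_element("dish-1"))    # nested dish card -> True
print(focus_list.is_focus_element("tab-order"))  # already touchable -> False
```

The already-touchable tab is deliberately excluded: the list only needs to cover elements the platform's screen reader cannot reach on its own.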
Specifically, take an element that cannot be touched independently in the second page as the target element. When the terminal application is in the visually impaired user operation mode and a trigger operation for the target element shown in the first page is detected, whether the target element is a focus element is determined. For example, when 'braised eggplant' shown in fig. 1 is triggered by the visually impaired user, if the target element information corresponding to 'braised eggplant' can be found in the focus element information list, the target element is determined to be a focus element.
Then, if the target element is determined to be a focus element, it is shown in the first page in the form of a focus element. For example, in fig. 1 'braised eggplant' is the focus element and is shown in the form of a focus element.
The determination of whether the target element is a focus element can be completed through the interaction between the terminal application and the server shown in fig. 3, which is a third schematic diagram of an application scenario of the page element processing method provided by the present application. First, when 'braised eggplant' is triggered, the terminal application sends the server a request message asking whether the triggered target element, 'braised eggplant', is a focus element. The server then determines, based on the prestored focus element information list, whether 'braised eggplant' is a focus element; in this case it is. Finally, the server provides the terminal application with the determination result that 'braised eggplant' is a focus element.
After the terminal application obtains the determination result provided by the server, it presents 'braised eggplant' in the first page in the form of a focus element.
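The terminal/server exchange of fig. 3 can be sketched as a simple request and reply. The JSON message shapes and function names below are assumptions for illustration, not the protocol defined by the patent:

```python
# Sketch of the terminal/server interaction in fig. 3.
# The message format and the prestored list contents are illustrative assumptions.
import json

SERVER_FOCUS_LIST = {"dish-1": "Braised eggplant"}  # prestored on the server side

def server_handle(request_json):
    # Server: look up the triggered element in the prestored focus element
    # information list and return the determination result.
    request = json.loads(request_json)
    is_focus = request["element_id"] in SERVER_FOCUS_LIST
    return json.dumps({"element_id": request["element_id"],
                       "is_focus_element": is_focus})

def terminal_on_trigger(element_id):
    # Terminal: ask the server whether the triggered element is a focus
    # element; if so, present it as a focus element (e.g. the black box).
    reply = json.loads(server_handle(json.dumps({"element_id": element_id})))
    if reply["is_focus_element"]:
        return f"show '{element_id}' as focus element"
    return f"'{element_id}' is not a focus element"

print(terminal_on_trigger("dish-1"))  # -> show 'dish-1' as focus element
```

In a real deployment the call would go over the network rather than a direct function call; the round trip is collapsed here to keep the sketch self-contained.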
Figs. 1 to 3 introduced above illustrate one application scenario of the page element processing method of the present application. The embodiments of the present application do not specifically limit the application scenario; the scenario above is provided only to facilitate understanding of the page element processing method and is not used to limit it. Other application scenarios of the method are not described one by one here.
First embodiment
A first embodiment of the present application provides a page element processing method, which is described below with reference to fig. 4.
Please refer to fig. 4, which is a flowchart illustrating a page element processing method according to a first embodiment of the present application.
The page element processing method of the embodiment of the application comprises the following steps:
step S401: and when the target application is in the vision-impairment user operation mode, in response to detecting a trigger operation for a target element displayed in a first page of the target application, determining whether the target element is a focus element.
In this embodiment, a takeaway application is used as the example target application, and the scenario in which a visually impaired user orders through pages shown by the takeaway application is used to describe the page element processing method of this embodiment in detail.
Figs. 1 and 2 correspond, respectively, to the first page displayed by the takeaway application in the visually impaired user operation mode and the second page displayed in the non-visually-impaired user operation mode. In fact, the content presented in the first page and the second page may be the same; the difference is only that in the first page, i.e. when the terminal application is in the visually impaired user operation mode, the triggered element is presented in the form of a focus element.
In this embodiment, when the takeaway application is in the visually impaired user operation mode, the visually impaired user may trigger the target element in the first page; with the page element processing method of this embodiment, after the target element is triggered, it is determined whether the target element is a focus element.
The target element may be any page element in the first page, for example 'order', 'reviews', 'merchant', 'recommended dishes', 'new items', 'must-order items', 'fresh fruit', or 'lunch', or elements such as 'braised eggplant', 'No. 1 best-selling braised eggplant in area A', 'monthly sales 10,000+', '99.99% positive rating', or the price element '28'.
In this embodiment, a focus element is an element that can be independently touched; specifically, an element that can be selected.
It can be seen in fig. 2 that in the take-away application is in the non-visually impaired user mode of operation, all elements in the page can be viewed by the non-visually impaired user through the information presented in fig. 2. The element information of the elements in the page does not need to be played through voice.
In contrast, when the takeaway application is in the visually impaired user operation mode, the takeaway application assists the visually impaired user in ordering food by responding to the detection of the target element triggered by the visually impaired user on the first page and playing the target element information of the target element in a voice manner.
However, not all elements in the page displayed by the takeaway application can be independently touched. When a visually impaired user triggers a target element that cannot be independently touched, the target element information of that element cannot be played by voice.
For this reason, when the takeaway application is in the visually impaired user operation mode, in response to detecting a trigger operation for a target element displayed in the first page of the takeaway application, it is determined whether the target element is a focus element.
Specifically, when a target element in a page as shown in fig. 1 is triggered, it is determined whether the target element is a focus element.
One way to determine whether the target element is a focus element is to determine this based on a focus element information list. In the present embodiment, the focus element information list is a list for storing target element information.
Determining whether the target element is a focus element based on the focus element information list may be performed as follows: determine whether the target element information corresponding to the target element can be found in the focus element information list. If it is found, the target element is a focus element; if it is not found, the target element is not a focus element.
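The lookup described above can be sketched as follows. This is a minimal illustration; the function name and the example list contents are hypothetical, not part of the patent.

```python
# Illustrative sketch: a triggered element is judged to be a focus element
# if its element information can be found in the focus element information
# list. All names here are hypothetical.

def is_focus_element(target_element_info, focus_element_info_list):
    """Return True when the element's information is found in the list."""
    return target_element_info in focus_element_info_list

# Example list: it stores the information of elements that cannot be
# independently touched in the second page.
focus_element_info_list = ["Braised Eggplant", "10,000+ monthly sales"]
```

For example, `is_focus_element("Braised Eggplant", focus_element_info_list)` returns `True`, while an independently touchable element such as "Order", which is not in the list, returns `False`.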
To be able to determine whether the target element is a focus element through the focus element information list, the focus element information list needs to be obtained in advance.
As one embodiment of obtaining the focus element information list: send a request message to a server requesting the focus element information list, and obtain the focus element information list provided by the server in response to the request message. In this case, the execution subject that obtains the list is the terminal application, and the preconfigured focus element information list is stored in advance on the server; when the terminal application needs the list, it only has to send the request message directly to the server.
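The request-to-server embodiment can be sketched as follows, assuming, purely hypothetically, an HTTP endpoint that returns the list as JSON; the URL, transport, and payload format are not specified by the patent.

```python
import json
from urllib.request import urlopen

# Placeholder endpoint: the patent does not specify a URL or payload format.
FOCUS_LIST_URL = "https://example.com/focus-element-info-list"

def parse_focus_list(response_text):
    """Decode the server's payload (assumed to be JSON) into a list of
    element information strings."""
    return json.loads(response_text)

def request_focus_list(url=FOCUS_LIST_URL):
    """Send the request message to the server and return the list it
    provides in response."""
    with urlopen(url) as response:
        return parse_focus_list(response.read().decode("utf-8"))
```

The terminal application would call `request_focus_list()` once, then keep the returned list for the lookups described above.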
As another embodiment of obtaining the focus element information list, the terminal application may itself store the list; in this approach, the terminal application preconfigures the focus element information list.
Specifically, the focus element information list may be preconfigured as follows: first, obtain the elements that cannot be independently touched in the second page; then, add these elements to a pre-created focus element information list to obtain an updated focus element information list.
As one way to obtain the elements that cannot be independently touched in the second page: obtain all elements in the second page, determine for each element whether it can be independently touched, and collect the elements that cannot be independently touched based on the determination result.
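A minimal sketch of this preconfiguration step, under assumed data structures; `PageElement` and its fields are hypothetical stand-ins for however the application actually represents page elements.

```python
from dataclasses import dataclass

@dataclass
class PageElement:
    info: str                      # element information, e.g. its label text
    independently_touchable: bool  # result of the touchability judgment

def collect_untouchable(second_page_elements):
    """Keep only the elements that cannot be independently touched."""
    return [e for e in second_page_elements if not e.independently_touchable]

def update_focus_list(second_page_elements, focus_element_info_list):
    """Add the collected elements' info to the pre-created list and return
    the updated focus element information list."""
    for element in collect_untouchable(second_page_elements):
        focus_element_info_list.append(element.info)
    return focus_element_info_list
```

Running this over the second page once yields the updated list that the visually impaired mode consults when an element is triggered.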
In this embodiment, some elements in the takeaway application can be independently touched and some cannot. For example, the elements "Order", "Reviews", "Merchant", "Recommended dishes", "New dishes", "Must-order", "Fresh fruit" and "Lunch" shown in fig. 2 can all be independently touched. If "Order" is triggered while the takeaway application is in the visually impaired user operation mode, the "Order" information can be played by voice.
In contrast, elements in fig. 2 such as "Braised Eggplant", "No. 1 braised eggplant seller in area A", "10,000+ monthly sales", "99.99% positive rating" and "¥28 off" cannot be independently touched when the takeaway application is in the non-visually impaired user operation mode. For example, "Braised Eggplant" cannot be independently touched; its element information is therefore added to the pre-created focus element information list to obtain the updated focus element information list, in which the "Braised Eggplant" element information then exists.
When the takeaway application is in the visually impaired user operation mode and the "Braised Eggplant" element shown in fig. 1 is triggered, the "Braised Eggplant" element information can be found in the updated focus element information list.
Step S402: if the target element is a focus element, the target element is presented in the form of a focus element in the first page.
After it is determined that the target element information corresponding to the triggered target element in the page shown in fig. 1 can be found in the updated focus element information list, it may be determined that the target element is a focus element. For example, when the takeaway application is in the visually impaired user operation mode and "Braised Eggplant" is triggered, the "Braised Eggplant" element information is found in the updated focus element information list, confirming that "Braised Eggplant" is a focus element.
After the triggered target element is confirmed to be a focus element, it is presented in the form of a focus element. For example, "Braised Eggplant" is presented in the page shown in fig. 1 in the form of a focus element.
It should be understood that in the present embodiment the target element may be an element that cannot be independently touched in the second page of the target application in the non-visually impaired user operation mode; that is, the target element of this embodiment cannot be touched independently in the non-visually impaired user operation mode. The page content of the second page corresponds to the page content of the first page.
In this embodiment, the elements that cannot be independently touched include at least one of the following elements: elements in a combination of elements having a side-by-side relationship; elements in a combination of elements having a nested relationship.
For example, an element combination having a side-by-side relationship may be the combination consisting of "Braised Eggplant", "No. 1 braised eggplant seller in area A", "10,000+ monthly sales", "99.99% positive rating" and "¥28 off". "Braised Eggplant" is then an element in a combination of elements having a side-by-side relationship.
An element combination having a nested relationship may be the braised eggplant picture and the round frame in the page shown in fig. 1: the picture is displayed inside the round frame, and the combination formed by these two elements has a nested relationship. The braised eggplant picture is then an element in a combination of elements having a nested relationship.
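The two kinds of combinations can be modeled, purely illustratively, as follows; the dictionary shapes and the helper function are assumptions, not structures defined by the patent.

```python
# Hypothetical data model: a side-by-side combination is a flat group of
# sibling elements, while a nested combination is one element displayed
# inside another.

parallel_combination = {
    "relationship": "side-by-side",
    "elements": ["Braised Eggplant", "No. 1 braised eggplant seller in area A",
                 "10,000+ monthly sales", "99.99% positive rating", "¥28 off"],
}

nested_combination = {
    "relationship": "nested",
    "outer": "round frame",
    "inner": "braised eggplant picture",
}

def combination_members(combination):
    """List the individual elements contained in either kind of combination."""
    if combination["relationship"] == "side-by-side":
        return combination["elements"]
    return [combination["outer"], combination["inner"]]
```

Every member returned by `combination_members` is a candidate for the focus element information list, since such members cannot be independently touched on their own.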
While the target element is presented in the form of a focus element, target information is played by voice in this embodiment. The target information is obtained from the information presented in the first page after the target element is triggered while the target application is in the visually impaired user operation mode. For example, in fig. 1, the information displayed after "Braised Eggplant" is triggered is its name, i.e. "Braised Eggplant", which can then be played directly by voice.
Of course, in the present embodiment, the target element is an element used to provide a delivery service to the visually impaired user. For example, the target element may be "Braised Eggplant", which belongs to the elements related to the delivery service.
Therefore, the information displayed in the first page after the target element is triggered may refer to information that is presented in the first page after the target element is triggered and that serves to provide the delivery service to the visually impaired user.
In this embodiment, when the target application is in the visually impaired user operation mode and a trigger operation for a target element displayed in the first page of the target application is detected, it is determined whether the target element is a focus element; if it is, the target element is presented in the first page in the form of a focus element. In the visually impaired user operation mode, the target element can thus be presented as a focus element based on the visually impaired user's trigger operation and recognized in subsequent processing, which solves the problem of accurately recognizing page elements such as images, icons, text or buttons in a takeaway application page.
Second embodiment
A second embodiment of the present application provides another page element processing method, which is described below with reference to fig. 5. Since the main content of the page element processing method of the second embodiment, namely the process of obtaining the updated focus element information list, has already been described in the first embodiment, the following description is brief; for relevant details, reference may be made to the related description of the first embodiment.
Please refer to fig. 5, which is a flowchart illustrating a page element processing method according to a second embodiment of the present application.
The page element processing method of this embodiment of the application comprises the following steps:
Step S501: obtain target element information of a target element that cannot be independently touched in a second page of a target application in the non-visually impaired user operation mode.
Step S502: add the target element information to a focus element information list corresponding to a first page of the target application in the visually impaired user operation mode, wherein the first page comprises the target element and the page content of the first page corresponds to the page content of the second page.
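Steps S501 and S502 can be sketched together as follows; the data shapes (pairs of element info and touchability) are assumptions made for illustration.

```python
def configure_focus_list(second_page_elements, focus_element_info_list):
    """Sketch of steps S501-S502 under assumed data shapes."""
    # Step S501: obtain the info of second-page elements that cannot be
    # independently touched; each element is an (info, touchable) pair.
    target_infos = [info for info, touchable in second_page_elements
                    if not touchable]
    # Step S502: add that info to the focus element information list that
    # corresponds to the first page.
    focus_element_info_list.extend(target_infos)
    return focus_element_info_list
```

After this preconfiguration, the visually impaired mode only needs a list lookup when an element in the first page is triggered.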
As in the first embodiment, in the present embodiment the target element that cannot be independently touched includes at least one of the following: an element in a combination of elements having a side-by-side relationship; an element in a combination of elements having a nested relationship.
In this embodiment, after the focus element information list is obtained, in response to detecting a trigger operation for the target element while the target application is in the visually impaired user operation mode, the target element is presented in the first page in the form of a focus element. As in the first embodiment, before the target element is presented in this form, it is first determined, based on the focus element information list, whether the target element is a focus element; if it is, the target element is presented in the first page in the form of a focus element.
While the target element is presented in the first page in the form of a focus element, target information can be played by voice; the target information is obtained from the information displayed in the first page after the target element is triggered while the target application is in the visually impaired user operation mode.
In this embodiment, the target element may also be an element used to provide a delivery service to the visually impaired user. Correspondingly, the information presented in the first page after the target element is triggered may refer to information that is presented in the first page after the target element is triggered and that serves to provide the delivery service to the visually impaired user.
In this embodiment, the target element information of the target element that cannot be independently touched in the second page of the target application in the non-visually impaired user operation mode is obtained, and that information is added to the focus element information list corresponding to the first page of the target application in the visually impaired user operation mode. In the visually impaired user operation mode, when the target element is subsequently triggered, it can therefore be determined, based on the focus element information list, whether the target element is a focus element; if it is, the target element is presented in the first page in the form of a focus element. The target element can thus be presented and recognized as a focus element based on the visually impaired user's trigger operation, which solves the problem of accurately recognizing page elements such as images, icons, text or buttons in a takeaway application page.
Third embodiment
Corresponding to the page element processing method provided in the first embodiment of the present application, a third embodiment of the present application provides a page element processing apparatus. Since the apparatus embodiment is substantially similar to the first embodiment, its description is relatively brief; for relevant details, reference may be made to the corresponding description of the first embodiment. The apparatus embodiments described below are merely illustrative.
Please refer to fig. 6, which is a diagram illustrating a page element processing apparatus according to a third embodiment of the present application.
The page element processing apparatus includes:
a focus element determination unit 601, configured to, when a target application is in a visually impaired user operation mode, determine whether a target element shown in a first page of the target application is a focus element in response to detecting a trigger operation for the target element;
a presentation unit 602, configured to, if the target element is a focus element, present the target element in the first page in the form of a focus element;
the target element is an element which cannot be independently touched in a second page of the target application in a non-vision-impairment user operation mode, wherein the page content of the second page corresponds to the page content of the first page.
Optionally, the focus element determining unit is specifically configured to: determining whether the target element is a focus element based on a focus element information list, the focus element information list being a list for storing target element information.
Optionally, the focus element determining unit is specifically configured to:
determine whether the target element information corresponding to the target element can be found in the focus element information list.
Optionally, the page element processing apparatus further includes: a focus element information list obtaining unit; the focus element information list obtaining unit is specifically configured to: obtaining the focus element information list.
Optionally, the page element processing apparatus further includes a request message sending unit, specifically configured to: send a request message to a server requesting the focus element information list;
the focus element information list obtaining unit is specifically configured to: obtain the focus element information list provided by the server in response to the request message.
Optionally, the focus element information list obtaining unit is specifically configured to:
obtain the elements that cannot be independently touched in the second page;
add the elements that cannot be independently touched to a pre-created focus element information list to obtain an updated focus element information list.
Optionally, the page element processing apparatus further includes a touch determination unit, specifically configured to: determine whether the elements in the second page can be independently touched.
Optionally, the element that cannot be independently touched includes at least one of the following elements:
elements in a combination of elements having a side-by-side relationship;
elements in a combination of elements having a nested relationship.
Optionally, the page element processing apparatus further includes a voice playing unit, specifically configured to: play target information by voice, wherein the target information is obtained according to the information displayed in the first page after the target element is triggered while the target application is in the visually impaired user operation mode.
Optionally, the target element is an element for providing a delivery service for the visually impaired user;
the information displayed in the first page after the target element is triggered comprises: information, displayed in the first page after the target element is triggered, for providing the delivery service for the visually impaired user.
Fourth embodiment
Corresponding to the page element processing method provided in the second embodiment of the present application, a fourth embodiment of the present application correspondingly provides a page element processing apparatus. Since the apparatus embodiment is substantially similar to the second embodiment, it is relatively simple to describe, and reference may be made to some descriptions of the second embodiment for relevant points. The device embodiments described below are merely illustrative.
Please refer to fig. 7, which is a diagram illustrating a page element processing apparatus according to a fourth embodiment of the present application.
The page element processing apparatus includes:
a target element information obtaining unit 701, configured to obtain target element information of a target element that cannot be independently touched in a second page of a target application in a non-visual impairment user operation mode;
an adding unit 702, configured to add the target element information into a focus element information list corresponding to a first page of the target application in a visually impaired user operation mode, where the first page includes the target element, and a page content of the first page corresponds to a page content of the second page.
Optionally, the target element information of the target element that cannot be independently touched includes at least one of the following elements:
elements in a combination of elements having a side-by-side relationship;
elements in a combination of elements having a nested relationship.
Optionally, the page element processing apparatus further includes a presentation unit and a voice playing unit. The presentation unit is specifically configured to: in response to detecting a trigger operation for the target element while the target application is in the visually impaired user operation mode, present the target element in the first page in the form of a focus element;
the voice playing unit is specifically configured to: play target information by voice, wherein the target information is obtained according to the information displayed in the first page after the target element is triggered while the target application is in the visually impaired user operation mode.
Optionally, the target element is an element for providing a delivery service for the visually impaired user;
the information displayed in the first page after the target element is triggered comprises: information, displayed in the first page after the target element is triggered, for providing the delivery service for the visually impaired user.
Fifth embodiment
Corresponding to the methods of the first to second embodiments of the present application, a fifth embodiment of the present application further provides an electronic device.
As shown in fig. 8, fig. 8 is a schematic view of an electronic device provided in a fifth embodiment of the present application.
The electronic device includes: a processor 801; and a memory 802 for storing a computer program that is executed by the processor to perform the page element processing method of the first or second embodiment.
Sixth embodiment
Corresponding to the methods of the first and second embodiments of the present application, a sixth embodiment of the present application further provides a computer storage medium storing a computer program that is executed by a processor to perform the page element processing method of the first or second embodiment.
Although the present application has been disclosed above with reference to preferred embodiments, they are not intended to limit the present application. Those skilled in the art can make possible variations and modifications without departing from the spirit and scope of the present application; therefore, the protection scope of the present application shall be subject to the scope defined by the claims.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
1. Computer-readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape and magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
2. As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (15)

1. A page element processing method is characterized by comprising the following steps:
when a target application is in a vision-impairment user operation mode, in response to detecting a trigger operation for a target element displayed in a first page of the target application, determining whether the target element is a focus element;
if the target element is a focus element, displaying the target element in the form of the focus element in the first page;
the target element is an element which cannot be independently touched in a second page of the target application in a non-vision-impairment user operation mode, wherein the page content of the second page corresponds to the page content of the first page.
2. The method of claim 1, wherein the determining whether the target element is a focus element comprises: determining whether the target element is a focus element based on a focus element information list, the focus element information list being a list for storing target element information.
3. The method of claim 2, wherein the determining whether the target element is a focus element based on a focus element information list comprises:
and judging whether the target element information corresponding to the target element can be found in the focus element information list.
4. The method of claim 2, further comprising: obtaining the focus element information list.
5. The method of claim 4, further comprising: sending a request message for requesting to obtain the focus element information list to a server;
the obtaining the focus element information list includes:
and obtaining the focus element information list provided by the server aiming at the request message.
6. The method of claim 4, wherein the obtaining the list of focus element information comprises:
obtaining elements which cannot be independently touched in the second page;
and adding the elements which cannot be independently touched into a focus element information list which is created in advance to obtain an updated focus element information list.
7. The method of claim 6, further comprising: determining whether the elements in the second page can be independently touched.
8. The method of claim 1, wherein the element that cannot be independently touched comprises at least one of:
elements in a combination of elements having a side-by-side relationship;
elements in a combination of elements having a nested relationship.
9. The method of claim 1, further comprising: playing target information by voice, wherein the target information is obtained according to information displayed in the first page after the target element is triggered while the target application is in the visually impaired user operation mode.
10. The method of claim 9, wherein the target element is an element for providing a delivery service for visually impaired users;
the information displayed in the first page after the target element is triggered comprises: information, displayed in the first page after the target element is triggered, for providing the delivery service for the visually impaired user.
11. A page element processing method is characterized by comprising the following steps:
obtaining target element information of a target element which cannot be independently touched in a second page of a target application in a non-vision-impairment user operation mode;
adding the target element information into a focus element information list corresponding to a first page of the target application in a visually impaired user operation mode, wherein the first page comprises the target element, and the page content of the first page corresponds to the page content of the second page.
12. A page element processing apparatus, comprising:
the device comprises a focus element judgment unit, a target application display unit and a display unit, wherein the focus element judgment unit is used for responding to the detection of the trigger operation of a target element displayed in a first page of the target application when the target application is in a vision disorder user operation mode and judging whether the target element is a focus element;
the display unit is used for displaying the target element in the form of a focus element in the first page if the target element is the focus element;
the target element is an element which cannot be independently touched in a second page of the target application in a non-vision-impairment user operation mode, wherein the page content of the second page corresponds to the page content of the first page.
13. A page element processing apparatus, comprising:
the target element information obtaining unit is used for obtaining target element information of a target element which cannot be independently touched in a second page of a target application in a non-vision-impairment user operation mode;
and the adding unit is used for adding the target element information into a focus element information list corresponding to a first page of the target application in a vision-impairment user operation mode, wherein the first page comprises the target element, and the page content of the first page corresponds to the page content of the second page.
14. An electronic device, comprising:
a processor;
a memory for storing a computer program for execution by the processor to perform the method of any one of claims 1 to 11.
15. A computer storage medium, characterized in that it stores a computer program that is executed by a processor to perform the method of any one of claims 1-11.
CN202110492470.4A 2021-05-06 2021-05-06 Page element processing method and device and electronic equipment Pending CN112905078A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110492470.4A CN112905078A (en) 2021-05-06 2021-05-06 Page element processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110492470.4A CN112905078A (en) 2021-05-06 2021-05-06 Page element processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112905078A true CN112905078A (en) 2021-06-04

Family

ID=76108979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110492470.4A Pending CN112905078A (en) 2021-05-06 2021-05-06 Page element processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112905078A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140282002A1 (en) * 2013-03-15 2014-09-18 Verizon Patent And Licensing Inc. Method and Apparatus for Facilitating Use of Touchscreen Devices
CN106406867A (en) * 2016-09-05 2017-02-15 深圳市联谛信息无障碍有限责任公司 Android system-based screen reading method and apparatus
US20180285028A1 (en) * 2017-03-31 2018-10-04 Canon Kabushiki Kaisha Job processing apparatus, method of controlling job processing apparatus, and recording medium
CN109117047A (en) * 2017-06-22 2019-01-01 西安中兴新软件有限责任公司 terminal control method and device, mobile terminal and computer readable storage medium
CN109947388A (en) * 2019-04-15 2019-06-28 腾讯科技(深圳)有限公司 The page broadcasts control method, device, electronic equipment and the storage medium of reading
CN111324275A (en) * 2018-12-17 2020-06-23 腾讯科技(深圳)有限公司 Broadcasting method and device for elements in display picture


Similar Documents

Publication Publication Date Title
US20210208770A1 (en) Apparatuses, methods and systems for hierarchical multidimensional information interfaces
KR102033189B1 (en) Gesture-based tagging to view related content
US9723037B2 (en) Communication associated with a webpage
RU2662632C2 (en) Presenting fixed format documents in reflowed format
US20140249935A1 (en) Systems and methods for forwarding users to merchant websites
JP6185216B1 (en) Information providing system, information providing apparatus, information providing method, and program
US10565385B1 (en) Substitute web content generation for detection and avoidance of automated agent interaction
CN106997372B (en) Method and device for realizing business operation based on picture
CN112948521B (en) Object handling method and device
CN106384264A (en) Information query method and terminal
US7634544B2 (en) Location based messaging
US9037501B1 (en) Presenting alternative shopping options
CN116610765A (en) Object handling method and device
CN112988108A (en) Information playing method and device, electronic equipment and storage medium
CN111382373A (en) Catering merchant information display method, management system, electronic device and storage medium
CN112989243A (en) Information playing method, information to be played obtaining method, device and electronic equipment
CN112905078A (en) Page element processing method and device and electronic equipment
CN111383034A (en) Catering merchant information display method, management system, electronic device and storage medium
US10198415B2 (en) Webform monitoring
US10965781B2 (en) Method and server for displaying access content
CN113190697A (en) Image information playing method and device
JP5969158B1 (en) Server apparatus, control method, program, and recording medium
JP2018077885A (en) Shopping cart input button method
CN111966891B (en) Information processing method and device and electronic equipment
TW201523423A (en) Employing page links to merge pages of articles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination