CN112486451B - Voice broadcasting method, computing device and computer storage medium - Google Patents


Info

Publication number
CN112486451B
Authority
CN
China
Prior art keywords
control
controls
voice
user
broadcasting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011364189.4A
Other languages
Chinese (zh)
Other versions
CN112486451A (en)
Inventor
杨树彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhangyue Technology Co Ltd
Original Assignee
Zhangyue Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhangyue Technology Co Ltd filed Critical Zhangyue Technology Co Ltd
Priority to CN202011364189.4A
Publication of CN112486451A
Priority to PCT/CN2021/121810 (WO2022111047A1)
Application granted
Publication of CN112486451B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44521 Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F 9/44526 Plug-ins; Add-ons

Abstract

The invention discloses a voice broadcasting method, a computing device and a computer storage medium. The method comprises: for at least two controls in a preset area of an application page, judging whether the at least two controls meet an association condition; if so, performing association processing on the at least two controls; and in response to a user's triggering operation on any control in the application page, determining at least one control associated with that control and voice-broadcasting the control focus contents of the at least two associated controls in sequence. In the scheme provided by the invention, at least two controls meeting the association condition are associated, so that the focus contents of several associated controls can be voice-broadcast with a single triggering operation. This simplifies user operation: the user can quickly find the control to be operated without triggering controls one by one, frequent switching between triggering operations and voice broadcasts is avoided, and the user experience is improved.

Description

Voice broadcasting method, computing device and computer storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a voice broadcasting method, computing equipment and a computer storage medium.
Background
With the development of science and technology, many modern intelligent terminals have emerged, such as smartphones and tablet computers, almost all of which use touch screens. Their operation model is that a touch generates a program response, the response is displayed on the screen, and the user then sees the display and performs further control operations. However, blind users or users with poor eyesight cannot see, or cannot see clearly, what is displayed on the screen, so touch operation of an intelligent terminal is difficult for them. To enable such users to operate the intelligent terminal conveniently, the terminal generally provides a voice broadcasting function through which the user can learn which control has been clicked. In the existing voice broadcasting mode, however, the user triggers one control at a time, and in response to that triggering operation the control focus content of that single control is voice-broadcast.
Disclosure of Invention
In view of the above problems, the present invention has been made to provide a voice broadcasting method, a computing device, and a computer storage medium that overcome or at least partially solve the above problems.
According to an aspect of the present invention, there is provided a voice broadcasting method including:
judging whether the at least two controls meet the association condition or not aiming at the at least two controls in the preset area in the application page;
if so, performing association processing on at least two controls;
and responding to the triggering operation of a user on any control in the application page, determining at least one control associated with the control, and broadcasting the control focus contents of at least two associated controls in sequence by voice.
According to another aspect of the present invention, there is provided a computing device comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the following operations:
judging whether the at least two controls meet the association condition or not aiming at the at least two controls in the preset area in the application page;
if so, performing association processing on at least two controls;
and responding to the triggering operation of a user on any control in the application page, determining at least one control associated with the control, and broadcasting the control focus contents of at least two associated controls in sequence by voice.
According to another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, where the executable instruction causes a processor to perform operations corresponding to the above-mentioned voice broadcasting method.
According to the scheme provided by the invention, at least two controls meeting the association condition are associated, which provides a basis for voice-broadcasting the control focus contents of several controls with a single triggering operation. In response to a user's triggering operation on any control in an application page, at least one control associated with that control is determined, and the control focus contents of the at least two associated controls are voice-broadcast in sequence. The focus contents of several associated controls are thus broadcast with one triggering operation, which simplifies user operation: the user can quickly find the control to be operated without triggering controls one by one, frequent switching between triggering operations and voice broadcasts is avoided, and the user experience is improved.
The foregoing is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly, and that the above and other objects, features and advantages of the present invention may become more readily apparent, embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart illustrating a voice broadcasting method according to an embodiment of the present invention;
fig. 2A is a schematic flow chart illustrating a voice broadcasting method according to another embodiment of the present invention;
FIG. 2B is a diagram illustrating an operation region corresponding to a control;
FIG. 2C is a schematic diagram of merge operation zones;
FIG. 2D is a schematic diagram of an activation control;
FIG. 3 shows a schematic structural diagram of a computing device according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 is a flowchart illustrating a voice broadcasting method according to an embodiment of the present invention. As shown in fig. 1, the method comprises the steps of:
step S101, aiming at least two controls in a preset area in an application page, judging whether the at least two controls meet an association condition; if yes, go to step S102.
Generally, an application page includes a plurality of controls distributed at various positions. The application page is divided into a plurality of regions, each of which includes several controls; the size of each region can be set flexibly, for example the regions may be the same size or different sizes, which is not specifically limited here. For at least two controls in a preset region of the application page, it is judged whether the at least two controls satisfy an association condition, where the association condition is the condition the controls must meet for an association to be established between them. If the at least two controls satisfy the association condition, step S102 is executed; if not, the method ends.
And step S102, performing association processing on at least two controls.
When it is determined that the at least two controls satisfy the association condition, they can be associated, for example by recording the association relationship between them, or in another manner such as merging the operation regions corresponding to the controls. This embodiment does not specifically limit the method used to associate the controls; other methods capable of associating at least two controls also fall within the scope of the invention. Associating the at least two controls provides a basis for voice-broadcasting the control focus contents of several controls with a single triggering operation.
And step S103, responding to the triggering operation of the user on any control in the application page, determining at least one control associated with the control, and broadcasting the control focus contents of at least two controls associated with the control in sequence by voice.
When a user wants to know what a control specifically is, the user can click the corresponding control in the application page. After a triggering operation on any control in the application page is detected, it must be responded to. Specifically, after the user triggers any control, at least one control associated with that control is determined, for example by a query. The control focus contents of the at least two associated controls are then voice-broadcast in sequence, where the control focus content explains what a control specifically is, for example the control name. The focus contents of several controls are thus broadcast with a single triggering operation, which simplifies user operation, removes the need for the user to trigger controls frequently, and improves the user experience.
According to the method provided by this embodiment of the invention, at least two controls meeting the association condition are associated, which provides a basis for voice-broadcasting the control focus contents of several controls with a single triggering operation. In response to a user's triggering operation on any control in an application page, at least one control associated with that control is determined, and the control focus contents of the at least two associated controls are voice-broadcast in sequence. The focus contents of several associated controls are thus broadcast with one triggering operation, which simplifies user operation: the user can quickly find the control to be operated without triggering controls one by one, frequent switching between triggering operations and voice broadcasts is avoided, and the user experience is improved.
Fig. 2A is a flowchart illustrating a voice broadcasting method according to another embodiment of the present invention. As shown in fig. 2A, the method includes the steps of:
generally, an application page includes a plurality of controls distributed at various positions in the application page, the application page is divided into a plurality of regions, each region includes a plurality of controls, and the size of each region can be flexibly set, for example, the sizes of the plurality of regions may be the same or different, and are not specifically limited herein.
Controls within the application page include non-click event controls and click event controls. A non-click event control is one for which there is no subsequent response after the user clicks it; such controls usually have no corresponding jump address. A click event control is one that responds to a click action; it usually corresponds to a jump address, through which a page jump can be realized. Any control in the application page corresponds to an operation area, and a triggering operation anywhere within that operation area is regarded as a triggering operation on the control.
The voice broadcasting method provided by this embodiment can be applied to visually impaired scenarios: when blind or visually impaired people use an intelligent terminal, they cannot see the screen clearly, so voice broadcasting helps them learn what the triggered control specifically is. Of course, the method is not limited to visually impaired scenarios; any other scenario requiring voice broadcasting is also applicable.
In step S201, if at least two controls in a preset region in the application page are non-click event controls, it is determined whether a distance between operation regions corresponding to the at least two controls is smaller than a preset distance, and if so, step S205 is executed.
If at least two controls in a preset region of the application page are non-click event controls, whether the at least two controls can be associated is judged by checking whether the distance between their corresponding operation regions is smaller than a preset distance. The preset distance is a threshold that can be set flexibly according to actual needs. When the distance between the operation regions is smaller than the preset distance, the controls are close enough together and satisfy the association condition, so the method jumps to step S205. When the distance is greater than or equal to the preset distance, the controls are too far apart, the association condition is not satisfied, and the method ends. In general, if the positional relationship between the controls is up-down or left-right, their operation regions have the same relationship, and the distance between the operation regions refers to the corresponding up-down or left-right distance.
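As an illustration only (the patent does not specify coordinates or data structures; `Rect`, `region_distance` and the threshold handling below are assumptions invented for this sketch), the distance judgment of step S201 between two operation regions can be written as:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Operation region of a control: an axis-aligned screen rectangle."""
    left: float
    top: float
    right: float
    bottom: float

def region_distance(a: Rect, b: Rect) -> float:
    # Horizontal gap if the regions are side by side, vertical gap if one
    # is above the other; 0 when they touch or overlap on both axes.
    dx = max(a.left - b.right, b.left - a.right, 0.0)
    dy = max(a.top - b.bottom, b.top - a.bottom, 0.0)
    # The text considers the up-down or left-right distance; take the
    # larger axis gap as the separation between the regions.
    return max(dx, dy)

def satisfies_distance_condition(a: Rect, b: Rect, preset_distance: float) -> bool:
    # Association condition: gap strictly smaller than the preset distance.
    return region_distance(a, b) < preset_distance
```

With a preset distance of 10 pixels, for instance, a nickname region 4 pixels below an avatar region satisfies the condition, while regions 50 pixels apart do not.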
Step S202, if the at least two controls in the preset area in the application page comprise a non-click event control and a click event control, judging whether the control focus content of the non-click event control and the control focus content of the click event control have relevance, and if so, executing step S203.
A non-click event control is one for which there is no subsequent response after the user clicks it, and such a control usually has no corresponding jump address; a click event control responds to a click action and usually corresponds to a jump address through which a page jump can be realized. Since a non-click event control has no corresponding jump address, if at least two controls in a preset area of the application page include both a non-click event control and a click event control, whether the controls can be associated cannot be determined based on the jump address.
In general, to ensure that the several control focus contents broadcast together are related, if at least two controls in a preset area of the application page include both a non-click event control and a click event control, it must further be judged whether there is an association between the control focus content of the non-click event control and that of the click event control. For example, semantic analysis may be performed on the control focus contents: if the focus contents of the at least two controls express the same or similar semantics, they can be determined to be associated. Alternatively, if the control focus content of the non-click event control serves to explain the click event control, the focus contents can be determined to be associated; for example, the focus content of the click event control is "cake reading", the focus content of the non-click event control is "9242", and "9242" explains the count of "cake reading". If the focus contents are associated, the method proceeds to step S203; if not, the method ends.
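The patent leaves the relevance judgment to semantic analysis without fixing an algorithm. A deliberately rough heuristic sketch, covering only the numeric-annotation case from the example and simple word overlap (both heuristics are assumptions, not the patent's method):

```python
def focus_contents_related(non_click_text: str, click_text: str) -> bool:
    # Case 1: the non-click content is a pure number annotating the click
    # control, e.g. "9242" next to "cake reading".
    if non_click_text.replace(",", "").isdigit():
        return True
    # Case 2: the two contents share at least one word, a crude stand-in
    # for "the same or similar semantics".
    words_a = set(non_click_text.lower().split())
    words_b = set(click_text.lower().split())
    return bool(words_a & words_b)
```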
Step S203, judging whether the distance between the operation areas corresponding to the at least two controls is smaller than a preset distance; if yes, go to step S205.
The preset distance can be set flexibly according to actual needs and serves as a threshold. When the distance between the operation regions corresponding to the at least two controls is smaller than the preset distance, the controls are close enough together and satisfy the association condition, so the method jumps to step S205. When the distance is greater than or equal to the preset distance, the controls are too far apart, the association condition is not satisfied, and the method ends.
Step S204, if at least two controls in the preset area in the application page are click event controls, judging whether the jump addresses corresponding to the at least two controls are the same, if so, executing step S205.
Click event controls each correspond to a jump address, so whether to associate at least two click event controls can be decided based on the jump address. Specifically, for at least two click event controls in a preset area of the application page, whether they satisfy the association condition can be judged as follows: check whether the jump addresses corresponding to the at least two controls are the same. If they are the same jump address, the responses made after clicking each of the controls are the same, namely jumping to the same page, so the controls satisfy the association condition and the method jumps to step S205. If they are not the same jump address, clicking the controls jumps to different pages, so the controls do not satisfy the association condition and the method ends.
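Step S204 amounts to grouping click event controls by jump address. A minimal sketch (the dict-based control representation and the field name `jump_address` are invented for illustration):

```python
from collections import defaultdict

def group_by_jump_address(controls: list) -> list:
    """Return groups of controls sharing the same jump address; only
    groups of two or more controls satisfy the association condition."""
    groups = defaultdict(list)
    for control in controls:
        groups[control["jump_address"]].append(control)
    return [g for g in groups.values() if len(g) >= 2]
```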
Step S205, determining that at least two controls meet the association condition.
If the distance between the operation regions corresponding to the at least two controls is smaller than the preset distance, or the jump addresses corresponding to the at least two controls are the same, it can be determined that the at least two controls satisfy the association condition, and they can therefore be associated.
Step S206, merging the operation areas corresponding to the at least two controls to obtain a merged operation area.
For at least two controls that satisfy the association condition, association can be achieved by merging the operation areas corresponding to the controls. Merging enlarges the area the user can operate, so the user can click effectively without having to hit a control precisely each time. This effectively solves the problem that a control is inconvenient to click when its operation area is too small, and improves the experience for visually impaired users.
In an optional implementation of the invention, the operation area corresponding to each control is fixed, so the operation areas of at least two controls can be merged using the minimum rectangle coverage principle. Specifically, the smallest rectangular region containing the operation areas of the at least two controls is determined and taken as the merged operation area. After the merged operation area is determined, the region coordinates corresponding to it can also be determined, so that the controls covered by the merged operation area can subsequently be identified. Alternatively, the association relationship between the merged operation area and the at least two controls is recorded.
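The minimum rectangle coverage step has a direct geometric reading: the merged operation area is the bounding rectangle of the individual operation areas. A sketch with each area as a `(left, top, right, bottom)` tuple (the representation is an assumption for illustration):

```python
def merge_operation_areas(areas):
    """Smallest rectangle containing every control's operation area,
    per the minimum rectangle coverage principle. Each area is a
    (left, top, right, bottom) tuple."""
    lefts, tops, rights, bottoms = zip(*areas)
    return (min(lefts), min(tops), max(rights), max(bottoms))
```

For the avatar/level/nickname example, three side-by-side areas merge into one rectangle spanning all of them.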
For ease of understanding, the following description refers to a concrete figure. Fig. 2B is a schematic diagram of the operation areas corresponding to controls. As shown in fig. 2B, the avatar control, the level control and the nickname control each correspond to an operation area, and the three controls correspond to the same jump address, so the operation areas of the three controls can be merged; the merged operation area is shown in fig. 2C.
Step S207, in response to a triggering operation of the user on any merged operation area in the application page, determining the at least two controls covered by the merged operation area, and voice-broadcasting the control focus contents of the at least two covered controls in sequence.
After the user clicks in the application page, to let the user know what was clicked, the position coordinates of the click are acquired once the click occurs, so the merged operation area the user triggered can be determined. In response to the user triggering any merged operation area in the application page, the at least two controls covered by that merged operation area must be determined. Each control corresponds to its own operation area and position coordinates, so the covered controls can be determined by checking which controls' position coordinates fall within the coordinate range of the merged operation area; alternatively, the at least two controls associated with the merged operation area can be determined by a query. Once the covered controls are determined, their control focus contents are voice-broadcast in sequence. The focus contents of several controls are thus broadcast with a single triggering operation, which simplifies user operation, removes the need for the user to trigger controls frequently, and improves the user experience.
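Determining which merged operation area a tap falls in is a point-in-rectangle test over the recorded area coordinates. A sketch (the mapping structure and names are illustrative assumptions, not the patent's data model):

```python
def hit_test(merged_areas: dict, x: float, y: float):
    """merged_areas maps an area id to ((left, top, right, bottom),
    [focus contents of the covered controls, in broadcast order]).
    Returns the sequence to voice-broadcast, or None if no area is hit."""
    for area_id, ((left, top, right, bottom), focus_contents) in merged_areas.items():
        if left <= x <= right and top <= y <= bottom:
            return focus_contents
    return None
```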
Step S208, in response to the user triggering any merged operation area in the application page again, performing page jump processing according to the jump address corresponding to the controls covered by that merged operation area.
If the at least two controls covered by the merged operation area correspond to a jump address, then after the control focus contents of those controls have been voice-broadcast, a user who wants to learn more or enter the jump page corresponding to the controls can trigger the merged operation area again. In response to the user triggering any merged operation area in the application page again, page jump processing is performed according to the jump address corresponding to the covered controls, jumping to the page at that address.
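The two-step behavior (first trigger broadcasts, a repeated trigger jumps) can be modeled as a small state machine per merged area. A hypothetical sketch of the interaction, not the patent's implementation:

```python
class MergedAreaBehavior:
    def __init__(self, focus_contents, jump_address):
        self.focus_contents = focus_contents
        self.jump_address = jump_address
        self._announced = False  # whether the first trigger has occurred

    def on_trigger(self):
        # First trigger: broadcast the covered controls' focus contents.
        if not self._announced:
            self._announced = True
            return ("broadcast", self.focus_contents)
        # Repeated trigger: jump via the shared jump address.
        return ("jump", self.jump_address)
```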
For example, the merged operation area covers an avatar control, a nickname control and a level control; the three controls correspond to the same jump address, and the jump address corresponds to the settings page. In response to the user triggering the merged operation area again, a page jump is performed, and the page after the jump is the settings page.
In an optional embodiment of the invention, the application page further includes an activation control, usually represented by a "more" button or a "┇" button. Clicking the "more" or "┇" button calls up control focus contents, usually presented as a pull-down list, as shown in fig. 2D. To let the user know which control focus contents the activation control includes, in response to the user's triggering operation on any activation control in the application page, the control focus contents corresponding to the activation control are voice-broadcast in sequence; that is, after activation, all the control focus contents in the pull-down list are broadcast one after another. For example, the "more" button in a bookshelf page is an activation control corresponding to the following control focus contents: import local book, wirelessly transmit book, cloud bookshelf, sort bookshelf, book sorting, scan, backup and synchronization. After the user's triggering operation on the "more" button in the bookshelf page is detected, in response to it, "import local book", "wirelessly transmit book", "cloud bookshelf", "sort bookshelf", "book sorting", "scan", "backup and synchronization" are voice-broadcast in sequence, rather than broadcasting only the first control focus content (import local book) and ignoring the rest of the pull-down list.
By voice-broadcasting the control focus contents corresponding to the activation control in sequence, the user can clearly understand which specific functions each control provides, which facilitates subsequent use. For visually impaired users in particular, broadcasting all the control focus contents of the activation control in sequence, rather than only the first control focus content, enriches the functions available to them.
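The sequential broadcast described above can be sketched as a simple loop over the pull-down list. This is an illustrative sketch only; `speak` stands in for whatever text-to-speech engine the platform provides.

```python
def broadcast_activation_control(items, speak):
    """Voice-broadcast every focus content in the pull-down list in
    order, not just the first item."""
    for item in items:
        speak(item)


spoken = []  # collect what would be sent to the TTS engine
menu = ["import local book", "wireless book transfer", "cloud bookshelf",
        "sort bookshelf", "book sorting", "scan", "backup and synchronization"]
broadcast_activation_control(menu, spoken.append)
```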
In an alternative embodiment of the invention, the method further comprises: in response to a trigger operation of the user on an editing control, voice-broadcasting the control focus content corresponding to the editing control, and switching the input mode from a keyboard input mode to a voice input mode.
The editing control in this optional embodiment refers to a control that provides a text input operation; for example, the editing control may be a search control or a comment control. When a user wants to input specific text, the editing control may be triggered, for example by clicking it. In response to the user's trigger operation on the editing control, the control focus content corresponding to the editing control is voice-broadcast to the user, informing the user which control has been triggered.
For example, when a user enters the search page and wants to find out whether the electronic book "Changan Guest" exists locally or in the book city, the user would normally have to type "Changan Guest" into the search input box. To make this easier for visually impaired users, after the user's trigger operation on the "search input box" control is detected, in response to that operation, the control focus content corresponding to the "search input box" control is voice-broadcast (for example, the phrase "search input box" is spoken), and the input mode is switched from the keyboard input mode to the voice input mode. The user can then speak "Changan Guest" to perform the search, and after search results are obtained, they can be voice-broadcast in sequence.
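A minimal sketch of this editing-control flow — announce the triggered control, then switch the input mode — might look as follows. The `on_edit_control_trigger` function and its state dictionary are hypothetical illustrations, not an API defined by the patent.

```python
def on_edit_control_trigger(control_name, state, speak):
    """Announce which editing control was triggered, then switch the
    input mode from keyboard to voice so the user can dictate text."""
    speak(control_name)                # e.g. announce "search input box"
    state["input_mode"] = "voice"     # was "keyboard" before the trigger
    return state


spoken = []
state = on_edit_control_trigger("search input box",
                                {"input_mode": "keyboard"},
                                spoken.append)
```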
This alternative embodiment may also be applied to other scenarios, for example writing notes or book reviews for an electronic book, or replying to others' comments; details are not repeated here.
In an alternative embodiment of the invention, the method further comprises: in response to a trigger operation of a user on a specified electronic book control, voice-broadcasting the electronic book identifier corresponding to the specified electronic book control;
in response to the user triggering the specified electronic book control again, determining whether the database stores an audio book matching the electronic book identifier corresponding to the specified electronic book control;
if so, playing the audio book by voice;
if not, downloading a voice reading plug-in so as to read the specified electronic book aloud using the voice reading plug-in.
Specifically, when the application page is an electronic book page and a user wants to read an electronic book, the user may trigger the specified electronic book control. In response to the user's trigger operation on the specified electronic book control, the electronic book identifier corresponding to that control is voice-broadcast, so that the user can confirm by voice whether this is the electronic book he or she wants to read. If it is, the user can trigger the specified electronic book control again. In response to this second trigger operation, it is determined whether the database stores an audio book matching the electronic book identifier corresponding to the specified electronic book control; for example, the audio book database can be queried according to that electronic book identifier, matching it against the identifiers of the stored audio books. If a match is found, it is determined that a matching audio book exists, and the audio book is played by voice. If no match is found, no matching audio book exists, so the voice reading plug-in is downloaded and installed, and after installation the specified electronic book is read aloud using the plug-in.
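The lookup-then-fallback logic above can be sketched as follows. The database is modeled here as a plain dictionary from e-book identifiers to audio resources, and the `play` and `download_plugin_and_read` callbacks are hypothetical stand-ins for the real playback and plug-in mechanisms.

```python
def open_ebook_audio(ebook_id, audio_db, play, download_plugin_and_read):
    """If an audio book matching the e-book identifier exists in the
    database, play it; otherwise fall back to a TTS reading plug-in."""
    if ebook_id in audio_db:
        return play(audio_db[ebook_id])
    return download_plugin_and_read(ebook_id)


audio_db = {"book-42": "audio-42.mp3"}  # hypothetical stored audio books
hit = open_ebook_audio("book-42", audio_db,
                       lambda a: ("play", a), lambda b: ("tts", b))
miss = open_ebook_audio("book-7", audio_db,
                        lambda a: ("play", a), lambda b: ("tts", b))
```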
According to the method provided by the embodiment of the invention, at least two controls satisfying the association condition are associated, which provides a basis for voice-broadcasting the control focus contents of multiple controls with a single trigger operation. By merging the operation areas corresponding to the at least two controls satisfying the association condition, the area the user can operate is enlarged, so that the user can click effectively without having to aim precisely at a control's position every time; this effectively alleviates the problem that a control's operation area is too small to click conveniently, and improves the experience for visually impaired users. In response to the user's trigger operation on a merged operation area, the at least two controls covered by that area are determined and their control focus contents are voice-broadcast in sequence, achieving the effect of broadcasting multiple associated control focus contents with one trigger operation. This simplifies user operation: the user can quickly find the control to be operated without triggering controls one by one, avoiding frequent switching between trigger operations and voice broadcasts, thereby improving the user experience.
An embodiment of the invention further provides a non-volatile computer storage medium. The computer storage medium stores at least one executable instruction, which can cause a processor to execute the voice broadcasting method in any of the above method embodiments.
The executable instructions may be specifically configured to cause the processor to:
determining, for at least two controls in a preset area in an application page, whether the at least two controls satisfy an association condition;
if so, performing association processing on the at least two controls;
and in response to a trigger operation of a user on any control in the application page, determining at least one control associated with the control, and voice-broadcasting the control focus contents of the at least two associated controls in sequence.
In an optional mode, any control corresponds to an operation area;
the executable instructions further cause the processor to:
if the at least two controls are non-click event controls, judging whether the distance between the operation areas corresponding to the at least two controls is smaller than a preset distance;
and if so, determining that the at least two controls meet the association condition.
In an optional mode, any control corresponds to an operation area;
the executable instructions further cause the processor to:
if the at least two controls comprise a non-click event control and a click event control, judging whether the control focus content of the non-click event control and the control focus content of the click event control have relevance or not;
if the correlation exists, judging whether the distance between the operation areas corresponding to the at least two controls is smaller than a preset distance;
and if the distance is smaller than the preset distance, determining that the at least two controls meet the association condition.
In an alternative, the executable instructions further cause the processor to:
if the at least two controls are click event controls, judging whether the jump addresses corresponding to the at least two controls are the same;
and if so, determining that the at least two controls meet the association condition.
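Taken together, the three association conditions described in the alternatives above (two non-click event controls, two click event controls, and one of each) could be checked roughly as follows. The rectangle gap metric and the `focus_related` flag are simplifying assumptions made for this sketch, not definitions from the patent.

```python
def area_gap(a, b):
    """Gap between two rectangular operation areas, each given as
    (left, top, right, bottom); 0 if the rectangles overlap."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return max(dx, dy)


def satisfies_association(c1, c2, preset_distance, focus_related=False):
    """Check the association condition for a pair of controls."""
    clickable = (c1["clickable"], c2["clickable"])
    if clickable == (False, False):
        # two non-click event controls: operation areas close enough
        return area_gap(c1["area"], c2["area"]) < preset_distance
    if clickable == (True, True):
        # two click event controls: same jump address
        return c1.get("jump") == c2.get("jump")
    # one of each: focus contents related AND operation areas close enough
    return focus_related and area_gap(c1["area"], c2["area"]) < preset_distance


avatar = {"clickable": True, "jump": "app://settings", "area": (0, 0, 40, 40)}
nickname = {"clickable": True, "jump": "app://settings", "area": (45, 0, 120, 40)}
ok = satisfies_association(avatar, nickname, preset_distance=10)  # same jump address
```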
In an optional mode, any control corresponds to an operation area;
the executable instructions further cause the processor to:
merging the operation areas corresponding to the at least two controls to obtain a merged operation area;
and responding to the triggering operation of a user on any merging operation area in the application page, determining at least two controls covered by the merging operation area, and broadcasting the control focus contents of the at least two controls covered by the merging operation area in a voice mode in sequence.
In an alternative, the executable instructions further cause the processor to:
and determining a minimum rectangular area containing the operation areas corresponding to the at least two controls, and determining the minimum rectangular area as a combined operation area.
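Computing the merged operation area as the minimum rectangle containing all constituent operation areas is straightforward; a sketch, assuming each area is given as (left, top, right, bottom) screen coordinates:

```python
def merge_operation_areas(areas):
    """Minimum rectangle containing all operation areas; each area is
    a (left, top, right, bottom) tuple in screen coordinates."""
    lefts, tops, rights, bottoms = zip(*areas)
    return (min(lefts), min(tops), max(rights), max(bottoms))


# e.g. an avatar area and a nickname area side by side
merged = merge_operation_areas([(10, 10, 60, 30), (70, 12, 120, 28)])
```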
In an alternative, the executable instructions further cause the processor to:
and responding to the triggering operation of the user on any merging operation area in the application page again, and performing page skipping processing according to the skipping address corresponding to the control covered by the merging operation area.
In an alternative, the executable instructions further cause the processor to:
responding to the triggering operation of any activation control in the application page by the user, and sequentially voice-broadcasting the control focus content corresponding to the activation control.
In an alternative, the executable instructions further cause the processor to:
responding to the triggering operation of the user on the editing control, broadcasting the control focus content corresponding to the editing control by voice, and switching the input mode from the keyboard input mode to the voice input mode.
In an alternative, the executable instructions further cause the processor to:
in response to a trigger operation of a user on a specified electronic book control, voice-broadcasting the electronic book identifier corresponding to the specified electronic book control;
in response to the user triggering the specified electronic book control again, determining whether the database stores an audio book matching the electronic book identifier corresponding to the specified electronic book control;
if so, playing the audio book by voice;
if not, downloading a voice reading plug-in so as to read the specified electronic book aloud using the voice reading plug-in.
Fig. 3 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
As shown in fig. 3, the computing device may include: a processor 302, a communication interface 304, a memory 306, and a communication bus 308.
Wherein: the processor 302, communication interface 304, and memory 306 communicate with each other via a communication bus 308.
A communication interface 304 for communicating with network elements of other devices, such as clients or other servers.
The processor 302 is configured to execute the program 310, and may specifically execute relevant steps in the foregoing voice broadcasting method embodiment.
In particular, program 310 may include program code comprising computer operating instructions.
The processor 302 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 306 is configured to store a program 310. The memory 306 may comprise high-speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
The program 310 may specifically be configured to cause the processor 302 to perform the following operations:
determining, for at least two controls in a preset area in an application page, whether the at least two controls satisfy an association condition;
if so, performing association processing on the at least two controls;
and in response to a trigger operation of a user on any control in the application page, determining at least one control associated with the control, and voice-broadcasting the control focus contents of the at least two associated controls in sequence.
In an optional mode, any control corresponds to an operation area;
program 310 further causes processor 302 to perform the following:
if the at least two controls are non-click event controls, judging whether the distance between the operation areas corresponding to the at least two controls is smaller than a preset distance;
and if so, determining that the at least two controls meet the association condition.
In an optional mode, any control corresponds to an operation area;
program 310 further causes processor 302 to perform the following:
if the at least two controls comprise a non-click event control and a click event control, judging whether the control focus content of the non-click event control and the control focus content of the click event control have relevance or not;
if the correlation exists, judging whether the distance between the operation areas corresponding to the at least two controls is smaller than a preset distance;
and if the distance is smaller than the preset distance, determining that the at least two controls meet the association condition.
In an alternative approach, the program 310 further causes the processor 302 to:
if the at least two controls are click event controls, judging whether the jump addresses corresponding to the at least two controls are the same;
and if so, determining that the at least two controls meet the association condition.
In an optional mode, any control corresponds to an operation area;
program 310 further causes processor 302 to perform the following:
merging the operation areas corresponding to the at least two controls to obtain a merged operation area;
and responding to the triggering operation of a user on any merging operation area in the application page, determining at least two controls covered by the merging operation area, and broadcasting the control focus contents of the at least two controls covered by the merging operation area in a voice mode in sequence.
In an alternative approach, the program 310 further causes the processor 302 to:
and determining a minimum rectangular area containing the operation areas corresponding to the at least two controls, and determining the minimum rectangular area as a combined operation area.
In an alternative approach, the program 310 also causes the processor 302 to:
and responding to the triggering operation of the user on any merging operation area in the application page again, and performing page skipping processing according to the skipping address corresponding to the control covered by the merging operation area.
In an alternative approach, the program 310 also causes the processor 302 to:
responding to the triggering operation of any activation control in the application page by the user, and sequentially voice-broadcasting the control focus content corresponding to the activation control.
In an alternative approach, the program 310 also causes the processor 302 to:
responding to the triggering operation of the user on the editing control, broadcasting the control focus content corresponding to the editing control by voice, and switching the input mode from the keyboard input mode to the voice input mode.
In an alternative approach, the program 310 also causes the processor 302 to:
in response to a trigger operation of a user on a specified electronic book control, voice-broadcasting the electronic book identifier corresponding to the specified electronic book control;
in response to the user triggering the specified electronic book control again, determining whether the database stores an audio book matching the electronic book identifier corresponding to the specified electronic book control;
if so, playing the audio book by voice;
if not, downloading a voice reading plug-in so as to read the specified electronic book aloud using the voice reading plug-in.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

Claims (19)

1. A voice broadcasting method includes:
for at least two controls in a preset area in an application page, determining whether the at least two controls satisfy an association condition, wherein the at least two controls comprise a non-click event control and/or a click event control; if the at least two controls are click event controls, determining whether the jump addresses corresponding to the at least two controls are the same, and if so, determining that the at least two controls satisfy the association condition;
if the association condition is satisfied, performing association processing on the at least two controls, wherein the association processing comprises: recording the association relationship between the at least two controls, or merging the operation areas corresponding to the at least two controls;
and in response to a trigger operation of a user on any control in the application page, determining at least one control associated with the control, and voice-broadcasting the control focus contents of the at least two associated controls in sequence.
2. The method of claim 1, wherein any control corresponds to an operating region;
determining whether the at least two controls satisfy the association condition further comprises:
if the at least two controls are non-click event controls, judging whether the distance between the operation areas corresponding to the at least two controls is smaller than a preset distance;
and if so, determining that the at least two controls meet the association condition.
3. The method of claim 1, wherein any control corresponds to an operating region;
determining whether the at least two controls satisfy the association condition further comprises:
if the at least two controls comprise a non-click event control and a click event control, judging whether the control focus content of the non-click event control and the control focus content of the click event control have relevance or not;
if the correlation exists, judging whether the distance between the operation areas corresponding to the at least two controls is smaller than a preset distance;
and if the distance is smaller than the preset distance, determining that the at least two controls meet the association condition.
4. The method according to any one of claims 1-3, wherein any control corresponds to an operating region;
associating the at least two controls further comprises: merging the operation areas corresponding to the at least two controls to obtain a merged operation area;
responding to the triggering operation of any control in the application page by the user, determining at least one control associated with the control, and sequentially broadcasting the control focus content of at least two controls by voice, wherein the control focus content further comprises:
responding to the triggering operation of a user on any merging operation area in the application page, determining at least two controls covered by the merging operation area, and broadcasting the control focus contents of the at least two controls covered by the merging operation area in sequence by voice.
5. The method of claim 4, wherein merging the operation regions corresponding to the at least two controls to obtain a merged operation region further comprises:
and determining a minimum rectangular area containing the operation areas corresponding to the at least two controls, and determining the minimum rectangular area as a combined operation area.
6. The method of claim 4, wherein after the control focus contents of the at least two controls covered by the merged operation area are voice-broadcasted in sequence, the method further comprises: and responding to the triggering operation of the user on any merging operation area in the application page again, and performing page skipping processing according to the skipping address corresponding to the control covered by the merging operation area.
7. The method according to any one of claims 1-3, wherein the method further comprises:
responding to the triggering operation of any activation control in the application page by the user, and sequentially broadcasting the control focus content corresponding to the activation control by voice.
8. The method according to any one of claims 1-3, wherein the method further comprises:
responding to the triggering operation of a user on the editing control, broadcasting control focus content corresponding to the editing control by voice, and switching the input mode from a keyboard input mode to a voice input mode.
9. The method according to any one of claims 1-3, wherein the method further comprises:
in response to a trigger operation of a user on a specified electronic book control, voice-broadcasting the electronic book identifier corresponding to the specified electronic book control;
in response to the user triggering the specified electronic book control again, determining whether an audio book matching the electronic book identifier corresponding to the specified electronic book control is stored in the database;
if so, playing the audio book by voice;
and if not, downloading a voice reading plug-in so as to read the specified electronic book aloud using the voice reading plug-in.
10. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to:
for at least two controls in a preset area in an application page, determining whether the at least two controls satisfy an association condition, wherein the at least two controls comprise a non-click event control and/or a click event control; if the at least two controls are click event controls, determining whether the jump addresses corresponding to the at least two controls are the same, and if so, determining that the at least two controls satisfy the association condition;
if the association condition is satisfied, performing association processing on the at least two controls, wherein the association processing comprises: recording the association relationship between the at least two controls, or merging the operation areas corresponding to the at least two controls;
and in response to a trigger operation of a user on any control in the application page, determining at least one control associated with the control, and voice-broadcasting the control focus contents of the at least two associated controls in sequence.
11. The computing device of claim 10, wherein any control corresponds to an operating region;
the executable instructions further cause the processor to:
if the at least two controls are non-click event controls, judging whether the distance between the operation areas corresponding to the at least two controls is smaller than a preset distance;
and if so, determining that the at least two controls meet the association condition.
12. The computing device of claim 10, wherein any control corresponds to an operating region;
the executable instructions further cause the processor to:
if the at least two controls comprise a non-click event control and a click event control, judging whether the control focus content of the non-click event control and the control focus content of the click event control have relevance or not;
if the correlation exists, judging whether the distance between the operation areas corresponding to the at least two controls is smaller than a preset distance;
and if the distance is smaller than the preset distance, determining that the at least two controls meet the association condition.
13. The computing device of any of claims 10-12, wherein any control corresponds to an operating region;
the executable instructions further cause the processor to:
merging the operation areas corresponding to the at least two controls to obtain a merged operation area;
responding to the triggering operation of a user on any merging operation area in the application page, determining at least two controls covered by the merging operation area, and broadcasting the control focus contents of the at least two controls covered by the merging operation area in sequence by voice.
14. The computing device of claim 13, wherein the executable instructions further cause the processor to:
and determining a minimum rectangular area containing the operation areas corresponding to the at least two controls, and determining the minimum rectangular area as a combined operation area.
15. The computing device of claim 13, wherein the executable instructions further cause the processor to:
and responding to the triggering operation of the user on any merging operation area in the application page again, and performing page skipping processing according to the skipping address corresponding to the control covered by the merging operation area.
16. The computing device of any of claims 10-12, wherein the executable instructions further cause the processor to:
responding to the triggering operation of any activation control in the application page by the user, and sequentially broadcasting the control focus content corresponding to the activation control by voice.
17. The computing device of any of claims 10-12, wherein the executable instructions further cause the processor to:
responding to the triggering operation of a user on the editing control, broadcasting control focus content corresponding to the editing control by voice, and switching the input mode from a keyboard input mode to a voice input mode.
18. The computing device of any of claims 10-12, wherein the executable instructions further cause the processor to:
in response to a trigger operation by the user on a specified electronic book control, voice-broadcasting the electronic book identifier corresponding to the specified electronic book control;
in response to a further trigger operation by the user on the specified electronic book control, determining whether an audio book matching the electronic book identifier corresponding to the specified electronic book control is stored in the database;
if so, playing the audio book;
if not, downloading a voice reading plug-in, so as to read the specified electronic book aloud using the voice reading plug-in.
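The second-trigger behaviour of claim 18 is a lookup-with-fallback flow, sketched below. All names here (`database.get`, `play_audio`, `download_plugin`, `read_aloud`) are illustrative assumptions standing in for the claimed database lookup, audio playback, and voice-reading plug-in, not an actual API.

```python
def on_ebook_control_retriggered(ebook_id, database, play_audio, download_plugin):
    """On a second trigger of the e-book control: play a stored audio book
    matching the e-book identifier if one exists; otherwise download a
    voice-reading plug-in and read the book aloud with it."""
    audio_book = database.get(ebook_id)   # look up by e-book identifier
    if audio_book is not None:
        play_audio(audio_book)            # matching audio book found: play it
        return "audio_book"
    plugin = download_plugin()            # no match: fetch the TTS plug-in
    plugin.read_aloud(ebook_id)           # fall back to synthesized reading
    return "plugin"
```

The plug-in download happens only on the miss path, so devices with a matching audio book in the database never pay the download cost.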
19. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform an operation corresponding to the voice broadcasting method according to any one of claims 1 to 9.
CN202011364189.4A 2020-11-27 2020-11-27 Voice broadcasting method, computing device and computer storage medium Active CN112486451B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011364189.4A CN112486451B (en) 2020-11-27 2020-11-27 Voice broadcasting method, computing device and computer storage medium
PCT/CN2021/121810 WO2022111047A1 (en) 2020-11-27 2021-09-29 Voice broadcasting method, computing device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011364189.4A CN112486451B (en) 2020-11-27 2020-11-27 Voice broadcasting method, computing device and computer storage medium

Publications (2)

Publication Number Publication Date
CN112486451A CN112486451A (en) 2021-03-12
CN112486451B true CN112486451B (en) 2022-03-11

Family

ID=74936730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011364189.4A Active CN112486451B (en) 2020-11-27 2020-11-27 Voice broadcasting method, computing device and computer storage medium

Country Status (2)

Country Link
CN (1) CN112486451B (en)
WO (1) WO2022111047A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112486451B (en) * 2020-11-27 2022-03-11 掌阅科技股份有限公司 Voice broadcasting method, computing device and computer storage medium
CN112988108B (en) * 2021-05-07 2021-08-10 浙江口碑网络技术有限公司 Information playing method and device, electronic equipment and storage medium
CN112989243A (en) * 2021-05-07 2021-06-18 浙江口碑网络技术有限公司 Information playing method, information to be played obtaining method, device and electronic equipment
CN113034249A (en) * 2021-05-28 2021-06-25 浙江口碑网络技术有限公司 Information playing method and device and electronic equipment
CN113190697A (en) * 2021-06-02 2021-07-30 口碑(上海)信息技术有限公司 Image information playing method and device
CN113489833B (en) * 2021-06-29 2022-11-04 维沃移动通信有限公司 Information broadcasting method, device, equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN103853355A (en) * 2014-03-17 2014-06-11 吕玉柱 Operation method for electronic equipment and control device thereof

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US9405847B2 (en) * 2008-06-06 2016-08-02 Apple Inc. Contextual grouping of a page
KR101709510B1 (en) * 2011-06-03 2017-02-24 엘지전자 주식회사 Mobile terminal and method for controlling the same
KR101899819B1 (en) * 2012-08-03 2018-09-20 엘지전자 주식회사 Mobile terminal and method for controlling thereof
CN104461346B (en) * 2014-10-20 2017-10-31 天闻数媒科技(北京)有限公司 A kind of method of visually impaired people's Touch Screen, device and intelligent touch screen mobile terminal
CN107644647B (en) * 2016-07-21 2020-10-30 平安科技(深圳)有限公司 Voice return visit method and device
CN107273342A (en) * 2016-11-25 2017-10-20 深圳市联谛信息无障碍有限责任公司 A kind of method that HTML content is recognized in ancillary technique
CN108874356B (en) * 2018-05-31 2020-10-23 珠海格力电器股份有限公司 Voice broadcasting method and device, mobile terminal and storage medium
CN109893852B (en) * 2019-02-26 2022-07-26 北京心智互动科技有限公司 Interface information processing method and device
CN110264316A (en) * 2019-06-20 2019-09-20 浙江口碑网络技术有限公司 Item Information inquires methods of exhibiting and device
CN110618783B (en) * 2019-09-12 2021-04-13 北京小米移动软件有限公司 Text broadcasting method, device and medium
CN112486451B (en) * 2020-11-27 2022-03-11 掌阅科技股份有限公司 Voice broadcasting method, computing device and computer storage medium


Also Published As

Publication number Publication date
CN112486451A (en) 2021-03-12
WO2022111047A1 (en) 2022-06-02

Similar Documents

Publication Publication Date Title
CN112486451B (en) Voice broadcasting method, computing device and computer storage medium
CN106951148B (en) Page switching method and device
CN109561271B (en) Method for guiding terminal operation, first terminal and second terminal
JP2022520094A (en) Interface display method and its devices, terminals and computer programs
CN111143739B (en) Page jump method, computing device and computer storage medium
CN107329659B (en) Permission setting method and device, electronic equipment and storage medium
CN112711372B (en) Page response method in visual impairment mode, computing device and computer storage medium
CN111782873B (en) Book recommendation method based on book video, electronic equipment and storage medium
US20160345049A1 (en) Method and device for switching channel
US11593981B2 (en) Method for processing a screenshot image, electronic device and computer storage medium
CN111783015A (en) Display method of shared display elements in page, electronic equipment and storage medium
CN107168974A (en) The display control method and device of message in display items related content, social class application
CN110806822A (en) Electronic book interaction method, computing device and computer storage medium
CN107786894B (en) User feedback data identification method, mobile terminal and storage medium
CN110087120B (en) Same-window switching method of online list and local list and computing equipment
CN109558225B (en) Page switching method and device
CN113158621B (en) Bookshelf page display method, computing device and computer storage medium
EP4343579A1 (en) Information replay method and apparatus, electronic device, computer storage medium, and product
CN106445286B (en) Method and device for determining focus of terminal screen based on split screen and terminal equipment
CN107862728B (en) Picture label adding method and device and computer readable storage medium
CN112162680B (en) Correlation method of reading service and live broadcast service, computing device and storage medium
CN111199136A (en) Document content display method, device and equipment
CN110989954B (en) Multi-screen linkage resource display method and device and electronic equipment
CN111596993B (en) Interface processing method, terminal equipment and storage medium
CN114268803A (en) Live video display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant