CN112090058A - Information prompting method and device, storage medium and electronic equipment - Google Patents

Information prompting method and device, storage medium and electronic equipment

Info

Publication number
CN112090058A
CN112090058A (application CN202010997209.5A)
Authority
CN
China
Prior art keywords
content hot area, graphical user interface, screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010997209.5A
Other languages
Chinese (zh)
Other versions
CN112090058B (en)
Inventor
张泽权
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010997209.5A
Publication of CN112090058A
Application granted; publication of CN112090058B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
        • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
            • A63F13/20: Input arrangements for video game devices
                • A63F13/21: Input arrangements characterised by their sensors, purposes or types
                    • A63F13/214: Input arrangements for locating contacts on a surface, e.g. floor mats or touch pads
                        • A63F13/2145: Input arrangements where the surface is also a display device, e.g. touch screens
            • A63F13/50: Controlling the output signals based on the game progress
                • A63F13/52: Controlling the output signals involving aspects of the displayed game scene
                • A63F13/54: Controlling the output signals involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
        • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
            • A63F2300/30: Features characterized by output arrangements for receiving control signals generated by the game device
                • A63F2300/308: Details of the user interface

Abstract

The invention provides an information prompting method and device, a storage medium, and an electronic device. The method comprises the following steps: providing a graphical user interface through a first terminal device, the graphical user interface comprising at least one content hot area, where the content hot area is an area that can trigger an audio prompt in a screen-recognition mode; controlling the graphical user interface to enter the screen-recognition mode in response to a trigger event of the screen-recognition mode; in the screen-recognition mode, acquiring the current position of a touch point acting on the graphical user interface and determining a target content hot area according to the current position; and issuing an audio prompt according to the distance between the target content hot area and the current position of the touch point. This solves the technical problem that a user who can obtain neither visual information from the screen nor effective tactile information from its smooth surface cannot operate a touch-screen device effectively. The user can control the touch screen more conveniently and reliably, user experience is improved, the original graphical user interface is not affected, and the cost of implementing accessibility features during graphical user interface development is reduced.

Description

Information prompting method and device, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of touch operation, in particular to an information prompting method and device, a storage medium and electronic equipment.
Background
With the growth of the internet, continuing advances in hardware and software have driven the emergence of smart devices and applications. Through a touch-screen terminal device, a user can perform many kinds of operations, such as playing games, watching films online, or logging in to a mailbox.
However, when screen visibility is very low in an extreme environment, when the screen of the terminal device fails to display because of a black-screen fault, or when the device is used by a person with visual impairment, the user can obtain neither visual information from the screen nor effective tactile information from its smooth surface, and therefore finds it difficult to operate the touch-screen device effectively. In particular, when a game application runs on the terminal device, a user in such a scenario struggles to operate effectively, which leads to game interruptions or forced exits and a poor user experience.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to an information prompting method and apparatus, a storage medium, and an electronic device, which overcome, at least to some extent, one or more of the problems due to the limitations and disadvantages of the related art. According to one aspect of the disclosure, there is provided an information prompting method in which a first terminal device provides a graphical user interface, the graphical user interface comprises at least one content hot area, and the content hot area is an area that can trigger an audio prompt in a screen-recognition mode. The method comprises:
controlling the graphical user interface to enter the screen-recognition mode in response to a trigger event of the screen-recognition mode;
in the screen recognition mode, acquiring the current position of a touch point acting on the graphical user interface, and determining a target content hot area according to the current position;
and performing audio prompt according to the distance between the target content hot area and the current position of the touch point.
According to another aspect of the present disclosure, there is provided an information presentation apparatus, the apparatus including:
a providing module, used for providing the graphical user interface through the first terminal device, wherein the graphical user interface comprises at least one content hot area, and the content hot area is an area that can trigger an audio prompt in the screen-recognition mode;
the response module is used for responding to a trigger event of the screen recognition mode and controlling the graphical user interface to enter the screen recognition mode;
the determining module is used for acquiring the current position of a touch point acting on the graphical user interface in the screen identifying mode and determining a target content hot area according to the current position;
and the prompting module is used for performing audio prompting according to the distance between the target content hot area and the current position of the touch point.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the information prompting method described in any one of the above.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor, a display device; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the information prompting method of any one of the preceding claims via execution of the executable instructions.
In an information prompting method provided by an exemplary embodiment of the present disclosure, a first terminal device provides a graphical user interface, the graphical user interface comprises at least one content hot area, and the content hot area is an area that can trigger an audio prompt in a screen-recognition mode; the graphical user interface is controlled to enter the screen-recognition mode in response to a trigger event of the screen-recognition mode; in the screen-recognition mode, the current position of a touch point acting on the graphical user interface is acquired, and a target content hot area is determined according to the current position; and an audio prompt is issued according to the distance between the target content hot area and the current position of the touch point. This solves the technical problem that a user who can obtain neither visual information from the screen nor effective tactile information from its smooth surface cannot operate a touch-screen device effectively; the user can control the touch screen more conveniently and reliably, user experience is effectively improved, the original graphical user interface is not affected, and the cost of implementing accessibility features during graphical user interface development is reduced.
Drawings
The above and other features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 is a flow chart of a method of information prompting in an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a player employing a particular touch screen gesture in an exemplary embodiment of the present disclosure;
FIG. 3 is a diagram illustrating a target content hotspot being found by a touch point in an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic illustration of the manner in which controls are operated in an exemplary embodiment of the disclosure;
FIG. 5 is a schematic illustration of a search range for a new target content hotspot in an exemplary embodiment of the present disclosure;
FIG. 6 is a schematic illustration of a search range for a new target content hotspot in another exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram of an information prompt apparatus in an exemplary embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the disclosure;
fig. 9 is a block diagram of an electronic device in an exemplary embodiment of the disclosure.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description, the claims, and the drawings are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. Data so labelled may be interchanged where appropriate, so that the embodiments described herein can be practised in orders other than those illustrated. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to it.
It should be further noted that various trigger events disclosed in this specification may be preset, and different trigger events may trigger different functions to be executed.
An information prompting method in one embodiment of the present disclosure may be executed on a terminal device or a server. The terminal device may be a local terminal device. When the information prompting method runs on a server, it can be implemented and executed on the basis of a cloud interaction system, which comprises the server and a client device.
In an optional embodiment, various cloud applications may run under the cloud interaction system, for example cloud games. A cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: storage and execution of the information prompting method are completed on a cloud game server, while the client device receives and sends data and presents the game picture. The client device may, for example, be a display device close to the user side with a data-transmission function, such as a mobile terminal, a television, a computer, or a handheld computer; the terminal device that performs the information processing, however, is the cloud game server. During play, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the instruction, encodes and compresses data such as game pictures, and returns them to the client device over the network; finally, the client device decodes the data and outputs the game picture.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores a game program and is used for presenting a game screen. The local terminal device is used for interacting with the player through a graphical user interface, namely, a game program is downloaded and installed and operated through an electronic device conventionally. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
In a possible implementation, an embodiment of the present invention provides an information prompting method in which a graphical user interface is provided by a first terminal device; the first terminal device may be the local terminal device described above, or the client device in the cloud interaction system described above.
In an optional implementation, the information prompting method applies when the screen visibility of the terminal device is very low in an extreme environment, when the screen fails to display because of a black-screen fault, or when a person with visual impairment uses the terminal device. In these scenarios the user can obtain neither visual information from the screen nor effective tactile information from its smooth surface.
The graphical user interface is provided through the first terminal device and comprises at least one content hot area, the content hot area being an area that can trigger an audio prompt in the screen-recognition mode. Fig. 1 is a flowchart of an information prompting method in an exemplary embodiment of the present disclosure. As shown in Fig. 1, the method in this embodiment includes the following steps:
step S110, responding to a trigger event of the screen recognition mode, and controlling the graphical user interface to enter the screen recognition mode;
step S120, under the screen recognition mode, obtaining the current position of a touch point acting on the graphical user interface, and determining a target content hot area according to the current position;
step S130, audio prompt is carried out according to the distance between the target content hot area and the current position of the touch point.
The content hot area may be displayed on the graphical user interface at all times, or may be displayed when triggered by a specific condition. Multiple content hot areas may be provided in the graphical user interface at the same time or independently of one another; no limitation is imposed here.
By the information prompting method in the exemplary embodiment, the graphical user interface is controlled to enter the screen-recognition mode by responding to a trigger event of the screen-recognition mode; in the screen recognition mode, acquiring the current position of a touch point acting on the graphical user interface, and determining a target content hot area according to the current position; and performing audio prompt according to the distance between the target content hot area and the current position of the touch point.
This solves the technical problem that a user who can obtain neither visual information from the screen nor effective tactile information from its smooth surface cannot operate a touch-screen device effectively; the user can control the touch screen more conveniently and reliably, user experience is effectively improved, the original graphical user interface is not affected, and the cost of implementing accessibility features during graphical user interface development is reduced.
Next, the steps of the information presentation method in the present exemplary embodiment will be further described.
In the exemplary embodiment, a graphical user interface is provided by the first terminal device, and the content displayed by the graphical user interface at least partially contains a content hot area, which is an area where an audio prompt may be triggered in the screen-recognition mode.
In an optional implementation, the screen-recognition mode may be an operation mode in which the first terminal device performs all or some of the steps of machine reading, recognition, processing, and display of the content and/or touch operation instructions presented on the graphical user interface. After entering the screen-recognition mode, the first terminal device responds to a touch operation instruction at the player's current touch point and recognizes and reads a content hot area on the graphical user interface according to the position of that touch point; optionally, it broadcasts the information read from the content hot area to inform the user. Optionally, in the screen-recognition mode, the first terminal device enters a game interface, starts an app, or recognizes related content on the graphical user interface according to the player's touch operation instruction, and issues a corresponding audio prompt broadcast. Optionally, the graphical user interface displays the content of a specific application or specific game, together with the text, graphics, controls, and other information that prompt this content, as well as controls or regions that receive touch operations and execute corresponding instructions. In normal mode, these controls or regions monitor and receive touch operations in real time, determine the corresponding control or region from each touch operation, and execute the corresponding instruction. In the screen-recognition mode, these controls or regions suspend execution of their instructions, so that the displayed application or game content, the prompting text, graphics, and controls, and the content hot areas formed by the instruction-receiving controls or regions are instead recognized according to the received touch operation.
Alternatively, the audio prompt may be a simple alert tone that alerts the user, such as a "Di-" or a "Da-" prompt tone.
In an alternative embodiment, the content hot area is an area that can trigger an audio prompt in the screen-recognition mode; it may cover a relatively large or a relatively small region of the graphical user interface. The content hot area may be a square, a rectangle, a frame, or another shape (e.g., a circle). The content presented by the graphical user interface may include all of the content hot area or only a part of it; for example, when the content hot area is displayed enlarged, only its local content appears on the graphical user interface of the first terminal device. The content hot area may be displayed in the upper left, the upper right, or another position of the graphical user interface, and the present exemplary embodiment is not limited in this regard.
In an alternative embodiment, the content hot zone may contain operation controls, information controls, game content, and the like.
A content hot area may be one or more of the text, graphics, controls, and other information that prompt the content displayed on the graphical user interface, or the controls or regions, displayed according to the content of a specific application or specific game, that receive a touch operation and execute a corresponding instruction.
In an alternative embodiment, the content hot area includes an operation control. The operation control executes its corresponding function according to the received touch operation, for example: controls for moving a virtual character, an orientation control area for turning the character, a chat window for receiving input, buttons, or items available for pickup on the map. These may be displayed in the upper left, the upper right, or other positions of the graphical user interface, and the present exemplary embodiment is not limited in this regard.
In an alternative embodiment, the content hot area includes an information control. The information control visually prompts all or some aspect of the content of a specific application or specific game displayed on the graphical user interface, such as a virtual character's health bar, an experience-progress indicator, descriptive text, important map objects, or time or place information. The information control may be displayed in the upper left, the upper right, or other positions of the graphical user interface; the present exemplary embodiment is not limited in this regard.
In an alternative embodiment, the content hot area includes game content. The game content may be any information presented to the player in the game, such as game characters, game scenes, in-game models, game storylines, or game backgrounds. Optionally, the content hot area includes application content, which may be any information presented to the user in the application. The game content or application content may be displayed in the upper left, the upper right, or other positions of the graphical user interface; the present exemplary embodiment is not limited in this regard.
In step S110, in response to a trigger event of the screen-recognition mode, controlling the graphical user interface to enter the screen-recognition mode.
In an alternative embodiment, the screen-recognition mode may be a full or partial operation step of performing machine reading, recognition, processing, and displaying on the content and/or the touch operation instruction displayed on the graphical user interface through the first terminal device. For example, in the screen recognition mode, the first terminal device enters a game interface according to a touch operation instruction of a player, starts an app application, or recognizes related content on a graphical user interface to perform corresponding audio prompt broadcasting.
The trigger event can be preset, or can be set in real time according to the operation instruction of the user in the program running process, and different trigger events can trigger execution of different functions.
The trigger event may be triggered by detecting a touch operation of a specific trigger control provided in the user graphical interface, or may be triggered according to a preset interaction condition, for example: pressing, double-clicking, shaking, voice input, etc. by the user.
In an optional embodiment, after the graphical user interface enters the screen-recognition mode, the fact that the screen-recognition mode is now enabled is broadcast to the player through audio; optionally, an audio introduction to the use of the screen-recognition mode is given; optionally, the player may carry out a simple practice session, and during the introduction and practice the player may skip this process at any time.
In an alternative embodiment, when the player triggers the screen-recognition mode on the game's start interface (the first interface after the game launches) or during play, the game interface enters the screen-recognition mode and, optionally, the player is reminded that it has been entered. For example, in a single-player game, after the player triggers the screen-recognition mode, the system plays audio telling the player that the mode has been entered and pauses the game picture; the player can then gradually learn the number, distribution, position, and other information about the content hot areas of the graphical user interface through touch-point operations. Optionally, in a multiplayer game, after the player triggers the screen-recognition mode, the position and presented information of the content hot areas on the graphical user interface are updated in real time according to the running game picture, while the recognition process of the content hot areas is adjusted in real time according to the movement of the touch point.
In an alternative embodiment, while in the screen-recognition mode, lifting the touch point from the graphical user interface (so that the touch point disappears) does not exit the screen-recognition mode. The mode may be exited when the player performs a particular gesture operation, such as a three-finger swipe from bottom to top.
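A mode-exit gesture such as the three-finger bottom-to-top swipe mentioned above could be detected roughly as follows. This is an illustrative sketch rather than the patent's method; the function name, the per-finger track representation, and the travel threshold are all assumptions:

```python
def is_three_finger_swipe_up(touch_tracks, min_travel=100.0):
    """Detect a three-finger bottom-to-top swipe.

    `touch_tracks` is a list of per-finger tracks, each a list of
    (x, y) samples, with y decreasing toward the top of the screen.
    Returns True only when exactly three fingers each travelled at
    least `min_travel` pixels upward.
    """
    if len(touch_tracks) != 3:
        return False
    for track in touch_tracks:
        start_y, end_y = track[0][1], track[-1][1]
        if start_y - end_y < min_travel:  # must move upward far enough
            return False
    return True
```

A real gesture recognizer would also check timing and that the fingers move together, but the threshold test above captures the basic idea.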
The graphical user interface is controlled to enter the screen-recognition mode in response to a trigger event of the screen-recognition mode. Using a specific touch-screen gesture avoids conflict with the conventional touch operations of the first terminal device, allows the screen-recognition mode to be triggered at any point during any operation, is simple to perform and easy to master, and improves the player's operating efficiency.
In step S120, in the screen-recognition mode, a current position of a touch point acting on the gui is obtained, and a target content hot area is determined according to the current position.
In an optional implementation, in the screen-recognition mode, the touch point acting on the graphical user interface may be controlled through the contact of a finger, a stylus, or any touch medium with the screen of the first terminal device presenting the graphical user interface. The current position of the touch point may be controlled by the player's touch operation on the graphical user interface and may fall in the upper left, the upper right, or any other position of the graphical user interface; the present exemplary embodiment is not limited in this regard.
In an optional implementation manner, a plurality of content hot areas are arranged on the graphical user interface, and in the screen-recognition mode, one of the content hot areas is determined to be selected as a target content hot area from the plurality of content hot areas according to the current position of a touch point acting on the graphical user interface, wherein the target content hot area is an area on the graphical user interface to be touched by a player. Determining a target content hot zone according to the current position of the touch point, optionally, there are various embodiments: the distance between the current position and the target content hot area can be determined by meeting a preset threshold value relationship, for example, when the distance between the current position and the target content hot area is smaller than a preset threshold value, it indicates that the current position of the touch point is within a set range of the target content hot area, that is, the target content hot area can be determined according to the current position; the current position may also be determined to be the same as the position direction of the target content hot area, for example, a content hot area in the same horizontal direction or vertical direction (the horizontal direction or the vertical direction is compared with the horizontal plane parallel to the graphical user interface) as the current position is the target content hot area; it may also be determined by a relative orientation relationship of the current location and the location of the target content hot zone, for example, a content hot zone at the upper left or lower right relative to the current location as the target content hot zone. 
Determining the target content hot area from the current position lets the player learn the positional relationship between the current position and the target content hot area, so that the player can find the target content hot area quickly.
Step S130, audio prompt is carried out according to the distance between the target content hot area and the current position of the touch point.
Optionally, the distance between the target content hot area and the current position of the touch point may be a distance between a nearest edge position of the target content hot area and the current position of the touch point, or a distance between a center position of the target content hot area and the current position of the touch point, which is not limited in this exemplary embodiment.
In an optional implementation manner, the distance between the target content hot area and the current position of the touch point may change continuously from far to near, continuously from near to far, or irregularly and discontinuously; the manner in which this distance changes is not limited in this exemplary embodiment.
In an alternative embodiment, the audio prompt may be a simple alert tone that alerts the user, such as a "Di-" or "Da-" prompt tone.
In an optional implementation manner, the audio prompt is performed according to the distance between the target content hot area and the current position of the touch point, and an audio parameter may be adjusted correspondingly as that distance changes. By perceiving the change of the audio parameter, the player can judge how far the current touch point is from the target content hot area, which guides the player in choosing the direction of the next touch so that the target content hot area is found as soon as possible. The operation is simple and easy to master, and user experience is improved.
In an alternative embodiment, the audio parameter may be the volume, the frequency, or the like of the prompt tone.
In an alternative embodiment, the greater the distance, the lower the frequency of the prompt tone; or the greater the distance, the quieter the prompt tone; or different types of prompt tones, for example "Di-Di" or "Da-Da", are set according to different preset distance ranges. For example, when the distance is greater than a preset threshold, a "Di-" prompt tone is emitted; when the distance is smaller than the preset threshold, a "Da-" prompt tone is emitted.
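The threshold-based prompt-tone selection described above can be sketched as follows; the threshold value, tone names, and frequency mapping are illustrative assumptions for this sketch, not values fixed by the disclosure:

```python
# Hypothetical sketch of distance-driven prompt tones. NEAR_THRESHOLD and the
# base frequency are assumed example values, not mandated by the embodiment.

NEAR_THRESHOLD = 100.0  # pixels; assumed preset threshold


def select_prompt_tone(distance: float) -> str:
    """Return a 'Da' tone when the touch point is near the target hot zone,
    and a 'Di' tone when it is far, per the preset distance ranges."""
    return "Da" if distance < NEAR_THRESHOLD else "Di"


def tone_frequency(distance: float, base_hz: float = 880.0) -> float:
    """The greater the distance, the lower the frequency of the alert tone."""
    return base_hz / (1.0 + distance / NEAR_THRESHOLD)
```

A terminal implementation would feed these values to its tone generator; the essential property is only that the tone type or frequency varies monotonically with distance.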
In an optional implementation manner, in step S110, the trigger event of the screen recognition mode includes one of the following: determining that a preset number of touch points acting on the graphical user interface move in a preset direction; or receiving, through the first terminal device, audio information matched with a preset audio clip.
In an alternative embodiment, the trigger event may be that a preset number of touch points acting on the graphical user interface are detected moving in a preset direction.
Optionally, the preset number of touch points acting on the graphical user interface may be one or more; each touch point may be produced by a single finger or a single stylus, so that, for example, two touch points may be produced by two fingers.
In an alternative embodiment, the preset direction may be left to right, right to left, top to bottom, bottom to top, or a diagonal direction such as upper left to lower right or upper right to lower left, or any other preset direction in the plane of the graphical user interface, which is not limited in this exemplary embodiment.
In an alternative embodiment, taking fig. 2 as an example, fig. 2 is a schematic diagram of a player performing a specific touch-screen gesture in an exemplary embodiment of the disclosure. The specific touch-screen gesture is determined according to the specific operation of the first terminal device, and must be chosen so as not to conflict with the conventional control operations of the first terminal device. As shown in fig. 2, the player slides three fingers down the screen simultaneously, that is, three touch points perform a synchronized top-to-bottom sliding operation on the graphical user interface, so as to control the graphical user interface to enter the screen recognition mode.
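The three-finger downward-swipe trigger of fig. 2 can be sketched as a check over the tracked touch points; the tuple layout and the minimum swipe distance are assumptions made for this sketch:

```python
# Illustrative detection of the trigger gesture: a preset number of touch
# points (three, as in fig. 2) all moving downward by at least a minimum
# distance. Coordinates are (x, y) with y growing downward, as is typical
# on touch screens; thresholds are assumed example values.

PRESET_COUNT = 3      # number of simultaneous touch points
MIN_SWIPE_PX = 50.0   # assumed minimum downward travel in pixels


def is_screen_mode_trigger(tracks):
    """tracks: list of (start_xy, end_xy) tuples, one per touch point.
    True when exactly PRESET_COUNT points each moved downward far enough."""
    if len(tracks) != PRESET_COUNT:
        return False
    return all(end[1] - start[1] >= MIN_SWIPE_PX for start, end in tracks)
```

A real terminal would also debounce the gesture and, as the text notes, verify it does not collide with the device's conventional control gestures.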
In an alternative embodiment, the triggering event may be the reception of audio information matching a preset audio clip by the first terminal device.
In an optional implementation manner, the audio information may be an audio control instruction such as "start touch mode" spoken by the player to the first terminal device. The first terminal device collects and recognizes the audio control instruction, verifies it against the preset audio clip, and, when the instruction is correctly matched, controls the graphical user interface to enter the screen recognition mode.
In an optional implementation manner, in step S110, controlling the graphical user interface to enter the screen-recognition mode further includes:
and controlling the graphical user interface to enter the screen recognition mode, and acquiring the screen coordinate position of the content hot area in the graphical user interface and the broadcast information corresponding to the content hot area.
In an optional implementation manner, after the graphical user interface is controlled to enter the screen recognition mode, the content of all or part of the content hot areas displayed on the graphical user interface, together with the screen coordinate position of each such content hot area, is obtained in real time, and the corresponding broadcast information is generated from those content hot areas. Meanwhile, as the application or game proceeds, the content of all or part of the content hot areas and their screen coordinate positions are adjusted, determined, and displayed on the graphical user interface in real time, and the corresponding broadcast information is regenerated in real time from the content hot areas displayed on the current graphical user interface.
Optionally, the partial content hot areas are content hot areas within a preset threshold distance of the current position of the touch point. For example, a partial content hot area may be a content hot area less than 1 centimeter from the current position of the touch point; a content hot area in the same horizontal or vertical direction as the current position of the touch point (horizontal and vertical being taken in the plane of the graphical user interface); or a content hot area at the upper left or lower right of the current position of the touch point.
For example: while a game application is in progress, after the graphical user interface is controlled to enter the screen recognition mode, a content hot area A and a content hot area B displayed on the graphical user interface are obtained in real time, and their screen coordinate positions are determined, such as content hot area A (1,1) and content hot area B (4,4). Corresponding broadcast information is generated for each, such as: content hot area A: enemy avatar; content hot area B: virtual rocker. As the game proceeds, the content hot areas displayed on the current graphical user interface change: content hot area A disappears and a new content hot area C appears, so content hot area B and content hot area C are adjusted, determined, and displayed in real time on the graphical user interface, their new screen coordinate positions are determined, such as content hot area B (4,4) and content hot area C (2,2), and corresponding broadcast information is generated for each, such as: content hot area B: virtual rocker; content hot area C: my virtual character. Optionally, the broadcast information corresponding to a content hot area may be stored in a cache so that it need not be generated repeatedly.
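A minimal sketch of the hot-area bookkeeping in this example, assuming a simple dictionary layout for zone positions and a cache for broadcast text; all names and the callback shape are illustrative, not part of the disclosure:

```python
# Hypothetical registry of content hot areas: each zone has a screen
# coordinate position, and broadcast information is generated once per zone
# and cached, surviving zones appearing and disappearing as the game runs.

class HotZoneRegistry:
    def __init__(self):
        self.zones = {}             # name -> (x, y) screen coordinate
        self._broadcast_cache = {}  # name -> broadcast text, never purged here

    def update(self, visible):
        """visible: dict of name -> (x, y) for zones on the current frame."""
        self.zones = dict(visible)

    def broadcast(self, name, generate):
        """Return cached broadcast text, generating it only once per zone."""
        if name not in self._broadcast_cache:
            self._broadcast_cache[name] = generate(name)
        return self._broadcast_cache[name]
```

In the example above, zone A at (1,1) and zone B at (4,4) would be registered first; when A disappears and C at (2,2) appears, only C's broadcast text needs to be newly generated, B's text coming from the cache.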
In an optional embodiment, after the graphical user interface is controlled to enter the screen recognition mode, the change of the game picture displayed on the graphical user interface is paused, and the screen coordinate positions and corresponding broadcast information of all or part of the content hot areas in the graphical user interface are acquired.
Optionally, the partial content hot areas are content hot areas within a preset threshold distance of the current position of the touch point. For example, a partial content hot area may be a content hot area less than 1 centimeter from the current position of the touch point; a content hot area in the same horizontal or vertical direction as the current position of the touch point (horizontal and vertical being taken in the plane of the graphical user interface); or a content hot area at the upper left or lower right of the current position of the touch point. For example: while a game application is in progress and the graphical user interface is controlled to enter the screen recognition mode, the change of the game picture displayed on the graphical user interface is paused; a content hot area A and a content hot area B displayed on the graphical user interface are obtained, their screen coordinate positions are determined, such as content hot area A (1,1) and content hot area B (4,4), and corresponding broadcast information is generated for each, such as: content hot area A: enemy avatar; content hot area B: virtual rocker. This implementation helps visually impaired people follow the startup of the screen recognition mode, the screen coordinate positions of the content hot areas, and the corresponding broadcast information.
In an optional implementation manner, in step S120, determining a target content hot area according to the current location further includes:
in step S121, the content hot area with the shortest distance to the current position is determined as the target content hot area.
Alternatively, there are various embodiments: the determination may use a preset threshold relationship on the distance, for example, when the distance between the current position and a content hot area is smaller than a preset threshold, the current position of the touch point is within the set range of that content hot area, and the content hot area closest to the current position within the set range is determined as the target content hot area; it may use direction, for example, the content hot area closest to the current position in the same horizontal or vertical direction (horizontal and vertical being taken in the plane of the graphical user interface) is the target content hot area; or it may use relative orientation, for example, the content hot area closest to the upper left or lower right of the current position is determined as the target content hot area.
Determining the target content hot area from the current position lets the player learn the positional relationship between the current position and the target content hot area, so that the player can find the target content hot area quickly.
In an optional implementation manner, in step S121, the content hot zone with the shortest distance to the current location is determined as the target content hot zone, and the method further includes the following specific steps:
step S1211, using the current position as a reference point, obtaining at least one content hot area within a preset first range;
step S1212, determining a distance between the current location and the at least one content hot area;
in step S1213, the content hot area with the shortest distance is determined as the target content hot area.
In an optional implementation manner, in step S1211, at least one content hot area within a preset first range is obtained with the current position as a reference point. Optionally, taking the top left corner vertex as the coordinate origin, the horizontal direction of the graphical user interface as the X axis, and the vertical direction as the Y axis, an XY coordinate system is set in the graphical user interface; the current position of the touch point serves as the reference point, with coordinates (x0, y0) in the XY coordinate system. Optionally, the first range is preset as a parameter D, and each content hot area within the first range has coordinates (x1, y1).
In an alternative embodiment, in step S1212, the distance d between the current location and the at least one content hot area is determined:
d² = (x0 - x1)² + (y0 - y1)²  (1)
d ≤ D  (2)
in an alternative embodiment, in step S1213, the content hot area with the shortest distance is determined as the target content hot area, that is, the content hot area whose distance is min(d1, d2, d3, …) is determined as the target content hot area.
In an alternative implementation manner, taking fig. 3 as an example, fig. 3 is a schematic diagram of finding a target content hot area through a touch point in an exemplary embodiment of the disclosure. As shown in fig. 3, three content hot areas inside the rectangular frame, namely button 1, button 2, and a virtual character, are selected as the content hot areas within the preset range, at distances d1, d2, and d3 respectively from the current position of the touch point. Since min(d1, d2, d3) = d1, the distance d1 between the current position of the touch point and the virtual character is the shortest, that is, the content hot area where the virtual character is located is the target content hot area.
Determining the content hot area with the shortest distance to the current position as the target content hot area lets the player learn the positional relationship between the current position and the target content hot area, so that the player can find the target content hot area as soon as possible, improving touch operation efficiency.
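Steps S1211 to S1213 and formulas (1) and (2) can be sketched together as a nearest-zone search; the dictionary layout of the zone set is an assumption made for this sketch:

```python
# Sketch of steps S1211-S1213: gather the content hot areas within the preset
# first range D of the reference point (formula (2)), compute each Euclidean
# distance (formula (1)), and return the zone with the shortest distance,
# i.e. min(d1, d2, d3, ...). Zone data layout is an illustrative assumption.

import math


def find_target_hot_zone(touch, zones, first_range):
    """touch: (x0, y0); zones: dict name -> (x1, y1); first_range: D.
    Returns (name, distance) of the nearest zone with d <= D, else None."""
    candidates = []
    for name, (x1, y1) in zones.items():
        d = math.hypot(touch[0] - x1, touch[1] - y1)  # formula (1)
        if d <= first_range:                           # condition (2)
            candidates.append((d, name))
    if not candidates:
        return None
    d, name = min(candidates)                          # min(d1, d2, d3, ...)
    return name, d
```

With the fig. 3 layout this would return the virtual character's zone, since its distance d1 is the smallest of the three candidates inside the range.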
In an alternative embodiment, in response to the movement of the touch point on the graphical user interface, the change in distance between the target content hot area and the current position of the touch point is determined, and an audio parameter of the audio prompt is adjusted according to the change in distance. Optionally, the first terminal device determines the change in distance between the target content hot area and the current position of the touch point by detecting the change in the moving position of the touch point on the graphical user interface in real time, and adjusts the audio parameter of the audio prompt accordingly.
In an optional implementation manner, the distance between the target content hot area and the current position of the touch point may change continuously from far to near, continuously from near to far, or irregularly and discontinuously; the manner in which this distance changes is not limited in this exemplary embodiment.
In an alternative embodiment, the audio prompt may be a simple alert tone that alerts the user, such as a "Di-" or "Da-" prompt tone.
In an optional implementation manner, the audio prompt is performed according to the distance between the target content hot area and the current position of the touch point, and an audio parameter may be adjusted correspondingly as that distance changes. By perceiving the change of the audio parameter, the player can judge how far the current touch point is from the target content hot area, which guides the player in choosing the direction of the next touch so that the target content hot area is found as soon as possible. The operation is simple and easy to master, and user experience is improved.
In an alternative embodiment, the audio parameter may be the volume, frequency, or type of the audio. Setting different audio parameters (such as frequency, volume, and type) offers the player multiple selectable reminding modes, meets the needs of different groups of people, improves user experience, and helps the player find the target content hot area as soon as possible.
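The continuous adjustment of audio parameters as the touch point moves can be sketched with a simple linear mapping; the ranges and the linear shape are illustrative choices, not a formula mandated by the embodiment:

```python
# Hedged sketch: map the distance from touch point to target hot area onto
# (volume, frequency), so that both rise as the player's finger closes in.
# The parameter ranges below are assumed example values.

def audio_params(distance, max_distance,
                 vol_range=(0.2, 1.0), freq_range=(220.0, 880.0)):
    """Map distance in [0, max_distance] to (volume, frequency_hz);
    distance 0 gives the loudest, highest tone, max_distance the faintest."""
    closeness = max(0.0, 1.0 - min(distance, max_distance) / max_distance)
    vol = vol_range[0] + closeness * (vol_range[1] - vol_range[0])
    freq = freq_range[0] + closeness * (freq_range[1] - freq_range[0])
    return vol, freq
```

On each touch-move event the terminal would recompute the distance and re-issue the prompt tone with the new parameters, giving the player continuous directional feedback.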
In an optional implementation manner, after step S130, the following specific steps are further included:
step S140, responding to the movement of the touch point to the target content hot area, and acquiring a broadcast message corresponding to the target content hot area;
and step S150, playing the broadcast message through the first terminal equipment.
In an optional implementation manner, in the screen recognition mode, the player's current touch operation is detected in real time; when the touch point is detected moving into the target content hot area, the broadcast message corresponding to the target content hot area is obtained and played through the first terminal device. Optionally, the broadcast message corresponds to the target content hot area, that is, the player can learn, through different broadcast messages, information such as the name, type, or function of the target content hot area that has been entered, which is not limited in this exemplary embodiment.
Optionally, the broadcast message corresponding to the target content hot area may be obtained from preset mapping relationships between different hot areas and their broadcast messages. For example, if the target content hot area is a directional control key, its corresponding broadcast message "directional control key" may be broadcast, and the player controls the forward, backward, left, and right movement of the target, among other actions, by touching the directional control key. The broadcast message may also be obtained by recognizing the graphical user interface in real time. For example, if the target content hot area is a game story line, then when the touch point is detected moving onto the game story line, the text information displayed in the target content hot area may be recognized and reported to the player by voice, which is not limited in the present exemplary embodiment.
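Steps S140 and S150 can be sketched as a lookup against a preset mapping, with real-time recognition as a fallback; the mapping entries, the `speak` callback, and the `fallback_ocr` hook are all illustrative assumptions:

```python
# Minimal sketch of steps S140-S150: when the touch point enters the target
# content hot area, fetch its preset broadcast message and hand the text to
# a text-to-speech player; unknown zones may fall back to on-screen text
# recognition. All names here are hypothetical.

BROADCAST_MAP = {
    "direction_key": "directional control key",
    "button1": "button 1",
}


def on_enter_hot_zone(zone_name, speak, fallback_ocr=None):
    """Return the text that was spoken for the entered hot zone, or None."""
    text = BROADCAST_MAP.get(zone_name)
    if text is None and fallback_ocr is not None:
        text = fallback_ocr(zone_name)  # e.g. recognize displayed story text
    if text is not None:
        speak(text)
    return text
```

Keeping the mapping preset, as the text suggests, avoids re-deriving messages at touch time; the recognition fallback covers dynamic content such as story lines.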
In an alternative embodiment, after step S150, the target content hot area is determined as an operation control; in response to an execution trigger event of the operation control, the graphical user interface is controlled to exit the screen recognition mode, and the operation instruction corresponding to the operation control is executed.
In an optional implementation manner, when the player touches a target content hot area that is an information control or game content and then releases the finger, that is, the touch point disappears, the screen recognition mode is exited automatically without executing any operation instruction of the target content hot area. Optionally, the first terminal device may play an audio prompt upon exiting the screen recognition mode.
In an optional implementation manner, after the screen recognition mode is exited, the graphical user interface continues to display the current game screen; in response to a touch operation event of the player, the first terminal device executes the corresponding touch instruction, and the graphical user interface may be controlled to enter the screen recognition mode again by the trigger event of the screen recognition mode.
In an alternative implementation, taking fig. 4 as an example, fig. 4 is a schematic diagram of an operation manner of an operation control in an exemplary embodiment of the present disclosure. As shown in fig. 4, in the screen recognition mode, when the player touches a target content hot area that is an operation control (i.e., button 1) and releases the finger, that is, the touch point disappears, the screen recognition mode is exited automatically and the operation instruction of the operation control is executed. That is, the player slides onto button 1; when the finger is released and the touch point disappears, the screen recognition mode is exited automatically, and execution of the operation instruction of button 1 is triggered by default. Optionally, the first terminal device may play an audio prompt upon exiting the screen recognition mode.
This specific operation mode of the operation control seamlessly connects the screen recognition mode with the touch operation performed upon exiting it, simplifies the steps of touch-operating the control, greatly improves operating convenience for the player, reduces operating complexity, and enhances user experience.
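The release-to-execute behavior of fig. 4 can be sketched as a small state machine; the class shape and callback names are assumptions made for this sketch:

```python
# Hedged sketch of the fig. 4 flow: while screen recognition mode is active,
# lifting the finger inside an operation control exits the mode and triggers
# that control's instruction in one step; lifting it elsewhere only exits.
# State and method names are hypothetical.

class ScreenModeController:
    def __init__(self):
        self.in_screen_mode = False
        self.executed = []  # operation instructions that were triggered

    def enter(self):
        self.in_screen_mode = True

    def on_touch_release(self, hovered_control):
        """Called when the touch point disappears; hovered_control is the
        operation control under the finger at release time, or None."""
        if not self.in_screen_mode:
            return
        self.in_screen_mode = False  # automatically exit screen recognition
        if hovered_control is not None:
            self.executed.append(hovered_control)  # run its instruction
```

This is what makes the connection seamless: exit and execution happen on the same release event, so the player never has to re-locate the control after leaving the mode.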
In an optional implementation manner, after step S150, when the current position of the touch point is located in the target content hot area (i.e., the original target content hot area) and the player wants to continue searching for a new target content hot area, the search range of the new target content hot area must be determined. In one optional manner, the search range of the new target content hot area is all hot areas on the graphical user interface except the original target content hot area, and the audio parameter of the audio prompt is adjusted according to the change in distance between the new target content hot area and the current position of the touch point, which specifically includes the following steps:
step S1601, acquiring a current position of a touch point acting on the graphical user interface, and determining other content hot areas except the target content hot area with the shortest distance to the current position as new target content hot areas;
step S1603, in response to the movement of the touch point on the graphical user interface, determining the change in distance between the new target content hot area and the current position of the touch point, and adjusting an audio parameter of the audio prompt according to the change in distance.
In an alternative embodiment, taking fig. 5 as an example, fig. 5 is a schematic diagram of the search range of a new target content hot area in an exemplary embodiment of the present disclosure, where the search range is all hot areas on the graphical user interface except the original target content hot area. As shown in fig. 5, when the touch point enters target content hot area A (the frame-shaped area excluding the frame-shaped area where content hot area button 3 is located, i.e., the original target content hot area) and the player wants to continue searching for a new target content hot area, the new target content hot area is determined from content hot area button 1, button 2, and button 3 (excluding the original target content hot area A). This way of determining the new target content hot area enables real-time touch detection over the full-screen content hot areas, and avoids the touch blind area that arises when a player operating inside a larger content hot area cannot learn the positions of content hot areas outside it.
The determination of the change in distance between the new target content hot area and the current position of the touch point in response to the movement of the touch point on the graphical user interface, and the adjustment of the audio parameter of the audio prompt according to that change, are described in detail above and are not repeated here.
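The fig. 5 search range (everything except the original hot area) can be sketched as a filtered nearest-neighbor search; the zone dictionary layout is an assumption made for this sketch:

```python
# Sketch of the fig. 5 variant: once the touch point is inside the original
# target hot area, the new target is the nearest hot area among all zones on
# the interface EXCEPT the original one. Data layout is illustrative.

import math


def new_target_excluding(touch, zones, original):
    """zones: dict name -> (x, y); original: name of the current hot area.
    Returns the nearest other zone name, or None if no other zone exists."""
    others = {n: p for n, p in zones.items() if n != original}
    if not others:
        return None
    return min(others, key=lambda n: math.hypot(touch[0] - others[n][0],
                                                touch[1] - others[n][1]))
```

Because the original zone is excluded but everything else stays in range, a player inside a large hot area can still be guided toward controls anywhere on the screen, which is exactly the blind-area avoidance the text describes.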
In an alternative embodiment, after step S150, when the current position of the touch point is located in the target content hot area (i.e., the original target content hot area) and the player wants to continue searching for a new target content hot area, the search range of the new target content hot area may alternatively be all hot areas on the graphical user interface within the original target content hot area.
In an alternative embodiment, illustrated in fig. 6, fig. 6 is a schematic diagram of the search range of a new target content hot area in another exemplary embodiment of the present disclosure; the search range of the new target content hot area is all hot areas on the graphical user interface within the original target content hot area, not including the original target content hot area itself. As shown in fig. 6, content hot area buttons 1 and 2 are contained within the target content hot area, and the new target content hot area is determined from content hot area buttons 1 and 2 (excluding the original target content hot area A). This way of determining the new target content hot area avoids the low detection efficiency caused by real-time full-screen hot-area detection when the player performs touch operations inside some larger content hot areas; it allows touch detection outside the local content hot areas to be shielded, further improving the search efficiency of the target content hot area.
The audio parameter of the audio prompt is adjusted according to the change in distance between the new target content hot area and the current position of the touch point. Optionally, the target content hot area further contains at least one first content hot area, which specifically includes the following steps:
step S1701, acquiring the current position of the touch point acting on the graphical user interface, and determining the first content hot area with the shortest distance to the current position as the new target content hot area;
in an alternative embodiment, as shown in fig. 6, after the touch point enters the original target content hot area a, in the process that the touch point moves towards the content hot area button 1, when the distance between the touch point and the content hot area button 1 is smaller than the distance between the touch point and the content hot area button 2, and the distance between the touch point and the content hot area button 1 is smaller than the preset threshold, the content hot area button 1 is determined as the new target content hot area.
step S1703, in response to the movement of the touch point on the graphical user interface, determining the change in distance between the new target content hot area and the current position of the touch point, and adjusting an audio parameter of the audio prompt according to the change in distance.
In an alternative embodiment, after content hot area button 1 is determined as the new target content hot area, in response to the movement of the touch point toward the new target content hot area button 1, the audio parameter of the audio prompt is adjusted according to the change in distance between the current position of the touch point and button 1. The determination of the change in distance between the new target content hot area and the current position of the touch point in response to the movement of the touch point on the graphical user interface, and the adjustment of the audio parameter of the audio prompt according to that change, are described in detail above and are not repeated here.
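The fig. 6 variant, where the search is confined to first content hot areas nested inside the original target hot area, can be sketched with a containment filter; the rectangle convention and center-point containment test are assumptions made for this sketch:

```python
# Sketch of the fig. 6 variant: restrict the new-target search to first
# content hot areas nested INSIDE the original target hot area, shielding
# detection of zones elsewhere on the screen. Rectangles are given as
# (left, top, right, bottom); all names are illustrative.

import math


def new_target_within(touch, zones, original_rect):
    """zones: dict name -> (x, y) of hot area centers; original_rect: the
    bounding box of the original target hot area. Returns the nearest zone
    whose center lies inside original_rect, or None if there is none."""
    left, top, right, bottom = original_rect
    inside = {n: p for n, p in zones.items()
              if left <= p[0] <= right and top <= p[1] <= bottom}
    if not inside:
        return None
    return min(inside, key=lambda n: math.hypot(touch[0] - inside[n][0],
                                                touch[1] - inside[n][1]))
```

Compared with the full-screen search of fig. 5, only the nested buttons are candidates, which is the efficiency gain the paragraph above describes for large content hot areas.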
By detecting in real time the change in the number of content hot areas corresponding to touch points at different positions, the player, after learning the number of content hot areas through audio, gains a preliminary understanding of how the content hot areas are distributed over the graphical user interface, and can dynamically perceive increases and decreases in the number of content hot areas on the graphical user interface.
An information presentation apparatus is also disclosed in the present exemplary embodiment, and fig. 7 is a block diagram of an information presentation apparatus in an exemplary embodiment of the present disclosure. As shown in fig. 7, the apparatus includes:
a graphical user interface, where the graphical user interface includes at least one content hot area, the content hot area being an area capable of triggering an audio prompt in the screen recognition mode;
the response module is used for responding to a trigger event of the screen recognition mode and controlling the graphical user interface to enter the screen recognition mode;
the determining module is used for acquiring the current position of a touch point acting on the graphical user interface in the screen identifying mode and determining a target content hot area according to the current position;
and the prompting module is used for performing audio prompting according to the distance between the target content hot area and the current position of the touch point.
Optionally, the trigger event of the screen-recognition mode includes one of the following: determining a preset number of touch points acting on the graphical user interface, and moving towards a preset direction; or receiving audio information matched with a preset audio clip through the first terminal device.
Optionally, the determining a target content hot zone according to the current location includes: and determining the content hot area with the shortest distance to the current position as a target content hot area.
Optionally, the determining the content hot zone with the shortest distance to the current location as the target content hot zone further includes: taking the current position as a reference point, and acquiring at least one content hot area in a preset first range; determining a distance between the current location and the at least one content hotspot; and determining the content hot zone with the shortest distance as a target content hot zone.
Optionally, the method further comprises: and responding to the movement of the touch point on the graphical user interface, determining the distance change between the target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
Optionally, the audio parameter includes frequency, volume, or audio type.
Optionally, the method further comprises: responding to the movement of the touch point to the target content hot area, and acquiring a broadcast message corresponding to the target content hot area; and playing the broadcast message through the first terminal equipment.
Optionally, after the broadcast message is played by the first terminal device, the method further includes: acquiring the current position of a touch point acting on the graphical user interface, and determining, from among the content hot areas other than the target content hot area, the content hot area with the shortest distance to the current position as a new target content hot area; and in response to the movement of the touch point on the graphical user interface, determining the distance change between the new target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
Optionally, the target content hot area further includes at least one first content hot area, and after the first terminal device plays the announcement message, the method further includes: acquiring the current position of a touch point acting on the graphical user interface, and determining a first content hot area with the shortest distance to the current position as a new target content hot area; and responding to the movement of the touch point on the graphical user interface, determining the distance change between the new target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
Optionally, the content hot area is an operation control, an information control, or game content, and after the audio prompt is played through the first terminal device, the method further includes: determining that the target content hot area is an operation control; and in response to an execution trigger event of the operation control, controlling the graphical user interface to exit the screen-recognition mode and executing an operation instruction corresponding to the operation control.
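The operation-control branch amounts to a small state transition: on a confirmation trigger while the target is an operation control, leave screen-recognition mode and fire the control's instruction. A sketch under that reading (the class and field names are hypothetical):

```python
class ScreenRecognitionSession:
    def __init__(self):
        self.screen_recognition = True   # the GUI is currently in screen-recognition mode
        self.executed = []               # operation instructions fired so far

    def on_execution_trigger(self, target):
        """Handle the execution trigger event for the current target hot area:
        only operation controls exit screen-recognition mode and run an instruction."""
        if self.screen_recognition and target["kind"] == "operation_control":
            self.screen_recognition = False              # exit screen-recognition mode
            self.executed.append(target["instruction"])  # run the control's instruction
```

Information controls and plain game content would only be announced, not executed, so they leave the session state untouched.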
Through this embodiment, the technical problem is solved that a user who cannot visually acquire the on-screen information, and who cannot obtain effective tactile cues from the smooth, featureless screen surface, is unable to operate a touch-screen device effectively. Touch-screen control becomes convenient and reliable for such users, the user experience is effectively improved, the original graphical user interface is not affected, and the cost of implementing accessibility (barrier-free) functionality during graphical user interface development is reduced.
The specific details of each module unit in the above embodiments have been described in detail in the corresponding information prompting method. The information prompting apparatus further includes other unit modules corresponding to that method, which are therefore not described again here.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided among a plurality of modules or units.
Fig. 8 is a schematic structural diagram of a computer-readable storage medium in an exemplary embodiment of the disclosure. As shown in Fig. 8, the program product 1100 has a computer program stored thereon which, when executed by a processor, performs in an alternative embodiment the steps of:
controlling the graphical user interface to enter the screen-recognition mode in response to a trigger event of the screen-recognition mode;
in the screen recognition mode, acquiring the current position of a touch point acting on the graphical user interface, and determining a target content hot area according to the current position;
and performing audio prompt according to the distance between the target content hot area and the current position of the touch point.
Optionally, the trigger event of the screen-recognition mode includes one of the following: detecting that a preset number of touch points acting on the graphical user interface move in a preset direction; or receiving, through the first terminal device, audio information matched with a preset audio clip.
Optionally, determining a target content hot area according to the current position includes: determining the content hot area with the shortest distance to the current position as the target content hot area.
Optionally, determining the content hot area with the shortest distance to the current position as the target content hot area further includes: taking the current position as a reference point, and acquiring at least one content hot area within a preset first range; determining the distance between the current position and each of the at least one content hot area; and determining the content hot area with the shortest distance as the target content hot area.
Optionally, the method further comprises: in response to the movement of the touch point on the graphical user interface, determining the distance change between the target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
Optionally, the audio parameters include frequency, volume, or audio type.
Optionally, the method further comprises: in response to the touch point moving into the target content hot area, acquiring a broadcast message corresponding to the target content hot area; and playing the broadcast message through the first terminal device.
Optionally, after the broadcast message is played by the first terminal device, the method further includes: acquiring the current position of a touch point acting on the graphical user interface, and determining, from among the content hot areas other than the target content hot area, the content hot area with the shortest distance to the current position as a new target content hot area; and in response to the movement of the touch point on the graphical user interface, determining the distance change between the new target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
Optionally, the target content hot area further includes at least one first content hot area, and after the first terminal device plays the announcement message, the method further includes: acquiring the current position of a touch point acting on the graphical user interface, and determining the first content hot area with the shortest distance to the current position as a new target content hot area; and in response to the movement of the touch point on the graphical user interface, determining the distance change between the new target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
Optionally, the content hot area is an operation control, an information control, or game content, and after the audio prompt is played through the first terminal device, the method further includes: determining that the target content hot area is an operation control; and in response to an execution trigger event of the operation control, controlling the graphical user interface to exit the screen-recognition mode and executing an operation instruction corresponding to the operation control.
A computer-readable storage medium according to an embodiment of the present disclosure stores a computer program which, when executed by a processor, implements the steps of the above information prompting method. Through this embodiment, the technical problem is solved that a user who cannot visually acquire the on-screen information, and who cannot obtain effective tactile cues from the smooth, featureless screen surface, is unable to operate a touch-screen device effectively. Touch-screen control becomes convenient and reliable for such users, the user experience is effectively improved, the original graphical user interface is not affected, and the cost of implementing accessibility (barrier-free) functionality during graphical user interface development is reduced.
A computer-readable storage medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic or optical forms, or any suitable combination thereof. A computer-readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied in a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The electronic device 1000 of the present exemplary embodiment is described below with reference to Fig. 9. The electronic device 1000 is only one example and should not impose any limitation on the functionality or scope of use of embodiments of the present disclosure.
Referring to FIG. 9, an electronic device 1000 is shown in the form of a general purpose computing device. The components of the electronic device 1000 may include, but are not limited to: at least one processor 1010, at least one memory 1020, a bus 1030 that couples various system components including the processor 1010 and the memory 1020, and a display unit 1040.
The memory 1020 stores program code executable by the processor 1010, such that the processor 1010 implements the following steps by executing these instructions:
controlling the graphical user interface to enter the screen-recognition mode in response to a trigger event of the screen-recognition mode;
in the screen recognition mode, acquiring the current position of a touch point acting on the graphical user interface, and determining a target content hot area according to the current position;
and performing audio prompt according to the distance between the target content hot area and the current position of the touch point.
Optionally, the trigger event of the screen-recognition mode includes one of the following: detecting that a preset number of touch points acting on the graphical user interface move in a preset direction; or receiving, through the first terminal device, audio information matched with a preset audio clip.
Optionally, determining a target content hot area according to the current position includes: determining the content hot area with the shortest distance to the current position as the target content hot area.
Optionally, determining the content hot area with the shortest distance to the current position as the target content hot area further includes: taking the current position as a reference point, and acquiring at least one content hot area within a preset first range; determining the distance between the current position and each of the at least one content hot area; and determining the content hot area with the shortest distance as the target content hot area.
Optionally, the method further comprises: in response to the movement of the touch point on the graphical user interface, determining the distance change between the target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
Optionally, the audio parameters include frequency, volume, or audio type.
Optionally, the method further comprises: in response to the touch point moving into the target content hot area, acquiring a broadcast message corresponding to the target content hot area; and playing the broadcast message through the first terminal device.
Optionally, after the broadcast message is played by the first terminal device, the method further includes: acquiring the current position of a touch point acting on the graphical user interface, and determining, from among the content hot areas other than the target content hot area, the content hot area with the shortest distance to the current position as a new target content hot area; and in response to the movement of the touch point on the graphical user interface, determining the distance change between the new target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
Optionally, the target content hot area further includes at least one first content hot area, and after the first terminal device plays the announcement message, the method further includes: acquiring the current position of a touch point acting on the graphical user interface, and determining the first content hot area with the shortest distance to the current position as a new target content hot area; and in response to the movement of the touch point on the graphical user interface, determining the distance change between the new target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
Optionally, the content hot area is an operation control, an information control, or game content, and after the audio prompt is played through the first terminal device, the method further includes: determining that the target content hot area is an operation control; and in response to an execution trigger event of the operation control, controlling the graphical user interface to exit the screen-recognition mode and executing an operation instruction corresponding to the operation control.
An electronic device according to an embodiment of the present disclosure includes: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the steps of the above information prompting method via execution of the executable instructions. Through this embodiment, the technical problem is solved that a user who cannot visually acquire the on-screen information, and who cannot obtain effective tactile cues from the smooth, featureless screen surface, is unable to operate a touch-screen device effectively. Touch-screen control becomes convenient and reliable for such users, the user experience is effectively improved, the original graphical user interface is not affected, and the cost of implementing accessibility (barrier-free) functionality during graphical user interface development is reduced.
The electronic device may further include: a power component configured to perform power management for the electronic device; a wired or wireless network interface configured to connect the electronic device to a network; and an input/output (I/O) interface. The electronic device may operate based on an operating system stored in the memory, such as Android, iOS, Windows, Mac OS X, Unix, Linux, or FreeBSD.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions that cause a computing device (such as a personal computer, a server, an electronic device, or a network device) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. An information prompting method, wherein a first terminal device provides a graphical user interface, the graphical user interface comprising at least one content hot area, the content hot area being an area capable of triggering an audio prompt in a screen-recognition mode, the method comprising:
controlling the graphical user interface to enter the screen-recognition mode in response to a trigger event of the screen-recognition mode;
in the screen recognition mode, acquiring the current position of a touch point acting on the graphical user interface, and determining a target content hot area according to the current position;
and performing audio prompt according to the distance between the target content hot area and the current position of the touch point.
2. The method of claim 1, wherein the trigger event for the screen-recognition mode comprises one of the following:
detecting that a preset number of touch points acting on the graphical user interface move in a preset direction; or,
receiving, through the first terminal device, audio information matched with a preset audio clip.
3. The method of claim 1, the controlling the graphical user interface into the screen-aware mode, further comprising:
and controlling the graphical user interface to enter the screen recognition mode, and acquiring the screen coordinate position of the content hot area in the graphical user interface and the broadcast information corresponding to the content hot area.
4. The method of claim 1, wherein the determining a target content hot area according to the current position comprises:
and determining the content hot area with the shortest distance to the current position as a target content hot area.
5. The method of claim 4, wherein determining the content hot zone having the shortest distance to the current location as the target content hot zone further comprises:
taking the current position as a reference point, and acquiring at least one content hot area in a preset first range;
determining a distance between the current position and each of the at least one content hot area;
and determining the content hot zone with the shortest distance as a target content hot zone.
6. The method of any of claims 1-5, further comprising:
and responding to the movement of the touch point on the graphical user interface, determining the distance change between the target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
7. The method of claim 6, wherein the audio parameters comprise frequency, volume, or audio type.
8. The method of claim 1, further comprising:
responding to the movement of the touch point to the target content hot area, and acquiring a broadcast message corresponding to the target content hot area;
and playing the broadcast message through the first terminal equipment.
9. The method of claim 8, wherein after the announcement message is played by the first terminal device, the method further comprises:
acquiring the current position of a touch point acting on the graphical user interface, and determining, from among the content hot areas other than the target content hot area, the content hot area with the shortest distance to the current position as a new target content hot area;
and responding to the movement of the touch point on the graphical user interface, determining the distance change between the new target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
10. The method of claim 8, wherein the target content hot zone further comprises at least one first content hot zone, and after the announcement message is played by the first terminal device, the method further comprises:
acquiring the current position of a touch point acting on the graphical user interface, and determining a first content hot area with the shortest distance to the current position as a new target content hot area;
and responding to the movement of the touch point on the graphical user interface, determining the distance change between the new target content hot area and the current position of the touch point, and adjusting the audio parameters of the audio prompt according to the distance change.
11. The method of claim 8, wherein the content hot zone is an operation control, an information control, or game content, and after the audio prompt is played through the first terminal device, the method further comprises:
determining the target content hot area as an operation control;
and responding to the execution triggering event of the operation control, controlling the graphical user interface to exit the screen recognition mode, and executing an operation instruction corresponding to the operation control.
12. An information prompting device, the device comprising:
a display module, configured to provide a graphical user interface, wherein the graphical user interface comprises at least one content hot area, which is an area capable of triggering an audio prompt in a screen-recognition mode;
the response module is used for responding to a trigger event of the screen recognition mode and controlling the graphical user interface to enter the screen recognition mode;
the determining module is used for acquiring the current position of a touch point acting on the graphical user interface in the screen-recognition mode and determining a target content hot area according to the current position;
and the prompting module is used for performing audio prompting according to the distance between the target content hot area and the current position of the touch point.
13. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the information prompting method of any of claims 1-11 via execution of the executable instructions.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the information presentation method of any one of claims 1 to 11.
CN202010997209.5A 2020-09-21 2020-09-21 Information prompting method and device, storage medium and electronic equipment Active CN112090058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010997209.5A CN112090058B (en) 2020-09-21 2020-09-21 Information prompting method and device, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN112090058A true CN112090058A (en) 2020-12-18
CN112090058B CN112090058B (en) 2024-01-30

Family

ID=73756404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010997209.5A Active CN112090058B (en) 2020-09-21 2020-09-21 Information prompting method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112090058B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101850183A (en) * 2010-05-26 2010-10-06 浙江大学 Interactive music entertainment equipment based on touch sensation
CN101950244A (en) * 2010-09-20 2011-01-19 宇龙计算机通信科技(深圳)有限公司 Method and device for giving prompt for content information on user interface
US20110224000A1 (en) * 2010-01-17 2011-09-15 James Toga Voice-based entertainment activity in a networked enviorment
CN102221922A (en) * 2011-03-25 2011-10-19 苏州瀚瑞微电子有限公司 Touch system for supporting voice prompt and realization method thereof
JP2013175045A (en) * 2012-02-24 2013-09-05 Denso Corp Touch type switch device
CN108854072A (en) * 2018-06-22 2018-11-23 北京心智互动科技有限公司 A kind of voice prompt method and device


Also Published As

Publication number Publication date
CN112090058B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN112351302B (en) Live broadcast interaction method and device based on cloud game and storage medium
CN107648847B (en) Information processing method and device, storage medium and electronic equipment
CN107551555B (en) Game picture display method and device, storage medium and terminal
CN108465238B (en) Information processing method in game, electronic device and storage medium
CN109905754B (en) Virtual gift receiving method and device and storage equipment
US20230219000A1 (en) Pathfinding control method and device in game
CN111324253B (en) Virtual article interaction method and device, computer equipment and storage medium
CN108038726B (en) Article display method and device
CN113691829B (en) Virtual object interaction method, device, storage medium and computer program product
CN107185232B (en) Virtual object motion control method and device, electronic equipment and storage medium
CN112516589A (en) Game commodity interaction method and device in live broadcast, computer equipment and storage medium
CN114466209B (en) Live broadcast interaction method and device, electronic equipment, storage medium and program product
CN112000252A (en) Virtual article sending and displaying method, device, equipment and storage medium
CN113485626A (en) Intelligent display device, mobile terminal and display control method
CN109954276A (en) Information processing method, device, medium and electronic equipment in game
CN107626105B (en) Game picture display method and device, storage medium and electronic equipment
CN112969087A (en) Information display method, client, electronic equipment and storage medium
CN112891936A (en) Virtual object rendering method and device, mobile terminal and storage medium
CN108710512A (en) Preloading method, apparatus, storage medium and the intelligent terminal of application program
CN109091864B (en) Information processing method and device, mobile terminal and storage medium
CN112090058B (en) Information prompting method and device, storage medium and electronic equipment
CN115643445A (en) Interaction processing method and device, electronic equipment and storage medium
CN103677500A (en) Data processing method and electronic device
CN112929685B (en) Interaction method and device for VR live broadcast room, electronic device and storage medium
CN113769403A (en) Virtual object moving method and device, readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant