CN116978375A - User interface control method, device, equipment and storage medium - Google Patents

User interface control method, device, equipment and storage medium

Info

Publication number
CN116978375A
Authority
CN
China
Prior art keywords
control
text information
user
determining
voice instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310678188.4A
Other languages
Chinese (zh)
Inventor
刘佳
欧阳能钧
刘嵘
华鲸州
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd filed Critical Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority to CN202310678188.4A priority Critical patent/CN116978375A/en
Publication of CN116978375A publication Critical patent/CN116978375A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/54 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for retrieval
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/225 - Feedback of the input speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a user interface control method, a device, equipment and a storage medium, relates to the technical field of voice, in particular to technical fields such as voice control and interface interaction, and can be applied to scenarios such as vehicle human-machine interaction interface control and equipment human-machine interaction interface control. A specific implementation includes: acquiring a first voice instruction of a user; determining, according to the first voice instruction and the text information corresponding to each control in an interface to be operated, the controls corresponding to the text information matched with the first voice instruction; determining a third control according to the first control and the control association relation of the first control; and operating the first control and the third control. The control that the user really wants to interact with can be determined accurately, which improves the user experience.

Description

User interface control method, device, equipment and storage medium
Technical Field
The disclosure relates to the technical field of voice, in particular to technical fields such as voice control and interface interaction, can be applied to scenarios such as vehicle human-machine interaction interface control and equipment human-machine interaction interface control, and in particular relates to a user interface control method, a device, equipment and a storage medium.
Background
As technology continues to develop, more and more vehicles are equipped with vehicle-mounted terminals having a voice control function.
Currently, some vehicles provide a "visible and speakable" function. The "visible and speakable" function means that every control displayed on the user interface of the terminal can be operated through voice interaction, without the user having to click, touch, or otherwise operate it manually.
However, when the vehicle is in the "visible and speakable" mode and the user interface includes a plurality of controls having the same text, the vehicle cannot accurately determine, from the user's voice instruction, the control that the user really wants to interact with, resulting in a poor user experience.
Disclosure of Invention
The present disclosure provides a user interface control method, a device, equipment and a storage medium, which can accurately determine the control that a user really wants to interact with and improve the user experience.
According to a first aspect of the present disclosure, there is provided a user interface control method, including:
acquiring a first voice instruction of a user; determining, according to the first voice instruction and text information corresponding to each control in an interface to be operated, controls corresponding to the text information matched with the first voice instruction, wherein the text information matched with the first voice instruction at least comprises first text information and second text information, the number of first controls corresponding to the first text information is one, the number of second controls corresponding to the second text information is at least two, and the second controls are different from one another; determining a third control according to the first control and a control association relation of the first control, wherein the third control is a control, among the second controls, associated with the first control, and the control association relation of the first control is used for indicating the controls associated with the first control; and operating the first control and the third control.
According to a second aspect of the present disclosure, there is provided a user interface control apparatus, the apparatus comprising: an acquisition module and a processing module.
The acquisition module is used for acquiring a first voice instruction of a user.
The processing module is used for determining, according to the first voice instruction and text information corresponding to each control in an interface to be operated, controls corresponding to the text information matched with the first voice instruction, wherein the text information matched with the first voice instruction at least comprises first text information and second text information, the number of first controls corresponding to the first text information is one, the number of second controls corresponding to the second text information is at least two, and the second controls are different from one another; determining a third control according to the first control and a control association relation of the first control, wherein the third control is a control, among the second controls, associated with the first control, and the control association relation of the first control is used for indicating the controls associated with the first control; and operating the first control and the third control.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method as in the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method according to the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flow chart of a user interface control method according to an embodiment of the disclosure;
FIG. 2 is a schematic flow chart of another method for controlling a user interface according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of an interface to be operated according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of a user interface control method according to an embodiment of the disclosure;
FIG. 5 is a schematic flow chart of a user interface control method according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of the components of a user interface control device provided in an embodiment of the present disclosure;
fig. 7 is a schematic diagram of the composition of an electronic device according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be appreciated that in embodiments of the present disclosure, the character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
As technology continues to develop, more and more vehicles are equipped with vehicle-mounted terminals having a voice control function.
Currently, some vehicles provide a "visible and speakable" function. The "visible and speakable" function means that every control displayed on the user interface of the terminal can be operated through voice interaction, without the user having to click, touch, or otherwise operate it manually.
For example, when the user interface displays three controls that set the air conditioner wind speed, with corresponding text information "high", "medium" and "low" respectively, the user only needs to say the wind speed to be set. For example, when the user says "set the air conditioner wind speed to high", the vehicle-mounted terminal performs a virtual click on the control whose text information is "high", so that the air conditioner wind speed is set to high.
However, when the vehicle is in the "visible and speakable" mode and the user interface includes a plurality of controls having the same text, the vehicle cannot accurately determine, from the user's voice instruction, the control that the user really wants to interact with, resulting in a poor user experience.
Continuing the above example, when the user interface also displays three controls that set the key volume of the vehicle, with corresponding text information "high", "medium" and "low" respectively, then after the user says "set the air conditioner wind speed to high", the vehicle-mounted terminal cannot accurately determine the control that the user really wants to interact with, because two controls correspond to the text information "high". To avoid a misoperation, the vehicle-mounted terminal therefore performs no operation at all, the air conditioner wind speed cannot be set, and the user experience is poor.
Against this background, the user interface control method provided by the present disclosure can accurately determine the control that the user really wants to interact with, improving the user experience.
The execution subject of the user interface control method provided by the embodiment of the disclosure may be a computer or a server, or may also be other electronic devices with data processing capability; alternatively, the execution subject of the method may be a processor (e.g., a central processing unit (central processing unit, CPU)) in the above-described electronic device; still alternatively, the execution subject of the method may be an Application (APP) installed in the electronic device and capable of implementing the function of the method; alternatively, the execution subject of the method may be a functional module, a unit, or the like having the function of the method in the electronic device. The subject of execution of the method is not limited herein.
For example, the electronic device may be an in-vehicle terminal mounted on a vehicle.
In some embodiments, the server may be a single server, or may be a server cluster formed by a plurality of servers. In some implementations, the server cluster may also be a distributed cluster. The present disclosure is not limited to a specific implementation of the server.
The user interface control method is exemplarily described below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a user interface control method according to an embodiment of the disclosure. As shown in fig. 1, the method may include:
s101, acquiring a first voice instruction of a user.
The first voice instruction of the user may be received, for example, through an audio input device (e.g., a microphone) provided on the electronic device.
S102, determining controls corresponding to the text information matched with the first voice instruction according to the first voice instruction and the text information corresponding to each control in the interface to be operated.
The text information matched with the first voice instruction at least comprises first text information and second text information, wherein the number of first controls corresponding to the first text information is one, the number of second controls corresponding to the second text information is at least two, and all the second controls are different.
For example, the interface to be operated may be displayed for the user through a display device (e.g., a display, VR glasses, etc.) provided on the electronic apparatus.
In one possible implementation, voice recognition may be performed on the first voice instruction to obtain the voice text corresponding to the first voice instruction, and the text information corresponding to each control in the interface to be operated may be matched against the obtained voice text; the controls corresponding to the text information that matches the voice text are the controls corresponding to the text information matched with the first voice instruction.
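By way of illustration only, the following minimal Python sketch shows this matching step using simple substring matching; the Control class, the field names and the sample texts are assumptions introduced for this example and are not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Control:
    control_id: str
    text: str  # text information corresponding to the control


def match_controls(voice_text: str, controls: list[Control]) -> list[Control]:
    """Return every control whose text information appears in the recognized voice text."""
    return [c for c in controls if c.text and c.text in voice_text]


# Hypothetical interface: "high" appears twice (wind speed and key volume),
# so the voice text below matches one first control and two second controls.
controls = [
    Control("c1", "air conditioner wind speed"),
    Control("c2", "high"),        # sets the air conditioner wind speed
    Control("c3", "high"),        # sets the key volume
    Control("c4", "key volume"),
]
matched = match_controls("set the air conditioner wind speed to high", controls)
# matched -> [c1, c2, c3]
```

In practice the matching could equally be fuzzy or semantic; the substring check above only illustrates the data flow of S102.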
In another possible implementation, the voice text corresponding to the first voice instruction and the text information corresponding to each control in the interface to be operated may be clustered by a clustering model; the controls corresponding to the text information that falls into the same category as the voice text of the first voice instruction are the controls corresponding to the text information matched with the first voice instruction.
For example, suppose the interface to be operated includes a control whose text information is "air conditioner wind speed", three controls that set the air conditioner wind speed with corresponding text information "high", "medium" and "low" respectively, a control whose text information is "key volume", and three controls that set the key volume with corresponding text information "high", "medium" and "low" respectively. Then the first text information may be "air conditioner wind speed" or "key volume", and the second text information may be "high", "medium" or "low".
In S102, the statement that the second controls are all different means that the functions the second controls can implement are all different. For example, when the user interface includes control A and control B, the text information corresponding to both control A and control B is "high", and both have the function of setting the air conditioner wind speed, then both controls realize the same function of setting the air conditioner wind speed to high, and control A and control B cannot both serve as second controls.
S103, determining a third control according to the first control and the control association relation of the first control.
The third control is a control associated with the first control in the second control, and the control association relation of the first control is used for indicating the control associated with the first control.
Take as an example an interface to be operated that includes control A, control B, control C and control D, where control A is determined to be the first control and control B and control C are determined to be the second controls. If the control association relation of the first control indicates that the controls associated with control A are control B and control D, then control B is determined to be the third control.
S104, operating the first control and the third control.
For example, operations such as virtual clicking, virtual touching, etc. may be performed on the first control and the third control, which is not limited.
For example, when operating the first control and the third control, the two controls may be operated simultaneously or separately, and the order of the operations is not limited. For example, when the third control in the interface to be operated is displayed only after the first control is clicked (for example, the display device of the electronic device does not display the third control by default, and the third control is displayed under the first control only after the first control is clicked), the first control may be clicked first and then the third control, so that the operation process seen by the user better matches the user's habits and the user experience is better.
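The ordering described above can be sketched as follows; the helper names are hypothetical and the virtual-click dispatch itself is platform specific and not specified by the disclosure.

```python
import time

def virtual_click(control_id: str) -> None:
    # Placeholder for a platform-specific virtual click/touch on a control.
    print(f"virtual click: {control_id}")


def operate_first_and_third(first_id: str, third_id: str, third_shown_after_first: bool) -> None:
    """Operate the first control and the third control, clicking the first control
    first when the third control only becomes visible after that click."""
    virtual_click(first_id)        # e.g. expands "air conditioner wind speed"
    if third_shown_after_first:
        time.sleep(0.1)            # give the interface time to display the third control
    virtual_click(third_id)        # e.g. selects "high"
```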
In the embodiment of the present disclosure, the first voice instruction is acquired, and the first control and the plurality of second controls corresponding to the text information matched with the first voice instruction are determined according to the first voice instruction and the text information corresponding to each control in the interface to be operated. According to the first control and the control association relation of the first control, the third control associated with the first control can be accurately determined from the plurality of second controls that correspond to the same text information; the determined third control is the control that the user really wants to interact with. The first control and the third control are then operated, so that the function required by the user is accurately realized and the user experience is improved.
In some possible embodiments, the control association relationship of the first control is preset.
For example, the control association relation of the first control may be a manually set and directly stored correspondence between the first control and other controls. For example, the correspondence between the first control and other controls is stored as a mapping table, and the mapping table serves as the control association relation of the first control.
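A minimal sketch of such a preset mapping table follows; the control identifiers are invented for illustration only.

```python
# Preset control association relation stored as a mapping table:
# first control id -> ids of the controls associated with it.
CONTROL_ASSOCIATION = {
    "ac_wind_speed": ["ac_high", "ac_medium", "ac_low"],
    "key_volume": ["vol_high", "vol_medium", "vol_low"],
}


def determine_third_controls(first_control_id: str, second_control_ids: list[str]) -> list[str]:
    """Among the second controls, keep those associated with the first control."""
    associated = set(CONTROL_ASSOCIATION.get(first_control_id, []))
    return [cid for cid in second_control_ids if cid in associated]


# determine_third_controls("ac_wind_speed", ["ac_high", "vol_high"]) -> ["ac_high"]
```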
It should be noted that the foregoing example of presetting the control association relation of the first control is merely illustrative; in practical applications, the control association relation of the first control may also be preset by other related techniques, which is not limited here.
In this embodiment, by presetting the control association relation of the first control, the third control can be accurately determined according to the first control and the control association relation of the first control, so the control that the user really wants to interact with is accurately determined and the user experience is improved.
Fig. 2 is another flow chart of a user interface control method according to an embodiment of the disclosure. Before the third control is determined according to the first control and the control association relation of the first control in the foregoing embodiment, as shown in fig. 2, the method may further include:
s201, determining a fourth control according to the position relation between each second control and the first control.
The fourth control is a second control whose position, relative to the first control, meets a preset requirement.
The position of each control may refer to the position of each control in the interface to be operated.
In one possible implementation, a second control whose distance from the first control is less than or equal to a first threshold may be determined to be the fourth control. The value of the first threshold is not limited.
Taking control A as the first control and control B, control C and control D as the second controls, with the first threshold being 5: if the distances between control B and control A, between control C and control A, and between control D and control A are 4, 8 and 12 respectively, control B is determined to be the fourth control.
In another possible implementation, a second control that is located together with the first control in the same element or the same control may be determined to be the fourth control, where the length and the width of that element or control are respectively larger than the length and the width of the second control, the length of that element or control is smaller than a second threshold, and its width is smaller than a third threshold. The values of the second threshold and the third threshold are not limited, and they may be the same or different.
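The two positional criteria above can be sketched as follows; the box representation (x/y/width/height dictionaries) and helper functions are assumptions made for this illustration.

```python
import math

def center_distance(a: dict, b: dict) -> float:
    """Euclidean distance between the centers of two controls given as x/y/w/h boxes."""
    ax, ay = a["x"] + a["w"] / 2, a["y"] + a["h"] / 2
    bx, by = b["x"] + b["w"] / 2, b["y"] + b["h"] / 2
    return math.hypot(ax - bx, ay - by)


def fourth_by_distance(first: dict, seconds: list[dict], first_threshold: float) -> list[dict]:
    """Second controls whose distance from the first control is within the first threshold."""
    return [s for s in seconds if center_distance(first, s) <= first_threshold]


def fourth_by_container(first: dict, seconds: list[dict], containers: list[dict],
                        second_threshold: float, third_threshold: float) -> list[dict]:
    """Second controls that share a sufficiently small container with the first control."""
    def inside(ctrl: dict, box: dict) -> bool:
        return (box["x"] <= ctrl["x"] and box["y"] <= ctrl["y"]
                and ctrl["x"] + ctrl["w"] <= box["x"] + box["w"]
                and ctrl["y"] + ctrl["h"] <= box["y"] + box["h"])

    small = [b for b in containers if b["w"] < second_threshold and b["h"] < third_threshold]
    return [s for s in seconds if any(inside(first, b) and inside(s, b) for b in small)]
```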
Fig. 3 is a schematic diagram of an interface to be operated according to an embodiment of the disclosure.
As shown in fig. 3, in the interface 301 to be operated, control A 302 is the first control, and control B 303 and control C 304 are second controls. Control A 302 and control B 303 are located in control D 305 and control E 306, and control B 303 and control C 304 are located in control E 306. The length and width of control D 305 are respectively smaller than the second threshold and the third threshold, while the length of control E 306 is larger than the second threshold, so control B 303 is determined to be the fourth control.
S202, obtaining the control association relation of the first control according to the fourth control.
For example, the corresponding relation between the first control and the fourth control can be directly used as the control association relation of the first control.
In one possible implementation, the fourth control determined in S201 may be used directly as the third control determined in the foregoing embodiment, without executing S202, which reduces the steps the electronic device needs to execute and improves the processing efficiency of the electronic device.
In this embodiment, the fourth control can be quickly determined among the plurality of second controls according to the positional relation between each second control and the first control, and the control association relation of the first control can be quickly obtained from the correspondence between the fourth control and the first control. The control association relation of the first control therefore does not need to be preset, which reduces manual involvement and saves labor and time costs.
Fig. 4 is a schematic flow chart of another user interface control method according to an embodiment of the disclosure. Before the third control is determined according to the first control and the control association relation of the first control in the foregoing embodiment, as shown in fig. 4, the method may further include:
s401, determining a fifth control according to the father node where the first control is located.
The fifth control is a control located on the same father node as the first control.
For example, the parent node may be the node immediately above the first control, or a node further above it, provided that the number of nodes between the parent node and the node of the first control is smaller than a fourth threshold; the value of the fourth threshold is not limited.
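A sketch of this parent-node criterion follows; the Node class and tree layout are assumptions made for illustration, since the control tree of a real interface would come from the UI framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(eq=False)
class Node:
    name: str
    parent: Optional["Node"] = None


def near_ancestors(node: Node, fourth_threshold: int) -> set:
    """Ancestors of the node up to fourth_threshold levels above it."""
    result, current, level = set(), node.parent, 0
    while current is not None and level < fourth_threshold:
        result.add(current)
        current, level = current.parent, level + 1
    return result


def fifth_controls(first: Node, seconds: list, fourth_threshold: int) -> list:
    """Second controls located under one of the first control's near ancestors."""
    allowed = near_ancestors(first, fourth_threshold)

    def under(node: Node) -> bool:
        current = node.parent
        while current is not None:
            if current in allowed:
                return True
            current = current.parent
        return False

    return [s for s in seconds if under(s)]
```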
In some other embodiments, the control that is located at the same parent node as the first control may also be determined directly from the second control obtained in S102.
S402, obtaining the control association relation of the first control according to the fifth control.
For example, the corresponding relation between the first control and the fifth control can be directly used as the control association relation of the first control.
In one possible implementation, when the fifth control is the control determined directly from the second controls obtained in S102 as being located at the same parent node as the first control, S402 need not be executed, and the fifth control determined in S401 is used directly as the third control determined in the foregoing embodiment, which reduces the steps the electronic device needs to execute and improves the processing efficiency of the electronic device.
In this embodiment, the fifth control located at the same parent node as the first control can be quickly determined among the plurality of second controls according to the parent node of the first control, and the control association relation of the first control can be quickly obtained from the correspondence between the fifth control and the first control. The control association relation of the first control therefore does not need to be preset, which reduces manual involvement and saves labor and time costs.
Fig. 5 is a schematic flowchart of another method for controlling a user interface according to an embodiment of the disclosure. Before the first voice command of the user is obtained in the foregoing embodiment, as shown in fig. 5, the method may further include:
s501, acquiring a second voice instruction of the user.
The second voice command of the user may be received, for example, through an audio input device (e.g., a microphone) provided on the electronic device.
S502, determining first text information matched with the second voice command according to the second voice command and text information respectively corresponding to each control in the interface to be operated.
The second voice command may be subjected to voice recognition to obtain a voice text corresponding to the second voice command, text information corresponding to each control in the interface to be operated is respectively matched with the obtained voice text, and the text information matched with the voice text is used as the first text information.
S503, determining that a first control corresponding to the first text information is a text control.
For example, the text control may be a control that corresponds to text information but cannot be clicked or touched.
S504, outputting prompt information.
The prompt information is used for prompting a user to input a voice instruction for operating a control associated with the first control.
The prompt may be output, for example, via a display device (e.g., a display, VR glasses, etc.) or an audio output device (e.g., a speaker, headphones, etc.) provided on the electronic device.
For example, the prompt information may include the first text information and be generated according to a preset sentence pattern. The form of the preset sentence pattern is not limited. For example, taking the first text information "air conditioner wind speed" as an example, the prompt information may be "What would you like to set the air conditioner wind speed to?".
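A minimal sketch of generating the prompt from a preset sentence pattern is shown below; the pattern wording is an assumption, and any template form would serve.

```python
PROMPT_PATTERN = "What would you like to set the {first_text} to?"

def build_prompt(first_text_information: str) -> str:
    """Fill the first text information into the preset sentence pattern."""
    return PROMPT_PATTERN.format(first_text=first_text_information)

# build_prompt("air conditioner wind speed")
# -> "What would you like to set the air conditioner wind speed to?"
```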
S505, a third voice instruction input by the user according to the prompt information is acquired.
The third voice command of the user may be received, for example, through an audio input device (e.g., a microphone) provided on the electronic device.
For example, taking the prompt information "What would you like to set the air conditioner wind speed to?" as an example, the user may input a third voice instruction to the electronic device according to the text information corresponding to the controls in the interface to be operated; for example, the third voice instruction may be "turn up".
S506, generating a first voice command according to the third voice command and the second voice command.
For example, the third voice command and the second voice command may be combined, and the combined result is the first voice command.
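A sketch of this combination step follows; plain concatenation is assumed here, since the disclosure does not fix a particular combination rule.

```python
def generate_first_instruction(second_instruction_text: str, third_instruction_text: str) -> str:
    """Combine the second and third voice instruction texts into the first voice instruction."""
    return f"{second_instruction_text} {third_instruction_text}".strip()

# generate_first_instruction("air conditioner wind speed", "turn up")
# -> "air conditioner wind speed turn up"
```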
In one possible implementation, S506 and S101 need not be executed; the second voice instruction and the third voice instruction obtained in S501 to S505 may be used directly to determine the control corresponding to the text information matched with the second voice instruction and the controls corresponding to the text information matched with the third voice instruction. The control corresponding to the text information matched with the second voice instruction is the first control, and the controls corresponding to the text information matched with the third voice instruction are the second controls. This reduces the steps the electronic device needs to execute and improves the processing efficiency of the electronic device.
In the embodiment of the present disclosure, the second voice instruction is acquired, and the first text information matched with the second voice instruction is determined among the text information corresponding to each control in the interface to be operated. When the first control corresponding to the first text information is determined to be a text control, the prompt information is output to prompt the user to continue inputting a voice instruction; the third voice instruction input by the user is then acquired, and the first voice instruction is generated from the third voice instruction and the second voice instruction. In this way, the user can be guided to input voice instructions multiple times and to input voice instructions more accurately, which improves the user experience.
The foregoing description of the embodiments of the present disclosure has been presented primarily in terms of methods. To achieve the above functions, corresponding hardware structures and/or software modules for performing the respective functions are included. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer-software-driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.
In an exemplary embodiment, the disclosed embodiments also provide a user interface control apparatus, which may be used to implement the user interface control method as in the foregoing embodiments.
Fig. 6 is a schematic diagram of a user interface control device according to an embodiment of the disclosure. As shown in fig. 6, the apparatus may include: an acquisition module 601 and a processing module 602.
The acquiring module 601 is configured to acquire a first voice instruction of a user.
The processing module 602 is configured to determine, according to the first voice instruction and text information corresponding to each control in the interface to be operated, controls corresponding to the text information matched with the first voice instruction, where the text information matched with the first voice instruction includes at least first text information and second text information, the number of first controls corresponding to the first text information is one, the number of second controls corresponding to the second text information is at least two, and the second controls are different from one another; determine a third control according to the first control and the control association relation of the first control, where the third control is a control, among the second controls, associated with the first control, and the control association relation of the first control is used for indicating the controls associated with the first control; and operate the first control and the third control.
In some possible embodiments, the control association relationship of the first control is preset.
In some possible embodiments, the processing module 602 is further configured to:
before determining a third control according to the first control and the control association relation of the first control, determining a fourth control according to the positional relation between each second control and the first control, wherein the fourth control is a second control whose position, relative to the first control, meets a preset requirement; and obtaining the control association relation of the first control according to the fourth control.
In some possible embodiments, the processing module 602 is further configured to:
before determining a third control according to the first control and the control association relation of the first control, determining a fifth control according to the parent node where the first control is located, wherein the fifth control is a control located at the same parent node as the first control; and obtaining the control association relation of the first control according to the fifth control.
In some possible embodiments, the first control is a text control, and the obtaining module 601 is further configured to:
before the first voice instruction of the user is acquired, acquiring a second voice instruction of the user; determining the first text information matched with the second voice instruction according to the second voice instruction and the text information respectively corresponding to each control in the interface to be operated; determining that the first control corresponding to the first text information is a text control; outputting prompt information, wherein the prompt information is used for prompting a user to input a voice instruction for operating a control associated with the first control; acquiring a third voice instruction input by a user according to the prompt information; and generating the first voice command according to the third voice command and the second voice command.
It should be noted that the division of the modules in fig. 6 is schematic, and is merely a logic function division, and other division manners may be implemented in practice. For example, two or more functions may also be integrated in one processing module. The embodiments of the present disclosure are not limited in this regard. The integrated modules may be implemented in hardware or in software functional modules.
In the technical solution of the present disclosure, the collection, storage, use, and the like of the personal information of users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
In an exemplary embodiment, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the above embodiments. The electronic device may be the computer or server described above.
In an exemplary embodiment, the readable storage medium may be a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method according to the above embodiment.
In an exemplary embodiment, the computer program product comprises a computer program which, when executed by a processor, implements the method according to the above embodiments.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, such as a user interface control method. For example, in some embodiments, the user interface control method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the user interface control method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the user interface control method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (13)

1. A user interface control method, the method comprising:
acquiring a first voice instruction of a user;
determining controls corresponding to text information matched with the first voice instruction according to the first voice instruction and text information corresponding to each control in an interface to be operated, wherein the text information matched with the first voice instruction at least comprises first text information and second text information, the number of the first controls corresponding to the first text information is one, the number of the second controls corresponding to the second text information is at least two, and the second controls are different;
determining a third control according to the first control and a control association relation of the first control, wherein the third control is a control, among the second controls, associated with the first control, and the control association relation of the first control is used for indicating the controls associated with the first control;
and operating the first control and the third control.
2. The method of claim 1, wherein the control association relationship of the first control is preset.
3. The method of claim 1, prior to the determining a third control according to the first control and the control association relationship of the first control, the method further comprising:
determining a fourth control according to the positional relation between each second control and the first control, wherein the fourth control is a second control whose position, relative to the first control, meets a preset requirement;
and obtaining the control association relation of the first control according to the fourth control.
4. The method of claim 1, prior to the determining a third control according to the first control and the control association relationship of the first control, the method further comprising:
determining a fifth control according to the parent node where the first control is located, wherein the fifth control is a control located at the same parent node as the first control;
and obtaining the control association relation of the first control according to the fifth control.
5. The method of any of claims 1-4, prior to the obtaining the first voice instruction of the user, the method further comprising:
acquiring a second voice instruction of a user;
determining the first text information matched with the second voice instruction according to the second voice instruction and the text information respectively corresponding to each control in the interface to be operated;
determining that the first control corresponding to the first text information is a text control;
outputting prompt information, wherein the prompt information is used for prompting a user to input a voice instruction for operating a control associated with the first control;
acquiring a third voice instruction input by a user according to the prompt information;
and generating the first voice command according to the third voice command and the second voice command.
6. A user interface control device, the device comprising:
the acquisition module is used for acquiring a first voice instruction of a user;
the processing module is used for determining controls corresponding to the text information matched with the first voice instruction according to the first voice instruction and the text information corresponding to each control in the interface to be operated, wherein the text information matched with the first voice instruction at least comprises first text information and second text information, one first control corresponding to the first text information is provided, at least two second controls corresponding to the second text information are provided, and each second control is different;
determining a third control according to the first control and a control association relation of the first control, wherein the third control is a control, among the second controls, associated with the first control, and the control association relation of the first control is used for indicating the controls associated with the first control;
and operating the first control and the third control.
7. The apparatus of claim 6, wherein the control association relationship of the first control is preset.
8. The apparatus of claim 6, the processing module further to:
before determining a third control according to the first control and the control association relation of the first control, determining a fourth control according to the positional relation between each second control and the first control, wherein the fourth control is a second control whose position, relative to the first control, meets a preset requirement;
and obtaining the control association relation of the first control according to the fourth control.
9. The apparatus of claim 6, the processing module further to:
before determining a third control according to the first control and the control association relation of the first control, determining a fifth control according to the parent node where the first control is located, wherein the fifth control is a control located at the same parent node as the first control;
and obtaining the control association relation of the first control according to the fifth control.
10. The apparatus of any of claims 6-9, the acquisition module further to:
before the first voice instruction of the user is acquired, acquiring a second voice instruction of the user;
determining the first text information matched with the second voice instruction according to the second voice instruction and the text information respectively corresponding to each control in the interface to be operated;
determining that the first control corresponding to the first text information is a text control;
outputting prompt information, wherein the prompt information is used for prompting a user to input a voice instruction for operating a control associated with the first control;
acquiring a third voice instruction input by a user according to the prompt information;
and generating the first voice command according to the third voice command and the second voice command.
11. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-5.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-5.
CN202310678188.4A 2023-06-08 2023-06-08 User interface control method, device, equipment and storage medium Pending CN116978375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310678188.4A CN116978375A (en) 2023-06-08 2023-06-08 User interface control method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310678188.4A CN116978375A (en) 2023-06-08 2023-06-08 User interface control method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116978375A true CN116978375A (en) 2023-10-31

Family

ID=88483947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310678188.4A Pending CN116978375A (en) 2023-06-08 2023-06-08 User interface control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116978375A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination