CN116631396A - Control display method and device, electronic equipment and medium - Google Patents

Control display method and device, electronic equipment and medium

Info

Publication number
CN116631396A
Authority
CN
China
Prior art keywords
control
target control
information
determining
interactive interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310666513.5A
Other languages
Chinese (zh)
Inventor
伍先爱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202310666513.5A priority Critical patent/CN116631396A/en
Publication of CN116631396A publication Critical patent/CN116631396A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G06F3/165 - Management of the audio stream, e.g. setting of volume, audio stream path
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G06F3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226 - Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a control display method and device, electronic equipment, and a medium, relating to the technical field of artificial intelligence and in particular to voice interaction. The method can be used to control, by voice, the display effect of a control in an interactive interface in an intelligent driving scenario. The control display method provided by the disclosure includes the following steps: determining, according to a voice control instruction, a target control to be controlled and control information of the target control; determining description information of the target control, where the description information includes identification information of the target control; determining, according to the identification information, whether the target control belongs to a system application; and if the target control belongs to a system application, generating the display effect of the target control in the current interactive interface according to the control information of the target control. The method supports controlling the display effect of controls that a system application displays in the interactive interface, expands the scenarios to which voice control applies, and helps improve the user experience.

Description

Control display method and device, electronic equipment and medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the field of voice interaction, and specifically to a control display method.
Background
In intelligent driving scenarios, the "what you see is what you can say" voice control function is a capability of the vehicle-mounted voice control system. Because it brings great convenience and safety to users, it has become an essential component of in-vehicle voice interaction systems.
The "what you see is what you can say" function supports controlling the display effect of controls in the interactive interface by voice, but it is effective only for application programs that have integrated the Bridge SDK (Bridge Software Development Kit). For application programs without the Bridge SDK, such as system applications, the display effect of their controls in the interactive interface cannot be controlled by voice.
Disclosure of Invention
The disclosure provides a control display method, device, electronic equipment and medium.
According to an aspect of the present disclosure, there is provided a method for displaying a control, the method including:
determining a target control to be controlled and control information of the target control according to the voice control instruction;
determining description information of a target control; the description information comprises identification information of the target control;
determining whether the target control belongs to a system application according to the identification information;
And under the condition of the system application, generating the display effect of the target control in the current interactive interface according to the control information of the target control.
According to another aspect of the present disclosure, there is provided a display device of a control, the device including:
the target control determining module is used for determining a target control to be controlled and control information of the target control according to the voice control instruction;
the descriptive information determining module is used for determining descriptive information of the target control; the description information comprises identification information of the target control;
the application type determining module is used for determining whether the target control belongs to a system application according to the identification information;
and the first display effect generation module is used for generating the display effect of the target control in the current interactive interface according to the control information of the target control under the condition of belonging to the system application.
According to still another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of displaying a control according to any of the embodiments of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method of displaying a control according to any embodiment of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method of displaying a control according to any embodiment of the present disclosure.
According to the technology of the present disclosure, controlling the display effect of controls that a system application displays in the interactive interface is supported, the scenarios to which voice control applies are expanded, and the user experience is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a method of displaying controls provided in accordance with an embodiment of the present disclosure;
FIG. 2 is a flow chart of another method of displaying controls provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a flow chart of another method of displaying controls provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow chart of another method of displaying controls provided in accordance with an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a display device of a control provided according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device used to implement a method of display of controls for embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flowchart of a control display method according to an embodiment of the present disclosure, where the embodiment of the present disclosure is applicable to a case of controlling a display effect of a control in an interactive interface through voice in an intelligent driving scenario. The method may be performed by a display device of the control, which may be implemented in software and/or hardware. As shown in fig. 1, the display method of the control of the present embodiment may include:
S101, determining a target control to be controlled and control information of the target control according to a voice control instruction;
s102, determining description information of a target control; the description information comprises identification information of the target control;
s103, determining whether the target control belongs to a system application according to the identification information;
and S104, under the condition of the system application, generating the display effect of the target control in the current interactive interface according to the control information of the target control.
The voice control instruction is generated based on control information of the target control. The control information is used to control the display effect of the target control in the current interactive interface. Optionally, the control information includes an operation type, and the display animation of the target control in the current interactive interface can be determined based on the operation type. Further, the control information also includes animation-dependent data corresponding to the operation type. The animation-dependent data is the data basis for displaying the control in the current interactive interface with the display animation corresponding to the control information; different operation types correspond to different animation-dependent data.
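As a loose illustration of the structure just described, the control information might be modeled as follows. This is a sketch only; the field names (`operation_type`, `effect_data`) are assumptions made for this example, not terms from the disclosure.

```python
# Hypothetical model of the control information carried by a voice control
# instruction: an operation type plus animation-dependent data for that type.
from dataclasses import dataclass, field

@dataclass
class ControlInfo:
    operation_type: str                                  # e.g. "set" or "click"
    effect_data: dict = field(default_factory=dict)      # animation-dependent data

# Example: "please adjust the volume to 10" as a setting operation.
info = ControlInfo(operation_type="set", effect_data={"target_value": 10})
```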
The voice control instruction is a control instruction in voice form. In an intelligent driving scenario, voice control instructions are used to control functions of the in-vehicle head-unit system, such as playing music or opening the sunroof. These vehicle-mounted functions exist in the form of application programs, and the functions supporting voice control can be presented through an interactive interface configured in the vehicle. Specifically, the controls used to interact with the application programs are displayed on the interactive interface. By adjusting the display effect of a control in the interactive interface, the process by which a voice control instruction controls the in-vehicle system can be presented intuitively.
The control corresponds to an application program: the control can interact with the application program, inputting data to it or operating on data within it. The target control, i.e. the control to be controlled, is determined according to the voice control instruction. The target control is associated with control information, and the control information of the target control is also determined according to the voice control instruction. In an alternative implementation, the target control includes a progress bar control. Progress bar controls are widely used in intelligent driving scenarios; they can be used to adjust the volume, the playing progress of audio and video, the transparency of the sunroof, and the like.
Optionally, format conversion is performed on the voice control instruction, converting it from voice form to text form to obtain a control instruction text. For example, the format conversion can be implemented based on ASR (Automatic Speech Recognition). Key information is then extracted from the control instruction text, and the target control and its control information are determined based on the extracted key information. Illustratively, if the control instruction text is "please adjust the volume to 10", the extracted key information is "volume", "adjust to" and "10". It can then be determined that the target control is the volume control bar and that the control information of the target control is "adjust the volume to 10".
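The extraction step described above can be sketched as follows. This is a minimal illustration, assuming a simple keyword table and an English command pattern; the function name, keyword map, and control ids are all hypothetical, not part of the disclosure.

```python
# Hypothetical sketch of key-information extraction from the ASR text.
import re

# Maps a keyword found in the recognized text to the control it targets.
CONTROL_KEYWORDS = {
    "volume": "volume_control_bar",
    "sunroof": "sunroof_transparency_bar",
}

def parse_control_instruction(text):
    """Extract the target control and its control information from ASR text."""
    target = None
    for keyword, control_id in CONTROL_KEYWORDS.items():
        if keyword in text:
            target = control_id
            break
    # Look for "adjust ... to <number>" / "set ... to <number>" phrases.
    match = re.search(r"(adjust|set).*?to\s+(\d+)", text)
    value = int(match.group(2)) if match else None
    return target, {"operation": "set", "value": value}
```

For the example command above, this yields the volume control bar as the target and a setting operation with value 10.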
The target control is also associated with description information, and the description information is used for describing the attribute of the target control.
In an alternative embodiment, determining the description information of the target control includes: performing control scanning on the current interactive interface to determine a target control displayed in the current interactive interface; and collecting the identification information and the display state information of the target control as the description information of the target control.
Optionally, control scanning of the current interactive interface is performed based on the accessibility service. The accessibility service (barrier-free service) is a set of system-level APIs that can simulate operations; an application can simulate operations after it has been granted the accessibility permission. The interactive interface is monitored through the accessibility service, and when the current interactive interface changes, the target control displayed in the current interactive interface is determined. Control information displayed in the current interactive interface is collected; specifically, the identification information and display state information of the target control are collected and determined as the description information of the target control. This provides a feasible way to determine the description information of the target control and supplies data support for controlling the display effect, in the interactive interface, of controls belonging to system applications.
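The scanning step might look like the following sketch. On an Android-based head unit it would be driven by accessibility-service window-change callbacks and node traversal; here a plain nested dict stands in for the window node tree, and all names are assumptions made for this example.

```python
# Illustrative sketch of control scanning: walk the interface node tree and
# collect each control's identification, type, and display state information.
def scan_controls(node, collected=None):
    """Collect description information for every control in the node tree."""
    if collected is None:
        collected = {}
    if "id" in node:
        collected[node["id"]] = {
            "id": node["id"],             # identification information
            "type": node.get("type"),     # type information
            "state": node.get("state"),   # display state information
        }
    for child in node.get("children", []):
        scan_controls(child, collected)
    return collected
```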
Optionally, the description information includes identification information, type information, and display status information. Wherein the identification information is used to distinguish between different controls. The type information is used to determine the control type. The display state information is used for determining the current display state of the control on the interactive interface.
The target control is associated with the application. Based on the identification information of the target control, an application program to which the target control belongs can be determined. Optionally, the application programs include third party applications and system applications. And under the condition that the target control belongs to the system application, generating the display effect of the target control in the current interactive interface according to the control information of the target control. And under the condition that the target control belongs to the third-party application, generating the display effect of the target control in the current interactive interface directly through the third-party application.
According to the technical solution of this embodiment, the target control to be controlled and its control information are determined according to the voice control instruction; whether the target control belongs to a system application is determined according to the identification information in the description information of the target control; and, if it belongs to a system application, the display effect of the target control in the current interactive interface is generated according to the control information of the target control. With this solution, the display effect of controls that an application program displays in the interactive interface can be controlled even when the Bridge SDK is not integrated into the application program, reducing project integration and development costs. The solution supports controlling the display effect of controls displayed in the interactive interface by system applications, expands the scenarios to which voice control applies, and helps improve the user experience.
In an optional embodiment, determining whether the target control belongs to a system application according to the identification information includes: inquiring whether the description information of the target control comprises registration information of the application program according to the identification information of the target control; if not, determining that the target control belongs to the system application.
The registration information is used to determine the application program to which the target control belongs. Optionally, the application programs include third-party applications and system applications. In the technical solution of the disclosure, some of the controls supporting the voice control function belong to system applications, and some belong to third-party applications.
A third-party application integrates the Bridge SDK and, based on it, can have the display effect of its controls in the interactive interface controlled by voice. This is possible because the third-party application registers the description information of its controls with the voice control function. Consequently, a control that belongs to a third-party application has description information that includes registration information. A system application without the Bridge SDK cannot register its controls with the voice control function, so the description information of a control belonging to a system application does not include registration information.
And taking the identification information of the target control as a query condition, querying whether the description information of the target control comprises the registration information of the application program, and if not, determining that the target control belongs to the system application. And if the description information of the target control comprises the registration information of the application program, determining that the target control belongs to the third-party application.
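The registration-information query can be sketched as follows; the registry shape and identifiers are illustrative assumptions, not the disclosure's actual data structures.

```python
# Hedged sketch of the application-type check: a control whose id has no
# registration record is treated as belonging to a system application.
def belongs_to_system_app(description, registry):
    """A control is attributed to a system application when its id is unregistered."""
    return description["id"] not in registry

def route_control_info(description, control_info, registry):
    """Decide who generates the display effect for the target control."""
    if belongs_to_system_app(description, registry):
        return ("system", control_info)        # handled by the present method
    return ("third_party", control_info)       # forwarded to the registered app
```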
This technical solution provides a feasible way to determine whether the target control belongs to a system application, providing technical support for controlling the display effect, in the interactive interface, of controls belonging to system applications.
In an alternative embodiment, the method further comprises: if the description information of the target control comprises registration information of an application program, determining that the target control belongs to a third party application, and forwarding control information of the target control to the third party application; and generating the display effect of the target control in the current interactive interface according to the control information of the target control through the third party application.
If the description information of the target control includes registration information of an application program, it is determined that the target control belongs to a third-party application. A third-party application, as opposed to a system application, has the Bridge SDK integrated. When the target control belongs to a third-party application, the control information of the target control is forwarded to that application, and the third-party application generates the display effect of the target control in the current interactive interface according to the control information.
This technical solution supports controlling the display effect of controls displayed in the interactive interface by system applications, while remaining compatible with controlling the display effect of controls displayed by third-party applications, thereby expanding the scenarios to which the control display method applies.
In an optional embodiment, after obtaining the description information of the target control, the method further includes: determining legal controls supporting a voice control function; extracting identification information of the target control from the description information of the target control; and determining whether the target control is an effective control in the current interactive interface according to the identification information of the target control and the identification information of the legal control.
A legal control is a control that supports the voice control function. It should be appreciated that not all controls support the voice control function. The legal controls supporting the voice control function are determined; optionally, they are determined through the voice control function itself, and their identification information is determined.
And extracting the identification information of the target control from the description information of the target control, and determining whether the target control is an effective control in the current interactive interface according to the identification information of the target control and the identification information of the legal control. Optionally, the identification information of the target control is matched with the identification information of the legal control, and whether the target control is an effective control in the current interactive interface is determined according to the obtained matching result. Specifically, if the matching is successful, the target control is an effective control in the current interactive interface, otherwise, the target control is not an effective control in the current interactive interface.
If the target control is an effective control in the current interactive interface, continuing to determine whether the target control belongs to the system application.
According to the technical scheme, the target control is determined to be the effective control in the current interactive interface based on the identification information of the legal control. The validity check of the target control is realized, and the feasibility of the display method of the control is ensured.
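The validity check above can be sketched as a simple membership test of the target control's identification information against the legal controls' ids; the ids used here are assumptions for illustration.

```python
# Minimal sketch of the validity check: a target control is an effective
# control in the current interface only if its id matches a legal control.
def is_valid_control(description, legal_ids):
    """Return True when the target control supports the voice control function."""
    return description["id"] in legal_ids
```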
FIG. 2 is a flow chart of another method of displaying controls provided in accordance with an embodiment of the present disclosure; this embodiment is an alternative to the embodiments described above.
Referring to fig. 2, the method for displaying a control provided in this embodiment includes:
s201, determining a target control to be controlled and control information of the target control according to a voice control instruction;
s202, determining description information of a target control; the description information comprises identification information of the target control;
s203, determining whether the target control belongs to a system application according to the identification information;
s204, under the condition of the system application, determining the operation type of the target control according to the control information of the target control;
optionally, the target control information includes an operation type. The operation type is used for determining the display dynamic effect of the target control in the current interactive interface. Optionally, the operation types include: a setting operation and a clicking operation. Setting a display dynamic effect corresponding to the operation, and having a difference with the display dynamic effect corresponding to the clicking operation. The display effect corresponding to the clicking operation is to simulate the change process of the display state when the target control is clicked, and specifically, the change process of the current display state to the expected display state is displayed. The setting operation focuses on displaying the desired display state. In the case where the operation type is the setting type, the display effect of the target control is a switch between the current display state and the intended display state.
For example, if the voice control instruction is "please adjust the volume to 10", the target control is determined to be the progress bar control for adjusting the volume, hereinafter referred to as the volume control bar. The control information of the volume control bar is determined based on the voice control instruction, and the operation type of the volume control bar can be extracted from the control information. The operation type may be a setting operation or a click operation. If the operation type is a setting operation, the corresponding display animation may be that, in the current interactive interface, the volume value on the volume control bar is directly set to 10 while the volume progress mark jumps directly from the display position corresponding to the current volume to the display position corresponding to volume 10. If the operation type is a click operation, the corresponding display animation may simulate clicking the volume control bar and gradually dragging the volume progress mark from the position corresponding to the current volume to the display position corresponding to volume 10, with the volume value on the volume control bar changing as the progress mark moves.
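A minimal sketch of the two display animations in the volume example, assuming integer volume values; the returned list of values stands in for the on-screen animation frames, and the function name is hypothetical.

```python
# Sketch of display-effect generation by operation type: "set" jumps straight
# to the expected state, "click" simulates dragging the progress mark there.
def display_frames(current, target, operation):
    """Return the sequence of volume values shown for each operation type."""
    if operation == "set":
        return [target]                                   # direct switch
    if operation == "click":                              # simulated click-and-drag
        step = 1 if target >= current else -1
        return list(range(current + step, target + step, step))
    raise ValueError("unknown operation type: %r" % operation)
```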
Optionally, under the condition of belonging to the system application, extracting the operation type of the target control from the control information of the target control.
And S205, generating the display effect of the target control in the current interactive interface according to the control information and the operation type.
The control information is used to determine the expected display state of the target control and the animation with which the target control reaches that expected display state.
And generating the display effect of the target control in the current interactive interface according to the control information and the operation type of the target control.
According to the technical solution of this embodiment, when the target control belongs to a system application, the operation type of the target control is determined according to its control information, and the display effect of the target control in the current interactive interface is generated according to the control information and the operation type. This provides a feasible display-effect generation scheme for the case where the target control belongs to a system application and provides technical support for controlling the display effect of controls displayed in the interactive interface by system applications. Because the operation type is taken into account when generating the display effect, a corresponding display effect is generated for each operation type, which helps improve the user experience.
Fig. 3 is a flowchart of another control display method according to an embodiment of the present disclosure, where the embodiment is an alternative scheme provided on the basis of the foregoing embodiment. As shown in fig. 3, the display method of the control of the present embodiment may include:
S301, determining a target control to be controlled and control information of the target control according to a voice control instruction;
S302, determining description information of a target control; the description information comprises identification information of the target control;
S303, determining whether the target control belongs to a system application according to the identification information;
S304, under the condition of belonging to the system application, determining the operation type of the target control according to the control information of the target control;
Optionally, the operation types include a click operation and a setting operation. In the case that the operation type is a click operation, the description information further includes display state information, which is used to determine the display state of the control on the interactive interface. The display effect corresponding to the click operation simulates the change in display state when the target control is clicked; specifically, it shows the process of the current display state changing into the expected display state.
S305, determining the initial display position of a control identification element of the target control in the current interactive interface according to the display state information of the target control;
the control identification element refers to a component element in the target control, wherein the component element highlights the current display state. And the control identification element of the progress bar control is a progress identification element.
The initial display position is related to the current display state of the target control. According to the display state information of the target control, the initial display position of the control identification element of the target control in the current interactive interface can be determined.
In an optional embodiment, determining, according to the display state information of the target control, an initial display position of a control identification element of the target control in the current interactive interface includes: extracting a preset adjustable range and a current display state of a target control from the display state information; and determining the initial display position of the control identification element in the current interactive interface according to the preset adjustable range and the current display state.
Optionally, the display state information includes a current display state and a preset adjustable range. The preset adjustable range is a static attribute of the target control, and is preset according to actual conditions. The current display state is a dynamic attribute of the target control, and the display state of the target control may change at different moments. And under the condition that the target control comprises a progress bar control, the current display state is the current display progress, and the preset adjustable range is an adjustable progress range.
The preset adjustable range and the current display state of the target control are extracted from the display state information. The preset adjustable range limits the range over which the position of the control identification element can change. The initial display position of the control identification element in the current interactive interface is then determined according to the preset adjustable range and the current display state. This technical solution provides a feasible method for determining the initial display position, and provides data support for the system application to control the display effect of controls shown in the interactive interface.
S306, determining the expected display position of the control identification element in the current interactive interface according to the display state information and the control information of the target control;
wherein the expected display position is related to an expected display state of the target control. Optionally, the control information includes an expected display state. The expected display state belongs to the dynamic effect dependency data corresponding to the control type.
The expected display position of the control identification element in the current interactive interface is determined according to the display state information and the control information of the target control. The expected display effect of the target control can then be presented based on the expected display position.
In an optional embodiment, determining the expected display position of the control identification element in the current interactive interface according to the display state information and the control information of the target control includes: extracting a preset adjustable range of a target control from the display state information, and extracting an expected display state of the target control from the control information; and determining the expected display position of the control identification element in the current interactive interface according to the preset adjustable range and the expected display state.
The preset adjustable range is recorded in the display state information of the description information, and the expected display state is recorded in the control information. The preset adjustable range defines the range over which the position of the control identification element can change. The expected display position of the control identification element in the current interactive interface is determined according to the preset adjustable range and the expected display state. This technical solution provides a feasible method for determining the expected display position, and provides data support for the system application to control the display effect of controls shown in the interactive interface.
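Both the initial display position (S305) and the expected display position (S306) reduce to mapping a state value within the preset adjustable range onto a screen position along the control. A minimal sketch, assuming a linear progress bar and hypothetical names:

```java
public class ProgressPositionMapper {
    // Map a display state (e.g. a volume value) within the preset adjustable
    // range [rangeMin, rangeMax] to a horizontal pixel position on the bar.
    static int toPosition(double state, double rangeMin, double rangeMax,
                          int barStartPx, int barWidthPx) {
        // Clamp the state to the preset adjustable range, which limits
        // the position change range of the control identification element.
        double clamped = Math.max(rangeMin, Math.min(rangeMax, state));
        double fraction = (clamped - rangeMin) / (rangeMax - rangeMin);
        return barStartPx + (int) Math.round(fraction * barWidthPx);
    }
}
```

The initial display position would then be `toPosition(currentState, …)` and the expected display position `toPosition(expectedState, …)`, with the click effect moving the identification element between the two.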
S307, according to the initial display position and the expected display position, generating the display effect of the target control in the current interactive interface.
Optionally, the control identification element of the target control is moved from the initial display position to the expected display position in the current interactive interface. The initial display position corresponds to the current display state, and the expected display position corresponds to the expected display state; in this way, the process of the current display state changing into the expected display state can be shown.
According to this technical solution, in the case where the operation type is a click operation, the initial display position and the expected display position of the control identification element of the target control in the current interactive interface are determined, and the display effect of the target control is generated according to those two positions, so that the process of the current display state changing into the expected display state can be shown. This provides technical support for the system application to control the display effect of controls shown in the interactive interface; generating a dedicated display effect for the click operation subdivides the display effects and helps improve the user experience.
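The click-operation effect — moving the control identification element from the initial display position to the expected display position — can be sketched as a linear interpolation over a fixed number of frames (hypothetical names; the frame pacing is an assumption):

```java
import java.util.ArrayList;
import java.util.List;

public class ClickEffectAnimator {
    // Interpolate the control identification element's pixel position from the
    // initial display position to the expected display position over `frames`
    // evenly spaced steps, simulating the gradual drag of a click operation.
    static List<Integer> animate(int initialPx, int expectedPx, int frames) {
        List<Integer> positions = new ArrayList<>();
        for (int i = 1; i <= frames; i++) {
            positions.add(initialPx + (expectedPx - initialPx) * i / frames);
        }
        return positions;
    }
}
```

The last frame always lands exactly on the expected display position, so the final rendered state matches the expected display state.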
Fig. 4 is a flowchart of another control display method provided according to an embodiment of the present disclosure; this embodiment is an alternative to the embodiments described above.
Referring to fig. 4, the method for displaying a control provided in this embodiment includes:
S401, determining a target control to be controlled and control information of the target control according to a voice control instruction;
S402, determining description information of a target control; the description information comprises identification information of the target control;
S403, determining whether the target control belongs to a system application according to the identification information;
S404, under the condition of belonging to the system application, determining the operation type of the target control according to the control information of the target control;
S405, extracting an attribute to be adjusted and an expected display value of the attribute to be adjusted of the target control from the control information under the condition that the operation type is a setting operation;
the setting operation focuses on displaying the desired display state. In the case where the operation type is the setting type, the display effect of the target control is a switch between the current display state and the intended display state.
In the case where the operation type is a setting operation, the attribute to be adjusted of the target control and the expected display value of that attribute are extracted from the control information. The attribute to be adjusted and its expected display value are related to the expected display state. The attribute to be adjusted refers to the attribute of the target control that is to be adjusted; by way of example, it may be a control style, data, or the like. The expected display value of the attribute to be adjusted is the value desired to be displayed under that attribute. For example, the attribute to be adjusted may be volume, and the expected display value may be 45. The attribute to be adjusted and its expected display value also belong to the dynamic effect dependency data corresponding to the control type.
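A minimal sketch of extracting the attribute to be adjusted and its expected display value; representing the control information as a key/value map is an assumption, as are all names below:

```java
import java.util.Map;

public class ControlInfoParser {
    // Hypothetical extraction of the attribute to be adjusted and its expected
    // display value from control information held as key/value pairs.
    static String extractSetting(Map<String, String> controlInfo) {
        String attribute = controlInfo.get("attributeToAdjust");
        String expected = controlInfo.get("expectedDisplayValue");
        if (attribute == null || expected == null) {
            throw new IllegalArgumentException("control information incomplete");
        }
        return attribute + "=" + expected;  // e.g. "volume=45"
    }
}
```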
S406, invoking the accessibility service, and generating the display effect of the target control in the current interactive interface according to the attribute to be adjusted and its expected display value.
The accessibility service is a set of system-level APIs capable of simulating user operations. After an application has been granted permission for the accessibility service, it may simulate operations. Once the attribute to be adjusted and its expected display value are determined, the display effect of the target control in the current interactive interface is generated by invoking the accessibility service.
In an optional embodiment, invoking the accessibility service and generating the display effect of the target control in the current interactive interface according to the attribute to be adjusted and its expected display value includes: invoking the accessibility service, and selecting a target accessibility node from candidate accessibility nodes according to the attribute to be adjusted; and invoking the target accessibility node, and generating the display effect of the target control in the current interactive interface according to the expected display value of the attribute to be adjusted.
The accessibility service is provided with accessibility nodes, i.e., nodes embedded in the accessibility service for setting control attributes. A target accessibility node is selected from the candidate accessibility nodes according to the attribute to be adjusted; the target accessibility node is used to set the attribute to be adjusted of the target control. Optionally, the target accessibility node is invoked, the attribute to be adjusted is set to the expected display value, and the display effect of the target control in the current interactive interface is generated.
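On Android, for example, such a node might ultimately be an `AccessibilityNodeInfo` on which `performAction` is called; the selection of a target node among the candidates, as described above, can be sketched in plain Java (all names below are hypothetical):

```java
import java.util.List;
import java.util.Optional;

public class NodeSelector {
    // Hypothetical candidate node: pairs an adjustable attribute name with an
    // identifier of the accessibility node that can set it.
    static final class CandidateNode {
        final String attribute;
        final String nodeId;
        CandidateNode(String attribute, String nodeId) {
            this.attribute = attribute;
            this.nodeId = nodeId;
        }
    }

    // Select the target node whose settable attribute matches the attribute
    // to be adjusted; empty if no candidate can set that attribute.
    static Optional<String> selectTargetNode(List<CandidateNode> candidates,
                                             String attributeToAdjust) {
        return candidates.stream()
                .filter(c -> c.attribute.equals(attributeToAdjust))
                .map(c -> c.nodeId)
                .findFirst();
    }
}
```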
This technical solution provides a display-effect generation method for the case where the operation type is a setting operation, and provides technical support for the system application to control the display effect of controls shown in the interactive interface.
According to this technical solution, in the case where the operation type is a setting operation, the attribute to be adjusted of the target control and its expected display value are extracted from the control information; the accessibility service is then invoked, and the display effect of the target control in the current interactive interface is generated according to the attribute to be adjusted and its expected display value. This provides technical support for the system application to control the display effect of controls shown in the interactive interface; generating a dedicated display effect for the setting operation subdivides the display effects and helps improve the user experience.
Fig. 5 is a schematic structural diagram of a display device for a control according to an embodiment of the present disclosure. The embodiment is applicable to the case of controlling, by voice, the display effect of a control in an interactive interface in an intelligent driving scenario. The device may be implemented by software and/or hardware, and can implement the control display method of any embodiment of the disclosure.
As shown in fig. 5, the display device 500 of the control includes:
the target control determining module 501 is configured to determine a target control to be controlled and control information of the target control according to a voice control instruction;
the description information determining module 502 is configured to determine description information of the target control; the description information comprises identification information of the target control;
an application type determining module 503, configured to determine, according to the identification information, whether the target control belongs to a system application;
and the first display effect generating module 504 is configured to generate, according to the control information of the target control, a display effect of the target control in the current interactive interface under the condition of belonging to the system application.
According to this technical solution, the target control to be controlled and its control information are determined according to a voice control instruction; whether the target control belongs to a system application is determined according to the identification information in the description information of the target control; and, in the case of a system application, the display effect of the target control in the current interactive interface is generated according to the control information of the target control. With this technical solution, the display effect of a control shown by an application program in the interactive interface can be controlled without integrating the Bridge SDK into the application program, reducing project docking cost and development cost. Display-effect control over controls shown in the interactive interface by system applications is supported, the applicable scenarios of voice control are expanded, and the user experience is improved.
Optionally, the first display effect generating module 504 includes: the operation type determining submodule is used for determining the operation type of the target control according to the control information of the target control; and the display effect generation sub-module is used for generating the display effect of the target control in the current interactive interface according to the control information and the operation type.
Optionally, in the case that the operation type is a click operation, the description information further includes display status information; the display effect generation sub-module includes: the initial display position determining unit is used for determining the initial display position of the control identification element of the target control in the current interactive interface according to the display state information of the target control; the expected display position determining unit is used for determining the expected display position of the control identification element in the current interactive interface according to the display state information and the control information of the target control; and the display effect generating unit is used for generating the display effect of the target control in the current interactive interface according to the initial display position and the expected display position.
Optionally, the initial display position determining unit includes: the initial display information extraction subunit is used for extracting a preset adjustable range and a current display state of a target control from the display state information; and the initial display position determining subunit is used for determining the initial display position of the control identification element in the current interactive interface according to the preset adjustable range and the current display state.
Optionally, the expected display position determining unit includes: the expected display information extraction subunit is used for extracting a preset adjustable range of the target control from the display state information and extracting an expected display state of the target control from the control information; and the expected display position determining subunit is used for determining the expected display position of the control identification element in the current interactive interface according to the preset adjustable range and the expected display state.
Optionally, the display effect generation sub-module includes: an attribute-to-be-adjusted determining unit, configured to extract, in the case where the operation type is a setting operation, the attribute to be adjusted of the target control and its expected display value from the control information; and an accessibility service invoking unit, configured to invoke the accessibility service and generate the display effect of the target control in the current interactive interface according to the attribute to be adjusted and its expected display value.
Optionally, the accessibility service invoking unit includes: an accessibility node determining subunit, configured to invoke the accessibility service and select a target accessibility node from candidate accessibility nodes according to the attribute to be adjusted; and an accessibility node invoking subunit, configured to invoke the target accessibility node and generate the display effect of the target control in the current interactive interface according to the expected display value of the attribute to be adjusted.
Optionally, the description information determining module 502 includes: the control scanning sub-module is used for scanning the control on the current interactive interface and determining a target control displayed in the current interactive interface; and the description information determination submodule is used for collecting the identification information and the display state information of the target control and taking the identification information and the display state information as the description information of the target control.
Optionally, the apparatus further includes: the legal control determining module is used for determining legal controls supporting the voice control function after obtaining the description information of the target control; the identification information extraction module is used for extracting the identification information of the target control from the description information of the target control; and the control validity determining module is used for determining whether the target control is a valid control in the current interactive interface according to the identification information of the target control and the identification information of the legal control.
Optionally, the application type determining module 503 includes: a registration information query sub-module, configured to query, according to the identification information of the target control, whether the description information of the target control includes registration information of an application program; and an application type determining sub-module, configured to determine that the target control belongs to a system application if the description information does not include registration information of an application program.
Optionally, the apparatus further includes: the control information forwarding module is used for determining that the target control belongs to a third party application if the description information of the target control comprises registration information of an application program, and forwarding the control information of the target control to the third party application; and the second display effect generation module is used for generating the display effect of the target control in the current interactive interface according to the control information of the target control through the third party application.
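The application-type decision in the modules above amounts to a presence check on the registration information; a minimal sketch, assuming the description information is available as a key/value map with hypothetical key names:

```java
import java.util.Map;

public class AppTypeResolver {
    // A target control whose description information carries no application
    // registration info is treated as belonging to a system application;
    // otherwise it belongs to a third-party application.
    static boolean isSystemApplication(Map<String, String> descriptionInfo) {
        return !descriptionInfo.containsKey("registrationInfo");
    }
}
```

In the third-party case, the control information would then be forwarded to the registered application instead of being handled directly.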
Optionally, the target control includes a progress bar control.
The display device for the control provided by the embodiment of the disclosure can execute the display method for the control provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of executing the display method for the control.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user information and voice control instructions involved all comply with the relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 600 can also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the various methods and processes described above, such as the display method of the control. For example, in some embodiments, the display method of the control may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by computing unit 601, one or more steps of the display method of the control described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the display method of the control in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
Artificial intelligence is the discipline of studying how to make a computer mimic certain thought processes and intelligent behaviors of a person (such as learning, reasoning, thinking, and planning), and covers both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Cloud computing refers to a technical system in which an elastically extensible pool of shared physical or virtual resources is accessed through a network, where the resources may include servers, operating systems, networks, software, applications, storage devices, and the like, and may be deployed and managed in an on-demand, self-service manner. Cloud computing technology can provide efficient and powerful data processing capabilities for technical applications such as artificial intelligence and blockchain, as well as for model training.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (27)

1. A method of displaying controls, the method comprising:
determining a target control to be controlled and control information of the target control according to the voice control instruction;
determining description information of the target control; the description information comprises identification information of the target control;
determining whether the target control belongs to a system application according to the identification information;
and in a case where the target control belongs to the system application, generating the display effect of the target control in the current interactive interface according to the control information of the target control.
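The flow of claim 1 can be illustrated with a minimal, hypothetical sketch. All names below (the keyword parser, the control registry, the `app_registration` field) are invented for illustration and are not the patented implementation; a real system would use a speech-recognition engine and platform control APIs:

```python
# Hypothetical sketch of the claimed flow: parse a voice control
# instruction, look up the target control's description information,
# and generate a display effect only for system-application controls.

def parse_voice_instruction(instruction):
    """Map a voice control instruction to (target control id, control info)."""
    if "volume" in instruction:
        return "volume_bar", {"operation": "set", "value": 80}
    return None, None

def describe_control(control_id, registry):
    """Return the description information, including identification info."""
    return registry.get(control_id, {})

def handle_instruction(instruction, registry):
    control_id, control_info = parse_voice_instruction(instruction)
    description = describe_control(control_id, registry)
    # A control whose description carries no third-party registration
    # info is treated as a system-application control (cf. claim 10).
    if "app_registration" not in description:
        return {"control": control_id, "effect": control_info}
    return None  # would be forwarded to the third-party application

registry = {"volume_bar": {"id": "volume_bar"}}
result = handle_instruction("set volume to 80", registry)
```

Here the "display effect" is reduced to a dictionary; in practice it would drive an animation or redraw of the control in the current interactive interface.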
2. The method of claim 1, wherein generating a display effect of the target control in the current interactive interface according to the control information of the target control comprises:
determining the operation type of the target control according to the control information of the target control;
And generating the display effect of the target control in the current interactive interface according to the control information and the operation type.
3. The method of claim 2, wherein, in the case where the operation type is a click operation, the description information further includes display state information;
and generating the display effect of the target control in the current interactive interface according to the control information and the operation type, wherein the display effect comprises the following steps:
determining the initial display position of a control identification element of the target control in the current interactive interface according to the display state information of the target control;
according to the display state information and the control information of the target control, determining the expected display position of the control identification element in the current interactive interface;
and generating the display effect of the target control in the current interactive interface according to the initial display position and the expected display position.
4. The method of claim 3, wherein the determining, according to the display status information of the target control, an initial display position of a control identification element of the target control in the current interactive interface includes:
extracting a preset adjustable range and a current display state of a target control from the display state information;
And determining the initial display position of the control identification element in the current interactive interface according to the preset adjustable range and the current display state.
5. The method of claim 3, wherein determining an expected display position of the control identification element in a current interactive interface according to display state information and control information of the target control comprises:
extracting a preset adjustable range of a target control from the display state information, and extracting an expected display state of the target control from the control information;
and determining the expected display position of the control identification element in the current interactive interface according to the preset adjustable range and the expected display state.
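Claims 3–5 describe mapping a control's current and expected display states to screen positions within a preset adjustable range. For a progress-bar-like control, this reduces to a linear interpolation; the sketch below is an illustrative assumption (the coordinate model and pixel values are invented, not taken from the patent):

```python
def position_on_bar(value, value_min, value_max, bar_left, bar_width):
    """Linearly map a state value in [value_min, value_max] to an
    x-coordinate of the control identification element on the bar."""
    fraction = (value - value_min) / (value_max - value_min)
    return bar_left + fraction * bar_width

# Preset adjustable range [0, 100]; bar starts at x=10 and is 200 px wide.
# Current display state 25 gives the initial position; the control
# information's expected display state 75 gives the expected position.
initial = position_on_bar(25, 0, 100, 10, 200)   # 60.0
expected = position_on_bar(75, 0, 100, 10, 200)  # 160.0
```

The display effect of claim 3 would then be generated by moving the identification element from `initial` to `expected`.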
6. The method of claim 2, wherein generating a display effect of the target control in the current interactive interface according to the control information and the operation type comprises:
extracting the attribute to be adjusted of the target control and the expected display value of the attribute to be adjusted from the control information under the condition that the operation type is a setting operation;
and calling barrier-free service, and generating the display effect of the target control in the current interactive interface according to the attribute to be adjusted and the expected display value of the attribute to be adjusted.
7. The method of claim 6, wherein invoking the barrier-free service generates a display effect of the target control in the current interactive interface according to the property to be adjusted and the expected display value of the property to be adjusted, comprising:
invoking barrier-free service, and selecting a target barrier-free node from candidate barrier-free nodes according to the attribute to be adjusted;
and calling the target barrier-free node, and generating the display effect of the target control in the current interactive interface according to the expected display value of the attribute to be adjusted.
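"Barrier-free service" here appears to be a translation of an accessibility service (on Android, an `AccessibilityService` acting on accessibility nodes). Claims 6–7 select a target node by the attribute to be adjusted and apply the expected display value. A simplified stand-in, with invented node names and a dictionary in place of real accessibility nodes:

```python
# Illustrative stand-in for an accessibility ("barrier-free") service:
# each candidate node declares which attributes it can adjust.
candidate_nodes = [
    {"name": "brightness_node", "supports": {"brightness"}},
    {"name": "volume_node", "supports": {"volume"}},
]

def select_node(attribute, nodes):
    """Pick the target node from the candidates by the attribute to adjust."""
    for node in nodes:
        if attribute in node["supports"]:
            return node
    return None

def apply_setting(attribute, expected_value, nodes):
    """Invoke the target node to produce the new display effect."""
    node = select_node(attribute, nodes)
    if node is None:
        return None
    return {"node": node["name"], attribute: expected_value}

effect = apply_setting("volume", 80, candidate_nodes)
```

In a real accessibility framework, the last step would call something like a node's perform-action API rather than return a dictionary.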
8. The method of any of claims 1-7, wherein determining descriptive information for a target control comprises:
performing control scanning on the current interactive interface to determine a target control displayed in the current interactive interface;
and collecting the identification information and the display state information of the target control as description information of the target control.
9. The method of claim 8, further comprising, after obtaining the description information of the target control:
determining legal controls supporting a voice control function;
extracting identification information of the target control from the description information of the target control;
and determining whether the target control is an effective control in the current interactive interface according to the identification information of the target control and the identification information of the legal control.
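The validity check of claim 9, comparing a scanned control's identification information against the legal controls that support voice control, can be sketched as a simple set-membership test (control names are hypothetical):

```python
def is_valid_control(description, legal_control_ids):
    """A scanned control is a valid control in the current interactive
    interface iff its identification information matches a legal control
    that supports the voice control function."""
    return description.get("id") in legal_control_ids

legal = {"volume_bar", "play_button"}
ok = is_valid_control({"id": "volume_bar", "state": "visible"}, legal)
bad = is_valid_control({"id": "ad_banner"}, legal)
```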
10. The method of any of claims 1-7, wherein determining whether the target control belongs to a system application according to the identification information comprises:
querying, according to the identification information of the target control, whether the description information of the target control comprises registration information of an application program;
if not, determining that the target control belongs to the system application.
11. The method of claim 10, the method further comprising:
if the description information of the target control comprises registration information of an application program, determining that the target control belongs to a third party application, and forwarding control information of the target control to the third party application;
and generating the display effect of the target control in the current interactive interface according to the control information of the target control through the third party application.
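The routing rule of claims 10–11, deciding by the presence of application registration information whether the control information is handled by the system or forwarded to a third-party application, can be sketched as follows (the package name and field names are invented for illustration):

```python
def route_control_info(description, control_info):
    """Dispatch control information: a description carrying third-party
    registration information is forwarded; otherwise the control is
    treated as belonging to the system application."""
    if "app_registration" in description:
        return ("third_party", description["app_registration"], control_info)
    return ("system", None, control_info)

route = route_control_info(
    {"id": "music_seek", "app_registration": "com.example.music"},
    {"operation": "seek", "value": 30},
)
```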
12. The method of any of claims 1-7, wherein the target control comprises a progress bar control.
13. A display device for a control, the device comprising:
the target control determining module is used for determining a target control to be controlled and control information of the target control according to the voice control instruction;
the descriptive information determining module is used for determining descriptive information of the target control; the description information comprises identification information of the target control;
The application type determining module is used for determining whether the target control belongs to a system application according to the identification information;
and the first display effect generation module is used for generating the display effect of the target control in the current interactive interface according to the control information of the target control in a case where the target control belongs to the system application.
14. The apparatus of claim 13, wherein the first display effect generation module comprises:
the operation type determining submodule is used for determining the operation type of the target control according to the control information of the target control;
and the display effect generation sub-module is used for generating the display effect of the target control in the current interactive interface according to the control information and the operation type.
15. The apparatus of claim 14, wherein, in the event that the type of operation is a click operation, the description information further includes display status information;
the display effect generation sub-module includes:
the initial display position determining unit is used for determining the initial display position of the control identification element of the target control in the current interactive interface according to the display state information of the target control;
the expected display position determining unit is used for determining the expected display position of the control identification element in the current interactive interface according to the display state information and the control information of the target control;
And the display effect generating unit is used for generating the display effect of the target control in the current interactive interface according to the initial display position and the expected display position.
16. The apparatus of claim 15, wherein the initial display position determining unit comprises:
the initial display information extraction subunit is used for extracting a preset adjustable range and a current display state of a target control from the display state information;
and the initial display position determining subunit is used for determining the initial display position of the control identification element in the current interactive interface according to the preset adjustable range and the current display state.
17. The apparatus of claim 15, wherein the intended display position determining unit comprises:
the expected display information extraction subunit is used for extracting a preset adjustable range of the target control from the display state information and extracting an expected display state of the target control from the control information;
and the expected display position determining subunit is used for determining the expected display position of the control identification element in the current interactive interface according to the preset adjustable range and the expected display state.
18. The apparatus of claim 14, wherein the display effect generation sub-module comprises:
the attribute to be adjusted determining unit is used for extracting the attribute to be adjusted of the target control and the expected display value of the attribute to be adjusted from the control information under the condition that the operation type is a setting operation;
the barrier-free service calling unit is used for calling barrier-free service and generating the display effect of the target control in the current interactive interface according to the attribute to be adjusted and the expected display value of the attribute to be adjusted.
19. The apparatus of claim 18, wherein the barrier-free service invocation unit comprises:
the barrier-free node determining subunit is used for calling barrier-free service and selecting a target barrier-free node from candidate barrier-free nodes according to the attribute to be adjusted;
and the barrier-free node calling subunit is used for calling the target barrier-free node and generating the display effect of the target control in the current interactive interface according to the expected display value of the attribute to be adjusted.
20. The apparatus of any of claims 13-19, wherein the descriptive information determination module comprises:
the control scanning sub-module is used for scanning the control on the current interactive interface and determining a target control displayed in the current interactive interface;
And the description information determination submodule is used for collecting the identification information and the display state information of the target control and taking the identification information and the display state information as the description information of the target control.
21. The apparatus of claim 20, the apparatus further comprising:
the legal control determining module is used for determining legal controls supporting the voice control function after obtaining the description information of the target control;
the identification information extraction module is used for extracting the identification information of the target control from the description information of the target control;
and the control validity determining module is used for determining whether the target control is a valid control in the current interactive interface according to the identification information of the target control and the identification information of the legal control.
22. The apparatus according to any of claims 13-19, wherein the application type determining module comprises:
the registration information inquiry sub-module is used for inquiring whether the description information of the target control comprises registration information of the application program according to the identification information of the target control;
and the application type determining submodule is used for determining that the target control belongs to the system application if the description information of the target control does not comprise the registration information of the application program.
23. The apparatus of claim 22, the apparatus further comprising:
The control information forwarding module is used for determining that the target control belongs to a third party application if the description information of the target control comprises registration information of an application program, and forwarding the control information of the target control to the third party application;
and the second display effect generation module is used for generating the display effect of the target control in the current interactive interface according to the control information of the target control through the third party application.
24. The apparatus of any of claims 13-19, wherein the target control comprises a progress bar control.
25. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of displaying controls according to any one of claims 1-12.
26. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of displaying a control according to any one of claims 1-12.
27. A computer program product comprising a computer program which, when executed by a processor, implements a method of displaying a control according to any of claims 1-12.
CN202310666513.5A 2023-06-06 2023-06-06 Control display method and device, electronic equipment and medium Pending CN116631396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310666513.5A CN116631396A (en) 2023-06-06 2023-06-06 Control display method and device, electronic equipment and medium


Publications (1)

Publication Number Publication Date
CN116631396A 2023-08-22

Family

ID=87591868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310666513.5A Pending CN116631396A (en) 2023-06-06 2023-06-06 Control display method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN116631396A (en)

Similar Documents

Publication Publication Date Title
KR102490776B1 (en) Headless task completion within digital personal assistants
US11194448B2 (en) Apparatus for vision and language-assisted smartphone task automation and method thereof
US9454964B2 (en) Interfacing device and method for supporting speech dialogue service
US11163377B2 (en) Remote generation of executable code for a client application based on natural language commands captured at a client device
EP3799036A1 (en) Speech control method, speech control device, electronic device, and readable storage medium
JP2021192114A (en) Voice interaction method, device, electronic device, computer readable storage medium and computer program
CN112506854A (en) Method, device, equipment and medium for storing page template file and generating page
WO2023122444A1 (en) Language model prediction of api call invocations and verbal responses
CN110945455A (en) Electronic device for processing user utterance for controlling external electronic device and control method thereof
JP2022031854A (en) Generation method of reply content, device, apparatus and storage medium
US20210098012A1 (en) Voice Skill Recommendation Method, Apparatus, Device and Storage Medium
CN116705018A (en) Voice control method, voice control device, electronic equipment and readable storage medium
US11966562B2 (en) Generating natural languages interface from graphic user interfaces
CN113641439B (en) Text recognition and display method, device, electronic equipment and medium
CN116631396A (en) Control display method and device, electronic equipment and medium
CN112966201B (en) Object processing method, device, electronic equipment and storage medium
CN115497458A (en) Continuous learning method and device of intelligent voice assistant, electronic equipment and medium
US20220358931A1 (en) Task information management
CN114428646B (en) Data processing method and device, electronic equipment and storage medium
WO2023112118A1 (en) Operation assistance device, operation assistance method, and operation assistance program
US20210327437A1 (en) Electronic apparatus and method for recognizing speech thereof
CN115798469A (en) Voice control method and device, electronic equipment and computer readable storage medium
EP3799038A1 (en) Speech control method and device, electronic device, and readable storage medium
CN116540886A (en) Method and device for determining text control, electronic equipment and storage medium
CN117806622A (en) Application program generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination