CN115080007A - Voice development method, system, electronic device, and medium

Info

Publication number: CN115080007A
Application number: CN202110277635.6A
Authority: CN (China)
Prior art keywords: development, voice, program, component, source code
Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 陈本智, 陈功平, 兰守忍
Current assignee: Huawei Technologies Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.

Classifications

    • G06F 8/31 (Programming languages or programming paradigms), under G06F 8/30 (Creation or generation of source code), G06F 8/00 (Arrangements for software engineering), G06F (Electric digital data processing), G06 (Computing; calculating or counting), G (Physics)
    • G06F 8/36 (Software reuse)
    • G06F 8/38 (Creation or generation of source code for implementing user interfaces)
    • G10L 15/22 (Procedures used during a speech recognition process, e.g. man-machine dialogue), under G10L 15/00 (Speech recognition), G10L (Speech analysis techniques or speech synthesis; speech recognition; speech or voice processing techniques; speech or audio coding or decoding), G10 (Musical instruments; acoustics), G (Physics)
    • G10L 2015/223 (Execution procedure of a spoken command)


Abstract

The application provides a voice development method, system, electronic device, and medium. The voice development method comprises the following steps: acquiring a voice development instruction of a user; acquiring the corresponding program development parameters according to the voice development instruction; generating a development source code file of a program according to the program development parameters; and finally compiling the development source code file to generate the program. Compared with the traditional program development mode, the technical solution of the application can improve developers' development efficiency, lower the threshold of program development, and avoid the threat to developers' physical health caused by prolonged keyboard and mouse use; it is also universal and easy to popularize.

Description

Voice development method, system, electronic device, and medium
Technical Field
The present application relates to the field of software development technologies, and in particular, to a voice development method, system, electronic device, and medium.
Background
With the continuous progress and development of science and technology, demand for linkage among intelligent devices is emerging, and the operating systems adopted by intelligent devices are required to be highly interactive and universal so that application scenarios such as screen collaboration between a mobile phone and a computer can be realized. In contrast, the Android operating system, despite its large installed base in the existing market, is gradually becoming unable to support the growing demand for device collaboration. Therefore, a great number of software developers are turning their attention to other emerging open-source operating systems, attempting to overcome the above problems by building new application ecosystems.
Against this background, more and more emerging operating systems are coming into use. For example, the Hongmeng system (HarmonyOS) has recently provided developers with a beta version of HarmonyOS 2.0. As a microkernel-based distributed operating system oriented to all scenarios, the Hongmeng system is not a single system for a mobile phone or one particular device, but a universal system capable of connecting all devices in series; it has currently been adapted to smart screens, and will in the future be further adapted to mobile phones, tablets, computers, smart cars, wearable devices, and other terminal devices.
However, it is undeniable that the current development threshold for such emerging operating systems is high. For example, as shown in fig. 1, a developer needs to learn a large number of front-end development instructions in advance, manually design the layout and style of the user interface of the software application, input the source code file corresponding to that layout and style through a keyboard and mouse, and finally compile the source code file to generate a front-end target program corresponding to the user interface. When a developer wants to develop a user interface for a JavaScript (JS) application on the Hongmeng system, he first needs to learn the development paradigm of Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), JavaScript (JS), or Component Tree (CT) files, and become familiar with the usage of each component, its related attributes, and its style-setting rules; then, after manually designing the layout and style of the user interface in the software application, he inputs the corresponding source code files through the keyboard and mouse; finally, he compiles the source code files to form a front-end target program corresponding to the user interface, and if a compilation error occurs, corrects it manually until compilation succeeds and the expected user interface is displayed. This process suffers from low manual input efficiency, low development efficiency, a long learning period, and a high development threshold, and an application development method that solves these problems is urgently needed.
In addition, during software application development, because development tools such as computers do not fully conform to ergonomic design, a software developer has to hold the same posture for long periods and repeatedly click the mouse. Developers are therefore prone to disorders such as repetitive strain injury, with symptoms including aching, pain, stabbing pain, or muscle weakness in the limbs, which affect normal work and life. How to reduce such strain in the software development process is also worth attention.
Disclosure of Invention
An object of the present application is to provide a voice development method, system, electronic device, and medium. By the method, a developer can perform program development work in a voice input mode. The voice development system can identify the voice development instruction of the user, automatically generate the corresponding development source code file according to the user requirement, greatly simplify the operation required to be executed by the user in the program development process, and improve the program development experience of the user.
A first aspect of the present application provides a method for developing a program by using speech, including: acquiring a voice development instruction of a user; acquiring program development parameters corresponding to the voice development instruction; generating a development source code file of the program according to the program development parameters; the development source code file is compiled to generate a program.
That is, in the embodiment of the present application, the program development action may be implemented by inputting a voice development instruction by voice.
In a possible implementation of the first aspect, the development source code file includes at least one development component, and each voice development instruction corresponds to one development component; the program development parameters comprise component names, component attribute values, component styles and component layout information corresponding to the development components.
For example, when the content corresponding to the program to be developed is a graphical user interface, the development source code file corresponds to the display content in the graphical user interface. The graphical user interface may include content such as display pictures (e.g., appearance pictures of a marketed product), display texts (e.g., the textual introduction of a marketed product), and interactive components (e.g., the purchase button of a marketed product), where each display picture, display text, or interactive component corresponds to one development component: for example, the portion of the development source code file corresponding to the appearance picture of the product is an image development component, and the portion corresponding to the textual introduction of the product is a text development component.
A complete development component includes the component name, component attribute value, component style, and component layout information. The component name represents the name of the content corresponding to the development component; for example, the component name of a text development component may be "Text", the component name of an image development component may be "Image", and the component name of a scoring development component may be "Rating" (a scoring plug-in), which is not limited herein. The component attribute value represents the specific content presented by the development component; for example, the component attribute value of a text development component may be the specific text to be displayed, and that of an image development component may be the specific image to be displayed, which is not limited herein. The component style represents the presentation form of that content; for example, the component style of a text development component may be the font, font size, and color of the text, and that of an image development component may be the size of the image, which is not limited herein. The component layout information represents the layout position information corresponding to the development components: if the component layout information specifies column arrangement, the display content corresponding to each development component is arranged in the vertical direction in the order of input; if it specifies row arrangement, the display content corresponding to each development component is arranged in the horizontal direction in the order of input, which is not limited herein.
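As an illustrative sketch only (not part of the claimed method; all names are hypothetical), the four program development parameters described above can be modeled as one record per development component:

from dataclasses import dataclass, field

@dataclass
class DevelopmentComponent:
    """One development component parsed from a single voice development instruction."""
    name: str                                   # component name, e.g. "Text", "Image", "Rating"
    attribute_value: str                        # specific content, e.g. the text or image to display
    style: dict = field(default_factory=dict)   # presentation form, e.g. {"color": "black"}
    layout: str = "column"                      # component layout information: "column" or "row"

# Hypothetical components for the marketed-product interface described above:
photo = DevelopmentComponent("Image", "product.png", {"width": "700px"})
intro = DevelopmentComponent("Text", "Product introduction text", {"color": "black"})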
In one possible implementation of the first aspect, the development source code file includes a development paradigm, and the corresponding program development parameters are put into the development paradigm to generate the development source code file. The development paradigm can be understood as the source code generation rule and standard format corresponding to a development source code file; different development source code files have different development paradigms. By putting the program development parameters into the development paradigm, the development source code file required by the user can be generated.
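A minimal sketch of this paradigm-filling step, assuming (hypothetically) that the paradigm for a text component is a JSON template following the "className"/"value"/"color" convention shown in the embodiments below:

import json

# Hypothetical development paradigm for a text component in a component tree file.
CT_TEXT_PARADIGM = {"className": "{name}", "value": "{attribute_value}", "color": "{color}"}

def fill_paradigm(params: dict) -> str:
    """Put the program development parameters into the development paradigm
    to generate the corresponding development source code text."""
    node = {key: template.format(**params) for key, template in CT_TEXT_PARADIGM.items()}
    return json.dumps(node, indent=2)

print(fill_paradigm({"name": "Text", "attribute_value": "Hello World", "color": "black"}))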
In a possible implementation of the first aspect, each voice development instruction comprises a start flag and/or an end flag. For example, the start flag may be denoted by "start" and the end flag by "end". By setting start and end flags in the voice development instruction, the voice development instructions input by a user can be distinguished and recognized more accurately.
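How such flags could delimit instructions in a recognized transcript can be sketched as follows (the flag words and the function are illustrative assumptions; a real implementation would match whole words rather than substrings):

def split_instructions(transcript: str, start_flag: str = "start", end_flag: str = "end") -> list:
    """Split a recognized transcript into single voice development instructions,
    using double-ended verification of the start flag and the end flag."""
    instructions = []
    for chunk in transcript.split(end_flag):
        if start_flag in chunk:
            instructions.append(chunk.split(start_flag, 1)[1].strip())
    return instructions

demo = "start create a text component, the content is Hi end start create an image component end"
print(split_instructions(demo))
# ['create a text component, the content is Hi', 'create an image component']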
In one possible implementation of the first aspect, the development source code file includes a component tree file, a hypertext markup language file, a cascading style sheet file, and a JavaScript file. JavaScript is a lightweight, interpreted or just-in-time compiled programming language with first-class functions, supporting object-oriented, imperative, and declarative (e.g., functional) programming styles. The user may select the development source code file corresponding to the program development requirement, which is not limited herein.
In one possible implementation of the first aspect, the programs include application programs and system programs, wherein an application program includes a graphical user interface. The user may select the program content to be developed according to his own program development requirements, which is not limited herein.
In a possible implementation of the first aspect, the speech development method further includes: pre-training to obtain a voice recognition model; and recognizing the voice development instruction according to the voice recognition model to obtain a program development parameter corresponding to the voice development instruction.
In other words, in the embodiment of the present application, a speech recognition model is obtained through pre-training, and the user's voice development instruction is recognized according to the recognition rules included in the speech recognition model, so as to obtain the program development parameters related to program development. The speech recognition model may be obtained by training on a preset training set in a machine learning manner or the like, which is not limited herein.
In a possible implementation of the first aspect, the speech development method further includes: and executing an updating operation on the generated development source code file according to the program development parameters so as to update the development source code file. Wherein the updating operation comprises deleting and modifying part of the content in the development source code file.
That is, in the embodiment of the present application, the user can update the generated development source code file by inputting a voice development instruction, where the input voice development instruction includes the location information corresponding to the development component that needs to be updated and the program development parameter corresponding to the updated development component. It can be understood that a program developer needs to modify and adjust the inputted development source code file many times in the program development process to meet the final design requirement. The generated development source code file is updated through the voice development instruction, the development habit of a program developer is met, and the voice development experience of the developer can be further improved.
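Such an update operation can be sketched as follows, assuming the generated file is held as a list of component dictionaries; the position argument and action keyword are hypothetical names:

def update_source(tree: list, position: int, action: str, new_params: dict = None) -> None:
    """Update the generated development source code file in place: delete the
    development component at the spoken position, or modify its parameters."""
    if action == "delete":
        tree.pop(position)
    elif action == "modify":
        tree[position].update(new_params or {})

tree = [{"className": "Text", "value": "Hello World", "color": "black"}]
update_source(tree, 0, "modify", {"color": "gray"})   # "modify text component 1, the color is gray"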
In one possible implementation of the foregoing first aspect, the speech development method further includes: and displaying the generated program and/or the development source code file corresponding to the program.
For example, when the content corresponding to the program to be developed is a graphical user interface, the developer needs to observe the generated graphical user interface to determine whether the development source code file needs modification and adjustment. By presenting the visual content of the generated program, the user can conveniently and appropriately adjust the generated development source code file, thereby optimizing the visual display effect of the program. Similarly, the development source code file corresponding to the voice development instruction can be displayed, so that the user can visually check the result of recognizing the voice development instruction, correct it promptly when the instruction is recognized wrongly, and conveniently review it during subsequent adjustment, further improving the user's voice development experience.
A second aspect of the present application provides a speech development system for a program, comprising: the pickup module is used for acquiring a voice development instruction of a user; the recognition module is used for acquiring program development parameters corresponding to the voice development instruction according to the voice development instruction; the generating module is connected with the recognition module and used for generating a development source code file according to the program development parameters corresponding to the voice development instruction; and the compiling module is connected with the generating module and is used for compiling the development source code file to generate a program.
In a possible implementation of the second aspect, the identification module further includes: the training unit is used for training in advance to obtain a voice recognition model; the recognition module recognizes the voice development instruction according to the voice recognition model so as to obtain a program development parameter corresponding to the voice development instruction.
In other words, in the embodiment of the application, the recognition module is provided with a speech recognition model obtained through pre-training, and recognizes the user's voice development instruction according to the recognition rules included in the speech recognition model, so as to obtain the program development parameters related to program development. The speech recognition model may be obtained by training with the training unit, which may train on a preset training set in a machine learning manner or the like, which is not limited herein.
In a possible implementation of the second aspect, the generating module further includes a correction unit, configured to execute an update operation on the generated development source code file according to the program development parameters, so as to update the development source code file. The update operation may include deleting and modifying part of the content in the development source code file.

That is, in the embodiment of the present application, through the provision of the correction unit, the user can update the generated development source code file by inputting a voice development instruction, where the input voice development instruction includes the position information corresponding to the development component that needs to be updated and the program development parameters corresponding to the updated development component. It can be understood that a program developer needs to modify and adjust the development source code file many times during program development to meet the final design requirements. Providing the correction unit in the voice development system matches program developers' development habits and can further improve their voice development experience.
In a possible implementation of the second aspect, the speech development system further includes a display unit, which is respectively connected to the generation module and the compiling module, and is configured to display the generated program and/or the development source code file corresponding to the program.
For example, when the content corresponding to the program to be developed is a graphical user interface, the developer needs to observe the generated graphical user interface to determine whether the development source code file needs modification and adjustment. The display unit can be a display screen on which the visualized content of the generated program and/or the development source code file corresponding to the program is presented, so that the user can intuitively verify the correctness of the voice development instruction input and the final presentation effect of the program, further improving the user's voice development experience.
In a possible implementation of the second aspect, the display unit is further capable of displaying a layout adjustment button, and the component layout information corresponding to the development components in the development source code file is acquired according to the interaction information received by the layout adjustment button. The development components are the same as those described above and are not described again here.
That is, in an embodiment of the present application, the speech development system provided by the second aspect of the present application includes a display unit, and the display unit may display a layout adjustment button with which a user interactively controls layout information corresponding to each development component.
For example, the display unit may be a display screen, and the layout adjustment button may be an interactable button presented on the display screen, the button embodying the layout information of the current development components. In the process of developing a graphical user interface, the presentation contents corresponding to the development components are, in the preset state, arranged in column order (vertically); at this time the layout adjustment button shows an arrow in the vertical direction, indicating that the presentation contents corresponding to the development components are arranged in column order. When the user wants the presentation contents corresponding to the development components to be arranged in row order (horizontally), the user can interact with the button by clicking it; after receiving the user's click, the layout adjustment button causes the presentation contents of the development components corresponding to subsequently received voice development instructions to be arranged in row order. At this time the layout adjustment button shows an arrow in the horizontal direction, indicating that the presentation contents corresponding to the current development components are arranged in row order. With the layout adjustment button, a developer does not need to repeat the corresponding component layout information in every voice development instruction; the component layout information of the development components can be controlled and adjusted through simple interaction, further improving the user's voice development experience.
A third aspect of the present application provides an electronic device, comprising: a memory storing instructions; and a processor coupled to the memory, wherein the instructions, when executed by the processor, cause the electronic device to perform the voice development method provided in the first aspect.
In a possible implementation of the third aspect, the electronic device further includes: the pickup equipment is used for acquiring a voice development instruction of a user; and the display device is used for displaying the generated program and/or the development source code file corresponding to the program.
A fourth aspect of the present application provides a readable medium, which is characterized in that the readable medium has stored thereon instructions, which when executed on an electronic device, cause the electronic device to execute the voice development method as provided in the foregoing first aspect.
Drawings
FIG. 1 is a flow diagram illustrating a method of program development in the prior art;
FIG. 2 illustrates a schematic structural diagram of a speech development system according to an embodiment of the present application;
FIG. 3a illustrates a graphical user interface generated from voice development instructions, according to an embodiment of the present application;
FIG. 3b illustrates another graphical user interface generated from speech development instructions according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a voice development apparatus according to an embodiment of the present application;
FIG. 5a illustrates a graphical user interface generated from voice development instructions, according to an embodiment of the present application;
FIG. 5b illustrates another graphical user interface generated from speech development instructions according to an embodiment of the present application;
FIG. 6 illustrates a flow diagram of a method for speech development, according to an embodiment of the present application;
FIG. 7 illustrates a display diagram of a graphical user interface generated from speech development instructions according to an embodiment of the present application;
FIG. 8 shows a flow diagram of another method of speech development, according to an embodiment of the application;
FIG. 9 illustrates a schematic diagram of modifying a generated graphical user interface based on speech development instructions, according to an embodiment of the present application;
FIG. 10 is a flow diagram illustrating another method for speech development according to an embodiment of the application;
FIG. 11 illustrates another graphical user interface generated from voice development instructions, according to an embodiment of the present application;
FIG. 12 is a diagram illustrating placement of a layout adjustment button according to an embodiment of the present application;
FIG. 13 illustrates a flow diagram of another method of speech development, according to an embodiment of the present application;
FIG. 14 is a schematic diagram illustrating another display for generating a graphical user interface based on speech development instructions, according to an embodiment of the present application;
FIG. 15 is a flow diagram illustrating another method for speech development according to an embodiment of the application;
FIG. 16 is a schematic diagram illustrating an electronic device according to an embodiment of the present application;
FIG. 17 shows a block diagram of the software architecture of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
To solve the problems of the high development threshold and low development efficiency of such operating systems, as well as the heavy physical strain on developers during development, the application provides a voice development system. In the embodiment of the application, a developer can input development instructions by voice; the voice development apparatus performs targeted recognition on the input voice, extracts key factors such as component names, attribute values, and styles from it to form the corresponding component tree file, and finally employs a parsing engine to parse and render the obtained component tree file, thereby completing the development of the software application user interface. The voice development system can improve developers' development efficiency, lower the threshold for developing on a new operating system, and avoid the threat to developers' physical health caused by prolonged keyboard and mouse use; it can thus attract more developers to build a new application ecosystem, and it is universal and easy to popularize.
It is understood that the technical solution of the present application is applicable to various operating systems, for example, the Android open-source operating system, the mobile operating system (iOS) developed by Apple, the operating system (Windows) developed by Microsoft, and the Hongmeng operating system (HarmonyOS). For ease of illustration, the following description is given mainly in terms of the Hongmeng operating system.
Similarly, it can be understood that the development object to which the technical solution of the present application applies may be various software applications, for example, applications for mobile phones, applications for computers, control software for smart devices, and the like. For ease of illustration, the following description is given in terms of the development of a user interface for an application in the Hongmeng operating system environment.
For convenience of description, the specific structure of the voice development system and the voice development method of the present application will be described in detail below by taking as an example the conversion of voice instructions into a component tree source code file to program and implement software development.
In an embodiment of the present application, fig. 2 shows a schematic structural diagram of a voice development system 200. Specifically, the voice development system may include a voice processing module 201, a recognition module 202, a model training module 203, a generation module 204, a modification module 205, and a compiling module 206.
The voice processing module 201 is used for converting the voice command input by the developer into a digital signal. It can be understood that a developer usually inputs a voice command through a sound pickup device, which may be a microphone that acquires an analog signal corresponding to the voice command by collecting audio vibration signals in the air; the voice processing module 201 converts such an analog signal into a digital electrical signal so that subsequent modules can recognize it. The conversion from analog to digital signal may be implemented by prior art such as framing, and those skilled in the art may use appropriate technical means to achieve this conversion, which is not limited herein.
Further, considering that a developer may be in a noisy environment when inputting voice, the voice processing module 201 may also have a noise filtering function, filtering out environmental noise to highlight the developer's voice input. In addition, considering that a developer pauses to think while inputting voice, voice input is not continuous over long stretches and a certain time interval may exist between two programming voice commands; the voice processing module 201 may therefore also have a Voice Activity Detection (VAD) function, so as to identify and eliminate the collected information corresponding to silent periods from the long voice signal stream. It can be understood that the voice processing module 201 performs a series of preprocessing operations on the collected developer voice input, converting the collected signal into a digital signal that subsequent modules can process while improving the voice collection quality of the digital signal as much as possible.
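The application does not fix a particular VAD algorithm; a minimal energy-threshold sketch of the silent-period elimination described above might look like this (the frame representation and threshold are assumptions):

def drop_silent_frames(frames: list, threshold: float = 0.01) -> list:
    """Minimal energy-based voice activity detection: keep a frame only when
    its mean squared amplitude exceeds the threshold (i.e., it is not silence)."""
    def energy(frame):
        return sum(sample * sample for sample in frame) / len(frame)
    return [frame for frame in frames if energy(frame) > threshold]

print(drop_silent_frames([[0.0, 0.001], [0.3, -0.2]]))   # only the second frame is kept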
The recognition module 202 is connected to the voice processing module 201, and is configured to extract the development parameters in the developer's voice instruction from the digital signal according to pre-trained speech recognition rules, and send them to the generation module 204 and the modification module 205. It is understood that when a developer develops a software user interface with the voice development system, a complete voice command may include some irrelevant connectives or filler words, and the recognition module 202 needs to remove these irrelevant factors to capture the content related to the development programming action.
For example, suppose the voice command input by the developer is "create a text component, the content is 'Hello World', and the color is black". When the voice development system is oriented to user interface development in the Hongmeng operating system environment, the development source code file required by the developer is a Component Tree (CT) file. The component tree file includes a plurality of display components, each corresponding to one display element in the user interface (for example, as shown in fig. 5a, each function icon 500 in the user interface can be regarded as one component), and each specifically comprises a "component name", an "attribute value", and a "style": the component name indicates the name of a display component, i.e., "text component" in the above voice instruction; the attribute value represents the content attribute corresponding to the display component, i.e., "Hello World" in the voice instruction; the style represents the presentation style corresponding to the display component, i.e., "black" in the voice instruction. The developer wishes to create black text reading "Hello World" on the user interface through this voice command, so the recognition module 202 extracts the three development parameters "text component", "Hello World", and "black" from the voice command and maps them to the component name, attribute value, and style required by the component tree file. That is, under the guidance of the pre-trained speech recognition rules, the recognition module 202 can extract the corresponding development parameters from the developer's voice command and establish the corresponding mapping relationships according to the requirements of the development file.
The model training module 203 is connected to the recognition module 202, and is configured to train on a preset training set to obtain the aforementioned speech recognition rules. A preset training set may be stored in the model training module 203; the preset training set may include a plurality of voice commands and a plurality of standard recognition results, each voice command corresponding to one standard recognition result. The model training module 203 summarizes the speech recognition rules by recognizing the voice commands and comparing the recognition output with the standard recognition results.
For example, when the preset training set includes a plurality of voice commands of the form "create a text component, the content is … …, and the color is … …", after multiple rounds of training the speech recognition rules can be summarized as: the speech content following the digital speech signal for "the content is" is the attribute-value parameter of a text component in the component tree file, and the speech content following the digital speech signal for "the color is" is the color style parameter of that text component in the component tree file.
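The summarized rules can be pictured as pattern matching on the recognized text. The regular expressions below are an illustrative stand-in for the trained model, not the actual recognition rules of the application:

import re

RULES = {
    "className": re.compile(r"create an? (\w+) component"),
    "value": re.compile(r"content is '?([^',]+)'?"),
    "color": re.compile(r"color is (\w+)"),
}

def extract_parameters(instruction: str) -> dict:
    """Apply the summarized recognition rules to one voice development instruction."""
    return {key: pattern.search(instruction).group(1).strip()
            for key, pattern in RULES.items() if pattern.search(instruction)}

print(extract_parameters("create a text component, the content is 'Hello World', and the color is black"))
# {'className': 'text', 'value': 'Hello World', 'color': 'black'}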
The generating module 204 is connected to the recognition module 202, and is configured to form the corresponding development source code file according to the development parameters extracted by the recognition module 202. For example, when developing a user interface for the Hongmeng operating system environment, the generation module 204 may generate the corresponding component tree file, i.e., the development source code file, according to the component names, attribute values, and styles extracted by the recognition module 202.
For example: when the voice command input by the developer is "create a text component, the content is 'Hello World', and the color is black", the component tree file generated by the generation module 204 is:
{
    "className": "Text",
    "value": "Hello World",
    "color": "black"
}
wherein, "className" represents a component name, "value" represents an attribute value, "color" represents a color style, and "Text" represents that the above component is a Text component, the same as below.
The modification module 205 is respectively connected to the recognition module 202 and the generation module 204, and is configured to modify the development source code file generated by the generation module 204 according to the development parameters extracted by the recognition module 202. It can be understood that during application development a developer needs to repeatedly debug and modify the existing content; in particular, in user-interface-related development, the developer needs to adjust layout information such as the position, size, and color of each component in the component tree file, and this adjustment operation is implemented by the modification module 205.
For example: for the above component tree file, when the voice instruction input by the developer is "modify the color of the characters in the text component to be gray", the recognition module 202 extracts the development parameter "modify the style parameter to be gray" from the voice instruction and sends the development parameter to the modification module 205, and the modification module 205 modifies the component tree file correspondingly to:
{
    "className": "Text",
    "value": "Hello World",
    "color": "gray"
}
The compiling module 206 is respectively connected to the generating module 204 and the modifying module 205, and is configured to compile the development source code file to form an executable target application program. For example, when developing a user interface for the Hongmeng operating system environment, the compiling module 206 may be a rendering engine module, which parses and renders the component tree file to obtain and display a visual user interface.
For example, suppose a developer needs to design a user interface for a mobile phone application. When the voice instruction input by the developer is "create a text component, the content is 'Hello World', and the color is black", the development display effect produced by the compiling module 206 can be as shown in fig. 3a, where 300 is a virtual screen for demonstration and 301 is the display effect corresponding to the voice instruction. If the developer is not satisfied with the current display effect and then inputs a new voice instruction "modify the color of the text in the text component to gray", the display effect produced by the compiling module 206 can be as shown in fig. 3b: the "Hello World" content 301 displayed in the virtual screen 300 changes from black to gray, realizing the developer's modification operation.
In some embodiments of the present application, fig. 4 shows a schematic structural diagram of a voice development apparatus for a user interface development scenario in the Hongmeng operating system environment. Specifically, such a voice development apparatus may include a sound pickup apparatus 400, a voice mode converter 401, a speech-to-word mapper 402, a component and style binder 403, a component corrector 404, a layout controller 405, a component tree generator 406, a rendering engine 407, and a display apparatus 408.
The sound pickup apparatus 400 is used for acquiring an audio signal corresponding to a voice development instruction input by a developer.
The voice mode converter 401 is connected to the sound pickup apparatus 400 and is configured to convert the audio signal acquired by the sound pickup apparatus 400 into a digital signal; further, the voice mode converter 401 can perform preprocessing operations such as noise filtering on the digital signal to obtain a high-quality digital signal more favorable for voice recognition.
The speech-to-word mapper 402 is connected to the voice mode converter 401 and is configured to recognize the specific words corresponding to the voice development instruction from the digital signal. Specifically, recognizing specific words from the digital signal can be implemented by means of Viterbi decoding or the like, and those skilled in the art can adopt an appropriate speech recognition algorithm according to actual application requirements.
The component and style binder 403 is connected to the speech-to-word mapper 402, and is configured to parse, from the recognized specific words, the information of each component included in the voice development instruction, where each component's information includes the component name, attributes, and style of the component. It can be understood that, when a user interface is developed in the Hongmeng operating system environment, the development source code file required by the developer is a Component Tree (CT) file; the component tree file comprises a plurality of components, each component corresponds to one display element in the user interface, and the specific composition of each component comprises a component name, attribute values, and a style. The component and style binder 403 is used to extract and bind, from the specific words of the voice development instruction, the words related to the development settings of the component name, attribute value, and style.
For example, when the voice command input by the developer is "create a text component, the content is 'Hello World', and the color is black", the component and style binder 403 can obtain three key-value pairs from it, namely: "component name - text component", "attribute value - Hello World", and "style - black" (Color: Black), all three of which correspond to the same component. The component and style binder 403 implements the mapping and binding between specific words and component information in the form of key-value pairs as described above.
Further, when the developer inputs a plurality of instructions at once, the component and style binder 403 can bind each component with its corresponding style and/or attribute values. For example, when the voice command input by the developer is "create a text component 1, the content is 'Hello World', the color is black, stop; create a text component 2, the content is 'END', the color is gray, stop", the component and style binder 403 identifies "stop" as a separator and obtains six key-value pairs: "component name - text component 1", "attribute value - Hello World", "style - black", and "component name - text component 2", "attribute value - END", "style - gray"; the first three of these six key-value pairs correspond to text component 1 and the last three to text component 2.
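This separator-based binding can be sketched as follows (the parsing rules are illustrative regular expressions, as above; the key names mirror the key-value pairs just described):

import re

RULES = {
    "component name": re.compile(r"create an? (\w+ component \d+|\w+ component)"),
    "attribute value": re.compile(r"content is '?([^',]+)'?"),
    "style": re.compile(r"color is (\w+)"),
}

def bind_components(transcript: str) -> list:
    """Split on the 'stop' separator, then bind each instruction's key-value
    pairs (component name, attribute value, style) to its own component."""
    components = []
    for instruction in transcript.split("stop"):
        pairs = {key: p.search(instruction).group(1)
                 for key, p in RULES.items() if p.search(instruction)}
        if pairs:
            components.append(pairs)
    return components

demo = ("create a text component 1, the content is 'Hello World', the color is black, stop "
        "create a text component 2, the content is 'END', the color is gray, stop")
for pairs in bind_components(demo):
    print(pairs)
# {'component name': 'text component 1', 'attribute value': 'Hello World', 'style': 'black'}
# {'component name': 'text component 2', 'attribute value': 'END', 'style': 'gray'}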
The component tree generator 406 is connected to the component and style binder 403, and is configured to generate a component tree file according to the information of each component. It is understood that the component tree file corresponds to the source code file in the development process of an application and includes a plurality of components. For example, the concrete representation of the component tree file may be as follows:
{
    "className": "Text",
    "value": "Hello World",
    "color": "gray"
}
{
    "className": "Rating",
    "numstars": "4",
    "height": "150"
}
Here, "Rating" indicates that the component is a scoring component, "numstars" indicates the number of scoring stars of the scoring component, and "height" indicates the presentation height of each scoring star, in pixels.
The above component tree file comprises two components: the first is a text component whose content is the gray text "Hello World"; the second is a scoring component whose content is a star-rating object displaying 4 stars, each 150 pixels high. In the conventional development process, the component tree file is typed in directly by the developer with keyboard and mouse; the technical solution provided by the application can convert the developer's voice development instructions into a component tree file identical to keyboard-and-mouse input, freeing the developer's hands. According to statistics, when component tree files of the same code volume are input, voice development instructions are about 15% faster than keyboard-and-mouse input, which can effectively improve development efficiency.
The component corrector 404 is respectively connected to the speech-to-word mapper 402 and the component tree generator 406, and is configured to modify each component according to the developer's voice development instructions. It can be understood that a developer needs to optimize and adjust continuously according to debugging results during software development; particularly in the development of a visual interface such as a user interface, the developer must continually adjust the display components to obtain a better display effect, which requires that each component in the component tree file be modified and adjusted through the component corrector 404. The modification and adjustment may include changing the style and/or attributes of a component, undoing an entered component, inserting a new component between two entered components, and so on, which is not limited here.
The layout controller 405 is respectively connected to the speech-to-word mapper 402 and the component tree generator 406, and is configured to control the layout direction of each component according to the developer's voice development instructions. It can be understood that in user interface development, the positions of text components, icon components, and image components are of paramount importance. For example, in the user interface design of a chat application, as shown in fig. 5a, if the function icons 500 are laid out vertically, not only is the space of the chat list above compressed, but the large blank area left in the lower right corner of the user interface is also unreasonable as a layout; by laying out the function icons 500 horizontally, as shown in fig. 5b, the entire space of the user interface can be utilized to the greatest extent while achieving an aesthetic design. Therefore, during the development and design of the user interface, the layout direction of the components needs to be adjusted by the layout controller 405.
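The application does not show how the layout direction is encoded in the component tree; one plausible sketch wraps the components in a hypothetical container node whose flex direction carries the layout information chosen by the layout controller:

import json

def build_component_tree(components: list, layout: str = "column") -> str:
    """Generate component tree source with the layout direction chosen by the
    layout controller: "row" (horizontal) or "column" (vertical)."""
    root = {"className": "Div", "flexDirection": layout, "children": components}
    return json.dumps(root, indent=2)

icons = [{"className": "Image", "value": "icon%d.png" % i} for i in range(4)]
print(build_component_tree(icons, layout="row"))   # horizontal layout, as in fig. 5b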
The rendering engine 407 is connected to the component tree generator 406, and is configured to compile and render the component tree file generated by the component tree generator 406, obtain a visual software user interface, and present the visual software user interface through the display device 408.
It is understood that in a practical application scenario, the voice mode converter 401, the speech-to-word mapper 402, the component and style binder 403, the component corrector 404, the layout controller 405, the component tree generator 406, and the rendering engine 407 may all be integrated into an integrated processor 409. Meanwhile, considering that the functions of all these modules can be implemented by a computer, and that a computer has a display screen that can serve as the display device 408 and extensible support for the sound pickup device 400, in a practical application scenario the voice development apparatus can also be integrated into a computer.
For example, when the voice development apparatus is integrated on a computer that has an Integrated Development Environment (IDE) and supports an external recording device, a developer can freely choose, within the IDE, to perform software development with a keyboard and mouse or by voice input. This matches developers' long-standing habit of developing software on a computer, offers them a free choice of input modes, and improves the development experience.
For another example, the voice development apparatus may be integrated into a smartphone; since a smartphone has a sound pickup device and a display, the voice development apparatus can be realized by integrating components such as the component and style binder 403 and the component tree generator 406 into the smartphone's processor. A developer can then perform software development by voice input anytime and anywhere through the smartphone. In particular, when developing the user interface of a mobile phone application, the completed development effect can be verified more intuitively on the smartphone's display screen and optimized further, which can greatly improve the development experience while lowering the software development threshold.
Based on the speech development device shown in fig. 4, the detailed flow of the speech development method will be described in detail below with reference to the accompanying drawings.
In some embodiments of the present application, a specific flow diagram of a developer performing speech development is shown in fig. 6, and specifically includes:
step 600: the sound pickup apparatus 400 acquires a voice development instruction. Wherein the voice development instruction may be input by a user.
Step 601: the voice mode converter 401 converts the voice development instruction into a digital signal. For a detailed conversion, refer to the related description of the speech mode converter 401.
Step 602: the speech-to-word mapper 402 recognizes the specific words corresponding to the voice development instruction from the digital signal. For the specific recognition method, refer to the foregoing description of the speech-to-word mapper 402.
Step 603: the component and style binder 403 parses the information of each component contained in the voice development instruction from the specific words. For the specific parsing method, refer to the foregoing description of the component and style binder 403.
Step 604: the component tree generator 406 generates a component tree file according to the component information. For a detailed generation manner, please refer to the related description of the component tree generator 406.
Step 605: the rendering engine 407 compiles and renders the component tree file to obtain a visual software user interface and presents the visual software user interface through the display device 408.
For example, when the developer sequentially inputs the three voice development instructions "create text component, the content is 'Show picture and scoring component', the color is black, stop" ("stop" serves as the end identifier of a single voice development instruction, the same below), "create image component, the picture path is the local notebook picture, the width is 700px (pixels), stop", and "create scoring component, the total number of stars is 4, the height is 150px, stop", the display content presented on the display device 408 is as shown in fig. 7.
It is understood that the display area of the display device 408 may comprise two portions, a code region 701 and a presentation region 702, or may comprise only one of them. The code region 701 is used for displaying the source code corresponding to the component tree file, and the presentation region 702 is used for displaying the visual user interface obtained by compiling and rendering the component tree file. By comparing the two regions, the developer can grasp the specific display effect of each component more intuitively and thus obtain a better development experience.
In the above embodiment, the developer ends each voice development instruction with "stop"; this may be specified by a preset voice development instruction rule, and using "stop" as the identifier of the end of component creation can help the component and style binder 403 better bind each component with its corresponding style and/or attribute values. The setting of the identifiers for the start and end of component creation is not limited here; for example, the voice development instruction rule may require each of the developer's instructions to take the form "begin, … …, stop", where "begin" is the start identifier of a single voice development instruction, and the double-ended verification of "begin" and "stop" further improves the accuracy of binding components with styles and/or attribute values.
In the above embodiment, it can be seen that each of the developer's voice development instructions takes the form "component + attribute + style + stop", which may also be specified by the preset voice development instruction rule. In other practical application scenarios, the form of the voice development instruction may also be "component + style + attribute + stop", "component + style + stop", or "component + attribute + stop", which is not limited here.
It is considered that in the development process of the user interface, the developer needs to repeatedly modify the source code file to achieve the best display effect. Therefore, in the embodiment of the present application, a specific flow diagram of the developer performing the voice development may also be as shown in fig. 8, and specifically includes:
steps 800 to 803 are the same as steps 600 to 603, and are not described herein.
Step 804: the component corrector 404 modifies the component tree file according to the component information.
Step 805: the rendering engine 407 compiles and renders the modified component tree file to obtain a visual software user interface, and the visual software user interface is displayed through the display device 408.
For example, after the developer sequentially inputs the three voice development instructions "create Text component 1, the content is Hello World, the color is gray, stop", "create Text component 2, the content is Hello JS, the color is white, stop", and "create Text component 3, the content is Hello Text, the color is black", he considers that some of the text components need adjustment and further inputs the two voice development instructions "modify Text component 1, the color is black, stop" and "delete Text component 2, stop"; the change in the presentation area 702 may be as shown in fig. 9.
It will be appreciated that since the color of text component 2 is white, it cannot be presented against the base color of the user interface, so the developer may choose to delete text component 2 entirely. Because the components are arranged in a vertical column by default during the development and design of the user interface, after text component 2 is deleted, text component 3 automatically shifts its position and moves upward.
It is to be appreciated that the developer can specify which component to modify when inputting a voice development instruction concerning modification. When the component tree file contains more than one component, each component has an independent serial number, such as the suffix "2" in "Text component 2", and the developer can locate the component to be modified by speaking the component name together with its serial number.
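As a hedged sketch of how the component rectifier 404 might apply such modify and delete operations against a component tree keyed by serial number (the node shape and function names are assumptions for illustration):

```typescript
// Sketch of modify/delete operations keyed by component name plus
// serial number, as in "modify Text component 1" or
// "delete Text component 2".
interface TreeNode {
  name: string;                       // e.g. "Text component 2"
  attributes: Record<string, string>;
  style: Record<string, string>;
}

function modifyComponent(tree: TreeNode[], name: string,
                         style: Record<string, string>): void {
  const node = tree.find((n) => n.name === name);
  if (node) Object.assign(node.style, style); // merge the new style values
}

function deleteComponent(tree: TreeNode[], name: string): TreeNode[] {
  // Remaining components close ranks, so a component below the deleted
  // one moves up in the default vertical column layout.
  return tree.filter((n) => n.name !== name);
}
```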
Considering that the developer may need to adjust the arrangement of the components during user interface development, in an embodiment of the present application the specific flow of voice development may also be as shown in fig. 10, and specifically includes:
Steps 1000 to 1003 are the same as steps 600 to 603 and are not repeated here.
Step 1004: the layout controller 405 parses the layout information of each component included in the voice development instruction from the specific words.
Step 1005: the component tree generator 406 generates a component tree file according to the component information and the layout information.
Step 1006: the rendering engine 407 compiles and renders the component tree file to obtain a visual software user interface and presents the visual software user interface through the display device 408.
It will be appreciated that, in the above embodiments, as shown in fig. 7 or fig. 9, the components in the presentation area 702 are arranged by default in a vertical column in the order in which they were input. Since a developer may need to adjust the layout of the components during user interface design and development, the layout controller 405 parses the layout information of each component from the specific words of the voice development instruction, and the component tree generator 406 then generates the corresponding component tree file from the component information and the layout information.
For example, when the developer sequentially inputs the three voice development instructions "create Text component 1, content is Hello World, components in horizontal layout, color is gray, stop", "create Text component 2, components in horizontal layout, content is Hello JS, color is black, stop", and "create Text component 3, components in horizontal layout, content is Hello Text, color is black, stop", the presentation effect in the presentation area 702 may be as shown in fig. 11.
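Purely as an illustrative assumption of what a component tree file carrying such layout information might look like once serialized (the field names are not the patent's actual file format):

```typescript
// Hypothetical serialized form of a component tree file whose root
// carries the layout information parsed by the layout controller 405.
const componentTree = {
  layout: "horizontal", // vs. the default "vertical" column layout
  children: [
    { name: "Text component 1", attributes: { content: "Hello World" }, style: { color: "gray" } },
    { name: "Text component 2", attributes: { content: "Hello JS" }, style: { color: "black" } },
    { name: "Text component 3", attributes: { content: "Hello Text" }, style: { color: "black" } },
  ],
};
```

In this sketch the layout direction lives on the root rather than on each child, which matches the observation that a single spoken layout phrase applies to the arrangement of all components.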
In other embodiments of the present application, the layout controller 405 can be controlled in other ways. It is to be understood that, in the foregoing embodiment, because the components are by default arranged in a vertical column in input order, the developer has to state "components in horizontal layout" in every voice development instruction, which easily becomes tiresome. Another implementation of controlling the layout controller 405 can be seen in fig. 12, where a layout adjustment button 1200 is provided in the lower right corner of the presentation area 702: when the layout adjustment button 1200 shows a downward arrow, the components are by default arranged in a vertical column in input order; when it shows a rightward arrow, the components are by default arranged in a horizontal row in input order. The developer can switch the style of the layout adjustment button 1200 by touching the touch screen or clicking the mouse, and the component tree generator 406 synchronously obtains the corresponding layout information. The developer therefore no longer needs to repeatedly input the layout information by voice, which further simplifies the development operation.
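A minimal sketch of such a layout adjustment button, assuming a simple two-state toggle that the component tree generator 406 reads on its next rebuild (all names are illustrative assumptions):

```typescript
// Sketch of the layout adjustment button: toggling it flips the
// default layout direction without any spoken layout information.
type LayoutDirection = "vertical" | "horizontal";

let currentLayout: LayoutDirection = "vertical"; // downward-arrow style

function onLayoutButtonClicked(): LayoutDirection {
  currentLayout = currentLayout === "vertical" ? "horizontal" : "vertical";
  // In this sketch, the component tree generator would read
  // currentLayout when it next rebuilds the component tree file.
  return currentLayout;
}
```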
As can be seen from the description of the above embodiments, the technical solution provided by the present application can replace manual keyboard and mouse input with voice input, which alleviates the low efficiency and high error rate of manual keyboard and mouse input and improves the overall efficiency of application development. Meanwhile, when the technical solution provided by the present application is used to develop software user interfaces for the HarmonyOS (Hongmeng) operating system, developers do not need to learn the development paradigm of the component tree source code file in advance; they only need to know the simple input rules of the voice development instructions to participate directly in the development process. This greatly lowers the development threshold and attracts more developers, particularly developers suffering from repetitive strain injury, to the construction of the HarmonyOS application ecosystem, an advantage consistent with the developer appeal of other emerging operating systems.
In the above embodiments of the present application, when software user interface development for the HarmonyOS operating system is performed, the generated source code files are all component tree (CT) files. In other embodiments of the present application, the source code files may also be written in the Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and the JavaScript language (JS), which are commonly used in current front-end development; the specific flow is shown in fig. 13:
Steps 1300 to 1303 are the same as steps 600 to 603 and are not repeated here.
Step 1304: the source code generator generates HTML source code, CSS source code, and JS source code from the component information. The source code generator stores development paradigms for the Hypertext Markup Language, Cascading Style Sheets, and the JavaScript language, and generates the HTML, CSS, and JS source code by embedding the component information into the corresponding development paradigms.
Step 1305: the packager packages the HTML source code, the CSS source code, and the JS source code into a JS data packet.
Step 1306: the rendering engine 407 compiles and renders the JS data packet to obtain a visual software user interface, and the visual software user interface is presented through the display device 408.
It is to be appreciated that, in the above embodiment, the component tree generator 406 is replaced by a source code generator and a packager, while the component tree file is replaced by a JS data packet. Since the form of the source code file changes, the mapping rule from component information to the component tree file is adapted into mapping rules from component information to HTML source code, CSS source code, and JS source code.
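As an illustrative sketch of how such a source code generator might embed component information into stored development paradigms (the template strings below are assumptions, not the paradigms actually stored by the patented system):

```typescript
// Sketch of a source code generator: component name and attribute
// values are mapped into the HTML source, style information into the
// CSS source, consistent with the mapping described below for fig. 14.
interface ComponentInfo {
  tag: string;     // e.g. "text"
  content: string; // e.g. "Hello World"
  color: string;   // e.g. "black"
}

function generateSources(info: ComponentInfo) {
  const html = `<div class="container"><${info.tag} class="title">${info.content}</${info.tag}></div>`;
  const css = `.title { color: ${info.color}; }`;
  const js = `export default {};`; // placeholder behavior module
  return { html, css, js };
}

// A packager could then bundle the three sources into one JS data packet:
const packet = JSON.stringify(
  generateSources({ tag: "text", content: "Hello World", color: "black" })
);
```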
For example, when the developer inputs the voice development instruction "begin, create text component, content is Hello World, color is black, stop", the display content presented on the display device 408 is as shown in fig. 14.
It is understood that the display area of the display device 408 may include two portions, a code area 1401 and a presentation area 1402: the code area 1401 includes an area 14011 for displaying the HTML source code, an area 14012 for displaying the CSS source code, and an area 14013 for displaying the JS source code, while the presentation area 1402 displays the visual user interface obtained by compiling and rendering the JS data packet. Specifically, as shown in fig. 14, in the above embodiment the component name information and the attribute value information in the component information are mapped into the HTML source code, and the style information in the component information is mapped into the CSS source code.
In the above embodiment, the JS data packet formed by packaging the HTML, CSS, and JS source code replaces the component tree file as the source code file. This better matches the programming paradigm currently common in front-end development, helps improve the extensibility of the front-end application, and allows developers to extend each component of the front-end application in subsequent development.
In other embodiments of the present application, the development of the software application by the developer through the voice input can also be implemented in a manner as shown in fig. 15, where the specific process includes:
Step 1500: the developer inputs standard development voice through the sound pickup device.
The standard development voice comprises a plurality of predefined voice commands composed of English phrases, which the developer needs to learn and master in advance.
Step 1501: the speech processing device performs noise reduction on the standard development voice and then recognizes it to obtain standard text.
During speech recognition, the speech processing device only performs targeted recognition of the voice commands defined by the standard development voice.
Step 1502: the translator takes the standard text and converts it into corresponding keystroke instructions.
The keystroke instructions include keystroke instructions corresponding to keyboard keystrokes and click instructions corresponding to mouse clicks.
Step 1503: the translator sends the keystroke instructions to the integrated development environment.
The translator sends the obtained keystroke instructions to an integrated development environment or a text editor, which is equivalent to replacing keyboard and mouse input with voice input for entering program code.
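A minimal sketch of such a translator, assuming a simple lookup table from standard text to keystroke or click instructions (the command names and key sequences are illustrative assumptions, not the predefined command set):

```typescript
// Sketch of a translator that converts recognized standard text into
// keystroke instructions for an integrated development environment.
type KeystrokeInstruction =
  | { kind: "keys"; sequence: string[] } // keyboard keystrokes
  | { kind: "click"; target: string };   // mouse click on a UI target

const commandTable: Record<string, KeystrokeInstruction> = {
  "new line": { kind: "keys", sequence: ["Enter"] },
  "save file": { kind: "keys", sequence: ["Ctrl", "S"] },
  "run": { kind: "click", target: "run-button" },
};

function translate(standardText: string): KeystrokeInstruction | undefined {
  return commandTable[standardText.trim().toLowerCase()];
}
```

Note that each entry maps to at most a few keystrokes, which is why this scheme still amounts to character-by-character code entry, as the following critique observes.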
In the above embodiment, the developer must first learn the customized standard development voices in advance, so the development threshold is high, and if the developer forgets or confuses some standard development voices during development, development efficiency is seriously affected. Meanwhile, this embodiment establishes no mapping between voice commands and the components used in front-end development: the step flow described above merely replaces keyboard and mouse input with voice instructions at the code input stage, so the source code file can only be entered character by character, which also results in low development efficiency. Moreover, the granularity of this development mode is so fine that it is difficult to apply to a coarse-grained front-end component development environment.
Fig. 16 shows a schematic configuration diagram of the electronic apparatus.
The electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not limit the electronic device. In other embodiments of the present application, an electronic device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
It should be understood that the interface connection relationship between the modules according to the embodiment of the present invention is only an exemplary illustration, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in an electronic device may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to electronic devices, including Wireless Local Area Networks (WLANs) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
The electronic device implements the display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area can store data (such as audio data, phone book and the like) created in the using process of the electronic device. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys, for example keys of a virtual keyboard displayed by the electronic device. The electronic device may receive key input and generate key signal input related to user settings and function control of the electronic device.
The software system of the electronic device may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the invention takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of an electronic device.
Fig. 17 is a block diagram of a software configuration of an electronic apparatus according to an embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 17, the application package may include applications such as a mall, camera, gallery, calendar, call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 17, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system may be the display system service 101 of the electronic device, used to manage and modify the display styles of applications of the electronic device. Based on the display style parameter included in the display parameters that the electronic device acquires from the tablet PC 200, the view system obtains the corresponding display function, which is used to configure an application of the electronic device.
The phone manager is used to provide communication functions of the electronic device. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
In embodiments of the present invention, the resource manager may also be used to store an Overlay configuration file.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a brief stay without user interaction, such as notifications of download completion or message alerts. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scrolling text, such as notifications of applications running in the background, or present notifications on the screen in the form of a dialog window. It may, for example, show text information in the status bar, play a prompt tone, vibrate the electronic device, or flash the indicator light.
The Android runtime comprises a core library and a virtual machine, and is responsible for scheduling and managing the Android system.
The core library comprises two parts: one part consists of the functions that the Java language needs to call, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following exemplarily describes the workflow of the related software and hardware when the electronic device is a mobile phone, in conjunction with a voice development scenario: when the sound pickup sensor receives a voice development instruction from the user, it sends a corresponding analog audio signal to the kernel layer; the kernel layer processes the analog audio signal into a raw input event (including the digital audio signal corresponding to the analog audio signal, the timestamp of the analog audio signal, and other information), and the raw input event is stored at the kernel layer; the application framework layer then obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the control corresponding to the voice development instruction as the voice development control of the mobile phone as an example, the mobile phone calls the application framework layer interface and starts the voice development program (i.e., generates the development source code file corresponding to the development component according to the voice development instruction and compiles it), and then displays the corresponding development source code file and the displayable content of the program through the display driver.
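Sketched generically, and without claiming any real operating system API, the described flow from raw input event to launching the voice development program might be modeled as follows (all types and function names are illustrative assumptions):

```typescript
// Generic sketch of the described flow: the kernel layer wraps the
// digitized audio into a raw input event, and the application framework
// layer resolves it to the voice development control.
interface RawInputEvent {
  digitalAudio: Float32Array; // digitized form of the analog signal
  timestampMs: number;        // when the analog signal was captured
}

function onRawInputEvent(event: RawInputEvent): void {
  const control = resolveControl(event); // framework-layer lookup
  if (control === "voice-development") {
    startVoiceDevelopmentProgram(event); // generate + compile source
  }
}

function resolveControl(_event: RawInputEvent): string {
  return "voice-development"; // stubbed for this sketch
}

function startVoiceDevelopmentProgram(_event: RawInputEvent): void {
  // Would generate the development source code file, compile it, and
  // display the source and rendered UI via the display driver.
}
```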
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, the division into the above functional modules is merely an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is merely a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product: the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A method for speech development of a program, comprising:
acquiring a voice development instruction of a user;
acquiring program development parameters corresponding to the voice development instruction;
generating a development source code file of the program according to the program development parameters;
compiling the development source code file to generate the program.
2. The method of claim 1, wherein the development source code file includes at least one development component, each development component corresponding to one of the voice development instructions;
the program development parameters comprise component names, component attribute values, component styles and component layout information corresponding to the development components.
3. The method of claim 1, wherein the development source code file comprises a development paradigm;
and the development source code file is generated by embedding the corresponding program development parameters according to the development paradigm.
4. The method of claim 1, wherein each of the voice development instructions comprises a start flag and/or a stop flag.
5. The method of claim 1, wherein the development source code files comprise component tree files, hypertext markup language files, cascading style sheet files, and JavaScript files.
6. The method of claim 1, wherein the program comprises an application program, the application program comprising a graphical user interface.
7. The method of claim 1, further comprising:
pre-training to obtain a voice recognition model;
and recognizing the voice development instruction according to the voice recognition model to obtain a program development parameter corresponding to the voice development instruction.
8. The method of claim 1, further comprising:
executing updating operation on the generated development source code file according to the program development parameters so as to update the development source code file;
the updating operation comprises deleting and modifying part of the content in the development source code file.
9. The method of claim 1, further comprising:
and displaying the generated program and/or the development source code file corresponding to the program.
10. A system for speech development of a program, comprising:
the pickup module is used for acquiring a voice development instruction of a user;
the recognition module is used for acquiring program development parameters corresponding to the voice development instruction according to the voice development instruction;
the generating module is connected with the recognition module and used for generating a development source code file according to the program development parameters corresponding to the voice development instruction;
and the compiling module is connected with the generating module and used for compiling the development source code file to generate the program.
11. The system of claim 10, wherein the identification module further comprises:
the training unit is used for training in advance to obtain a voice recognition model;
and the recognition module recognizes the voice development instruction according to the voice recognition model so as to obtain a program development parameter corresponding to the voice development instruction.
12. The system of claim 10, wherein the generation module further comprises:
the correction unit is used for executing updating operation on the generated development source code file according to the program development parameters so as to update the development source code file;
the updating operation comprises deleting and modifying part of the content in the development source code file.
13. The system of claim 10, further comprising:
and the display unit is connected with the generation module and the compiling module and is used for displaying the generated program and/or the development source code file corresponding to the program.
14. The system of claim 13, wherein the display unit is configured to display a layout adjustment button;
and acquiring the component layout information corresponding to the development component in the development source code file according to the interactive information received by the layout adjusting button.
15. An electronic device, comprising:
a memory storing instructions;
a processor coupled to the memory, wherein the instructions stored by the memory, when executed by the processor, cause the electronic device to perform the speech development method of any of claims 1-9.
16. The electronic device of claim 15, further comprising:
the pickup equipment is used for acquiring a voice development instruction of a user;
and the display device is used for displaying the generated program and/or the development source code file corresponding to the program.
17. A readable medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the speech development method of any of claims 1-9.
CN202110277635.6A 2021-03-15 2021-03-15 Voice development method, system, electronic device, and medium Pending CN115080007A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110277635.6A CN115080007A (en) 2021-03-15 2021-03-15 Voice development method, system, electronic device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110277635.6A CN115080007A (en) 2021-03-15 2021-03-15 Voice development method, system, electronic device, and medium

Publications (1)

Publication Number Publication Date
CN115080007A true CN115080007A (en) 2022-09-20

Family

ID=83241332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110277635.6A Pending CN115080007A (en) 2021-03-15 2021-03-15 Voice development method, system, electronic device, and medium

Country Status (1)

Country Link
CN (1) CN115080007A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination