CN114721653A - UI (user interface) generation method, device and equipment - Google Patents

UI (user interface) generation method, device and equipment

Info

Publication number: CN114721653A
Application number: CN202210313437.5A
Authority: CN (China)
Prior art keywords: sub, target, information, type, information content
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: Chinese (zh)
Inventors: 李劼, 邬浩, 刘建昕
Current Assignee: Beijing Datamesh Technology Co ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Beijing Datamesh Technology Co ltd
Priority date: 2022-03-28 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2022-03-28
Publication date: 2022-07-08
Application filed by Beijing Datamesh Technology Co ltd; priority to CN202210313437.5A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/38 Creation or generation of source code for implementing user interfaces
    • G06F 8/34 Graphical or visual programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Stored Programmes (AREA)

Abstract

The embodiments of the present application relate to the field of computers and disclose a UI generation method, device and equipment. The UI generation method comprises: acquiring a generation request and at least one piece of selection information corresponding to a target UI; generating information content corresponding to the target UI according to the generation request, where the information content indicates a sub-UI whose type is a two-dimensional (2D) UI or a three-dimensional (3D) UI; selecting the type of the sub-UI according to each piece of selection information to obtain at least one target sub-UI; and constructing at least one target UI according to each target sub-UI and the information content corresponding to it. In this way, the content used for the original UI generation is consolidated, and the sub-UI type suited to inherit that content, either a 2D UI or a 3D UI, is selected according to information about the usage scenario and the running platform. As a result, the work of configuring information-interaction channels can be reduced in software where a 2D UI and a 3D UI coexist, lowering the cost of later operation and maintenance.

Description

UI (user interface) generation method, device and equipment
Technical Field
The embodiment of the invention relates to the field of computers, in particular to a method, a device and equipment for generating a User Interface (UI).
Background
A user interface (UI) generally refers to the interface through which humans interact with machines or software. Among UIs for human-software interaction, commonly used software applications can be divided into two-dimensional (2D) applications and three-dimensional (3D) applications according to their usage scenarios. A 2D application generally runs on a mobile phone or a computer screen, while a 3D application is typically used in scenarios that support augmented reality (AR) technology.
Generally, within the same software engineering project, the 2D application and the 3D application serve different usage scenarios, so each needs its own UI: the 2D application corresponds to a 2D UI and the 3D application corresponds to a 3D UI. To ensure that one application can run on multiple types of platforms at the same time, the 2D UI and the 3D UI usually need to coexist. At present, for a project in which a 2D application and a 3D application coexist, developers typically develop the 2D UI interface and the 3D UI interface separately and then establish a channel for information interaction between them.
However, as the amount of development content grows and the software is updated iteratively, every modification of the project content requires changing the contents of both the 2D UI interface and the 3D UI interface and reconfiguring the information-interaction channel for the modified content. This increases the later operation and maintenance cost of the software project.
Disclosure of Invention
Embodiments of the present application provide a UI (user interface) generation method, device and equipment, aiming to solve the problem of excessive operation and maintenance costs in the development of software applications that contain both a 2D UI and a 3D UI.
In a first aspect, an embodiment of the present application provides a UI generating method, where the method includes:
acquiring a generation request and at least one piece of selection information corresponding to a target UI;
generating information content corresponding to the target UI according to the generation request, wherein the information content indicates a sub-UI, and the type of the sub-UI is a two-dimensional (2D) UI or a three-dimensional (3D) UI;
selecting the type of the sub UI according to each piece of selection information to obtain at least one target sub UI;
and constructing at least one target UI according to each target sub UI and the information content corresponding to each target sub UI.
In some possible embodiments, the selection information includes: the type of platform on which the target UI runs and the launch scenario of the target UI.
In some possible embodiments, the information content may be obtained by at least one preset UI interface, and each UI interface corresponds to one sub-UI.
In some possible embodiments, the selecting the type of the sub-UI according to each piece of the selection information to obtain at least one target sub-UI may be implemented by a pre-configured UI manager.
In some possible embodiments, selecting the type of the sub-UI according to each piece of selection information further includes: selecting the type of the sub-UI according to one of the contents contained in each piece of selection information, so as to obtain at least one target sub-UI.
In some possible embodiments, the selection of the sub-UI type further includes: screening according to the selection information through a pre-deployed UI manager.
In a second aspect, an embodiment of the present application further provides an apparatus for generating a UI, where the apparatus includes:
the acquisition module is used for acquiring a generation request corresponding to the target UI and at least one piece of selection information;
a generating module, configured to generate information content corresponding to the target UI according to the generation request, where the information content indicates a sub-UI, and the type of the sub-UI is a two-dimensional 2D UI or a three-dimensional 3D UI;
the selection module is used for selecting the type of the sub-UI according to each piece of selection information to obtain at least one target sub-UI;
and the construction module is used for constructing at least one target UI according to each target sub-UI and the information content corresponding to each target sub-UI.
In a third aspect, an embodiment of the present application further provides an electronic device, where the electronic device includes: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, and the processor performing the method of the first aspect or any possible implementation manner of the first aspect by executing the computer instructions.
In a fourth aspect, the present application further provides a computer-readable storage medium, where computer instructions are stored, and the computer instructions are configured to cause the computer to execute the method in the first aspect or any possible implementation manner of the first aspect.
Embodiments of the present application provide a technical solution of a UI generation method: first, a generation request and at least one piece of selection information corresponding to a target UI are acquired; then, information content corresponding to the target UI is generated according to the generation request, where the information content indicates a sub-UI whose type is a two-dimensional (2D) UI or a three-dimensional (3D) UI; the type of the sub-UI is selected according to each piece of selection information to obtain at least one target sub-UI; and at least one target UI is constructed according to each target sub-UI and the information content corresponding to it. In this technical solution, the content used for the original UI generation is consolidated, and then the sub-UI type suited to inherit that content, either a 2D UI or a 3D UI, is selected according to information about the usage scenario and the running platform. Because the target sub-UI obtains the corresponding content by inheritance and uses it to construct the target UI, the consistency of the target UI content is guaranteed during reconstruction. As a result, the work of configuring information-interaction channels can be reduced in software where a 2D UI and a 3D UI coexist, lowering the cost of later operation and maintenance.
Drawings
FIG. 1 is a flowchart illustrating a UI generating method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an exemplary composition of a UI generating device provided by an embodiment of the present application;
FIG. 3 is an exemplary structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the present application is for the purpose of describing alternative embodiments and is not intended to limit the present application. As used in the specification of the present application and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well. It should also be understood that although the terms first, second, etc. may be used in the following embodiments to describe a certain class of objects, the objects are not limited to these terms; the terms are only used to distinguish particular objects of that class from one another. The following embodiments may likewise use the terms first, second, etc. for other classes of objects, which is not repeated here.
Embodiments of the present application provide a technical solution of a UI generation method: first, a generation request and at least one piece of selection information corresponding to a target UI are acquired; then, information content corresponding to the target UI is generated according to the generation request, where the information content indicates a sub-UI whose type is a two-dimensional (2D) UI or a three-dimensional (3D) UI; the type of the sub-UI is selected according to each piece of selection information to obtain at least one target sub-UI; and at least one target UI is constructed according to each target sub-UI and the information content corresponding to it. In this technical solution, the content used for the original UI generation is consolidated, and then the sub-UI type suited to inherit that content, either a 2D UI or a 3D UI, is selected according to information about the usage scenario and the running platform. Because the target sub-UI obtains the corresponding content by inheritance and uses it to construct the target UI, the consistency of the target UI content is guaranteed during reconstruction. As a result, the work of configuring information-interaction channels can be reduced in software where a 2D UI and a 3D UI coexist, lowering the cost of later operation and maintenance.
Any electronic device involved in the embodiments of the present application may be, for example, a mobile phone, a tablet computer, a wearable device (e.g., a smart watch or smart bracelet), a notebook computer, a desktop computer, or an in-vehicle device. The electronic device is preinstalled with the relevant software application. It can be understood that the embodiments of the present application do not place any limit on the specific type of the electronic device.
A user interface (UI) generally refers to the interface through which humans interact with machines or software. Among UIs for human-software interaction, commonly used software applications can be divided into two-dimensional (2D) applications and three-dimensional (3D) applications according to their usage scenarios. A 2D application generally runs on a mobile phone or a computer screen, while a 3D application is typically used in scenarios that support augmented reality (AR) technology.
Generally, a 2D application and a 3D application coexist within the same software engineering project. Since they serve different usage scenarios, each needs its own UI interface: a 2D UI interface for the 2D application and a 3D UI interface for the 3D application. At present, for a project in which a 2D application and a 3D application coexist, developers typically develop the 2D UI interface and the 3D UI interface separately and then establish a channel for information interaction between them.
However, as the amount of development content grows and the software is updated iteratively, every modification of the project content requires changing the contents of both the 2D UI interface and the 3D UI interface and reconfiguring the information-interaction channel for the modified content. This increases the later operation and maintenance cost of the software project.
Several exemplary embodiments are described below to explain the technical solutions of the embodiments of the present application and the technical effects they produce.
In a first aspect of the present application, a UI generation method is provided. Referring to fig. 1, fig. 1 is a schematic flow diagram of the UI generation method provided in an embodiment of the present application, which includes the following steps:
acquiring a generation request and at least one piece of selection information corresponding to a target UI;
generating information content corresponding to the target UI according to the generation request, wherein the information content indicates a sub-UI, and the type of the sub-UI is a two-dimensional (2D) UI or a three-dimensional (3D) UI;
selecting the type of the sub UI according to each piece of selection information to obtain at least one target sub UI;
and constructing at least one target UI according to each target sub-UI and the information content corresponding to each target sub-UI.
Optionally, the information content may be obtained by at least one preset UI interface, and each UI interface corresponds to one sub-UI.
For example, in the early stage of software development, a layer of UI operation interfaces is first abstracted. Suppose the requirement is to develop the interface of a login system (corresponding to the target system), and the login system must have both a 2D UI and a 3D UI. The abstracted UI operation interface then defines the content of the login interface and the related rules, which may include the content to be filled in (account, password, etc.) and the related operation buttons (confirm, one-key login, etc.). In general, an interface serves as a syntactic contract that all classes inheriting the interface should follow. Therefore, each sub-UI under the UI operation interface inherits all the content and rules defined by that interface and realizes the inherited content itself, which guarantees that different types of sub-UIs under the same UI interface stay consistent in the content they inherit.
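As a non-authoritative illustration only, the following TypeScript sketch models the abstracted UI operation interface for the login example above. The names (LoginUiContract, LoginUi2D, LoginUi3D) and the choice of an abstract base class to hold the shared content are assumptions made for this sketch, not identifiers or design details taken from the disclosure.

```typescript
// Sketch only: the shared login content is defined once and inherited by both
// sub-UI types, so later modifications touch a single place.
abstract class LoginUiContract {
  // Content and rules defined by the abstracted UI operation interface.
  readonly fields = ["account", "password"];        // interface fill-in content
  readonly actions = ["confirm", "one-key login"];  // related operation buttons
  abstract render(): string;                        // realised by each sub-UI type
}

// 2D sub-UI: realises the inherited content for a phone or computer screen.
class LoginUi2D extends LoginUiContract {
  render(): string {
    return `2D form with fields [${this.fields.join(", ")}] and buttons [${this.actions.join(", ")}]`;
  }
}

// 3D sub-UI: realises the same inherited content as a panel in an AR scene.
class LoginUi3D extends LoginUiContract {
  render(): string {
    return `3D AR panel with fields [${this.fields.join(", ")}] and buttons [${this.actions.join(", ")}]`;
  }
}
```

Because both sub-UI classes obtain fields and actions by inheritance rather than redefining them, the content presented by the 2D UI and the 3D UI stays consistent by construction, and a later change to the login content only needs to be made in one place.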
Optionally, the selection information includes: the type of platform on which the target UI runs and the launch scenario of the target UI.
Optionally, selecting the type of the sub-UI according to each piece of selection information may further include: selecting the type of the sub-UI according to one of the contents contained in each piece of selection information, so as to obtain at least one target sub-UI.
Optionally, the selection of the sub-UI type may further be implemented by screening according to the selection information through a pre-deployed UI manager.
For example, suppose the application scenario (corresponding to the selection information) of the target UI obtained by the system is a mobile phone terminal, which is not used in an AR-supported scenario, so the target UI needs to be constructed as a 2D UI. The system passes this information to the UI manager, which makes the decision and concludes that the target UI applies to a 2D scene. The UI manager therefore selects a sub-UI of the 2D UI type (corresponding to the target sub-UI) to inherit the content defined by the interface and uses it to construct the target UI.
Obviously, the selection based on the running-platform information in the selection information is handled similarly and is not described in detail here.
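Continuing the sketch above, the following TypeScript fragment shows one plausible form of the UI manager's screening step. The SelectionInfo shape and the decision rule (an AR launch scenario or an AR platform leads to a 3D UI, otherwise a 2D UI) are assumptions for illustration, and the fragment reuses the LoginUiContract, LoginUi2D and LoginUi3D classes from the previous sketch.

```typescript
// Hypothetical shape of one piece of selection information.
interface SelectionInfo {
  platform?: "phone" | "desktop" | "ar-headset"; // platform type running the target UI
  launchScenario?: "screen" | "ar";              // launch scenario of the target UI
}

type SubUiType = "2D" | "3D";

// Pre-deployed UI manager: screens the selection information and picks the sub-UI type.
class UiManager {
  selectSubUiType(selection: SelectionInfo): SubUiType {
    // Either content of the selection information can drive the decision.
    if (selection.launchScenario === "ar" || selection.platform === "ar-headset") {
      return "3D"; // AR-supported scenario: the target UI applies to a 3D scene
    }
    return "2D";   // mobile phone or computer screen: a 2D scene
  }

  // Returns the target sub-UI, which inherits the content defined by the interface.
  buildTargetSubUi(selection: SelectionInfo): LoginUiContract {
    return this.selectSubUiType(selection) === "3D" ? new LoginUi3D() : new LoginUi2D();
  }
}

// The worked example above: the selection information says "mobile phone, no AR".
const manager = new UiManager();
const targetSubUi = manager.buildTargetSubUi({ platform: "phone", launchScenario: "screen" });
console.log(targetSubUi.render()); // prints the 2D login form carrying the inherited content
```

The last three lines reproduce the worked example: selection information describing a mobile phone without AR support yields the 2D sub-UI, which inherits the interface content used to construct the target UI.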
Embodiments of the present application provide a technical solution of a UI generation method: first, a generation request and at least one piece of selection information corresponding to a target UI are acquired; then, information content corresponding to the target UI is generated according to the generation request, where the information content indicates a sub-UI whose type is a two-dimensional (2D) UI or a three-dimensional (3D) UI; the type of the sub-UI is selected according to each piece of selection information to obtain at least one target sub-UI; and at least one target UI is constructed according to each target sub-UI and the information content corresponding to it. In this technical solution, the content used for the original UI generation is consolidated, and then the sub-UI type suited to inherit that content, either a 2D UI or a 3D UI, is selected according to information about the usage scenario and the running platform. Because the target sub-UI obtains the corresponding content by inheritance and uses it to construct the target UI, the consistency of the target UI content is guaranteed during reconstruction. As a result, the work of configuring information-interaction channels can be reduced in software where a 2D UI and a 3D UI coexist, lowering the cost of later operation and maintenance.
The foregoing embodiments describe the UI generation method provided in the embodiments of the present application in terms of acquiring a generation request and at least one piece of selection information corresponding to a target UI, generating information content, obtaining target sub-UIs, and constructing at least one target UI according to each target sub-UI and the information content corresponding to it. It should be understood that the embodiments of the present application may implement these processing steps in the form of hardware, or of a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present application.
For example, the above implementation steps may implement the corresponding functions through software modules. As shown in fig. 2, the UI generating device may include an acquiring module, a generating module, a selecting module and a constructing module, and may be used to perform part or all of the operations of the UI generation method described above; a code sketch of how these modules might cooperate follows the module list below.
For example:
the acquisition module is used for acquiring a generation request corresponding to the target UI and at least one piece of selection information;
a generating module, configured to generate information content corresponding to the target UI according to the generation request, where the information content indicates a sub-UI, and the type of the sub-UI is a two-dimensional 2D UI or a three-dimensional 3D UI;
the selection module is used for selecting the type of the sub-UI according to each piece of selection information to obtain at least one target sub-UI;
and the construction module is used for constructing at least one target UI according to each target sub-UI and the information content corresponding to each target sub-UI.
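Under the same assumptions as the earlier sketches (SelectionInfo, UiManager and LoginUiContract), the following TypeScript fragment shows one hypothetical way the four modules could cooperate inside a UI generating device; the class UiGenerationDevice and the shapes of GenerationRequest and InformationContent are illustrative only, not the patented apparatus.

```typescript
// Hypothetical wiring of the four modules; a sketch, not the claimed implementation.
interface GenerationRequest { targetUiName: string; }

// Information content: what the target UI must present; its sub-UI type is still open.
interface InformationContent { fields: readonly string[]; actions: readonly string[]; }

class UiGenerationDevice {
  private manager = new UiManager();

  // Acquiring module: obtain the generation request and the selection information.
  acquire(request: GenerationRequest, selections: SelectionInfo[]) {
    return { request, selections };
  }

  // Generating module: derive the information content for the target UI from the request.
  generate(_request: GenerationRequest): InformationContent {
    // Placeholder: in this sketch every request resolves to the login content.
    return { fields: ["account", "password"], actions: ["confirm", "one-key login"] };
  }

  // Selection module: choose a target sub-UI per piece of selection information.
  select(selections: SelectionInfo[]): LoginUiContract[] {
    return selections.map((s) => this.manager.buildTargetSubUi(s));
  }

  // Construction module: build one target UI per target sub-UI and its content.
  construct(subUis: LoginUiContract[], content: InformationContent): string[] {
    return subUis.map((ui) => `${ui.render()} | content: ${content.fields.join("/")}`);
  }

  run(request: GenerationRequest, selections: SelectionInfo[]): string[] {
    const { request: req, selections: sel } = this.acquire(request, selections);
    return this.construct(this.select(sel), this.generate(req));
  }
}
```

For instance, calling new UiGenerationDevice().run({ targetUiName: "login" }, [{ platform: "phone" }, { launchScenario: "ar" }]) would construct a 2D target UI and a 3D target UI from the same information content.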
Therefore, in this scheme, a generation request and at least one piece of selection information corresponding to a target UI are first acquired; then information content corresponding to the target UI is generated according to the generation request, where the information content indicates a sub-UI whose type is a two-dimensional (2D) UI or a three-dimensional (3D) UI; the type of the sub-UI is selected according to each piece of selection information to obtain at least one target sub-UI; and at least one target UI is constructed according to each target sub-UI and the information content corresponding to it. In this technical solution, the content used for the original UI generation is consolidated, and then the sub-UI type suited to inherit that content, either a 2D UI or a 3D UI, is selected according to information about the usage scenario and the running platform. Because the target sub-UI obtains the corresponding content by inheritance and uses it to construct the target UI, the consistency of the target UI content is guaranteed during reconstruction. As a result, the work of configuring information-interaction channels can be reduced in software where a 2D UI and a 3D UI coexist, lowering the cost of later operation and maintenance.
It is understood that the functions of the above modules may be integrated into hardware entities: for example, the acquiring module may be integrated into a transceiver, while the generating module, the selecting module and the constructing module may be integrated into a processor, and the programs and instructions implementing these functions may be kept in a memory. As shown in fig. 3, an electronic device is provided that includes a processor, a transceiver and a memory. The transceiver is used to acquire the generation request in the UI generation method, and the memory is used to store the program/code of the foregoing UI generating device and may also store the code to be executed by the processor. When the processor executes the code stored in the memory, the electronic device is caused to perform part or all of the operations of the UI generation method.
The specific process is described in the above embodiments of the method, and is not described in detail here.
In a specific implementation, corresponding to the foregoing electronic device, an embodiment of the present application further provides a computer storage medium. The computer storage medium disposed in the electronic device may store a program, and when the program is executed, part or all of the steps in each embodiment of the UI generation method may be implemented. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
One or more of the above modules or units may be implemented in software, in hardware, or in a combination of both. When any of the above modules or units is implemented in software, it exists as computer program instructions stored in a memory, and a processor may be used to execute those instructions to implement the above method flows. The processor may include, but is not limited to, at least one of various computing devices that run software, such as a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a microcontroller (MCU), or an artificial intelligence processor, each of which may include one or more cores for executing software instructions to perform operations or processing. The processor may be built into an SoC (system on chip) or an application-specific integrated circuit (ASIC), or may be a separate semiconductor chip. In addition to the cores for executing software instructions, the processor may further include necessary hardware accelerators, such as a field programmable gate array (FPGA), a PLD (programmable logic device), or a logic circuit implementing dedicated logic operations.
When the above modules or units are implemented in hardware, the hardware may be any one or any combination of a CPU, a microprocessor, a DSP, an MCU, an artificial intelligence processor, an ASIC, an SoC, an FPGA, a PLD, a dedicated digital circuit, a hardware accelerator, or a discrete device that is not integrated, which may run necessary software or is independent of software to perform the above method flows.
Further, a bus interface may also be included in FIG. 3, which may include any number of interconnected buses and bridges, with one or more processors, represented by a processor, and various circuits of memory, represented by memory, linked together. The bus interface may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface. The transceiver provides a means for communicating with various other apparatus over a transmission medium. The processor is responsible for managing the bus architecture and the usual processing, and the memory may store data used by the processor in performing operations.
When the above modules or units are implemented using software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (SSD)), among others.
It should be understood that, in the various embodiments of the present application, the size of the serial number of each process does not mean the execution sequence, and the execution sequence of each process should be determined by the function and the inherent logic thereof, and should not constitute any limitation to the implementation process of the embodiments.
The various parts of this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the apparatus and system embodiments, since they are substantially similar to the method embodiments, the description is relatively brief, and reference may be made to the description of the method embodiments where relevant.
While alternative embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
The above-mentioned embodiments, objects, technical solutions and advantages of the present application are further described in detail, it should be understood that the above-mentioned embodiments are only examples of the present application, and are not intended to limit the scope of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present application should be included in the scope of the present invention.

Claims (8)

1. A method for generating a User Interface (UI), the method comprising:
acquiring a generation request and at least one piece of selection information corresponding to a target UI;
generating information content corresponding to the target UI according to the generation request, wherein the information content indicates a sub-UI, and the type of the sub-UI is a two-dimensional (2D) UI or a three-dimensional (3D) UI;
selecting the type of the sub UI according to each piece of selection information to obtain at least one target sub UI;
and constructing at least one target UI according to each target sub UI and the information content corresponding to each target sub UI.
2. The UI generation method according to claim 1, wherein the selection information comprises: the type of platform on which the target UI runs and the launch scenario of the target UI.
3. The UI generation method according to claim 1, wherein the information content is obtained by at least one preset UI interface, and each UI interface corresponds to one sub-UI.
4. The UI generation method according to claim 1, wherein the selecting the type of the sub-UI according to each of the selection information to obtain at least one target sub-UI can be implemented by a pre-configured UI manager.
5. The UI generation method according to claim 1 or 2, wherein selecting the type of the sub-UI according to each piece of the selection information further comprises: selecting the type of the sub-UI according to one of the contents contained in each piece of selection information to obtain at least one target sub-UI.
6. An apparatus for generating a UI, the apparatus comprising:
the acquisition module is used for acquiring a generation request and at least one piece of selection information corresponding to the target UI;
a generating module, configured to generate information content corresponding to the target UI according to the generation request, where the information content indicates a sub-UI, and the type of the sub-UI is a two-dimensional 2D UI or a three-dimensional 3D UI;
the selection module is used for selecting the type of the sub-UI according to each piece of selection information to obtain at least one target sub-UI;
and the construction module is used for constructing at least one target UI according to each target sub-UI and the information content corresponding to each target sub-UI.
7. An electronic device, characterized in that the electronic device comprises: a memory and a processor communicatively coupled to each other, the memory having stored therein computer instructions, the processor performing the method of any of claims 1-5 by executing the computer instructions.
8. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-5.
Application CN202210313437.5A (priority date 2022-03-28, filing date 2022-03-28): UI (user interface) generation method, device and equipment. Status: Pending. Published as CN114721653A (en).

Priority Applications (1)

Application Number: CN202210313437.5A; Priority Date: 2022-03-28; Filing Date: 2022-03-28; Publication: CN114721653A (en); Title: UI (user interface) generation method, device and equipment

Applications Claiming Priority (1)

Application Number: CN202210313437.5A; Priority Date: 2022-03-28; Filing Date: 2022-03-28; Publication: CN114721653A (en); Title: UI (user interface) generation method, device and equipment

Publications (1)

Publication Number: CN114721653A (en); Publication Date: 2022-07-08

Family

ID=82239717

Family Applications (1)

Application Number: CN202210313437.5A; Status: Pending; Publication: CN114721653A (en); Title: UI (user interface) generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN114721653A (en)

Similar Documents

Publication Publication Date Title
CN108958736B (en) Page generation method and device, electronic equipment and computer readable medium
CN109542399B (en) Software development method and device, terminal equipment and computer readable storage medium
CN109495584B (en) Internet of things equipment access method, device, equipment and medium
CN109992480A (en) A kind of log rank amending method, system and electronic equipment and storage medium
CN112214210A (en) Logistics business rule engine and configuration method, device, equipment and storage medium thereof
CN109766319B (en) Compression task processing method and device, storage medium and electronic equipment
US11847509B2 (en) Infrastructure base model API
CN109684008A (en) Card rendering method, device, terminal and computer readable storage medium
CN110737425B (en) Method and device for establishing application program of charging platform system
CN114035879A (en) Page theme color changing method and device, electronic equipment and computer readable medium
CN115392501A (en) Data acquisition method and device, electronic equipment and storage medium
Schauer et al. Internet of things service systems architecture
CN114721653A (en) UI (user interface) generation method, device and equipment
CN110750295A (en) Information processing method, device, electronic equipment and storage medium
CN111813407B (en) Game development method, game running device and electronic equipment
CN112068895B (en) Code configuration method, device, video playing equipment and storage medium
CN112418796B (en) Sub-process task node activation method and device, electronic equipment and storage medium
CN114610309A (en) Object configuration method, device, equipment and storage medium
CN111158684B (en) System creation method, device, electronic equipment and readable storage medium
CN114296855A (en) User interface state management method and device, electronic equipment and storage medium
CN111324368B (en) Data sharing method and server
CN113885886A (en) Method, device, system and storage medium for processing activity service
CN112181401A (en) Application construction method and application construction platform
CN113312025A (en) Component library generation method and device, storage medium and electronic equipment
CN113157360B (en) Method, apparatus, device, medium, and article for processing an API

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination