CN118132059A - Graphical programming method, device, equipment and storage medium - Google Patents

Graphical programming method, device, equipment and storage medium

Info

Publication number
CN118132059A
CN118132059A
Authority
CN
China
Prior art keywords
graphical
formation
code
interface
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410341037.4A
Other languages
Chinese (zh)
Inventor
吴企帅
王兴
王博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202410341037.4A priority Critical patent/CN118132059A/en
Publication of CN118132059A publication Critical patent/CN118132059A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/34Graphical or visual programming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Stored Programmes (AREA)

Abstract

The present application discloses a graphical programming method, device, equipment, and storage medium, and relates to the field of computer technology. The method comprises: displaying a code editing interface of a graphical programming tool, the code editing interface comprising a plurality of preset graphical codes; in response to an editing operation on a first graphical code among the plurality of preset graphical codes, displaying the edited first graphical code, the edited first graphical code being used to add at least one object to a first formation; in response to an editing operation on a second graphical code among the plurality of preset graphical codes, displaying the edited second graphical code; and running a graphical programming result to control the at least one object in the first formation, the graphical programming result comprising the edited first graphical code and the edited second graphical code. The method improves programming efficiency.

Description

Graphical programming method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for graphical programming.
Background
With the development of computer technology, graphical programming with graphical building blocks has become increasingly mature. A graphical building block can be understood as a graphical code in which a programming language is encapsulated; a user drags graphical building blocks to program graphically.
In the related art, when a plurality of objects need to be controlled, graphical programming is generally performed for each object separately. Even when the objects are to perform identical actions, i.e. the same control logic applies to all of them (e.g. all must move 5 meters to the left), each object must still be programmed individually.
Therefore, in the related art, controlling a plurality of objects with the same control logic still requires programming each object separately; this repeated programming results in low programming efficiency.
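By way of illustration only (this sketch is not part of the patent's disclosure, and all names are hypothetical), the repetition described above can be modeled in ordinary Python: without a grouping mechanism, the same movement logic must be attached to each object separately.

```python
# Hypothetical sketch of the related-art problem: identical control logic
# is duplicated once per object (class and method names are illustrative).

class Drone:
    def __init__(self):
        self.x = 0.0  # position along one axis, in meters

    def move_left(self, meters):
        self.x -= meters

drone_a, drone_b = Drone(), Drone()

# The same "move 5 meters to the left" logic, written once per object:
drone_a.move_left(5)  # script attached to object A
drone_b.move_left(5)  # separate, duplicated script attached to object B
```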
Disclosure of Invention
The embodiments of the present application provide a graphical programming method, device, equipment, and storage medium, which can improve graphical programming efficiency. The technical solution provided by the present application is as follows:
According to an aspect of an embodiment of the present application, there is provided a graphical programming method, the method including:
Displaying a code editing interface of a graphical programming tool, wherein the code editing interface comprises a plurality of preset graphical codes, and the graphical codes are used for processing one or more objects;
Displaying an edited first graphical code in response to an editing operation on a first graphical code among the plurality of preset graphical codes, wherein the first graphical code is used to add at least one object to a same formation, and the edited first graphical code is used to add at least one object to a first formation;
Displaying an edited second graphical code in response to an editing operation on a second graphical code among the plurality of preset graphical codes, wherein the second graphical code is used to uniformly control at least one object in the same formation, and the edited second graphical code is used to uniformly control at least one object in the first formation;
And running a graphical programming result to control at least one object in the first formation, wherein the graphical programming result comprises the edited first graphical code and the edited second graphical code.
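To make the claimed mechanism concrete, the following is a minimal Python sketch (not the patent's implementation; `Formation`, `join`, and `move_left` are hypothetical names): the first graphical code corresponds to joining objects to a formation, and the second graphical code to a single command that uniformly controls every member.

```python
# Illustrative model of the claimed formation mechanism; all names are
# hypothetical and stand in for edited graphical (building-block) codes.

class Drone:
    def __init__(self, name):
        self.name = name
        self.x = 0.0  # position along one axis, in meters

    def move_left(self, meters):
        self.x -= meters

class Formation:
    """Groups objects so that one command controls all members uniformly."""

    def __init__(self, name):
        self.name = name
        self.members = []

    def join(self, obj):
        # Effect of the edited first graphical code: add an object.
        self.members.append(obj)

    def move_left(self, meters):
        # Effect of the edited second graphical code: one set of code
        # drives every member of the formation.
        for obj in self.members:
            obj.move_left(meters)

drones = [Drone("a"), Drone("b"), Drone("c")]
formation_1 = Formation("formation 1")
for d in drones:
    formation_1.join(d)
formation_1.move_left(5)  # all three members move 5 meters to the left
```

One formation-level call replaces three per-object scripts, which is the efficiency gain the claims describe.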
According to an aspect of an embodiment of the present application, there is provided a graphical programming apparatus including:
The interface display module is configured to display a code editing interface of the graphical programming tool, wherein the code editing interface comprises a plurality of preset graphical codes, and the graphical codes are used to process one or more objects;
The graphical code display module is configured to display an edited first graphical code in response to an editing operation on a first graphical code among the plurality of preset graphical codes, wherein the first graphical code is used to add at least one object to a same formation, and the edited first graphical code is used to add at least one object to a first formation;
The graphical code display module is further configured to display an edited second graphical code in response to an editing operation on a second graphical code among the plurality of preset graphical codes, wherein the second graphical code is used to uniformly control at least one object in the same formation, and the edited second graphical code is used to uniformly control at least one object in the first formation;
The running module is configured to run a graphical programming result to control at least one object in the first formation, wherein the graphical programming result comprises the edited first graphical code and the edited second graphical code.
According to an aspect of an embodiment of the present application, there is provided a computer apparatus including a processor and a memory, the memory storing a computer program, the computer program being loaded and executed by the processor to implement the above-described graphical programming method.
According to an aspect of an embodiment of the present application, there is provided a computer-readable storage medium having stored therein a computer program loaded and executed by a processor to implement the above-described graphical programming method.
According to an aspect of an embodiment of the present application, there is provided a computer program product comprising a computer program loaded and executed by a processor to implement the above-described graphical programming method.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
Editing the first graphical code on the code editing interface of the graphical programming tool allows at least one object to be added to the first formation; editing the second graphical code, which uniformly controls the at least one object in the first formation, and then running the graphical programming result comprising the edited first graphical code and the edited second graphical code achieves unified control of every object added to the first formation. In the technical solution provided by the embodiments of the present application, objects that need to execute the same code logic are grouped into the first formation by the first graphical code, so that the at least one object added to the first formation shares the same edited second graphical code; repeated programming is unnecessary, and programming efficiency is improved.
Drawings
FIG. 1 is a schematic illustration of an implementation environment for an embodiment of the present application;
FIG. 2 is an interface diagram of a graphical programming method provided by the related art;
FIG. 3 is an interface diagram of a graphical programming method provided by the related art;
FIG. 4 is an interface diagram of a graphical programming method provided by the related art;
FIG. 5 is an interface diagram of a graphical programming method provided by the related art;
FIG. 6 is an interface diagram of a graphical programming method provided by the related art;
FIG. 7 is an interface diagram of a graphical programming method provided by the related art;
FIG. 8 is an interface diagram of a graphical programming method provided by one embodiment of the present application;
FIG. 9 is an interface diagram of a graphical programming method provided by another embodiment of the present application;
FIG. 10 is a schematic diagram of a virtual scene interface provided by one embodiment of the application;
FIG. 11 is a schematic diagram of a code editing interface provided by one embodiment of the present application;
FIG. 12 is a flow chart of a graphical programming method provided by one embodiment of the present application;
FIG. 13 is an interface diagram of a graphical programming method provided by another embodiment of the present application;
FIG. 14 is an interface diagram of a graphical programming method provided by another embodiment of the present application;
FIG. 15 is a flow chart of a graphical programming method provided by another embodiment of the present application;
FIG. 16 is a schematic diagram of an object selection method provided by one embodiment of the present application;
FIG. 17 is a schematic diagram of an object selection method according to another embodiment of the present application;
FIG. 18 is an interface diagram of a graphical programming method provided by another embodiment of the present application;
FIG. 19 is an interface diagram of a graphical programming method provided by another embodiment of the present application;
FIG. 20 is a block diagram of an object state update method provided by one embodiment of the present application;
FIG. 21 is a block diagram of an object state update method provided by another embodiment of the present application;
FIG. 22 is a block diagram of an object state update method provided by another embodiment of the present application;
FIG. 23 is a schematic diagram of code execution provided by one embodiment of the present application;
FIG. 24 is a schematic diagram of interface switching provided by one embodiment of the present application;
FIG. 25 is a flow chart of a graphical programming method provided by another embodiment of the present application;
FIG. 26 is a schematic illustration of a function panel interface provided by one embodiment of the present application;
FIG. 27 is a schematic illustration of a function panel interface provided by another embodiment of the present application;
FIG. 28 is a flow chart of a method of executing a patterned code provided by one embodiment of the present application;
FIG. 29 is a flow chart of a method of executing a patterned code according to another embodiment of the present application;
FIG. 30 is a flow chart of a method of executing a patterned code according to another embodiment of the present application;
FIG. 31 is a schematic diagram of a compilation process provided by an embodiment of the present application;
FIG. 32 is a schematic diagram of the structure of the components provided by one embodiment of the present application;
FIG. 33 is a code schematic provided by one embodiment of the application;
FIG. 34 is a block diagram of a graphical programming device provided in accordance with one embodiment of the present application;
FIG. 35 is a block diagram of a graphical programming device provided in accordance with another embodiment of the present application;
FIG. 36 is a block diagram of a computer device according to one embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Before describing the technical solution of the present application, some terms related to the present application are explained. The following explanations may be combined with the technical solutions of the embodiments of the present application, and all such combinations fall within the protection scope of the embodiments of the present application. Embodiments of the present application include at least some of the following.
Graphical programming: a programming language is encapsulated in graphical building blocks, and programming is completed by dragging and assembling the blocks; it is commonly used in programming education for teenagers and beginners.
IDE (Integrated Development Environment): provides a development environment for applications, and typically includes tools such as a code editor, a compiler, a debugger, and a graphical user interface.
Virtual scene: a mode of the graphical programming tool that displays the virtual scene both in its static (non-running) state and while running.
Code editing: a mode of the graphical programming tool that displays the graphical building-block code, in which the user can add, delete, write, and run code.
Referring to fig. 1, a schematic diagram of an implementation environment of an embodiment of the present application is shown. The implementation environment of the scheme can comprise: a terminal device 10 and a server 20.
The terminal device 10 includes, but is not limited to, a mobile phone, a tablet computer, an intelligent voice interaction device, a game console, a wearable device, a multimedia playback device, a PC (Personal Computer), a vehicle-mounted terminal, a smart home appliance, and the like. A client of a target application can be installed in the terminal device 10. Optionally, the target application may be an application that needs to be downloaded and installed, or an application that can be used without installation, which is not limited in the embodiments of the present application.
In the embodiments of the present application, the target application may be an application with a programming function. For example, an integrated development environment is provided in the target application, in which a user can write, compile, and run code. Illustratively, the target application includes a graphical programming tool. Illustratively, the graphical programming tool provides a plurality of preset graphical codes (i.e., graphical building blocks): the target application encapsulates a programming language in the graphical codes, displays them to the user as building blocks or other graphics, and the user drags the graphical codes to program. Illustratively, the target application may package the graphical codes so that they are directly available for use. The present application does not limit the specific functions of the target application. The target application may be a testing application, a development application, a game application, a VR (Virtual Reality) application, an AR (Augmented Reality) application, etc.; the embodiments of the present application do not limit its specific type. In other embodiments, the target application may be implemented as a separate functional module, i.e., one of the functional modules of an application under development. Illustratively, the terminal device 10 runs a client of the above target application.
The server 20 is used to provide background services for the client of the target application in the terminal device 10. For example, the server 20 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms, but is not limited thereto.
The terminal device 10 and the server 20 can communicate with each other via a network. The network may be a wired network or a wireless network.
In the method provided by the embodiments of the present application, the execution subject of each step may be a computer device. The computer device may be any electronic device capable of storing and processing data. For example, the computer device may be the terminal device 10 or the server 20 in fig. 1.
The following briefly takes a virtual unmanned aerial vehicle (drone) as an example of the object. With the development of artificial intelligence, programming methods based on graphical programming tools have become increasingly mature. The graphical programming tools provided by some applications are independent of the hardware environment, require only a network connection, and are highly standardized.
However, in the related art, when a plurality of objects must be grouped to achieve a consistent overall running effect, code can only be copied and written separately for the different objects. The process is cumbersome and repetitive, and it also increases the amount of building-block code, which does not help improve the efficiency of writing building-block code.
In some embodiments, as shown in fig. 2, a user interface 200 provided by a graphical programming tool in the related art includes a building-block area, an editing area, a stage area, and a material area. The interface layout is relatively fixed: a user selects building blocks in the building-block area as needed, splices them in the editing area, selects materials from the material area, and clicks a run button to preview the effect in the stage area. If the same running effect is to be achieved for different objects, the user must switch between the objects and write code for each of them separately during actual programming.
In some embodiments, as shown in FIG. 3, when an object 300 is to be controlled, building blocks must be spliced in the editing area to obtain a building-block code 301 that controls the object 300. In some embodiments, as shown in FIG. 4, when an object 400 is to be controlled, building blocks must be spliced in the editing area to obtain a building-block code 401 that controls the object 400. Even if the object 300 and the object 400 are to be controlled to achieve the same running effect, the user must switch between the objects and write code for each of them separately.
In some embodiments, as shown in fig. 5, a user interface 500 provided by another graphical programming tool in the related art includes a stage area, a material area, a building-block preset area, and a building-block editing area. The interface layout is relatively fixed: a user selects building blocks in the building-block preset area as needed, splices them in the building-block editing area, selects materials from the material area, and clicks a run button to preview the effect in the stage area. Again, if the same running effect is to be achieved for different objects, the user must switch between the objects and write code for each of them separately during actual programming.
In some embodiments, as shown in FIG. 6, when an object 600 is to be controlled, building blocks must be spliced in the editing area to obtain a building-block code 601 that controls the object 600. In some embodiments, as shown in FIG. 7, when an object 700 is to be controlled, building blocks must be spliced in the editing area to obtain a building-block code 701 that controls the object 700. Even if the object 600 and the object 700 are to be controlled to achieve the same running effect, the user must switch between the objects and write code for each of them separately.
In the related art, objects in the stage area cannot be controlled by a single set of building-block code. To achieve a unified overall running effect on the objects, multiple sets of code must be written repeatedly, which makes the workflow cumbersome. In addition, viewing and modifying the code for different objects requires switching back and forth between them, which greatly reduces debugging efficiency.
The technical solution provided by the embodiments of the present application creatively provides, in the building-block preset area, building-block codes that can simultaneously add a plurality of drones to a formation. Using these blocks, a user can add the drones to a formation as needed and complete flight control of the entire drone formation with a single set of code, which improves coding efficiency and enables group control, interconnected performance, and formation flight of the virtual drones.
The present application provides a set of building blocks for simultaneously controlling the formation assembly, interconnected performance, formation flight, and other flight states of a plurality of virtual drones. A user only needs to add one set of building-block code and, by modifying its parameters, apply it to the formation composed of the plurality of virtual drones; this simplifies the coding workflow and improves writing efficiency. The cumbersome operation of switching back and forth between different objects while debugging (viewing and modifying) the building-block code is avoided, which improves debugging efficiency. When adjusting parameters, a box-selection interaction improves the efficiency of selecting and grouping a plurality of drones.
Before describing the specific method of the present application, application scenarios of the graphical programming method applied to the graphical programming tool are illustrated. It should be understood that the following application scenarios are merely illustrative and not limiting.
In some embodiments, the graphical programming method and the code execution method in the embodiments of the present application may be used in the field of UGC (User Generated Content). Illustratively, UGC means that a user can present or otherwise provide original content to other users. UGC can be employed in the game field, the animation field, the self-media field, and so on. By way of example, UGC in the game field is described in connection with the graphical programming method of the present application and the code execution method described below. Illustratively, the target application is one that allows a user to customize a game; for example, the target application provides an interface, referred to as a game customization interface, in which the user can customize a game. A plurality of preset graphical codes are displayed in the game customization interface, and the graphical codes are used to process one or more objects. The objects include, but are not limited to, any controllable object that may appear in the game the user wants to customize, such as objects in a virtual scene: a virtual animal, a virtual plant, a virtual car, a virtual building, etc. In some embodiments, in response to an editing operation on a first graphical code among the plurality of preset graphical codes, the edited first graphical code is displayed; the first graphical code is used to add at least one object to a same formation, and the edited first graphical code is used to add at least one object to a first formation. Illustratively, a plurality of virtual stars are grouped into the same formation, and their direction of movement is controlled. Illustratively, a plurality of virtual plants are grouped into the same formation, and their growth direction and growth rate are controlled.
In some embodiments, in response to an editing operation on a second graphical code among the plurality of preset graphical codes, the edited second graphical code is displayed; the second graphical code is used to uniformly control at least one object in the same formation, and the edited second graphical code is used to uniformly control at least one object in the first formation. Illustratively, by editing the second graphical code, the user achieves unified control over the individual objects in the formation. In some embodiments, after the user edits the first graphical code and the second graphical code, a graphical programming result comprising the edited first graphical code and the edited second graphical code is run to control the at least one object in the first formation. For example, after the user clicks run, the code execution method described below is executed, so that the virtual scene built by the user can be seen and the objects in it move: a convoy of virtual cars on a street drives in a user-defined direction, and a flock of virtual birds in the sky flies along a user-defined trajectory. Through graphical programming, the user can build a game scene, a game level, and so on.
In some embodiments, animation is developed using the graphical programming method of the embodiments of the present application. A developer adds a scene in the target application, adds a plurality of animated objects, configures each of them, and controls them with graphical building-block code, including adding them to a formation and controlling the animated objects in the formation with the second graphical code. Illustratively, the graphical programming tool also provides a preview interface in which the control process for each animated object can be previewed while the graphical programming result runs. Illustratively, when the animation presented in the preview interface meets expectations, the previewed animation is taken directly as the generated animation.
In other embodiments, the graphical programming method in the embodiments of the present application is used to control virtual drones. Illustratively, the target application provides a virtual drone flight environment and drones. Illustratively, graphical building-block code controls the assembly of the drone formation and realizes dynamic changes such as interconnected performance and formation flight. Illustratively, building-block codes that can simultaneously add a plurality of drones to a formation are provided in the building-block preset area; a user can add the drones to a formation as needed and complete flight control of the entire drone formation with one set of code, improving coding efficiency and realizing formation control, interconnected performance, formation flight, and the like of the virtual drones.
In some embodiments, the graphical programming method in the embodiments of the present application is used to control real objects. Illustratively, specific parameters of a plurality of real objects (such as real toys, drones, or light bulbs) are imported into the target application, and the real objects are bound to the terminal device on which the target application runs. Illustratively, graphical building-block code controls the plurality of real objects to combine into a formation and realizes dynamic changes of the formation such as interconnected performance and flight. The plurality of real objects are added to a formation, and when the second graphical code runs, control signals are sent to the real objects so that they perform the corresponding actions; the real objects are thus controlled jointly with one set of code, which improves programming efficiency.
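A hedged sketch of this real-object scenario follows (again, not the patent's implementation; `send_signal` is a hypothetical stub standing in for the terminal-to-device link): running one formation-level command emits one control signal per bound device.

```python
# Illustrative sketch only: broadcasting one command to every device
# bound to a formation. All names are hypothetical.

def send_signal(device_id, command):
    """Stub for the terminal-to-device link (e.g. a wireless channel)."""
    return f"{device_id}:{command}"

class DeviceFormation:
    def __init__(self):
        self.device_ids = []

    def join(self, device_id):
        self.device_ids.append(device_id)

    def run(self, command):
        # One set of code -> one control signal per bound device.
        return [send_signal(d, command) for d in self.device_ids]

f = DeviceFormation()
f.join("drone-01")
f.join("bulb-02")
sent = f.run("power_on")  # one command fans out to both devices
```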
The following takes an object as a virtual unmanned aerial vehicle as an example, and a specific application scenario of the present application is illustrated by referring to the accompanying drawings.
In some embodiments, as shown in FIG. 8, a download interface 800 is provided in the target application; when the user clicks to purchase the target application, the programming functions it provides become available. Of course, the user may also download and use it without purchasing. After the user has the right to use the target application, a function selection interface 801 is displayed, in which the user can select simulation programming 802. After the user selects simulation programming 802, a user login interface 803 is displayed, where the user may select a different account to log into the target application. After the user logs in, a programming mode selection interface 804 is displayed, in which the user may select graphical programming or another programming mode; the present application is not limited in this regard.
In some embodiments, as shown in fig. 9, after the user selects graphical programming, an object number selection interface 900 is displayed, and the user may configure the number of objects, for example a single unmanned aerial vehicle. Illustratively, after the number of objects is configured, a scene selection interface 901 is displayed, and the user may configure the scene in which the objects are located, for example an xx smart city. After the objects and the scene have been configured, a virtual scene interface 902 is displayed, showing the virtual scene and the objects 903 created in it.
In some embodiments, as shown in FIG. 10, the graphical programming tool provides a user interface that includes a virtual scene interface 1000. In some embodiments, the virtual scene interface 1000 includes scene areas, navigation toolbars, map navigation areas, and function panel areas.
In some embodiments, as shown in FIG. 11, the user interface provided by the graphical programming tool also includes a code editing interface 1100. In some embodiments, the code editing interface 1100 includes a building block preset area, a building block editing area, a navigation toolbar, a map navigation area, and a function panel area.
In some embodiments, the user may drag and edit the preset building block code in the code editing interface 1100 to obtain the edited building block code. When the edited building block code is run, the control process for the object in the virtual scene can be seen in the virtual scene interface 1000.
In the technical solution provided by the embodiments of the present application, the first graphical code and the second graphical code are preset in the building block preset area; at least one object is added to the same formation through the first graphical code, and unified control of the at least one object added to the same formation is realized through the second graphical code. Repeated writing of code is avoided, which speeds up code writing.
The foregoing provides context for the present application; the graphical programming method of the embodiments of the present application is described below with reference to specific embodiments.
Referring to fig. 12, a flowchart of a graphical programming method according to an embodiment of the application is shown. The main execution body of each step of the method may be the terminal device 10 described above, or may be a client of a target application program running on the terminal device 10, or may be the server 20. In the following method embodiments, for convenience of description, only the execution subject of each step is described as "computer device". The method may include at least one of the following steps (1210-1240).
Step 1210, displaying a code editing interface of the graphical programming tool, where the code editing interface includes a plurality of preset graphical codes, and the graphical codes are used to process one or more objects.
In some embodiments, the graphical programming tool is an application or a functional module in an application that is provided with graphical programming functionality. Illustratively, the graphical programming tool is the target application described above. Illustratively, the graphical programming tool is a graphical programming module in the target application. Illustratively, the graphical programming tool is provided with graphical programming functionality.
In some embodiments, the code editing interface of the graphical programming tool is an interface provided by the graphical programming tool for editing code (including graphical code). For example, as shown in fig. 13, the code editing interface may be interpreted broadly, that is, the complete user interface 1300 is considered as a code editing interface, and the complete user interface 1300 includes at least one of a building block preset region 1303, a function panel region, a navigation toolbar, and a map navigation region in addition to the building block editing region 1304. Of course, the code editing interface presented in the present application can also be interpreted narrowly, i.e. to include only the building block editing area 1304. At this time, the preset block code is displayed not in the block edit area 1304 but in the block preset area 1303.
In some embodiments, the code editing interface includes a plurality of preset graphical codes. Illustratively, the plurality of preset graphical codes are displayed in block preset region 1303. A graphical code in the embodiments of the present application is code displayed in graphical form; a programming language is encapsulated in the graphical code, and different graphical codes encapsulate different program statements, that is, different control logic for the objects.
In some embodiments, the graphical codes include at least two types. The first type of graphical code is used to control a single object and can only control a single object. The second type of graphical code is used to control a formation, which includes at least one object. When multiple objects are present in the formation, this graphical code can control all of the objects within the formation. Illustratively, the preset graphical codes include at least one graphical code of the second type, and may further include at least one graphical code of the first type. The embodiments of the present application do not limit the types of the preset graphical codes.
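The two block types can be sketched as plain data structures. The following is a minimal JavaScript illustration, where the class names (SingleObjectBlock, FormationBlock) and fields are assumptions for illustration, not the application's actual implementation:

```javascript
// Hypothetical sketch of the two block types described above.
class SingleObjectBlock {
  constructor(objectId, action) {
    this.objectId = objectId; // controls exactly one object
    this.action = action;
  }
  run(objects) {
    this.action(objects.get(this.objectId));
  }
}

class FormationBlock {
  constructor(formationId, action) {
    this.formationId = formationId; // controls every member of a formation
    this.action = action;
  }
  run(objects, formations) {
    for (const id of formations.get(this.formationId)) {
      this.action(objects.get(id)); // same action applied to each member
    }
  }
}
```

The design choice mirrors the text: a second-type block references a formation rather than an object, so the number of controlled objects is decided by formation membership at run time, not by the block itself.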
Objects in embodiments of the present application include, but are not limited to, virtual objects or real objects. When the object is a real object, the object may be a movable or non-movable object in the real world, such as a real drone, a light bulb, a toy car, or the like. When the object is a virtual object, the object may be a movable or non-movable object in a virtual scene, such as a virtual drone, a virtual light bulb, a virtual car, or the like. In some embodiments, when the object is a virtual object, the virtual object may be created by a user. When the object is a real object, the graphical programming tool can be connected with at least one real object, and when the graphical programming result is run by the graphical programming tool, a control signal can be sent to the at least one real object to control the at least one real object.
In step 1220, in response to the editing operation for the first graphical code of the plurality of preset graphical codes, displaying the edited first graphical code, the first graphical code being used to add the at least one object to the same formation, and the edited first graphical code being used to add the at least one object to the first formation.
In some embodiments, in response to selection of at least one graphical code in the building block preset area, the selected graphical codes are displayed in the building block edit area 1304. Illustratively, a first graphical code 1301 and a second graphical code 1302 are included in the code editing interface. Illustratively, the first graphical code 1301 and the second graphical code 1302 may be edited separately, resulting in an edited first graphical code and an edited second graphical code.
Of course, when the default first graphical code already meets the user's requirement, the first graphical code need not be edited further. For example, if the first graphical code adds unmanned aerial vehicle 1 to formation 1 by default, and the user wants exactly that, the first graphical code need not be edited.
In some embodiments, the first graphical code is a graphical code for adding at least one object to the same formation. Illustratively, the editing operation for the first graphical code includes at least one of an editing operation on an object configuration region in the first graphical code and an editing operation on a formation configuration region in the first graphical code. In some embodiments, as shown in fig. 14, the editing operation for the first graphical code 1400 includes at least one of an editing operation on the object configuration region 1401 and an editing operation on the formation configuration region 1402. The first graphical code shows the text "unmanned aerial vehicle (No. 1) joins formation (1)", where the bracketed positions are editable by the user; that is, the user can select both the object joining the formation and the formation to be joined. Illustratively, the editing operation on the object configuration area configures which objects join the formation, and the editing operation on the formation configuration area configures which formation they join.
In some embodiments, the editing operation for the first graphical code is any operation that edits the first graphical code; the present application does not limit the specific operation type, which may be at least one of a click operation, a long-press operation, a slide operation, an input operation, and the like.
In some embodiments, the edited first graphical code includes identification information for the selected object, e.g., object 1 and object 2 are selected to join formation 1, then 1 and 2 are displayed in the edited first graphical code.
In step 1230, in response to the editing operation for the second graphical code in the plurality of preset graphical codes, displaying the edited second graphical code, where the second graphical code is used to perform unified control on at least one object in the same formation, and the edited second graphical code is used to perform unified control on at least one object in the first formation.
In some embodiments, the second graphical code is a graphical code for unified control of at least one object in the first formation. Illustratively, the editing operation for the second graphical code includes at least one of an editing operation on a formation configuration area in the second graphical code and an editing operation on a parameter configuration area in the second graphical code. In some embodiments, as shown in fig. 13, the text "set formation (1) to flying speed (x) m/s" is displayed in the second graphical code, where the bracketed positions are editable by the user: the formation can be configured (for example, formation 1), and the control parameter of the graphical code can be configured (for example, a flying speed of 10 m/s). Illustratively, the editing operation on the formation configuration area configures the formation controlled by the graphical code, and the editing operation on the parameter configuration area configures the control parameters of the graphical code.
In some embodiments, if the edited first graphical code adds object 1, object 2 and object 3 to formation 1, and the edited second graphical code sets formation 1 to move 5 m to the left, then when the edited first graphical code and the edited second graphical code are run, the edited second graphical code uniformly controls object 1, object 2 and object 3 to move 5 m to the left.
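The run semantics of this example can be sketched in a few lines of JavaScript. This is a minimal, hypothetical model (function names and data layout are assumptions for illustration): the first block adds objects 1 to 3 to formation 1, and the second moves the whole formation 5 m to the left in one step.

```javascript
// Hypothetical model of running the edited first and second graphical codes.
const formations = new Map(); // formationId -> Set of member objectIds
const positions = new Map([[1, { x: 0 }], [2, { x: 10 }], [3, { x: 20 }]]);

function joinFormation(formationId, objectIds) {
  if (!formations.has(formationId)) formations.set(formationId, new Set());
  for (const id of objectIds) formations.get(formationId).add(id);
}

function moveFormationLeft(formationId, meters) {
  for (const id of formations.get(formationId)) {
    positions.get(id).x -= meters; // one code path drives every member
  }
}

joinFormation(1, [1, 2, 3]);   // effect of the edited first graphical code
moveFormationLeft(1, 5);       // effect of the edited second graphical code
```

After running, objects 1, 2, and 3 have each shifted 5 m left from their individual starting positions, without any per-object code.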
At step 1240, the graphical programming results are run to control at least one object in the first formation, the graphical programming results including the edited first graphical code and the edited second graphical code.
In some embodiments, prior to step 1240, further comprising displaying the graphical programming results. Illustratively, the edited graphical code displayed by the building block editing area is referred to as a graphical programming result.
In some embodiments, the graphical programming result includes the edited first graphical code and the edited second graphical code. Of course, the graphical programming result may also include an edited third graphical code. The edited third graphical code may be a graphical code of the first type described above or of the second type described above; that is, the edited third graphical code may control one or more objects. In other words, the objects in the embodiments of the present application may be controlled jointly in a formation or controlled separately, which is not limited here. It should be understood that each object in the present application is an independent instance; in the related art, a plurality of objects cannot be controlled together as a group, but only separately. The technical solution provided by the embodiments of the present application can control not only a single object but also a plurality of formations simultaneously, uniformly controlling the different objects in each formation, which helps improve the diversity and flexibility of object control.
In some embodiments, a run control is displayed. And running the graphical programming result in response to the operation for the running control. In some embodiments, as shown in FIG. 13, the graphical programming results in the building block edit field 1304 are run to control at least one object in formation 1 in response to an operation on the run control 1305.
The controls in the embodiments of the present application are all user interface (UI) controls. A UI control is any visual control or element that can be seen on the user interface of the application program, such as a picture, input box, text box, button, or label, some of which respond to user operations; for example, the run control responds by running the graphical programming result. The UI controls involved in the embodiments of the present application include, but are not limited to, the run control.
In the technical solution provided by the embodiments of the present application, the first graphical code is edited through the code editing interface of the graphical programming tool, so that at least one object can be added to the first formation; the second graphical code is edited, and the edited second graphical code uniformly controls the at least one object in the first formation; and by running the graphical programming result that includes the edited first graphical code and the edited second graphical code, unified control of the at least one object added to the first formation is realized. By forming the objects that need to execute the same code logic into the first formation through the first graphical code, the at least one object added to the first formation shares the same edited second graphical code, so repeated programming is unnecessary and programming efficiency is improved.
Referring to fig. 15, a flowchart of a graphical programming method according to another embodiment of the application is shown. The main execution body of each step of the method may be the terminal device 10 described above, or may be a client of a target application program running on the terminal device 10, or may be the server 20. In the following method embodiments, for convenience of description, only the execution subject of each step is described as "computer device". The method may include at least one of the following steps (1510-1550).
At step 1510, a code editing interface of the graphical programming tool is displayed, the code editing interface including a plurality of preset graphical codes, the graphical codes being configured to process one or more objects.
In response to the triggering operation for the object configuration area in the first graphical code, an object selection interface is displayed, in which a plurality of elements, each element corresponding to an object, are displayed, step 1520.
In some embodiments, the first graphical code includes an object configuration area therein for configuring at least one object joining the same formation.
In some embodiments, the object configuration region is a block sub-region of the displayed first graphical code. The present application is not limited to the display position of the object configuration area in the user interface, such as the object configuration area in the middle of the first graphic code.
In some embodiments, the object selection interface is displayed in full screen form or in non-full screen form. Illustratively, when the object selection interface is displayed in a non-full screen form, as shown in fig. 14, the object selection interface 1403 is displayed in response to the configuration of the area 1401 for the object in the first graphical code.
In some embodiments, the object selection interface includes a plurality of elements, one for each object. Illustratively, each element corresponds to the number of an object: if the created objects in the virtual scene are numbered 1 to N (N being a positive integer), number 1 corresponds to object 1, number 2 corresponds to object 2, and so on. In other embodiments, each element corresponds to a thumbnail of an object; this form suits cases where the appearance of the objects differs greatly, since an object can then be characterized by its thumbnail, such as a head picture. Illustratively, the object selection interface includes a head picture of a person object, a head picture of a puppy object, a head picture of a cat object, and so on, and the user determines the corresponding object by directly selecting its picture.
Of course, the arrangement order and arrangement position of the plurality of elements in the object selection interface are not limited in this application, for example, the elements may be arranged from top to bottom in the order of the numbers from small to large, or may be arranged from left to right in the time sequence of object creation.
In step 1530, in response to a selection operation for at least one first element of the plurality of elements, displaying the edited first graphical code, the object corresponding to the at least one first element joining the first formation.
In some embodiments, the first element is the element to which the select operation points. Illustratively, the selecting operation for at least one first element of the plurality of elements is an operation of selecting the at least one first element. The present application is not limited to the operation type of the operation, such as a click operation, a long press operation, a slide operation, and the like.
In some embodiments, the selection operation includes a click operation for each first element. That is, the user needs to click on the element to be added to the formation directly, and click on the element again directly when the user wants to cancel the selection. In some embodiments, as shown at 1600 in fig. 16, when an element is selected, a click operation may be performed directly on the element, and when an element is deselected, as shown at 1610 in fig. 16, the element is deselected by clicking again.
In some embodiments, the selection operation is a framing operation starting at a first position and ending at a second position, and the elements included in the framing region constructed from the first and second positions are the first elements. In some embodiments, the framing region is a rectangular box formed by long-pressing and sliding from the first position to the second position. When the user wants to cancel the selection, the user box-selects the elements again. In some embodiments, as shown in 1700 of fig. 17, an element may be selected by long-press box-selecting it (i.e., a framing operation), and, as shown in 1710 of fig. 17, deselected by long-press box-selecting it again.
In some embodiments, as shown in sub-graph a of FIG. 18, a user may edit the graphical code in the code editing interface. As shown in sub-graph b of fig. 18, the user drags a plurality of graphical codes from the building block preset area to the code editing area. As shown in sub-graph c of fig. 18, the user may edit the first graphical code 1800 therein. As shown in sub-graph d of fig. 18, the user may select a plurality of first elements after triggering display of the object selection interface 1801. As shown in fig. 19, upon exiting the editing of the first graphical code, the edited first graphical code 1900 is displayed.
The technical solution provided by the embodiments of the present application offers two ways of selecting elements. In the first, a click operation is performed on each element whose object is to join the same formation; this guarantees selection accuracy and avoids mis-selection. In the second, a framing (box-select) operation selects a large number of elements at once; this guarantees selection efficiency and is faster when many objects need to join the formation.
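The framing mode reduces to a point-in-rectangle test over the element tiles. The following is a hedged JavaScript sketch, where the element layout (a 3-column grid, 40 px apart) and the function name are assumptions for illustration only:

```javascript
// Hypothetical sketch: which elements fall inside a drag rectangle from a
// first position p1 to a second position p2 (any drag direction).
function elementsInBox(centers, p1, p2) {
  const [minX, maxX] = [Math.min(p1.x, p2.x), Math.max(p1.x, p2.x)];
  const [minY, maxY] = [Math.min(p1.y, p2.y), Math.max(p1.y, p2.y)];
  return Object.entries(centers)
    .filter(([, c]) => c.x >= minX && c.x <= maxX && c.y >= minY && c.y <= maxY)
    .map(([id]) => Number(id));
}

// Assumed layout: elements 1-6 on a 3-column grid, tile centers 40 px apart.
const centers = { 1: { x: 0,  y: 0 },  2: { x: 40, y: 0 },  3: { x: 80, y: 0 },
                  4: { x: 0,  y: 40 }, 5: { x: 40, y: 40 }, 6: { x: 80, y: 40 } };
```

Normalizing the rectangle with min/max lets the same code handle a drag that starts at the bottom-right and ends at the top-left.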
In some embodiments, in response to the trigger operation on the object configuration area in the first graphical code, the states respectively corresponding to the plurality of elements are determined according to the correspondence between elements and objects: an element is in the unselected state when it corresponds to an object that has been created in the virtual scene, and in the not-created state when it does not correspond to any created object in the virtual scene.
For example, M elements are preset, that is, at most M objects are allowed in the scene, where M is a positive integer. When fewer than M objects have been created in the scene, the elements beyond the created objects have no corresponding objects, i.e., are in the not-created state.
In some embodiments, for an mth element of the plurality of elements (m being a positive integer): if the mth element is selected and its state is the not-created state, its state is not changed; if the mth element is selected and its state is the unselected state, its state is changed to the selected state. In some embodiments, when the mth element is deselected and its state is the selected state, its state is changed to the unselected state.
Illustratively, clicking the mth element while it is in the unselected state puts it in the selected state; clicking it again changes it from the selected state back to the unselected state.
Illustratively, long-press box-selecting the mth element while it is in the unselected state puts it in the selected state; box-selecting it again changes it from the selected state back to the unselected state.
Illustratively, clicking or long-press box-selecting the mth element while it is in the not-created state leaves its state unchanged.
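The three-state behavior above can be sketched as a small state machine. A minimal JavaScript illustration, where the state names ('notCreated', 'unselected', 'selected') are assumptions chosen to match the text:

```javascript
// Hypothetical sketch of the per-element state machine: 'notCreated'
// elements ignore input; clicking (or box-selecting) toggles the other two.
function toggleElement(state) {
  switch (state) {
    case 'notCreated': return 'notCreated'; // not clickable, state unchanged
    case 'unselected': return 'selected';
    case 'selected':   return 'unselected';
    default: throw new Error(`unknown state: ${state}`);
  }
}

// With M preset elements and n created objects (n <= M), elements 1..n start
// as 'unselected' and elements n+1..M start as 'notCreated'.
function initialStates(M, n) {
  return Array.from({ length: M }, (_, i) =>
    i < n ? 'unselected' : 'notCreated');
}
```

Because click and box-select apply the same toggle, both selection modes share one transition function and differ only in which elements they target.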
Referring to fig. 20, a block diagram of an object state updating method according to an embodiment of the present application is shown. As shown in 2000 of fig. 20, taking the object as an unmanned aerial vehicle as an example: clicking the unmanned aerial vehicle parameter (i.e., the object configuration area) in the formation-related building block (i.e., the first graphical code) pops up the drone-selection popup (i.e., the object selection interface), implemented via an extended-popup interface. A custom input type FieldFormationDronePopup is created that inherits from Blockly's FieldTextInput class; it handles the click event and displays the drone-selection popup (the popup for short). Assuming that at most M unmanned aerial vehicles may be added to a scene, the numbers 1 to M are always displayed in the popup, representing unmanned aerial vehicles 1 to M in order of creation. Each number has three states: not-created, unselected, and selected. The not-created state indicates that no unmanned aerial vehicle with that sequence number exists in the scene; if the number of unmanned aerial vehicles in the scene is n (n < M), the numbers n+1 to M are in the not-created state. The unselected state indicates that the unmanned aerial vehicle with that sequence number has not been selected; the selected state indicates that it has been selected.
Referring to fig. 21, a block diagram of an object state updating method according to another embodiment of the present application is shown. Taking the object as an unmanned aerial vehicle as an example, as shown in 2100 of fig. 21, for a click operation: if the sequence number (i.e., the element) is currently in the not-created state, it is not clickable, does not respond to the click operation, and remains in the not-created state. If the sequence number is currently in the unselected state, clicking it changes it to the selected state; the unmanned aerial vehicle entry corresponding to that sequence number in the function panel area is highlighted and flashed to prompt the user, and the corresponding unmanned aerial vehicle identifier in the map navigation area is also highlighted. If the sequence number is currently in the selected state, clicking it changes it to the unselected state; the corresponding unmanned aerial vehicle entry in the function panel area is given a different highlight flash (distinguished from the selected state) to prompt the user, and the corresponding unmanned aerial vehicle identifier in the map navigation area is given a different highlight (distinguished from the selected state).
Referring to fig. 22, a block diagram of an object state updating method according to another embodiment of the present application is shown. As shown in 2200 of fig. 22, taking the object as an unmanned aerial vehicle as an example, the states of sequence numbers (i.e., elements) outside the box (the framing region) are not changed. For all sequence numbers within the box, the following applies. If a sequence number is currently in the not-created state, its state does not change. If it is currently in the unselected state, it changes to the selected state; the unmanned aerial vehicle entry corresponding to that sequence number in the function panel area is highlighted and flashed to prompt the user, and the corresponding unmanned aerial vehicle identifier in the map navigation area is also highlighted. If it is currently in the selected state, it changes to the unselected state; the corresponding unmanned aerial vehicle entry in the function panel area is given a different highlight (distinguished from the selected state) to prompt the user, as is the corresponding unmanned aerial vehicle identifier in the map navigation area.
In some embodiments, the first element in the selected state is displayed differently from the other elements in the unselected state. In some embodiments, the first element in the selected state is different from the other elements in the unselected state in terms of display color, brightness, gray scale, etc.
In some embodiments, the object corresponding to the first element in the selected state and the objects corresponding to the other elements in the unselected state are displayed differently in the virtual scene. In some embodiments, when a user selects a target element in the object selection interface, the object corresponding to the target element in the selected state is highlighted in the virtual scene, while the objects pointed to by the unselected elements are not highlighted. That is, when the user selects several elements, the corresponding objects are highlighted in the virtual scene, where highlighting also includes blinking, marking in yellow, and so on.
In some embodiments, the mark of the object corresponding to the first element in the selected state is displayed in the map navigation interface differently from the marks of the objects corresponding to the other elements in the unselected state, where the map navigation interface displays a map of the virtual scene and, in the map, a mark for each created object in the virtual scene. In some embodiments, the map navigation interface corresponds to the map navigation area described above. The object corresponding to the first element in the selected state is highlighted in the map navigation interface, where highlighting further includes flashing, marking in yellow, and so on. A mark in the embodiments of the present application also indicates the object; for example, the mark is at least one of a head picture, a number, and an identifier of the object.
In some embodiments, the identifier of the object corresponding to the first element in the selected state is displayed in the function panel interface differently from the identifiers of the objects corresponding to the other elements in the unselected state, where the function panel interface displays configuration information for each created object in the virtual scene, the configuration information including the identifier of the object. In some embodiments, the function panel interface corresponds to the function panel area described above. The identifier of the object corresponding to the first element in the selected state is highlighted in the function panel interface, where highlighting further includes flashing, marking in yellow, and so on.
In some embodiments, the above distinguishing displays differ from one another.
According to the technical scheme provided by the embodiment of the application, the selected objects and other objects are displayed in the map navigation interface, the function panel interface and the object selection interface in a distinguishing manner in the process of selecting the objects, so that a user can clearly determine which objects are selected, and further whether the selection is wrong or not is observed more intuitively, and the user can modify the objects in time.
In some embodiments, the trigger operation on the object configuration area in the first graphical code and the selection operation on at least one first element of the plurality of elements may be two separate operations, or may be a single continuous sliding operation without lifting the finger. In the latter case, the object selection interface is called out by long-pressing the object configuration area in the first graphical code, and, without lifting the finger, the elements to join the same formation are then selected on the object selection interface in one continuous gesture, reducing the number of operation steps.
In some embodiments, steps 1520 and 1530 above configure at least one object to join the same formation. In other embodiments, identification information of at least one object to be added to the same formation may be directly input in the object configuration area, and the at least one object added to the first formation is determined according to the input identification information. The input may be text input or speech input.
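To illustrate the direct-input variant above, the following is a minimal sketch of extracting object identifiers from text (or transcribed speech) typed into the object configuration area. The function name and the numeric-identifier assumption are illustrative, not part of the original scheme.

```python
import re

# Hypothetical parser: pull the numeric object identifiers out of
# free-form text entered in the object configuration area.
def parse_object_ids(text):
    return [int(tok) for tok in re.findall(r"\d+", text)]

print(parse_object_ids("objects 1, 2 and 5"))  # [1, 2, 5]
```

A speech-input path would run recognized text through the same parser, so both input modes resolve to the same identifier list.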
Steps 1520 and 1530 above illustrate one method of configuring at least one object to join the same formation. In other embodiments, the following steps 1560 (not shown) and 1570 (not shown) may also be employed to configure at least one object to join the same formation.
Step 1560, in response to the triggering operation for the object configuration area in the first graphical code, displaying the virtual scene interface in the editing state, wherein the virtual scene interface displays the virtual scene and the created objects in the virtual scene, and the positions of different objects in the virtual scene are different.
Illustratively, objects in the virtual scene interface in the editing state are allowed to be edited, which distinguishes it from the virtual scene interface described earlier.
Illustratively, the virtual scene interface in the editing state is displayed in non-full-screen form, similar to the object selection interface 1403 in fig. 14 described above.
Illustratively, the virtual scene interface in the editing state is displayed in full-screen form. That is, in response to the triggering operation for the object configuration area in the first graphical code, the virtual scene interface in the editing state is displayed full screen.
Step 1570, responsive to a selection operation for at least one first object of the created objects, displaying the edited first graphical code, the at least one first object joining the first formation.
In some embodiments, the selecting operation for at least one first object of the created objects is an operation to select the at least one object. In some embodiments, the selection operation includes a click operation on each object. In some embodiments, the selecting operation is a framing operation starting at a third location and ending at a fourth location, and the objects included in the framing region constructed from the third and fourth locations are the selected objects. Similar cases are explained above and are not repeated here.
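The framing rule above can be sketched as a simple rectangle test: objects whose scene positions fall inside the rectangle spanned by the start (third) and end (fourth) press locations are selected. All names and coordinates below are illustrative assumptions.

```python
# Hypothetical frame-selection sketch: the rectangle is spanned by the
# start and end locations of the framing gesture, in either direction.
def frame_select(objects, start, end):
    (x1, y1), (x2, y2) = start, end
    lo_x, hi_x = sorted((x1, x2))
    lo_y, hi_y = sorted((y1, y2))
    return [
        obj["id"]
        for obj in objects
        if lo_x <= obj["pos"][0] <= hi_x and lo_y <= obj["pos"][1] <= hi_y
    ]

objects = [
    {"id": 1, "pos": (2, 2)},
    {"id": 2, "pos": (8, 1)},  # outside the frame below
    {"id": 3, "pos": (4, 5)},
]
print(frame_select(objects, (1, 1), (5, 6)))  # [1, 3]
```

Sorting the coordinates makes the gesture direction irrelevant, so dragging from bottom-right to top-left selects the same region.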
In some embodiments, the edited first graphical code includes identification information of the selected objects; for example, if object 1 and object 2 are selected to join formation 1, identification 1 and identification 2 are displayed in the edited first graphical code.
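As a minimal sketch of what the edited first graphical code might carry, the following data structure stores the formation number and the identifiers chosen through the selection operation, and renders the text shown in the block. The class and field names are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass, field

# Hypothetical model of an edited "join formation" graphical code block.
@dataclass
class JoinFormationBlock:
    formation_id: int
    object_ids: list = field(default_factory=list)

    def label(self):
        # Text displayed inside the block's object configuration area.
        ids = ", ".join(str(i) for i in self.object_ids)
        return f"objects [{ids}] join formation {self.formation_id}"

block = JoinFormationBlock(formation_id=1, object_ids=[1, 2])
print(block.label())  # objects [1, 2] join formation 1
```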
The technical scheme provided by the embodiment of the application allows the user to directly select, in the virtual scene, the objects to join the formation, so that object selection is more intuitive and less error-prone, which improves programming efficiency.
The above embodiments describe in detail how to select the objects that join the formation; of course, the formation to which the first graphical code points may also be configured.
In some embodiments, the first graphical code further includes a formation configuration area, where the formation configuration area is configured to configure formation of at least one object join.
In some embodiments, in response to a trigger operation for a formation configuration area in the first graphical code, displaying a formation selection interface in which a plurality of formation numbers are displayed, each formation number corresponding to a formation; and displaying the edited first graphical code in response to a selection operation for a first formation number in the plurality of formation numbers, wherein the formation corresponding to the first formation number is the first formation.
In some embodiments, the triggering operation for the formation configuration area in the first graphical code and the selecting operation for the first formation number of the plurality of formation numbers are two separate operations, or may be a single sliding operation performed without lifting the finger. When the two are a single sliding operation without lifting the finger, for example, the formation selection interface is invoked by long-pressing the formation configuration area in the first graphical code, and, without lifting the finger, the slide continues to the first formation number on the interface, thereby reducing the number of operation steps.
In other embodiments, identification information of a formation may be directly input in the formation configuration area, and the pointed-to formation is determined according to the input identification information. The input may be text input or speech input.
In step 1540, in response to the editing operation for the second graphical code in the plurality of preset graphical codes, displaying the edited second graphical code, where the second graphical code is used to uniformly control at least one object in the same formation, and the edited second graphical code is used to uniformly control at least one object in the first formation.
Step 1550, running a graphical programming result to control at least one object in the first formation, where the graphical programming result includes the edited first graphical code and the edited second graphical code.
In some embodiments, the graphical programming result is run, and a control process for at least one object in the first formation is displayed in the virtual scene. For example, if the graphical programming result requires formation 1 to fly 10 meters to the left, and formation 1 includes object 1 and object 2, an animation of object 1 and object 2 flying 10 meters to the left is displayed in the virtual scene. In some embodiments, as shown in fig. 23, in response to an operation on the run control 2301, the graphical programming result is run, and a control process 2302 for at least one object in the first formation is displayed in the virtual scene.
The technical scheme provided by the embodiment of the application can provide the running result for the user to preview in an animation mode, is beneficial to the user to quickly find the problems in the graphical programming result, thereby quickly modifying and improving the programming efficiency.
In some embodiments, at least one of a scene control, a code editing control is displayed; the virtual scene interface comprises a virtual scene and objects created in the virtual scene, and the code editing control is used for switching and displaying the code editing interface. Running a graphical programming result, responding to the triggering operation of the scene control, displaying a virtual scene interface, and displaying a control process of at least one object in the first formation in the virtual scene interface; and in response to the triggering operation of the code editing control, redisplaying the code editing interface, wherein the code editing interface comprises a graphical programming result.
In some embodiments, as shown in fig. 24, a scene control 2401 and a code editing control 2402 are displayed. In response to a trigger operation to the scene control 2401, the virtual scene interface 2400 is displayed. In response to a triggering operation on the code editing control 2402, a code editing interface 2404 is displayed. The virtual scene interface and the code editing interface can be switched through the scene control and the code editing control, and the virtual scene interface and the code editing interface can be displayed on the same screen, for example, the virtual scene interface is displayed on the left side, and the code editing interface is displayed on the right side. Of course, the code editing interface may also be displayed full screen, with the virtual scene interface displayed in a picture-in-picture format.
In some embodiments, in response to a formation setting operation on a target object among the objects created in the virtual scene interface, a formation selection interface is displayed, where the formation selection interface includes a plurality of formation numbers, each corresponding to one formation; and in response to a selection operation on a second formation number among the plurality of formation numbers, it is determined that the target object joins the formation corresponding to the second formation number. In this way, the formation of an object can be modified directly in the virtual scene interface, for example by long-pressing the target object in the virtual scene interface to display the formation selection interface. Directly modifying the formation in the scene combines the code with the scene, improves the flexibility of code editing, and improves programming efficiency.
Illustratively, modifications to the formation at interfaces other than the code editing interface will be synchronized to the edited first graphical code and the edited second graphical code such that the scene and code remain synchronized, avoiding errors.
According to the technical scheme provided by the embodiment of the application, the virtual scene interface and the code editing interface are switched or displayed on the same screen, so that a user can conveniently edit codes and view the control process of each object in the scene, and the programming efficiency is improved.
Referring to fig. 25, a flowchart of a graphical programming method according to another embodiment of the application is shown. The main execution body of each step of the method may be the terminal device 10 described above, or may be a client of a target application program running on the terminal device 10, or may be the server 20. In the following method embodiments, for convenience of description, only the execution subject of each step is described as "computer device". The method may include at least one of the following steps (2510 to 2570).
In step 2510, in response to a triggering operation of creating a control for an object in the function panel interface, configuration information corresponding to the newly added object is displayed in the function panel interface, where the function panel interface is configured to display configuration information corresponding to each created object in the virtual scene.
In some embodiments, in response to a trigger operation for creating a control for an object in the function panel interface, configuration information corresponding to a newly added object displayed in the function panel interface is initialized information or default configuration information.
In some embodiments, in response to a trigger operation to create a control for an object in the function panel interface, the newly added object is displayed in the virtual scene.
In some embodiments, as shown in fig. 26, in response to a trigger operation for an object creation control 2600 in a function panel interface, configuration information 2601 corresponding to a newly added object is displayed in the function panel interface.
Step 2520, in response to the modification operation for the configuration information corresponding to the newly added object, displaying the modified configuration information.
In other embodiments, in addition to modifying the configuration information corresponding to the newly added object, other configuration information corresponding to the created object may be modified.
In some embodiments, as shown in fig. 27, in response to a modification operation for the configuration information 2701 corresponding to the created object, the modified configuration information is displayed.
In step 2530, newly added objects created based on the modified configuration information are displayed in the virtual scene.
In some embodiments, the newly added object is modified in the virtual scene based on the modified configuration information.
In some embodiments, the object 2702 corresponding to the modified configuration information is displayed in the virtual scene.
In some embodiments, a search input field is included in the function panel interface.
Illustratively, the search bar is used to search for created objects to view configuration information for the objects.
In some embodiments, in response to an input operation for a search input field, search information for a created object input at the search input field is displayed; and displaying the configuration information corresponding to the object pointed by the search information.
In some embodiments, the computer device matches the keywords in the search information with the configuration information of each created object according to the search information input by the user, and determines the created object with the highest matching degree as the object pointed by the search information. Further, the configuration information corresponding to the object pointed by the search information is displayed.
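The keyword-matching step above can be sketched as follows: score each created object by how many search keywords appear in its configuration information, and return the highest-scoring object as the one the search points to. The token-overlap scoring rule and all names here are illustrative assumptions, not the patent's specified algorithm.

```python
# Hypothetical matching sketch: the object whose configuration shares
# the most tokens with the search keywords is the best match.
def best_match(search_text, objects):
    keywords = set(search_text.lower().split())

    def score(obj):
        config_text = " ".join(str(v) for v in obj["config"].values())
        return len(keywords & set(config_text.lower().split()))

    return max(objects, key=score)

objects = [
    {"id": 1, "config": {"name": "drone alpha", "color": "red"}},
    {"id": 2, "config": {"name": "drone beta", "color": "blue"}},
]
print(best_match("blue drone", objects)["id"])  # 2
```

A production implementation would likely add fuzzy matching and tie-breaking, but the lookup-then-display flow is the same.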
When the user has created many objects, configuration information of all the created objects cannot be displayed in one interface, and the user has to pull down or scroll to find the relevant object. The technical scheme provided by the embodiment of the application therefore provides a search bar so that the user can quickly find the relevant object, quickly view its configuration information in order to modify it, and thereby improve information processing efficiency.
In some embodiments, the function panel interface includes tabs corresponding to each created object.
The tabs in the embodiments of the present application should be interpreted broadly: each object corresponds to a tab, and a tab is a specific element used to characterize the object, such as a control, a logo, or a thumbnail. Tabs are displayed because a tab occupies a small area, so more tabs can be shown in a limited space; that is, the tabs corresponding to a plurality of created objects can be displayed in a small area. For example, M tabs are displayed in the function panel interface, each corresponding to a created object.
In some embodiments, in response to a selection operation of a target tab in tabs corresponding to each created object, configuration information corresponding to the object to which the target tab points is displayed. Illustratively, a target tab is selected to quickly display configuration information for the object to which the tab is directed. This way, the information display efficiency is improved.
In some embodiments, in response to a delete operation for the target tab, the object pointed to by the target tab is deleted from the virtual scene. Illustratively, deleting the target tab can quickly delete the object to which the tab points. This way, the efficiency of object deletion is improved.
In some embodiments, in response to a formation setting operation for the target tab, the formation to which the object pointed to by the target tab is added is altered. Illustratively, in response to a formation setting operation for the target tab, a formation selection interface is displayed on which to reselect the formation to which the object pointed to by the target tab is joined. The method realizes quick configuration for formation, and is beneficial to improving programming efficiency.
In some embodiments, controls corresponding to each formation are also displayed in the function panel interface. For example, in response to a click operation on a target control among the controls corresponding to each formation, the objects configured under the formation corresponding to the target control are displayed. Illustratively, when the objects configured under that formation are displayed, all the objects of the formation in the virtual scene are displayed differently from other objects. For example, objects in the formation corresponding to the target control may be added or deleted. In this way, the objects under each formation can be quickly viewed on the function panel interface, the position of each object in a formation can be viewed in the scene, and objects can be quickly deleted or added, which improves the flexibility and efficiency of programming.
In step 2540, a code editing interface of the graphical programming tool is displayed, where the code editing interface includes a plurality of preset graphical codes, and the graphical codes are used to process one or more objects.
In step 2550, in response to an editing operation for a first graphical code of the plurality of preset graphical codes, displaying the edited first graphical code, the first graphical code being used for adding at least one object to the same formation, and the edited first graphical code being used for adding at least one object to the first formation.
In step 2560, in response to an editing operation for a second graphical code in the plurality of preset graphical codes, displaying the edited second graphical code, where the second graphical code is used to perform unified control on at least one object in the same formation, and the edited second graphical code is used to perform unified control on at least one object in the first formation.
In step 2570, a graphical programming result is run to control at least one object in the first formation, the graphical programming result including the edited first graphical code and the edited second graphical code.
The foregoing describes the graphical programming method. The following describes in detail, with reference to the following examples, how the graphical programming result is executed after programming with the graphical programming tool.
It should be understood, of course, that the method of executing graphical code described below and the graphical programming method described above may be performed by the same computer device or by different computer devices. When performed by the same computer device, the computer device runs an application program supporting both programming and code running functions. When performed by different computer devices, the first computer device performs the above-described graphical programming method to obtain a graphical programming result and transmits the result to the second computer device, which executes it. The first computer device runs an application program supporting the programming function, and the second computer device runs an application program supporting the code running function.
In addition, the edited first graphical code and the edited second graphical code included in the graphical programming result of the graphical programming method above are the first graphical code and the second graphical code in the graphical programming result mentioned in the execution method below. Since the following embodiments describe the code execution side, the graphical code mentioned below is considered as already edited; that is, in the following embodiments the first graphical code may be read directly as the edited first graphical code, the second graphical code as the edited second graphical code, and the third graphical code as the edited third graphical code.
Referring to fig. 28, a flowchart of a method for executing graphical code according to an embodiment of the application is shown. The main execution body of each step of the method may be the terminal device 10 described above, or may be a client of a target application program running on the terminal device 10, or may be the server 20. In the following method embodiments, for convenience of description, only the execution subject of each step is described as "computer device". The method may comprise at least one of the following steps (2810-2830).
In step 2810, a graphical programming result is obtained, the graphical programming result including a plurality of graphical codes executed in sequence, the graphical codes being used to process one or more virtual objects in the virtual scene.
In some embodiments, a graphical programming result is obtained, where the graphical programming result includes a plurality of graphical codes edited by the programmer. Illustratively, the graphical codes in the graphical programming result are arranged in a certain order, such as top-to-bottom or left-to-right. Illustratively, when the graphical programming result is executed, the graphical codes are executed sequentially in this order. In some embodiments, each graphical code corresponds to control logic code, i.e., the programming language encapsulated in the graphical code. Illustratively, different graphical codes correspond to different control logic code, i.e., the programming languages encapsulated in them are different.
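The sequential execution described above can be sketched as a small interpreter loop: each graphical code wraps its encapsulated control logic, and the programming result runs the blocks in their arrangement order. All class and function names here are hypothetical.

```python
# Hypothetical sketch: a graphical code wraps control logic; a
# programming result executes its codes in top-to-bottom order.
class GraphicalCode:
    def __init__(self, name, logic):
        self.name = name
        self.logic = logic  # the encapsulated "programming language"

    def execute(self, scene):
        self.logic(scene)

def run_programming_result(codes, scene):
    for code in codes:  # arrangement order = execution order
        code.execute(scene)
    return scene

scene = {"log": []}
codes = [
    GraphicalCode("join", lambda s: s["log"].append("join formation 1")),
    GraphicalCode("move", lambda s: s["log"].append("formation 1 moves left")),
]
run_programming_result(codes, scene)
print(scene["log"])  # ['join formation 1', 'formation 1 moves left']
```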
In some embodiments, the virtual scene is a virtual environment created (or provided) for user programming in the application. The virtual environment may be a simulated world of a real world, a semi-simulated and semi-fictional three-dimensional world, or a purely fictional three-dimensional world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment.
In some embodiments, the virtual object is a movable object in a virtual environment. The movable object may be at least one of a virtual drone, a virtual character, a virtual animal, and a cartoon character. In some embodiments, when the virtual environment is a three-dimensional virtual environment, the virtual objects may be three-dimensional virtual models, each having its own shape and volume in the three-dimensional virtual environment, occupying a portion of the space in the three-dimensional virtual environment. Alternatively, the virtual object is a three-dimensional character constructed based on three-dimensional human skeleton technology, which implements different external figures by wearing different skins. In some implementations, the virtual object may also be implemented using a 2.5-dimensional or 2-dimensional model, which is not limited in this embodiment of the application.
Step 2820, executing the first graphical code included in the graphical programming result, and adding at least one virtual object in the virtual scene into the first formation.
In some embodiments, the first graphical code is graphical code for joining at least one virtual object in the virtual scene to the first formation. In response to the user's editing operation on the first graphical code during programming, the edited first graphical code includes first configuration information. The first configuration information includes identification information of the first formation and identification information corresponding to the at least one virtual object joining the first formation. In some embodiments, the identification information here is information used to characterize the formation or object, such as a number, a label, or a sequence number. As shown in fig. 19, if the edited first graphical code 1900 indicates that drone No. 1 and drone No. 2 join formation 1, then drone No. 1, drone No. 2, and formation 1 constitute the first configuration information included in the edited first graphical code.
In some embodiments, at least one virtual object in the virtual scene is joined to the first formation according to first configuration information included in the first graphical code upon execution of the code.
Illustratively, the identification information corresponding to the at least one virtual object joining the first formation in the first configuration information is bound to the identification information of the first formation; that is, according to the identification information of the first formation, the at least one virtual object joining the first formation can be quickly found.
By way of example, a correspondence table of virtual objects and formations is established from the identification information respectively corresponding to the at least one virtual object joining the first formation and the identification information of the first formation in the first configuration information; that is, according to the identification information of the first formation, the at least one virtual object joining the first formation can be quickly looked up in the correspondence table.
Illustratively, a dynamic link between the first formation and each virtual object joining the first formation is established according to the identification information respectively corresponding to the at least one virtual object joining the first formation and the identification information of the first formation. For example, formation 1 establishes dynamic links with virtual object 1, virtual object 2, and virtual object 3; then, when the second graphical code is executed, that is, when each object in the formation is controlled, the virtual objects joining the first formation are reached through the dynamic links of the first formation. Building dynamic links directly to find the virtual objects joining the formation avoids constructing a correspondence table or binding relation, enables quick location, and thus improves lookup efficiency.
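The lookup strategies above can be sketched with the simplest of them: a mapping from formation identification to the identifiers of objects that joined it, populated when the first graphical code executes and queried when the second executes. The function names are illustrative assumptions.

```python
# Hypothetical sketch of the formation-to-members binding: the first
# graphical code records membership; the second resolves it in O(1).
formation_table = {}

def join_formation(formation_id, object_ids):
    formation_table.setdefault(formation_id, []).extend(object_ids)

def members_of(formation_id):
    # The second graphical code resolves the formation id to its member
    # objects without scanning every created object.
    return formation_table.get(formation_id, [])

join_formation(1, [1, 2, 3])
print(members_of(1))  # [1, 2, 3]
print(members_of(9))  # []
```

A dynamic-link variant would store direct references to the object instances instead of identifiers, trading the table lookup for pointer traversal.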
And 2830, executing a second graphical code included in the graphical programming result, and uniformly controlling at least one virtual object added into the first formation.
In some embodiments, the second graphical code is graphical code for unified control of individual virtual objects in the first formation. And responding to the editing operation of the user on the second graphical code in the programming process, wherein the edited second graphical code comprises second configuration information. The second configuration information comprises identification information and control parameter information of the first formation.
According to the technical scheme provided by the embodiment of the application, a graphical programming result is obtained; the first graphical code included in the graphical programming result is executed, so that at least one virtual object in the virtual scene joins the first formation; and the second graphical code included in the graphical programming result is executed, so that unified control of the at least one virtual object joining the first formation is realized. By forming the objects that need to execute the same code logic into the first formation through the first graphical code, at least one object joining the first formation shares one second graphical code; executing the second graphical code realizes unified control of at least one virtual object in the same formation, without acquiring repeated programming code to control each virtual object separately, which improves the execution efficiency of the code.
Referring to fig. 29, a flowchart of a method for executing graphical code according to another embodiment of the application is shown. The main execution body of each step of the method may be the terminal device 10 described above, or may be a client of a target application program running on the terminal device 10, or may be the server 20. In the following method embodiments, for convenience of description, only the execution subject of each step is described as "computer device". The method may comprise at least one of the following steps (2910-2950).
Step 2910, obtaining a graphical programming result, where the graphical programming result includes a plurality of graphical codes that are executed sequentially, where the graphical codes are used to process one or more virtual objects in the virtual scene.
Step 2920, executing a first graphical code included in the graphical programming result, and adding at least one virtual object in the virtual scene to the first formation.
Step 2930, obtaining the control logic code corresponding to the second graphical code, where the control logic code refers to the programming language encapsulated in the second graphical code.
In some embodiments, the programming language encapsulated in the second graphical code is obtained, resulting in the control logic code corresponding to the second graphical code.
Step 2940, determining at least one virtual object joining the first formation.
In some embodiments, since the identification information corresponding to the at least one virtual object added to the first formation and the identification information of the first formation are already bound, the at least one virtual object added to the first formation can be quickly found directly according to the identification information of the first formation.
For example, since a correspondence table is established between the virtual objects and the formations, at least one virtual object added into the first formation can be quickly found from the correspondence table directly according to the identification information of the first formation.
Illustratively, since the dynamic link between the first formation and the at least one virtual object joining the first formation has been established, respectively, the at least one virtual object joining the first formation is linked directly through the dynamic link of the first formation. The dynamic link is directly constructed to find the virtual object added into the formation, so that the searching efficiency can be improved.
Step 2950, executing control logic code to uniformly control at least one virtual object joining the first formation.
In some embodiments, after the at least one virtual object is determined, each of the at least one virtual object is controlled with the control logic code, respectively and simultaneously. Illustratively, if the control logic code is a programming language that controls movement 1 meter to the left, this programming language is used to control each virtual object in the formation.
In some embodiments, there is no order of precedence in controlling the at least one virtual object. Illustratively, the target application includes a plurality of actuators, and the s-th actuator is configured to execute the control logic code to control the s-th virtual object joining the first formation, where s is a positive integer. The plurality of actuators control the virtual objects synchronously, which helps accelerate processing and improves code execution efficiency.
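The one-actuator-per-object idea can be sketched with a thread pool: the same control logic is applied to every formation member with no ordering between objects. This is an illustrative sketch, not the patent's actuator implementation; the "move 1 meter left" logic is the example from above.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: one worker per formation member applies the same
# control logic, with no precedence between objects.
def control(obj):
    obj["x"] -= 1.0  # e.g. "move 1 meter to the left"
    return obj["id"]

formation = [{"id": i, "x": 0.0} for i in range(1, 4)]
with ThreadPoolExecutor(max_workers=len(formation)) as pool:
    done = sorted(pool.map(control, formation))
print(done)                         # [1, 2, 3]
print([o["x"] for o in formation])  # [-1.0, -1.0, -1.0]
```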
In some embodiments, the second graphical code has an execution condition; that is, the second graphical code needs to be executed only when the virtual objects satisfy the condition. The execution condition of the second graphical code is described below by way of example.
In some embodiments, the above method further comprises at least one of the following steps S1-S3 (not shown).
Step S1, obtaining state information corresponding to at least one virtual object added into a first formation, wherein the state information is used for reflecting the real-time state of the virtual object.
In some embodiments, the status information includes current location information, color information, shape information, and the like of the virtual object. In some embodiments, a determination is made as to whether the at least one virtual object of the first formation satisfies the execution condition of the second graphical code prior to execution of the second graphical code.
Step S2, in a case where the state information respectively corresponding to the at least one virtual object added to the first formation does not satisfy the execution condition of the second graphical code, continuing to control the at least one virtual object added to the first formation until the state information respectively corresponding to the at least one virtual object added to the first formation satisfies the execution condition of the second graphical code.
For example, the execution condition of the second graphical code is that the longest distance between two virtual objects joining the first formation is not more than 5 meters, state information corresponding to at least one virtual object joining the first formation is obtained, and if it is determined that the longest distance between two virtual objects joining the first formation is 3 meters, the execution of the second graphical code is allowed at this time. Illustratively, if it is determined that the longest distance between two virtual objects joining the first formation is 6 meters, then execution of the second graphical code is not permitted at this time. In an exemplary case where the execution of the second graphical code is not permitted, it is necessary to continue controlling the at least one virtual object joining the first formation until the state information respectively corresponding to the at least one virtual object joining the first formation satisfies the execution condition of the second graphical code.
Step S3, executing the control logic code to uniformly control the at least one virtual object added to the first formation in a case where the state information respectively corresponding to the at least one virtual object added to the first formation satisfies the execution condition of the second graphical code.
The above steps S1-S3 describe how the graphical code is executed simultaneously. Taking the graphical code as building-block code as an example, when a certain unmanned aerial vehicle in the group is about to execute a building block, if its current state does not satisfy the precondition for executing the building block, the other unmanned aerial vehicles need to wait for it to adjust its state until the requirement is met. The unmanned aerial vehicle formation executor (see the explanation in the embodiments below) checks the states of all unmanned aerial vehicles in the group; if the states are satisfied, the building block is executed; if not, the unmanned aerial vehicles that do not yet satisfy the precondition adjust their states, the unmanned aerial vehicle formation executor checks the states of all unmanned aerial vehicles in the group again in the next main cycle of the life cycle, and this cycle repeats until all unmanned aerial vehicles satisfy the precondition for executing the building block. For example, there are unmanned aerial vehicles 1, 2 and 3 in unmanned aerial vehicle formation a; to make formation a perform circular motion around a certain point p, the executor waits for unmanned aerial vehicles 1, 2 and 3 to rotate until each is perpendicular to its line to the center of the circle, and then they perform the circular motion together.
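The wait-until-ready behavior described above can be sketched as follows (all names, headings, and the 30-degree adjustment step are illustrative, not the patent's implementation):

```python
# Each "main cycle" the formation executor checks every drone's state; the
# building block runs only after all drones satisfy its precondition, and
# drones that do not yet satisfy it adjust toward it in the meantime.
drones = {"drone-1": 90, "drone-2": 30, "drone-3": 60}  # current headings (deg)
TARGET = 90  # heading required before the circling block may execute

def main_cycle(drones):
    """One life-cycle iteration: adjust lagging drones, then re-check all."""
    for name, heading in drones.items():
        if heading != TARGET:
            drones[name] = min(heading + 30, TARGET)  # illustrative adjustment
    return all(h == TARGET for h in drones.values())

cycles = 0
while not main_cycle(drones):
    cycles += 1  # block still blocked; check again next main cycle
# precondition now satisfied by every drone: the circling block may execute
```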
In some embodiments, the above method further comprises at least one of the following steps S4-S6 (not shown).
Step S4, for the kth virtual object in the at least one virtual object, after the control logic code is executed to control the kth virtual object, determining that the kth virtual object is in an execution-completed state for the second graphical code, where k is a positive integer.
In some embodiments, a flag bit is set for all virtual objects, where the value of the flag bit is a first value when the kth virtual object is in an execution-completed state for the second graphical code.
Illustratively, when the control logic code is executed to control the kth virtual object, the value of the flag bit of the kth virtual object is changed from the second value to the first value. Illustratively, the original value of the flag bit is the second value.
Step S5, in a case where the control logic code is executed but the kth virtual object is not controlled, determining that the kth virtual object is in an execution-not-completed state for the second graphical code.
In some embodiments, when the kth virtual object is in an execution-not-completed state for the second graphical code, the value of its flag bit is the second value, the second value representing the execution-not-completed state.
Illustratively, when the control logic code is executed, the value of the flag bit of the kth virtual object is not changed when the kth virtual object is not controlled. Illustratively, the original value of the flag bit is the second value.
Step S6, traversing the at least one virtual object added to the first formation, and executing a third graphical code in a case where all of the at least one virtual object added to the first formation are in the execution-completed state for the second graphical code, where the third graphical code is the graphical code located after the second graphical code in the graphical programming result.
In some embodiments, when the flag bits of the at least one virtual object joining the first formation are all the first value, the at least one virtual object joining the first formation is considered to be all in an executed state for the second graphical code, at which time execution of the next graphical code of the second graphical code is allowed. In some embodiments, when one of the flag bits of the at least one virtual object joining the first formation is a second value, it is considered that not all of the at least one virtual object joining the first formation is in an execution-completed state for the second graphical code, at which time execution of a next graphical code of the second graphical code is not allowed.
Steps S4 to S6 above describe how to determine whether execution of the graphical code is completed. When a building block starts to execute, a flag bit (completed) is set to false for all unmanned aerial vehicles in the group, and after an unmanned aerial vehicle finishes executing the building block, its flag bit is changed to true. The unmanned aerial vehicle formation executor checks the flag bits of the unmanned aerial vehicles in the group in each main cycle of the life cycle; if the flag bit of any unmanned aerial vehicle is false, the group has not finished executing the building block; otherwise, the group has finished executing it and the following building blocks can continue to be executed. For example, there are unmanned aerial vehicles 1, 2 and 3 in unmanned aerial vehicle formation a; to make formation a move to a certain point p, it is necessary to wait for unmanned aerial vehicles 1, 2 and 3 to all fly to point p before executing the following building blocks.
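The flag-bit bookkeeping of steps S4-S6 can be sketched as follows (a minimal illustration; the names are assumptions):

```python
# Each member gets a "completed" flag, reset to False when a durative block
# starts; the next block is released only once every flag reads True.
completed = {d: False for d in ("drone-1", "drone-2", "drone-3")}

def finish(drone_id):
    completed[drone_id] = True  # flipped when this drone finishes the block

def may_run_next_block():
    return all(completed.values())

checks = []
finish("drone-1")
checks.append(may_run_next_block())  # still blocked: drones 2 and 3 en route
finish("drone-2")
finish("drone-3")
checks.append(may_run_next_block())  # all flags true: next block may run
```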
In some embodiments, the code type of the second graphical code is the first type or the second type.
In some embodiments, where the code type of the second graphical code is the first type, the control type of the second graphical code for at least one virtual object in the first formation is transient control, which refers to modifying an attribute of the virtual object. Illustratively, the light of the virtual object is modified, the clothing of the virtual object is modified, and so on.
In some embodiments, in a case where the code type of the second graphical code is the second type, the control type of the second graphical code for at least one virtual object in the first formation is persistent control, where persistent control refers to control of the virtual object for a first duration. Such as controlling the virtual object to fly left at speed a for 1 minute, such as controlling the virtual object to fly around point b for 5 seconds, such as controlling the virtual object to remain stationary in place for 1 hour, etc.
In some embodiments, in a case where the code type of the second graphical code is the second type, it is necessary to determine whether the at least one virtual object joining the first formation satisfies the execution condition before executing the second graphical code. In an exemplary case, when the code type of the second graphical code is the second type, it is necessary to determine whether execution of the second graphical code is completed when the second graphical code is executed.
In some embodiments, in a case where the code type of the second graphical code is the second type, it is not necessary to determine whether the at least one virtual object joining the first formation satisfies the execution condition before executing the second graphical code. In an exemplary case, when the code type of the second graphical code is the second type, it is not necessary to determine whether execution of the second graphical code is completed when the second graphical code is executed.
In some embodiments, in a case where the code type of the second graphical code is the first type, it is necessary to determine whether the at least one virtual object joining the first formation satisfies the execution condition before executing the second graphical code. In an exemplary case, when the code type of the second graphical code is the first type, it is necessary to determine whether execution of the second graphical code is completed when the second graphical code is executed.
In some embodiments, in a case where the code type of the second graphical code is the first type, it is not necessary to determine whether the at least one virtual object joining the first formation satisfies the execution condition before executing the second graphical code. In an exemplary case, when the code type of the second graphical code is the first type, it is not necessary to determine whether execution of the second graphical code is completed when the second graphical code is executed.
In some embodiments, for unmanned aerial vehicle formation building blocks (i.e., graphical codes), the building blocks can be divided into 2 types according to their execution time. The first type (i.e., the graphical code of the first type) is a building block that does not need to be executed over a period of time (such as setting the speed or light; once the attribute is modified, the building block is considered executed), and needs no special processing. The second type (i.e., the graphical code of the second type) is a building block that needs to be executed over a period of time (for example, the unmanned aerial vehicle flies forward at a given speed for time t, and only after time t is the building block considered executed); for a group of unmanned aerial vehicles, executing a building block over a period of time requires special processing. The special processing includes the above steps S1 to S6, namely, determining whether the at least one virtual object joining the first formation satisfies the execution condition and determining whether execution of the graphical code is completed.
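Under the stated assumptions, the two block types might be distinguished by whether a block carries a duration; this classification scheme and the block representation are invented for illustration, not the patent's actual data model:

```python
# Instant blocks just write an attribute and are done; durative blocks carry
# a duration and therefore need the wait/flag handling of steps S1-S6.
INSTANT, DURATIVE = 1, 2

def classify(block):
    return DURATIVE if block.get("duration") is not None else INSTANT

set_light = {"name": "set light", "color": "red", "duration": None}
fly_forward = {"name": "fly forward", "speed": 2.0, "duration": 10.0}

needs_special_handling = classify(fly_forward) == DURATIVE
```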
In the embodiment of the application, the graphical code for instantaneous control generally only needs to modify a value at the corresponding position, so the possibility of error is low; therefore, it is not necessary to judge whether the execution condition is satisfied or whether execution is finished. The graphical code for persistent control, by contrast, generally needs to be executed for a period of time and is relatively prone to errors, especially when there are many virtual objects in the formation; therefore, for the graphical code for persistent control, it is necessary to judge whether the execution condition is satisfied and whether execution is finished. This processing helps ensure smooth execution of the graphical code and reduces the possibility of errors.
The technical scheme provided by the embodiment of the application also provides an error correction function for formation.
Illustratively, after the at least one virtual object added to the formation is obtained according to the first graphical code, the rationality of placing the at least one virtual object in the same formation is judged; if the judgment result is unreasonable, execution of the code is suspended and formation error-correction prompt information is sent, where the formation error-correction prompt information is displayed on the user interface for the user to check, so that the formation can be modified in time. In a case where the judgment result is reasonable, no formation error-correction prompt information is sent. Illustratively, the formation error-correction prompt information includes state information of the virtual objects deemed unsuitable to join the formation.
For example, the position information of each virtual object added to the formation is acquired; when there is a virtual object with a large position deviation, the judgment result is unreasonable. For instance, if there are 10 virtual objects in a formation, the pairwise distances among 9 of them are smaller than a threshold, and only the distances between the 10th virtual object and those 9 virtual objects are greater than the threshold, then the 10th virtual object is considered erroneous and the judgment result is unreasonable. When no virtual object has a large position deviation, the judgment result is reasonable.
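The position-deviation check in the example above can be sketched as follows (function name, positions, and threshold are all hypothetical):

```python
# A member whose distance to every other member exceeds the threshold is
# flagged as an error, matching the 10th-drone example in the text.
import math

def find_outliers(positions, threshold):
    outliers = []
    for name, p in positions.items():
        others = [q for n, q in positions.items() if n != name]
        if others and all(math.dist(p, q) > threshold for q in others):
            outliers.append(name)
    return outliers

positions = {"drone-1": (0, 0), "drone-2": (1, 1), "drone-10": (50, 50)}
flagged = find_outliers(positions, threshold=5.0)  # only drone-10 is far off
```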
Illustratively, historical formation information (historical formation and objects added to the formation) is acquired, the current formation information and the historical formation information are compared, and when the error is greater than a threshold value, the judgment result is unreasonable. Illustratively, historical formation information (historical formation and objects added to the formation) is acquired, the current formation information and the historical formation information are compared, and when the error is smaller than a threshold value, the judgment result is reasonable.
According to the technical scheme provided by the embodiment of the application, each time the first graphical code is encountered while the graphical programming result is executed in order, the rationality is judged first, and the first graphical code is executed only after the formation is judged reasonable. This can expose obvious errors in advance and report them to the user in time, so that the user can correct them promptly, improving the code execution efficiency.
Referring to fig. 30, a flowchart of a method for executing graphical code according to an embodiment of the application is shown. The main execution body of each step of the method may be the terminal device 10 described above, or may be a client of a target application program running on the terminal device 10, or may be the server 20. In the following method embodiments, for convenience of description, only the execution subject of each step is described as "computer device". The method may comprise at least one of the following steps (3010-3030).
Step 3010, obtaining a graphical programming result, where the graphical programming result includes a plurality of graphical codes that are executed sequentially, and the graphical codes are used to process one or more virtual objects in the virtual scene.
Step 3020, updating the first correspondence table according to the first configuration information to obtain an updated first correspondence table, where the first correspondence table is used to record a correspondence between the formation and the virtual object, and the updated first correspondence table records a correspondence between identification information of the first formation and identification information corresponding to at least one virtual object added to the first formation.
In some embodiments, the first correspondence table is updated once every time the first graphical code is encountered when executing the graphical programming result. When the first graphical code is encountered for the first time in the graphical programming result, the first correspondence table is created according to the first configuration information in the first graphical code. When the first graphical code is encountered again later, only the first correspondence table needs to be updated.
In some embodiments, the first graphic code includes first configuration information, where the first configuration information is used to configure virtual objects included in the first formation, and the first configuration information includes identification information of the first formation and identification information corresponding to at least one virtual object added to the first formation respectively.
In some embodiments, before step 3030, compiling the second graphical code, where the compiling is successful, to obtain a compiled control logic code corresponding to the second graphical code, where the compiled control logic code is executed to control at least one virtual object added to the first formation, where the control logic code refers to a programming language encapsulated in the second graphical code; and under the condition that compiling is unsuccessful, transmitting modification prompt information, wherein the modification prompt information is used for indicating the second graphical code to be edited. Illustratively, the modification prompt includes text prompt, voice broadcast, picture prompt, and so forth.
In some embodiments, the programming language encapsulated in the graphical code may need to be compiled while the code is being executed to obtain compiled code that may be executed. Such as translating the graphical code into c# code that Unity can run. In some embodiments, in addition to compiling the second graphical code, it may be desirable to compile other graphical code in the graphical programming results.
In some embodiments, as shown in 3100 of fig. 31, the user manipulates the graphical code (building blocks) in the building-block editing interface (including dragging, combining, etc.), and the graphical code is compiled after the run control is clicked. If the compiler reports errors, the user is prompted that the code is problematic and retries after modifying it; if no error is reported, the code starts to run.
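The compile-then-run flow of fig. 31 can be sketched with a stub compiler; the block representation and error format here are invented for illustration:

```python
# Compile the edited blocks: on error, report so the user can modify and
# retry; on success, return code that may start running.
def compile_blocks(blocks):
    errors = [f"problem in block '{b['name']}'"
              for b in blocks if not b.get("valid", True)]
    if errors:
        return None, errors                    # prompt the user, do not run
    return [b["name"] for b in blocks], []     # clean: execution may start

code, errs = compile_blocks([{"name": "join formation"}, {"name": "fly left"}])
bad_code, bad_errs = compile_blocks([{"name": "fly", "valid": False}])
```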
Step 3030, executing a second graphical code included in the graphical programming result to uniformly control at least one virtual object added to the first formation.
In some embodiments, the second graphic code includes second configuration information, where the second configuration information is used to indicate the first formation and parameters for controlling the first formation, and the second configuration information includes identification information and control parameter information of the first formation.
In some embodiments, the identification information of the first formation is used for searching from the updated first correspondence table to obtain at least one virtual object added into the first formation, the control parameter information is used for modifying an original logic code corresponding to the second graphical code to obtain a control logic code corresponding to the second graphical code, and the control logic code refers to a programming language encapsulated in the second graphical code.
In some embodiments, the identification information herein is information used to characterize the formation, such as a number, a label, a serial number, and the like. In some embodiments, the second graphical code is "formation 1 flies 1 meter to the left", the identification information of the first formation included in the second configuration information is "formation 1", and the control parameter information is "left, 1 meter". For example, the logic of the original control code corresponding to the second graphical code, that is, the programming language encapsulated in the second graphical code, is the same regardless of whether the second graphical code makes the formation fly leftward or rightward; only the control parameters differ. Therefore, the control logic code corresponding to the second graphical code can be obtained simply by modifying the control parameters in the programming language.
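The parameter-substitution idea can be sketched as follows, with a made-up `MoveFormation` template standing in for the programming language encapsulated in the block:

```python
# One shared template for all "fly" blocks: only the control parameters
# (direction, distance) and the formation ID are filled in per block.
TEMPLATE = "MoveFormation(id={fid!r}, direction={direction!r}, distance={distance})"

def build_control_code(config):
    return TEMPLATE.format(fid=config["formation_id"],
                           direction=config["direction"],
                           distance=config["distance"])

code = build_control_code({"formation_id": "formation 1",
                           "direction": "left", "distance": 1})
```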
In some embodiments, the control parameter information in the second configuration information is used to modify the programming language encapsulated in the second graphical code to obtain a modified programming language, and the modified programming language is used to control the at least one virtual object joining the first formation.
In some embodiments, third configuration information is obtained, where the third configuration information is used to configure a virtual object that has been created in the virtual scene, and the third configuration information includes identification information of the created virtual object and instance information of the created virtual object, where the instance information is used to indicate configuration information corresponding to the virtual object.
Illustratively, the user may create a newly added virtual object in the graphical programming tool, and may also modify the configuration information of the newly added virtual object, thereby determining instance information of the virtual object based on the last saved created virtual object and its corresponding configuration information.
In some embodiments, according to the third configuration information, updating a second correspondence table to obtain an updated second correspondence table, where the second correspondence table is used to record a correspondence between the virtual object and the instance information, and the instance information is used to provide configuration information corresponding to the virtual object when the second graphical code is executed to control the virtual object.
Illustratively, the second correspondence table is constructed from instance information of the original created virtual object.
In some embodiments, for a first virtual object in the created virtual objects, when the second graphical code is executed to control the first virtual object, according to the identification information of the first virtual object, the instance information of the first virtual object is found out from the updated second correspondence table.
Illustratively, the computer device stores in advance an original second correspondence table for recording correspondence between the virtual object and the instance information. That is, the virtual objects that have been configured in the virtual scene are recorded in a table form. Illustratively, the identification information of each virtual object corresponds to instance information of the virtual object. Illustratively, the instance information includes configuration information of the virtual object, where the configuration information includes a location, a color, a light, and the like of the virtual object.
In some embodiments, the instance information of the first virtual object in the second correspondence table is updated according to the control result of the first virtual object.
Illustratively, when the first virtual object is controlled using the control logic code corresponding to the second graphical code, the virtual object is controlled based on instance information of the virtual object. Therefore, when control is started, the corresponding instance information needs to be found. When the control is finished, the configuration information of the virtual object is changed, and then the instance information of the first virtual object needs to be updated by using the changed configuration information.
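The look-up-then-write-back handling of instance information described above can be sketched as follows (table contents and helper names are illustrative):

```python
# Second correspondence table: object ID -> instance info. Control reads the
# stored info first and writes the changed configuration back when finished;
# uncontrolled objects keep their stored entries untouched.
instance_table = {"d1": {"pos": (0, 0)}, "d2": {"pos": (5, 5)}}

def control_object(object_id, dx, dy):
    info = instance_table[object_id]   # find instance info when control starts
    x, y = info["pos"]
    info["pos"] = (x + dx, y + dy)     # update the entry when control ends

control_object("d1", dx=-1, dy=0)      # e.g. "fly 1 meter to the left"
# d2 was not controlled, so its instance info is not updated
```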
In some embodiments, the instance information of the first virtual object in the second correspondence table is not updated in the event that the second graphical code is executed without control of the first virtual object.
In an exemplary embodiment, when the second graphical code is executed and the first virtual object is not controlled, it is indicated that the first virtual object is not controlled, and the configuration information is not changed, and it is not necessary to update the instance information of the first virtual object in the second correspondence table.
Taking the virtual object as a virtual unmanned aerial vehicle as an example, the core of controlling an unmanned aerial vehicle formation by code lies in the runtime implementation. At runtime, the unmanned aerial vehicle formation depends on the life cycle provided by Unity, and in each main cycle of the life cycle the state of each unmanned aerial vehicle is updated according to its attributes (the attributes refer to basic attributes such as the speed and direction of each unmanned aerial vehicle; the movement of the unmanned aerial vehicle in the current frame is calculated from these attributes). At unmanned aerial vehicle formation runtime, the target application program includes a formation unmanned aerial vehicle manager and an unmanned aerial vehicle formation executor.
Referring to fig. 32, a schematic diagram of a component structure according to an embodiment of the application is shown. As shown in 3200 of fig. 32, the formation unmanned aerial vehicle manager class is a singleton (the singleton is a creational design pattern that guarantees a class has only one instance and provides a global access point to that instance). It provides functions including adding an unmanned aerial vehicle, deleting an unmanned aerial vehicle, modifying unmanned aerial vehicle information, acquiring an unmanned aerial vehicle instance (a specific implementation of an unmanned aerial vehicle object) through the identification information of the unmanned aerial vehicle, adding an unmanned aerial vehicle to a formation, deleting an unmanned aerial vehicle from a formation, and acquiring the list of identification information of a group of unmanned aerial vehicles through the identification information of the formation. All operations performed by the user on unmanned aerial vehicles in the function panel interface are managed by the formation unmanned aerial vehicle manager. Illustratively, the formation unmanned aerial vehicle manager maintains the individual virtual objects in the formation and the instance information of the individual virtual objects. Illustratively, the formation unmanned aerial vehicle manager is configured to dynamically update at least one of the first correspondence table and the second correspondence table.
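A minimal Python sketch of the manager's singleton shape and interface follows; the method names are assumptions, and the implementation described in the patent is a Unity C# class:

```python
# Singleton manager: one instance, globally accessible, holding both the
# drone instance table and the formation membership table.
class FormationDroneManager:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.drones = {}      # drone ID -> instance info
            cls._instance.formations = {}  # formation ID -> drone IDs
        return cls._instance

    def add_drone(self, drone_id, info):
        self.drones[drone_id] = info

    def join_formation(self, formation_id, drone_id):
        self.formations.setdefault(formation_id, []).append(drone_id)

    def formation_members(self, formation_id):
        return self.formations.get(formation_id, [])

mgr = FormationDroneManager()
mgr.add_drone("d1", {"speed": 0})
mgr.join_formation("f1", "d1")
# every later constructor call returns the same instance
```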
The unmanned aerial vehicle formation executor class is created and dynamically mounted by the unmanned aerial vehicle formation runtime class and is responsible for the life-cycle state management of executing user code. The unmanned aerial vehicle formation executor class inherits from the Unity script base class MonoBehaviour and is driven by the Mono script life-cycle functions to decide whether to execute the next operation of the user code, thereby driving the unmanned aerial vehicle formation at runtime to execute the next operation and update the states of all unmanned aerial vehicles. That is, the unmanned aerial vehicle formation executor is configured to execute the control logic code corresponding to each graphical code, so as to control at least one virtual object in the formation, updating its attributes or controlling its movement.
In some embodiments, after compilation, the edited graphical code that controls a single virtual object includes the identification information of the unmanned aerial vehicle, and the graphical code that controls a formation (including the first graphical code and the second graphical code) includes the identification information of the formation. Taking the virtual object as an unmanned aerial vehicle as an example, a correspondence table between the identification information of the unmanned aerial vehicle and the instance information of the unmanned aerial vehicle exists in the formation unmanned aerial vehicle manager; the corresponding instance information of an unmanned aerial vehicle can be obtained through its identification information, and the unmanned aerial vehicle is controlled after its instance information is obtained.
As shown in fig. 33, the edited graphical code 3300 is used for controlling a single unmanned aerial vehicle, and the control logic code 3301 corresponding to the edited graphical code 3300 includes identification information of the single unmanned aerial vehicle (droneID). The edited first graphical code 3302 is used for adding the virtual object unmanned aerial vehicle into formation, and the control logic code 3303 corresponding to the edited first graphical code 3302 comprises identification information (droneID) of the unmanned aerial vehicle and identification information (formationID) of the added formation.
According to the technical scheme provided by the embodiment of the application, the first correspondence table is constructed using the first configuration information, i.e., the correspondence between formations and virtual objects is established, and the second correspondence table is constructed according to the third configuration information, i.e., the correspondence between virtual objects and instances. The identification information of the virtual objects added to a formation can be found from the first correspondence table through the second configuration information, and the instance information of the virtual objects added to the formation can be found through the second correspondence table, thereby realizing control of the virtual objects. The technical scheme provided by the embodiment of the application can realize quick searching of virtual objects, which is beneficial to realizing unified control of the virtual objects in a formation and improving code execution efficiency.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Referring to FIG. 34, a block diagram of a graphical programming device according to one embodiment of the application is shown. As shown in fig. 34, the apparatus 3400 may include: interface display module 3410, graphical code display module 3420, and run module 3430.
The interface display module 3410 is configured to display a code editing interface of the graphical programming tool, where the code editing interface includes a plurality of preset graphical codes, and the graphical codes are used to process one or more objects;
A graphical code display module 3420, configured to display an edited first graphical code in response to an editing operation for a first graphical code among the plurality of preset graphical codes, where the first graphical code is used for adding at least one object to the same formation, and the edited first graphical code is used for adding at least one object to the first formation;
The graphical code display module 3420 is further configured to display an edited second graphical code in response to an editing operation for a second graphical code in the plurality of preset graphical codes, where the second graphical code is used to uniformly control at least one object in the same formation, and the edited second graphical code is used to uniformly control at least one object in the first formation;
A run module 3430, configured to run a graphical programming result to control at least one object in the first formation, where the graphical programming result includes the edited first graphical code and the edited second graphical code.
In some embodiments, the first graphical code includes an object configuration area therein for configuring at least one object joining the same formation.
In some embodiments, the graphical code display module 3420 is configured to display an object selection interface in response to a trigger operation for the object configuration region in the first graphical code, where a plurality of elements are displayed, each element corresponding to an object.
The graphical code display module 3420 is further configured to display the edited first graphical code in response to a selection operation for at least one first element of the plurality of elements, where an object corresponding to the at least one first element joins the first formation.
In some embodiments, the selection operation includes a click operation for each of the first elements.
In some embodiments, the selecting operation is a framing operation starting at a first location and ending at a second location, and elements included in a framing region constructed by the first location and the second location are the first elements.
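The framing operation above can be sketched as a rectangle hit-test: the first and second positions are opposite corners of the framing region, and every element whose position falls inside that rectangle becomes a first element. The function name and the element representation below are illustrative assumptions.

```python
def frame_select(elements, first_pos, second_pos):
    """Return the names of elements inside the rectangle built from the two corners.

    elements: mapping of element name -> (x, y) position in the interface.
    first_pos / second_pos: corners where the framing operation starts and ends;
    the corners may be given in any order (drag direction does not matter).
    """
    (x1, y1), (x2, y2) = first_pos, second_pos
    left, right = min(x1, x2), max(x1, x2)
    bottom, top = min(y1, y2), max(y1, y2)
    return {name for name, (x, y) in elements.items()
            if left <= x <= right and bottom <= y <= top}
```

Normalizing the two corners with min/max keeps the test correct whether the user drags down-right or up-left.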
In some embodiments, the first element in the selected state is displayed differently from the other elements in the unselected state; or the object corresponding to the first element in the selected state and the object corresponding to the other elements in the unselected state are displayed in the virtual scene in a distinguishing mode; or the mark of the object corresponding to the first element in the selected state in the map navigation interface is displayed in a distinguishing manner with the mark of the object corresponding to the other element in the unselected state in the map navigation interface, wherein the map navigation interface displays a map of the virtual scene and the mark of each created object in the virtual scene in the map; or the identification of the object corresponding to the first element in the selected state in the function panel interface is displayed in a distinguishing manner with the identification of the object corresponding to the other element in the unselected state in the function panel interface, wherein the function panel interface is used for displaying configuration information corresponding to each created object in the virtual scene, and the configuration information comprises the identification of the object.
In some embodiments, as shown in fig. 35, the apparatus further comprises a status adjustment module 3440.
In some embodiments, the state adjustment module 3440 is configured to determine, in response to a trigger operation for the object configuration area in the first graphical code, the states respectively corresponding to the plurality of elements according to the correspondence between elements and objects: when an element corresponds to an object that has been created in the virtual scene, the element is in an unselected state, and when an element does not correspond to any object created in the virtual scene, the element is in an unestablished state.
The state adjustment module 3440 is further configured to, for an mth element of the plurality of elements, not change the state of the mth element if the mth element is selected and the state of the mth element is an unestablished state; and under the condition that the mth element is selected and the state of the mth element is an unselected state, changing the state of the mth element to be a selected state, wherein m is a positive integer.
The state adjustment module 3440 is further configured to, in a case where the mth element is deselected and the state of the mth element is a selected state, change the state of the mth element to be an unselected state.
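The three-state element logic above amounts to a small state machine: an element starts as "unestablished" (no created object corresponds to it) or "unselected", selecting an unestablished element has no effect, and selecting or deselecting toggles between the other two states. The state names and helper functions below are illustrative assumptions.

```python
UNESTABLISHED, UNSELECTED, SELECTED = "unestablished", "unselected", "selected"

def initial_state(element_id, created_objects):
    """State of an element when the object selection interface is opened."""
    return UNSELECTED if element_id in created_objects else UNESTABLISHED

def on_select(state):
    # Selecting an unestablished element does not change its state;
    # selecting an unselected element makes it selected.
    return SELECTED if state == UNSELECTED else state

def on_deselect(state):
    # Deselecting only affects elements that are currently selected.
    return UNSELECTED if state == SELECTED else state
```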
In some embodiments, the first graphical code includes an object configuration area therein for configuring at least one object joining the same formation.
In some embodiments, the graphical code display module 3420 is configured to display a virtual scene interface in an editing state in response to a trigger operation for the object configuration area in the first graphical code, where a virtual scene and an object created in the virtual scene are displayed in the virtual scene interface, and different objects have different positions in the virtual scene.
In some embodiments, the graphical code display module 3420 is configured to display the edited first graphical code in response to a selection operation for at least one first object of the created objects, the at least one first object joining the first formation.
In some embodiments, the first graphical code further includes a formation configuration area, where the formation configuration area is used to configure the formation that the at least one object joins.
In some embodiments, the graphical code display module 3420 is further configured to display a formation selection interface in response to a trigger operation for the formation configuration area in the first graphical code, the formation selection interface having a plurality of formation numbers displayed therein, each formation number corresponding to a formation.
The graphical code display module 3420 is further configured to display the edited first graphical code in response to a selection operation for a first formation number of the plurality of formation numbers, where the formation corresponding to the first formation number is the first formation.
In some embodiments, the interface display module 3410 is further configured to, in response to a trigger operation for an object creation control in a function panel interface, display configuration information corresponding to a newly added object in the function panel interface, where the function panel interface is configured to display configuration information corresponding to each created object in a virtual scene, and the configuration information includes an identifier of the object.
The interface display module 3410 is further configured to display the modified configuration information in response to a modification operation for the configuration information corresponding to the new object.
The interface display module 3410 is further configured to display the new object created based on the modified configuration information in the virtual scene.
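The create-modify-create flow above can be sketched as three steps: generating default configuration information for the new object, applying the user's modifications, and creating the object in the virtual scene from the modified configuration. The field names (`id`, `position`, `formation`) and function names are illustrative assumptions, not the disclosed data model.

```python
# Registry of objects created in the virtual scene, keyed by identifier.
scene_objects = {}

def create_default_config(object_id):
    """Default configuration shown in the function panel for a new object."""
    return {"id": object_id, "position": (0.0, 0.0), "formation": None}

def apply_modification(config, **changes):
    """Return a copy of the configuration with the user's modifications applied."""
    modified = dict(config)
    modified.update(changes)
    return modified

def create_in_scene(config):
    """Create the new object in the virtual scene from the modified configuration."""
    scene_objects[config["id"]] = config
    return config
```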
In some embodiments, a search input field is included in the function panel interface.
In some embodiments, the interface display module 3410 is further configured to display the search information for the created object input in the search input field in response to an input operation for the search input field.
The interface display module 3410 is further configured to display configuration information corresponding to the object to which the search information points.
In some embodiments, the function panel interface includes tabs corresponding to each created object.
In some embodiments, the interface display module 3410 is further configured to display configuration information corresponding to an object pointed to by the target tab in response to a selection operation of the target tab in the tabs corresponding to the created objects, respectively; or deleting the object pointed by the target tab from the virtual scene in response to the deleting operation for the target tab; or in response to a formation setting operation for the target tab, altering the formation to which the object pointed to by the target tab is added.
In some embodiments, the run module 3430 is configured to run the graphical programming result and display, in a virtual scene, a control process for at least one object in the first formation.
In some embodiments, the interface display module 3410 is further configured to display at least one of a scene control, a code editing control; the scene control is used for switching and displaying a virtual scene interface, the virtual scene interface comprises a virtual scene and an object created in the virtual scene, and the code editing control is used for switching and displaying the code editing interface.
The interface display module 3410 is further configured to run the graphical programming result, respond to a trigger operation on the scene control, display the virtual scene interface, and display a control process for at least one object in the first formation in the virtual scene interface.
The interface display module 3410 is further configured to redisplay the code editing interface in response to a triggering operation on the code editing control, where the code editing interface includes the graphical programming result.
In some embodiments, the interface display module 3410 is further configured to display a formation selection interface in response to a formation setting operation for a target object in the created objects in the virtual scene interface, where the formation selection interface includes a plurality of formation numbers, each formation number corresponding to one formation; and responding to the selection operation of a second formation number in the plurality of formation numbers, and determining that the target object joins in the formation corresponding to the second formation number.
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the foregoing functional modules is used only as an example. In practical applications, the foregoing functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided in the foregoing belong to the same concept; for their specific implementation processes, refer to the method embodiments, which are not repeated here.
Referring to FIG. 36, a block diagram of a computer device 3600 according to one embodiment of the application is shown. The computer device 3600 may be any electronic device that has data computing, processing, and storage capabilities. The computer device 3600 may be used to implement the graphical programming method provided in the embodiments described above.
Generally, the computer device 3600 includes: a processor 3601 and a memory 3602.
Processor 3601 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 3601 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 3601 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in an awake state; the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 3601 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 3601 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 3602 may include one or more computer-readable storage media, which may be non-transitory. Memory 3602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 3602 is used to store a computer program configured to be executed by one or more processors to implement the graphical programming method described above.
Those skilled in the art will appreciate that the architecture shown in fig. 36 is not limiting of the computer device 3600, and may include more or fewer components than shown, or may combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which a computer program is stored; when executed by a processor, the computer program implements the above-described graphical programming method. Optionally, the computer-readable storage medium may include ROM (Read-Only Memory), RAM (Random Access Memory), SSD (Solid State Drive), an optical disk, or the like. The random access memory may include ReRAM (Resistive Random Access Memory) and DRAM (Dynamic Random Access Memory).
In an exemplary embodiment, a computer program product is also provided, the computer program product comprising a computer program stored in a computer readable storage medium. A processor of a computer device reads the computer program from the computer readable storage medium, and the processor executes the computer program so that the computer device performs the above-described graphical programming method.
It should be noted that, when the embodiments of the present application are applied, the relevant data collection process should strictly comply with the requirements of relevant national laws and regulations and obtain the informed consent or individual consent of the personal information subject, and subsequent data use and processing should be carried out only within the scope authorized by laws, regulations, and the personal information subject.
It should be understood that references herein to "a plurality" are to two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that A and B exist together, or that B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it. In addition, the step numbers described herein merely illustrate one possible execution order of the steps; in some other embodiments, the steps may be executed out of numerical order, for example two differently numbered steps may be executed simultaneously or in an order opposite to that shown, which is not limited herein.
The foregoing description of the exemplary embodiments of the application is not intended to limit the application to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.

Claims (18)

1. A method of graphical programming, the method comprising:
Displaying a code editing interface of a graphical programming tool, wherein the code editing interface comprises a plurality of preset graphical codes, and the graphical codes are used for processing one or more objects;
Displaying an edited first graphical code in response to an editing operation for the first graphical code in the plurality of preset graphical codes, wherein the first graphical code is used for adding at least one object into the same formation, and the edited first graphical code is used for adding at least one object into the first formation;
Displaying an edited second graphical code in response to an editing operation for the second graphical code in the plurality of preset graphical codes, wherein the second graphical code is used for uniformly controlling at least one object in the same formation, and the edited second graphical code is used for uniformly controlling at least one object in the first formation;
And running a graphical programming result to control at least one object in the first formation, wherein the graphical programming result comprises the edited first graphical code and the edited second graphical code.
2. The method of claim 1, wherein the first graphical code includes an object configuration area for configuring at least one object joining the same formation;
wherein the displaying the edited first graphical code in response to the editing operation for the first graphical code in the plurality of preset graphical codes comprises:
In response to a triggering operation for the object configuration area in the first graphical code, displaying an object selection interface in which a plurality of elements are displayed, each element corresponding to an object;
And in response to a selection operation for at least one first element in the plurality of elements, displaying the edited first graphical code, wherein an object corresponding to the at least one first element joins the first formation.
3. The method of claim 2, wherein the selection operation comprises a click operation for each of the first elements.
4. The method of claim 2, wherein the selecting operation is a framing operation starting at a first location and ending at a second location, and wherein elements included in a framing region constructed by the first location and the second location are the first elements.
5. The method according to claim 2, wherein:
The first element in the selected state is displayed differently from the other elements in the unselected state; or alternatively
The object corresponding to the first element in the selected state and the object corresponding to the other elements in the unselected state are displayed in the virtual scene in a distinguishing mode; or alternatively
The mark of the object corresponding to the first element in the selected state in the map navigation interface is displayed in a distinguishing mode with the mark of the object corresponding to the other element in the unselected state in the map navigation interface, wherein the map navigation interface displays a map of the virtual scene and the mark of each created object in the virtual scene in the map; or alternatively
The identification of the object corresponding to the first element in the selected state in the function panel interface is displayed in a distinguishing mode from the identification of the object corresponding to the other element in the unselected state in the function panel interface, wherein the function panel interface is used for displaying configuration information corresponding to each created object in the virtual scene, and the configuration information comprises the identification of the object.
6. The method according to claim 2, wherein the method further comprises:
In response to a triggering operation for the object configuration area in the first graphical code, determining the states respectively corresponding to the plurality of elements according to the correspondence between elements and objects, wherein when an element corresponds to an object that has been created in the virtual scene, the element is in an unselected state, and when an element does not correspond to any object created in the virtual scene, the element is in an unestablished state;
for an mth element of the plurality of elements, not changing the state of the mth element if the mth element is selected and the state of the mth element is an unestablished state; changing the state of the mth element to be a selected state under the condition that the mth element is selected and the state of the mth element is an unselected state, wherein m is a positive integer;
And changing the state of the mth element into an unselected state under the condition that the mth element is deselected and the state of the mth element is the selected state.
7. The method of claim 1, wherein the first graphical code includes an object configuration area for configuring at least one object joining the same formation;
wherein the displaying the edited first graphical code in response to the editing operation for the first graphical code in the plurality of preset graphical codes comprises:
Responding to triggering operation of the object configuration area in the first graphical code, displaying a virtual scene interface in an editing state, wherein a virtual scene and created objects in the virtual scene are displayed in the virtual scene interface, and the positions of different objects in the virtual scene are different;
And in response to a selection operation for at least one first object in the created objects, displaying the edited first graphical code, wherein the at least one first object joins the first formation.
8. The method according to claim 2 or 7, wherein the first graphical code further comprises a formation configuration area, the formation configuration area being used to configure the formation that the at least one object joins; the method further comprises:
In response to a trigger operation for the formation configuration area in the first graphical code, displaying a formation selection interface in which a plurality of formation numbers are displayed, each formation number corresponding to one formation;
And responding to the selection operation of a first formation number in the plurality of formation numbers, displaying the edited first graphical code, wherein the formation corresponding to the first formation number is the first formation.
9. The method according to claim 1, wherein the method further comprises:
In response to a triggering operation for an object creation control in a function panel interface, displaying configuration information corresponding to a newly added object in the function panel interface, wherein the function panel interface is used for displaying configuration information corresponding to each created object in a virtual scene;
Responding to the modification operation of the configuration information corresponding to the newly added object, and displaying the modified configuration information;
and displaying the newly added object created based on the modified configuration information in the virtual scene.
10. The method of claim 9, wherein the function panel interface includes a search input field therein; the method further comprises the steps of:
displaying search information for the created object input at the search input field in response to an input operation for the search input field;
And displaying the configuration information corresponding to the object pointed by the search information.
11. The method of claim 9, wherein the function panel interface includes tabs corresponding to each created object; the method further comprises the steps of:
Responding to the selection operation of a target tab in tabs corresponding to each created object, and displaying configuration information corresponding to the object pointed by the target tab; or alternatively
In response to a deletion operation for the target tab, deleting the object pointed by the target tab from the virtual scene; or alternatively
And changing the formation added by the object pointed by the target tab in response to the formation setting operation for the target tab.
12. The method of claim 1, wherein the running a graphical programming result to control at least one object in the first formation comprises:
and running the graphical programming result, and displaying a control process of at least one object in the first formation in a virtual scene.
13. The method according to claim 12, wherein the method further comprises:
displaying at least one of a scene control and a code editing control; the scene control is used for switching and displaying a virtual scene interface, the virtual scene interface comprises a virtual scene and an object created in the virtual scene, and the code editing control is used for switching and displaying the code editing interface;
wherein the running the graphical programming result and displaying, in a virtual scene, a control process of at least one object in the first formation comprises:
Running the graphical programming result, responding to the triggering operation of the scene control, displaying the virtual scene interface, and displaying the control process of at least one object in the first formation in the virtual scene interface;
The method further comprises the steps of:
and responding to the triggering operation of the code editing control, redisplaying the code editing interface, wherein the code editing interface comprises the graphical programming result.
14. The method of claim 13, wherein the method further comprises:
Responding to formation setting operation for a target object in the created object in the virtual scene interface, and displaying a formation selection interface, wherein the formation selection interface comprises a plurality of formation numbers, and each formation number corresponds to one formation;
And responding to the selection operation of a second formation number in the plurality of formation numbers, and determining that the target object joins in the formation corresponding to the second formation number.
15. A graphical programming device, the device comprising:
The interface display module is used for displaying a code editing interface of the graphical programming tool, wherein the code editing interface comprises a plurality of preset graphical codes, and the graphical codes are used for processing one or more objects;
The graphical code display module is used for responding to the editing operation of a first graphical code in the plurality of preset graphical codes, displaying the edited first graphical code, wherein the first graphical code is used for adding at least one object into the same formation, and the edited first graphical code is used for adding at least one object into the first formation;
The graphical code display module is further configured to display an edited second graphical code in response to an editing operation for the second graphical code in the plurality of preset graphical codes, where the second graphical code is used for uniformly controlling at least one object in the same formation, and the edited second graphical code is used for uniformly controlling at least one object in the first formation;
The operation module is used for operating a graphical programming result to control at least one object in the first formation, and the graphical programming result comprises the edited first graphical code and the edited second graphical code.
16. A computer device comprising a processor and a memory, the memory having stored therein a computer program that is loaded and executed by the processor to implement the graphical programming method of any of claims 1-14.
17. A computer readable storage medium having a computer program stored therein, the computer program being loaded and executed by a processor to implement the graphical programming method of any of claims 1-14.
18. A computer program product, characterized in that it comprises a computer program stored in a computer readable storage medium, from which a processor reads and executes the computer program to implement the graphical programming method of any of claims 1 to 14.
CN202410341037.4A 2024-03-22 2024-03-22 Graphical programming method, device, equipment and storage medium Pending CN118132059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410341037.4A CN118132059A (en) 2024-03-22 2024-03-22 Graphical programming method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118132059A true CN118132059A (en) 2024-06-04

Family

ID=91240306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410341037.4A Pending CN118132059A (en) 2024-03-22 2024-03-22 Graphical programming method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118132059A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190104012A (en) * 2019-08-16 2019-09-05 엘지전자 주식회사 Method for controlling vehicle in autonomous driving system and apparatus thereof
CN110354494A (en) * 2019-07-23 2019-10-22 网易(杭州)网络有限公司 The control method of object, device, computer equipment and storage medium in game
CN110825121A (en) * 2018-08-08 2020-02-21 纬创资通股份有限公司 Control device and unmanned aerial vehicle control method
CN113535872A (en) * 2021-07-12 2021-10-22 中国人民解放军战略支援部队信息工程大学 Multi-granularity formation motion visualization method based on object
CN117046100A (en) * 2023-07-07 2023-11-14 网易(杭州)网络有限公司 Formation control method and device in game and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination