WO2023142614A1 - Game object editing method and apparatus, and electronic device - Google Patents


Info

Publication number
WO2023142614A1
Authority
WO
WIPO (PCT)
Prior art keywords
sequence frame
target sequence
control
information
offset
Prior art date
Application number
PCT/CN2022/132048
Other languages
English (en)
French (fr)
Inventor
蚌绍诗
黄剑武
Original Assignee
网易(杭州)网络有限公司
Priority date
Filing date
Publication date
Application filed by 网易(杭州)网络有限公司 (NetEase (Hangzhou) Network Co., Ltd.)
Publication of WO2023142614A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/822 Strategy games; Role-playing games
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F 2300/807 Role playing or strategy games

Definitions

  • the present disclosure relates to the technical field of game model development, in particular to a game object editing method, device and electronic equipment.
  • "Three-render-two" (3D-to-2D rendering) refers to rendering a 3D animation into a 2D animation.
  • The development of such 3D-to-2D game models usually requires rendering the animation of the 3D model into a series of sequence frames, attaching the character model to the 3D model through a predetermined mounting point, and completing the dynamic effect of the 3D model by replacing the sequence frame pictures.
  • the purpose of the present disclosure is to provide a game object editing method, device and electronic equipment, so as to simplify the development process of the game object, reduce labor cost and time cost, and improve the animation effect of the game object.
  • an embodiment of the present disclosure provides a method for editing a game object.
  • A graphical user interface is provided through a terminal device, and the graphical user interface includes an object offset control. The method includes: in response to a loading operation for a target resource, displaying the target sequence frame corresponding to the target resource in the graphical user interface, where the target sequence frame includes a virtual object and an associated object of the virtual object, and the target resource also includes mounting information of the virtual object and the associated object; in response to a first operation acting on the object offset control, controlling the movement of a specified object in the target sequence frame, where the specified object includes the virtual object or the associated object; and updating the mounting information of the specified object according to the position of the specified object after the movement.
  • The object offset control includes multiple object offset sub-controls, each object offset sub-control corresponds to an offset direction, and different object offset sub-controls correspond to different offset directions. The step of controlling the movement of the specified object in the target sequence frame includes: in response to a trigger operation on a first object offset sub-control among the multiple object offset sub-controls, controlling the specified object to move in the first offset direction corresponding to the first object offset sub-control.
  • The graphical user interface also includes an object saving control. The step of updating the mounting information of the specified object according to the position of the specified object after the movement includes: in response to a trigger operation on the object saving control, obtaining first mounting information corresponding to the position of the specified object after the movement; determining the first mounting information as the mounting information of the specified object in the target sequence frame and as the mounting information of the specified object in the sequence frames other than the target sequence frame in the target resource; and saving the first mounting information to the target resource.
  • the step of obtaining the first mount information corresponding to the position of the designated object after the movement includes: obtaining the first position information corresponding to the position of the designated object after the movement; converting the first position information into pixel position information corresponding to the designated object , get the first mount information.
  • The graphical user interface also includes a target sequence frame offset control. The method further includes: controlling the movement of the target sequence frame in response to a second operation acting on the target sequence frame offset control; and updating the position information of the target sequence frame according to the position of the target sequence frame after the movement.
  • the target sequence frame offset control includes multiple target sequence frame offset sub-controls, each target sequence frame offset sub-control corresponds to an offset direction, and different target sequence frame offset sub-controls correspond to different offset directions;
  • The step of controlling the movement of the target sequence frame includes: in response to a trigger operation on a first target sequence frame offset sub-control among the multiple target sequence frame offset sub-controls, controlling the target sequence frame to move in the first offset direction corresponding to the first target sequence frame offset sub-control, and displaying the moving action picture of the target sequence frame.
  • The graphical user interface also includes a target sequence frame saving control. The step of updating the position information of the target sequence frame according to the position of the target sequence frame after the movement includes: in response to a trigger operation on the target sequence frame saving control, obtaining second position information of the current target sequence frame, where the second position information is used to determine the position information of the current target sequence frame and the position information of all sequence frames corresponding to the target resource; and updating the second position information to the target resource.
  • The graphical user interface also includes a target sequence frame editing control. The method further includes: in response to a trigger operation on the target sequence frame editing control, displaying a depth map of the target sequence frame, where the background picture of the depth map is the mask map of the target sequence frame; and in response to a color editing operation on the depth map, determining the occlusion relationship information of the virtual object and the associated object.
  • the mask map of the target sequence frame is determined through the following steps: according to the first attachment information of the specified object in the target sequence frame, the mask map of the virtual object is determined; the mask map includes a first color area and a second color area ; Wherein, the associated object of the virtual object is located in the middle area between the first color area and the second color area.
  • The step of determining the occlusion relationship information of the virtual object and the associated object includes: in response to a first color filling operation for a specified area in the depth map, determining that the occlusion relationship information of the virtual object and the associated object in the specified area is that the virtual object occludes the associated object; and in response to a second color filling operation for the specified area in the depth map, determining that the occlusion relationship information of the virtual object and the associated object in the specified area is that the associated object occludes the virtual object.
  • The graphical user interface also includes an object saving control. The method also includes: in response to a trigger operation on the object saving control, obtaining the spatial position corresponding to the mounting information of the specified object; converting the spatial position corresponding to the mounting information into a material to generate a target material, where the middle point of the target material is the spatial position corresponding to the mounting information; and assigning the generated target material to the virtual object, rendering the virtual object, and saving the rendered virtual object.
  • The graphical user interface also includes a resource loading control. The step of displaying the target sequence frame corresponding to the target resource in the graphical user interface in response to the loading operation for the target resource includes: obtaining resource path information of the target resource; and in response to a trigger operation on the resource loading control, displaying the target sequence frame corresponding to the target resource in the GUI.
  • The graphical user interface also includes a resource path input box and a resource path selection control. The step of obtaining the resource path information of the target resource includes: in response to an input operation of the resource path information of the target resource in the path input box, obtaining the resource path information of the target resource; or, in response to a trigger operation on the resource path selection control, displaying selectable resource path information, and in response to a selection operation on the resource path information of the target resource in the selectable resource path information, obtaining the resource path information of the target resource.
  • the graphical user interface further includes an animation playback control; the method further includes: in response to a trigger operation of the animation playback control, playing the action picture corresponding to the target sequence frame.
  • The graphical user interface further includes multiple background image display controls. The method further includes: in response to a trigger operation on a first background image display control among the multiple background image display controls, displaying the first background image corresponding to the first background image display control in the background area of the target sequence frame.
  • the graphical user interface further includes a text display control; the method further includes: displaying preset text at a designated position of the graphical user interface in response to a trigger operation of the text display control.
  • the graphical user interface also includes a copy offset control; the method further includes: in response to a trigger operation of the copy offset control, obtaining the attachment information of the specified object in the current target sequence frame; in response to the switching operation for the current target sequence frame, Save the mounting information of the specified object in the current target sequence frame to the sequence frame corresponding to the switching operation.
  • the embodiment of the present disclosure provides a game object editing device, which provides a graphical user interface through a terminal device, and the graphical user interface includes an object offset control;
  • The device includes: a display module, used to display, in response to a loading operation for the target resource, the target sequence frame corresponding to the target resource in the graphical user interface, where the target sequence frame includes a virtual object and an associated object of the virtual object, and the target resource also includes the mounting information of the virtual object and the associated object;
  • the control module used to control the movement of the specified object in the target sequence frame in response to the first operation acting on the object offset control;
  • the specified object includes a virtual object or an associated object;
  • the update module, used to update the mounting information of the specified object according to the position of the specified object after the movement.
  • An embodiment of the present disclosure provides an electronic device, including a processor and a memory, where the memory stores computer-executable instructions that can be executed by the processor, and the processor executes the computer-executable instructions to implement the game object editing method according to any one of the first aspect.
  • an embodiment of the present disclosure provides a computer-readable storage medium.
  • the computer-readable storage medium stores computer-executable instructions.
  • When invoked and executed by a processor, the computer-executable instructions cause the processor to implement the game object editing method according to any one of the first aspect.
  • the present disclosure provides a game object editing method, device, and electronic equipment.
  • In response to a loading operation for a target resource, a target sequence frame corresponding to the target resource is displayed, and the target sequence frame includes a virtual object and an associated object of the virtual object; the target resource also includes the mounting information of the virtual object and the associated object; in response to a first operation acting on the object offset control, the movement of a specified object in the target sequence frame is controlled, where the specified object includes the virtual object or the associated object; and the mounting information of the specified object is updated according to the position of the specified object after the movement.
  • This method can directly edit the target sequence frame corresponding to the target resource, control the movement of the specified object through the object offset control, display the moved target sequence frame in real time, and update the mounting information of the specified object by controlling its movement. The operation is simple and clear, which simplifies the development process of game objects, reduces labor and time costs, and improves the animation effects of game objects.
  • FIG. 1 is a flowchart of a method for editing a game object provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a graphical user interface provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of another graphical user interface provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of another graphical user interface provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of another graphical user interface provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of another graphical user interface provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a game object editing device provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • "Three-render-two" (3D-to-2D rendering) refers to rendering a 3D animation into a 2D animation.
  • The development of such 3D-to-2D game mounts requires rendering the animation of the 3D mount model into a series of 2D sequence frames. While rendering the sequence frame resources, a black-and-white depth map is also rendered to distinguish the layers.
  • That is, the occlusion relationship between the mount and the character is realized by splitting the mount into two blocks: one block is placed on the layer above the character, indicating that the mount occludes the character, and the other block is placed on the layer below the character, indicating that the character occludes the mount. The character is mounted on the mount through a predetermined mounting point, and the dynamic effect of the mount is completed by replacing the sequence frame pictures.
  • DCC (Digital Content Creation) refers to software related to digital content creation, such as Maya (a 3D modeling and animation software) and 3DS Max (3D Studio Max, a 3D modeling, rendering, and animation production software).
  • Currently, mounting point debugging is carried out by attaching a Mesh, specifically by dividing the surface model into UVs (a concept in polygon modeling that mainly records the correspondence between the model's vertices in 3D space and 2D map space), mapping out the sequence frame pictures, adjusting the position of the mounting point on the Mesh, and then completing the displacement of the mounting point for all sequence frame pictures.
  • embodiments of the present disclosure provide a game object editing method, device, and electronic device, and the technology can be applied to devices provided with an object editor.
  • A method for editing a game object disclosed in an embodiment of the present disclosure is first introduced in detail. A graphical user interface is provided through a terminal device, and the graphical user interface includes an object offset control; as shown in FIG. 1, the method includes the following steps:
  • Step S102: in response to a loading operation on the target resource, display the target sequence frame corresponding to the target resource in the GUI, where the target sequence frame includes the virtual object and the associated object of the virtual object, and the target resource also includes the mounting information of the virtual object and the associated object;
  • The above target resource usually refers to the resource output after art production is completed, including game objects and the sequence frame pictures of game objects. Usually the target resource includes a series of sequence frame pictures of the virtual object, and each sequence frame picture is mounted with the associated object corresponding to that sequence frame picture; therefore, the target sequence frame includes the virtual object and the associated object of the virtual object.
  • the aforementioned virtual objects may be game mounts, or other objects that have a mounting relationship with virtual characters, such as virtual clouds, virtual vehicles, and so on.
  • the aforementioned associated objects usually refer to virtual characters, which are usually preset characters in the game. In a game scenario, a virtual character can be mounted on a virtual mount to increase the moving speed, skill damage, etc. of the virtual character.
  • the above attachment information includes the attachment information of the virtual object, and also includes the attachment information of the associated object.
  • the mounting information of the virtual object includes information about the location of the virtual object's attachment point in each sequence frame;
  • the attachment information of the associated object includes information about the location of the attachment point of the associated object in each sequence frame.
  • the above-mentioned graphical user interface refers to the user interface of the object editor, in which visual editing operations can be performed on the above-mentioned target sequence frame.
  • the target sequence frame is usually randomly selected from multiple sequence frames corresponding to the target resource.
  • The actual loading process usually loads the virtual object, the associated object, and the mounting information of the virtual object and the associated object separately, then mounts the associated object on the virtual object based on that mounting information, and finally displays the result as the target sequence frame in the designated area.
  • the above-mentioned virtual object is a two-dimensional picture
  • the above-mentioned associated object is a three-dimensional model.
  • a target sequence frame is displayed in a designated area of the interface, including virtual objects and associated objects.
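  • For illustration only, the following is a minimal Python sketch of how a target resource and its mounting information might be represented and composed into a target sequence frame. The names (TargetResource, MountInfo, compose_target_frame) and the dictionary layout are assumptions made for this sketch and are not part of the original disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class MountInfo:
    # Pixel position of the mounting point in each sequence frame, keyed by frame index.
    points: Dict[int, Tuple[int, int]] = field(default_factory=dict)

@dataclass
class TargetResource:
    frames: List[str]            # paths to the rendered 2D sequence frame pictures
    virtual_mount: MountInfo     # mounting info of the virtual object (e.g. a mount)
    associated_mount: MountInfo  # mounting info of the associated object (e.g. a character)

def compose_target_frame(resource: TargetResource, frame_index: int) -> dict:
    """Attach the associated object to the virtual object for one frame and
    return what the editor would display in the designated area of the GUI."""
    return {
        "picture": resource.frames[frame_index],
        "virtual_point": resource.virtual_mount.points.get(frame_index, (0, 0)),
        "associated_point": resource.associated_mount.points.get(frame_index, (0, 0)),
    }
```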
  • Step S104 in response to the first operation acting on the object offset control, control the movement of the specified object in the target sequence frame;
  • the specified object includes a virtual object or an associated object;
  • The above object offset control is used to control the movement of objects in the target sequence frame.
  • In actual implementation, multiple offset controls with fixed moving directions can be preset; clicking the offset control for a given direction controls the specified object to move in that direction, and the moving distance can be preset according to actual needs.
  • the above-mentioned object offset control generally includes an object offset control for virtual objects, and also includes an object offset control for associated objects.
  • the offset control can be performed on different objects.
  • the object to be controlled can be pre-selected, and then the selected object can be offset controlled through the object offset control.
  • The above first operation can be performed by clicking, with the mouse, the offset controls for different directions within the object offset control.
  • For example, the developer can click the object offset control for the associated object and then click the control corresponding to the desired direction, such as the upward control, so that the associated object is controlled to move upward by a preset distance from its initial position; the moving process is displayed on the graphical user interface.
  • Similarly, the developer can click the object offset control for the virtual object and then click the control corresponding to the desired direction, such as the upward control, so that the virtual object is controlled to move upward from its initial position by a preset distance; the moving process is likewise displayed on the graphical user interface as the object moves.
  • The purpose of this embodiment is to adjust the mounting point position of the associated object in the target sequence frame by controlling the movement of the associated object, where the associated object in the target sequence frame is a three-dimensional model and the mounting point position of the associated object refers to the central point of contact or joint.
  • Similarly, the mounting point position of the virtual object in the target sequence frame can be adjusted. Since the virtual object in the target sequence frame is a two-dimensional picture, the mounting point of the virtual object can also be called the K point of the picture, which indicates the location on the picture where mounting takes place.
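  • As a hedged sketch of the offset sub-control logic described above, the snippet below assumes a fixed per-click step; the direction names, STEP value, and the move_specified_object helper are illustrative and not part of the disclosure.

```python
STEP = 1  # preset moving distance (pixels) per click; value chosen freely here

# Each object offset sub-control corresponds to one fixed offset direction.
DIRECTIONS = {
    "up": (0, -STEP),
    "down": (0, STEP),
    "left": (-STEP, 0),
    "right": (STEP, 0),
}

def move_specified_object(position, sub_control, mirror=False):
    """Return the new position after one trigger of an offset sub-control.

    When the "mirror orientation adjustment" box is selected, the horizontal
    component is flipped so the mirrored frame is adjusted consistently.
    """
    dx, dy = DIRECTIONS[sub_control]
    if mirror:
        dx = -dx
    x, y = position
    return (x + dx, y + dy)
```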
  • Step S106 updating the mount information of the designated object according to the position of the designated object after the movement.
  • the above-mentioned mounting information of the specified object includes the mounting point position of the associated object in each sequence frame in the target resource and the mounting point position of the virtual object in each sequence frame.
  • the moved position of the designated object is displayed on the graphical user interface.
  • According to the displayed position, the mounting information of the specified object corresponding to that position can be obtained, which may include the mounting point position of the associated object, the mounting point position of the virtual object, or both.
  • After the mounting information of the specified object is obtained, the initial mounting information in the target resource can be updated to the above mounting information of the specified object according to the developer's save operation.
  • the present disclosure provides a method for editing game objects.
  • In response to the loading operation for the target resource, the target sequence frame corresponding to the target resource is displayed, where the target sequence frame includes a virtual object and an associated object of the virtual object, and the target resource also includes the mounting information of the virtual object and the associated object; in response to the first operation acting on the object offset control, the movement of the specified object in the target sequence frame is controlled, where the specified object includes the virtual object or the associated object; and the mounting information of the specified object is updated according to the position of the specified object after the movement.
  • This method can directly edit the target sequence frame corresponding to the target resource, control the movement of the specified object through the object offset control, display the moved target sequence frame in real time, and update the mounting information of the specified object by controlling its movement. The operation is simple and clear, which simplifies the development process of game objects, reduces labor and time costs, and improves the animation effects of game objects.
  • the above-mentioned object offset control includes multiple object offset sub-controls, each object offset sub-control corresponds to an offset direction, and different object offset sub-controls correspond to different offset directions;
  • The above object offset control can usually include four object offset sub-controls, where the offset direction corresponding to the first object offset sub-control is upward, the offset direction corresponding to the second object offset sub-control is downward, the offset direction corresponding to the third object offset sub-control is leftward, and the offset direction corresponding to the fourth object offset sub-control is rightward.
  • The above object offset control includes two types: the object offset control in the solid-line frame is the object offset control of the associated object, including four sub-controls, and the object offset control in the dotted-line frame is the object offset control of the virtual object, also including four sub-controls.
  • the graphical user interface shown in Figure 3 also includes an optional box for "mirror orientation adjustment", if this box is selected, when controlling the movement of the specified object, the movement of the specified object in the mirror target sequence frame can be adjusted at the same time, If the developer wants to watch the moving position of the specified object in the mirror target sequence frame, he can click the "direction" control in the GUI to display the display effect of the specified object in the mirror target sequence frame after moving.
  • In this way, multiple sub-controls are set in the graphical user interface, and the developer can control the movement of the specified object by clicking a sub-control, with the moving picture displayed in real time until the expected mounting effect is achieved. The mounting point of the specified object can thus be debugged intuitively and the mounting effect perceived directly; the operation is simple, the modification time of the production staff is greatly shortened, and process optimization and cost reduction are realized.
  • It avoids the complicated operation of re-importing the resource into DCC software to modify the object mounting point information and re-exporting it, and avoids attaching a Mesh in order to perform batch operations on the rendered sequence frame pictures.
  • It also avoids having to change the mounting information in the original resource when the mounting point changes, re-bake the sequence frame pictures after that change, and output the rendering resources again, which takes a long time and has a long cycle.
  • The GUI includes an associated object saving control and a virtual object saving control, such as the save offset controls shown in FIG.
  • In actual implementation, after the developer clicks the associated object saving control (corresponding to the save offset control in the figure), the terminal device or the backend of the terminal device can, according to the position of the associated object after moving, obtain the first mounting information corresponding to that position, specifically the center point position information of the combination of the moved associated object and the virtual object; the obtained first mounting information is then determined as the mounting information of the associated object in the target sequence frame and as the mounting information of the associated object in the sequence frames other than the target sequence frame in the target resource; finally, the first mounting information is saved directly to the target resource to update and replace the initial mounting information of the associated object.
  • Similarly, after the developer clicks the virtual object saving control, the terminal device or the backend of the terminal device can, according to the position of the virtual object after moving, obtain the first mounting information corresponding to that position, specifically the center point position information of the combination of the moved virtual object and the associated object; the obtained first mounting information is then determined as the mounting information of the virtual object in the target sequence frame and as the mounting information of the virtual object in the sequence frames other than the target sequence frame in the target resource; finally, the first mounting information is saved directly to the target resource to update and replace the initial mounting information of the virtual object.
  • the graphical user interface also includes an optional box of "Save in all directions", select this optional box, and then click Save Picture to save all the pictures.
  • In this way, after adjusting the position of the specified object in the currently displayed target sequence frame and modifying its mounting information through the object saving control, the mounting information of the specified object in all sequence frames can be batch processed, saved, and updated to the target resource at the same time, which avoids spending a large amount of time modifying the mounting point position frame by frame after the object is moved; a minimal sketch of this batch save follows.
  • The above method is easy to operate, greatly shortens the modification time of the production staff, and realizes process optimization and cost reduction.
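  • A minimal Python sketch of the batch save described above, assuming mounting points are stored per frame in a simple dictionary; the function and argument names are illustrative only.

```python
from typing import Dict, Tuple

def save_mount_info_to_all_frames(mount_points: Dict[int, Tuple[int, int]],
                                  frame_count: int,
                                  first_mount_point: Tuple[int, int]) -> None:
    """Batch-update the mounting point of the specified object in all frames."""
    for frame_index in range(frame_count):
        # The same first mounting information replaces the initial mounting
        # information in the target sequence frame and in every other frame.
        mount_points[frame_index] = first_mount_point
```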
  • A possible implementation of the step of obtaining the first mounting information corresponding to the position of the specified object after the movement is as follows: obtain the first position information corresponding to the position of the specified object after the movement, and convert the first position information into the pixel position information corresponding to the specified object to obtain the first mounting information.
  • The realization of the mounting information of the virtual object and the associated object included in the target resource is the same as in 3D production. However, since the virtual object is a 2D object in the form of a picture and does not support bone output, the program can only read the position information of the mounting point. The mounting point information takes the world-space position of Sphere01, that is, the above first position information, and this first position information is then converted into the pixel position information corresponding to the specified object, that is, the pixel position in the picture of the specified object.
  • In this way, the first position information corresponding to the position of the specified object after the movement is obtained, and the first position information is then converted into the pixel position information corresponding to the specified object, thereby obtaining the first mounting information of the specified object.
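  • A hedged sketch of converting the first position information (a world-space position such as that of Sphere01) into pixel position information, assuming a simple orthographic view whose world-space extents and render resolution are known; all parameter values here are assumptions for illustration.

```python
from typing import Tuple

def world_to_pixel(world_x: float, world_y: float,
                   view_min: Tuple[float, float] = (-1.0, -1.0),
                   view_max: Tuple[float, float] = (1.0, 1.0),
                   image_size: Tuple[int, int] = (512, 512)) -> Tuple[int, int]:
    """Map a world-space point into pixel coordinates of the rendered picture.

    The y axis is flipped because image rows grow downward.
    """
    (x0, y0), (x1, y1) = view_min, view_max
    w, h = image_size
    px = int(round((world_x - x0) / (x1 - x0) * (w - 1)))
    py = int(round((1.0 - (world_y - y0) / (y1 - y0)) * (h - 1)))
    return px, py
```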
  • The mounting point animation has no link relationship and carries its own animation; it has slight up-and-down motion, smaller than the movement of the object itself, and no rotation.
  • The above GUI also includes a text display control. Specifically, in response to a trigger operation on the text display control, preset text is displayed at a designated position of the GUI.
  • The preset text is usually configured with a size, quantity, and format. As shown in FIG. 4, there are two lines of text, namely "Appellation Effect Preview" and "Player Name Six". For example, a player usually has a nickname (such as "cool player"), and the game character may also have a clan name displayed above the player's nickname. It should be noted that the position of the above preset text does not change; it always remains at its initial display position.
  • the preset text can be displayed at the designated position of the GUI through the text display control, which is more in line with the real animation display effect.
  • the above graphical user interface not only the movement of virtual objects and associated objects in the target sequence frame can be controlled, but also the movement of the target sequence frame can be controlled; the above graphical user interface also includes a target sequence frame offset control; the above method also includes: responding to the target sequence The second operation of the frame offset control controls the movement of the target sequence frame; according to the position of the target sequence frame after the movement, the position information of the target sequence frame is updated.
  • Since the above adjustment of the mounting information moves the virtual object or the associated object, it changes the position of the target sequence frame. Because the position of the above preset text does not change, after the specified object is moved the target sequence frame and the preset text may overlap each other.
  • the target sequence frame can be moved through the target sequence frame offset control, and the moved position of the target sequence frame can be displayed in real time, so as to maintain an appropriate position between the target sequence frame and the preset text.
  • The above target sequence frame offset control is used to control the movement of the target sequence frame. In actual implementation, multiple offset controls with fixed moving directions can be preset; clicking an offset control for a given direction controls the target sequence frame to move in that direction, and the moving distance can be preset according to actual needs.
  • The above second operation may be performed by clicking, with the mouse, the offset controls for different directions within the target sequence frame offset control.
  • the above position information of the target sequence frame usually includes the position of the target sequence frame, and may also include the relative position of the target sequence frame and the preset text. Specifically, after the movement of the target sequence frame is controlled, the position of the moved target sequence frame is displayed on the GUI. Then, according to the position of the moved target sequence frame, the position information of the position can be obtained, including the position information of the target sequence frame and the relative position information of the target sequence frame and the preset text. After the location information is obtained, the initial location information in the target resource can be updated to the location information of the above target sequence frame according to the developer's saving operation.
  • In this way, the target sequence frame corresponding to the target resource can be visually edited in the object editor: the movement of the target sequence frame is controlled through the target sequence frame offset control, the moved target sequence frame is displayed in real time, and the position information of the target sequence frame is then saved. The operation is simple and clear, which simplifies the development process of game objects, reduces labor and time costs, and improves the animation effect of game objects.
  • the above target sequence frame offset control includes multiple target sequence frame offset sub-controls, each target sequence frame offset sub-control corresponds to an offset direction, and different target sequence frame offset sub-controls correspond to different offset directions;
  • the target sequence frame offset control in the box includes four target sequence frame offset sub-controls, wherein the offset direction corresponding to the first target sequence frame offset sub-control is upward offset, the offset direction corresponding to the second target sequence frame offset sub-control is downward offset, the corresponding offset direction of the third target sequence frame offset sub-control is left offset, and the fourth target sequence frame The offset direction corresponding to the offset sub-control is offset to the right.
  • In this way, the developer can use the target sequence frame offset control to move the target sequence frame in a specified direction, with the moving picture displayed in real time, until the desired position is achieved; the position of the target sequence frame can thus be debugged intuitively, and the operation is simple and convenient.
  • The above graphical user interface also includes a target sequence frame saving control. A possible implementation of the step of updating the position information of the target sequence frame according to its position after the movement is: in response to a trigger operation on the target sequence frame saving control, obtaining second position information of the current target sequence frame, where the second position information is used to determine the position information of the current target sequence frame and the position information of all sequence frames corresponding to the target resource; and updating the second position information to the target resource.
  • The "Save Lua" control shown in FIG. 4 is the above target sequence frame saving control. Specifically, after the developer moves the target sequence frame to a satisfactory position, the target sequence frame saving control (corresponding to the Save Lua control in the figure) can be clicked, and the terminal device or its backend obtains, according to the moved position, the second position information of the target sequence frame, specifically the relative position information of the moved target sequence frame and the preset text; the position information of the target sequence frame and the position information of the other sequence frames in the target resource are then determined as the above second position information; finally, the second position information is saved directly to the target resource to update and replace the initial position information of the target sequence frame.
  • the position information of all sequence frames can be directly processed in batches and saved and updated to the target resource through the target sequence frame saving control after modifying the position of the target sequence frame.
  • the operation is simple and convenient.
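  • As a minimal sketch of this batch position save, the snippet below records the frame position expressed relative to the fixed preset text for the current frame and every other frame; the data layout is an assumption for illustration, not the Lua format actually used.

```python
from typing import Dict, Tuple

def save_frame_offset_for_all_frames(frame_positions: Dict[int, Tuple[int, int]],
                                     text_anchor: Tuple[int, int],
                                     moved_position: Tuple[int, int]) -> None:
    """Store the moved frame position, relative to the preset text anchor,
    for all sequence frames of the target resource."""
    offset = (moved_position[0] - text_anchor[0], moved_position[1] - text_anchor[1])
    for frame_index in frame_positions:
        frame_positions[frame_index] = offset
```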
  • In contrast, the conventional adjustment method requires PS (an image processing software) for batch processing, and the operation process is cumbersome.
  • this embodiment provides a method for editing a game object.
  • This embodiment is implemented on the basis of the above embodiments, and the above graphical user interface also includes a target sequence frame editing control. The method specifically includes: in response to a trigger operation on the target sequence frame editing control, displaying the depth map of the target sequence frame, where the background picture of the depth map is the mask map of the target sequence frame; and in response to a color editing operation on the depth map, determining the occlusion relationship information of the virtual object and the associated object.
  • the graphical user interface above also includes a target sequence frame edit control, namely the "edit" control in FIG. 5 , and the above depth map is mainly used to represent the occlusion relationship between the virtual object and the associated object in the target sequence frame.
  • the depth map of the target sequence frame will be displayed in the GUI.
  • setting the mask image as the background of the depth image can enable developers to intuitively perform color editing operations according to the occlusion relationship of objects in the background.
  • Otherwise, if the background were left pure white, a pure black depth image would be displayed, which would not help developers observe whether there is a problem with the depth image.
  • In the editing mode, the mouse cursor becomes a brush, and the size of the brush can be modified through the "thickness" input control in FIG. 5; the finer the brush, the finer the editing range.
  • The developer can then select the color of the brush through the "white" and "black" selection boxes in FIG. 5, fill the area that needs to be modified with white or black as needed, and click the "load" control after filling, so that the occlusion relationship between the virtual object and the associated object is determined according to the current colors.
  • In this way, the depth map of the target sequence frame corresponding to the target resource can be visually edited in the object editor: the developer clicks the edit control, adjusts the brush size, cleans up the depth map directly, and the edited mask map of the target sequence frame is displayed in real time; the edited occlusion relationship information is then saved. The operation is simple and clear, the development process of the game object is simplified, labor and time costs are reduced, and the animation effect of the game object is improved.
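  • A hedged sketch of the brush-based depth map cleanup described above, assuming the depth map is a single-channel 8-bit array (NumPy is used only for illustration); the fill values 0 and 255 stand for the black and white brush colors.

```python
import numpy as np

def paint_depth_map(depth: np.ndarray, center, thickness: int, color: str) -> None:
    """Fill a circular brush stroke on the depth map in place.

    color == "black" (0) marks areas where the virtual object occludes the
    associated object; "white" (255) marks the opposite, as described above.
    """
    value = 0 if color == "black" else 255
    cy, cx = center
    ys, xs = np.ogrid[:depth.shape[0], :depth.shape[1]]
    stroke = (ys - cy) ** 2 + (xs - cx) ** 2 <= (thickness / 2) ** 2
    depth[stroke] = value
```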
  • the following describes how to determine the mask map of the target sequence frame, specifically, according to the first attachment information of the specified object in the target sequence frame, determine the mask map of the virtual object; the mask map includes the first color area and the second color area ; Wherein, the associated object of the virtual object is located in the middle area between the first color area and the second color area.
  • When the edit control of the target sequence frame is clicked, the front and rear layers of the object are output automatically, that is, the first color area and the second color area; usually the first color area is in the front layer and the second color area is in the back layer, and the associated object is set in the middle area between them. It can be understood that the first color area of the virtual object is in front and always occludes part of the associated object, while the second color area of the virtual object is behind and is always partially occluded by the associated object. This method reduces the amount of calculation, the output is realistic and convenient, and no manual splitting or synthesis is required.
  • the virtual object is divided into front and back, and the resources of the front and back layers of the virtual object are generated.
  • the program and the game are combined to achieve the effect of a normal character sitting on a mount. There is no need to calculate at runtime, and the efficiency is increased by 30% compared with the original one.
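  • For illustration, a rough sketch of producing such a front/back mask for one frame from the mounting point: pixels of the virtual object above the mounting point row go to the first (front) color area and the rest to the second (back) color area, with the associated object composited in between. The split-by-row rule and grey middle value are assumptions used only to keep the example simple.

```python
import numpy as np

def build_mask(alpha: np.ndarray, mount_row: int) -> np.ndarray:
    """Build a mask map for the virtual object from its alpha channel.

    Rows above the mounting point row become the first color area (front
    layer, value 0); rows below become the second color area (back layer,
    value 255). The associated object is later drawn in between.
    """
    mask = np.full(alpha.shape, 127, dtype=np.uint8)   # middle area by default
    covered = alpha > 0
    mask[:mount_row][covered[:mount_row]] = 0          # front layer of the virtual object
    mask[mount_row:][covered[mount_row:]] = 255        # back layer of the virtual object
    return mask
```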
  • the above-mentioned first color may be black, and the second color may be white.
  • When the brush is black, after the specified area in the depth map is filled with black, the virtual object in that area occludes the associated object; when the brush is white, after the specified area is filled with white, the associated object in that area occludes the virtual object. The front-back occlusion relationship between the virtual object and the associated object is thus controlled mainly by the color of the brush, similar in principle to a depth map, by judging the relative position of each object. Specifically, after the filling operation is completed, the mask map and the depth map of the target sequence frame are displayed, and the occlusion relationship between the virtual object and the associated object is obtained from the displayed maps; when the editing effect is satisfactory, the occlusion relationship is saved and updated.
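  • A small sketch of how the fill color can be read back as per-pixel occlusion relationship information when compositing the character with the mount, under the same convention (black: mount in front, white: character in front); the function and argument names are illustrative.

```python
import numpy as np

def composite(mount_frame: np.ndarray, character: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Composite the associated object (character) with the virtual object (mount).

    Where the edited depth map is white (255) the character occludes the
    mount; where it is black (0) the mount stays on top.
    """
    out = mount_frame.copy()
    character_on_top = depth == 255
    out[character_on_top] = character[character_on_top]
    return out
```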
  • the depth map can be visualized in the editor, and the function can be directly optimized into the editor, and the depth map can be cleaned directly by clicking the edit button to adjust the brush size. It avoids processing the depth map through PS software, and the operation is convenient and simple.
  • the above-mentioned graphical user interface also includes other operation controls, such as "copy, paste, partial paste, zoom in, restore, undo, restore” and other controls in Figure 5, wherein, clicking the copy button can copy the current mask map information, Then click the paste button to copy to the next frame.
  • Partial paste is an overlay mode that will not completely change the next frame of the image, and will accumulate the existing changes and the original mask image information together.
  • functions such as undo and restore operations have improved the efficiency of image processing operations.
  • The GUI above also includes an object saving control, namely the save control below the load control as shown in FIG. 5.
  • The above method also includes: in response to a trigger operation on the object saving control, obtaining the spatial position corresponding to the mounting information of the specified object; converting the spatial position corresponding to the mounting information into a material to generate a target material, where the middle point of the target material is the spatial position corresponding to the mounting information; and assigning the generated target material to the virtual object, rendering the virtual object, and saving the rendered virtual object.
  • In actual implementation, a script language such as MAXScript can be used to convert the spatial position corresponding to the mounting information into a target material, such as a Gradient material.
  • The target material is then assigned to the virtual object, the virtual object is rendered, and the rendered virtual object is obtained and saved. Attention should be paid to the spatial position and the axial direction, ensuring that the color gradient in the material runs from the beginning to the end of the virtual object.
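  • The MAXScript itself is not reproduced in the disclosure; as a rough analogue, the sketch below builds a vertical gradient whose 50% grey value falls on the pixel row corresponding to the mounting information, which is the effect of centring a Gradient material on that spatial position when the object is rendered. Everything here is an assumption for illustration.

```python
import numpy as np

def gradient_for_mount(height: int, width: int, mount_row: int) -> np.ndarray:
    """Return an 8-bit vertical gradient whose middle grey value sits on mount_row.

    The gradient runs from the beginning (top) to the end (bottom) of the
    virtual object, matching the axial-direction note above.
    """
    rows = np.arange(height, dtype=np.float32)
    # Piecewise-linear ramp: 0 at the top, about 127 at the mounting row, 255 at the bottom.
    top = np.clip(rows / max(mount_row, 1), 0.0, 1.0) * 127.0
    bottom = np.clip((rows - mount_row) / max(height - 1 - mount_row, 1), 0.0, 1.0) * 128.0
    column = (top + bottom).astype(np.uint8)
    return np.repeat(column[:, None], width, axis=1)
```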
  • The above graphical user interface also includes a resource loading control. A possible implementation of the step of displaying the target sequence frame corresponding to the target resource in the graphical user interface in response to the loading operation for the target resource is: obtaining the resource path information of the target resource; and in response to a trigger operation on the resource loading control, displaying the target sequence frame corresponding to the target resource in the GUI.
  • The above graphical user interface also includes a resource path input box and a resource path selection control. A possible implementation of the step of obtaining the resource path information of the target resource is: in response to an input operation of the resource path information of the target resource in the path input box, obtaining the resource path information of the target resource.
  • As for the path input box and the resource path selection control, in actual implementation the developer needs to select the "specified directory" check box, obtain the resource path information of the target resource from other files, and copy and paste it into the path input box; alternatively, the developer can click the resource path selection control to display multiple resource files, copy the resource path information of the target resource from there, and paste it into the path input box to obtain the resource path information of the target resource.
  • Alternatively, in response to a trigger operation on the resource path selection control, selectable resource path information is displayed, and in response to a selection operation on the resource path information of the target resource in the selectable resource path information, the resource path information of the target resource is obtained.
  • developers can click the resource path selection control to display multiple resource files, and then click to select the target resource, the path input box will display the resource path information of the target resource, and then obtain the resource path information of the target resource.
  • the above-mentioned graphical user interface also includes a resource path clearing control, and clicking on this control can delete the previously selected path. It should be noted that when the object editor is started, the editor will obtain a preset resource path by default.
  • developers can directly load resources by selecting or entering paths in the object editor, avoiding problems such as incomplete copying, read-only and non-editable, etc. in manual operations.
  • This method avoids the complicated, multi-step, and error-prone process of reading resource paths manually, which greatly facilitates the workflow of artists.
  • the above GUI also includes an animation playback control; specifically: in response to the trigger operation of the animation playback control, play the action picture corresponding to the target sequence frame.
  • the playback control shown in Figure 5 when the developer clicks to play, the action picture corresponding to the current target sequence frame can be played on the GUI.
  • the target resource includes various actions of virtual objects and associated objects.
  • Action switching controls, such as the previous-action and next-action switching controls in FIG. 5, change the action picture corresponding to the target resource being played; for example, clicking the next-action control switches playback to the next action corresponding to the target resource.
  • the action to play is a preset action in the target resource, specifically playing a series of sequence frames to complete animation playback.
  • the final animation effect can be directly previewed in the object editor, so that developers can watch the edited animation effect more intuitively, and the production efficiency is improved.
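  • A minimal sketch of the playback loop implied above: the preset action is simply a series of sequence frames shown at a fixed frame rate; the callback and frame-rate value are illustrative assumptions.

```python
import time
from typing import Callable, List

def play_action(frames: List[str], show: Callable[[str], None], fps: float = 15.0) -> None:
    """Play the action picture by showing its sequence frames one after another."""
    interval = 1.0 / fps
    for frame in frames:
        show(frame)          # e.g. redraw the resource display panel with this frame
        time.sleep(interval)
```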
  • The above graphical user interface also includes multiple background image display controls; specifically, in response to a trigger operation on a first background image display control, the first background image corresponding to the first background image display control is displayed in the background area of the target sequence frame.
  • each background picture display control corresponds to a game scene picture.
  • The above graphical user interface also includes a copy offset control; specifically, in response to a trigger operation on the copy offset control, the mounting information of the specified object in the current target sequence frame is obtained, and in response to a switching operation for the current target sequence frame, the mounting information of the specified object in the current target sequence frame is saved to the sequence frame corresponding to the switching operation.
  • If the developer wants to adjust the mounting point information of the object in a single frame, the copy offset control can be clicked after controlling the movement in the current target sequence frame; the mounting information of the specified object in the current target sequence frame is then obtained, and after clicking next frame, that mounting information can be saved to the next frame's sequence frame. That is, in the object editor, developers can not only batch process the mounting information of objects across sequence frames, but also process the mounting information of an object in a single sequence frame individually; this editing approach is more flexible.
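  • A small sketch of the copy offset behaviour: the mounting information of the specified object in the current target sequence frame is remembered and written into whichever frame the developer switches to next. The class and method names are illustrative.

```python
from typing import Dict, Optional, Tuple

class CopyOffsetBuffer:
    """Holds mounting information copied from the current target sequence frame."""

    def __init__(self) -> None:
        self._copied: Optional[Tuple[int, int]] = None

    def copy(self, mount_points: Dict[int, Tuple[int, int]], current_frame: int) -> None:
        # Triggered by the copy offset control.
        self._copied = mount_points.get(current_frame)

    def paste_on_switch(self, mount_points: Dict[int, Tuple[int, int]], new_frame: int) -> None:
        # Triggered by switching to another sequence frame (e.g. the next frame).
        if self._copied is not None:
            mount_points[new_frame] = self._copied
```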
  • the graphical user interface also includes a switching control for the object ID in the target sequence frame.
  • the user can enter the ID of the mount to be replaced in the corresponding input box, and click the load control to display the replaced mount.
  • If the user wants to load a widget for the character, the ID of the widget to be loaded can be entered, and clicking the load control displays the loaded widget.
  • the loading and replacement operations of other object IDs are also the same process.
  • The first area displays the number, action, frame number, etc. of the current virtual object.
  • The second area is the mask paste movement panel; when copying and pasting the mask, the offset of the mask to be pasted can be adjusted.
  • The third area is the protagonist parameter panel, showing the number of the associated object to be loaded, the number of the virtual object, etc.
  • The fourth area is the offset control panel, for adjusting the mounting point, the offset of the specified object, etc.
  • The fifth area is the mask paste operation panel, for copying and pasting the mask image.
  • The sixth area is the resource display panel, for previewing the effect of the currently specified object.
  • The seventh area is the other information panel.
  • the refresh control is used to refresh the current modification, and the edit control is used to switch to the mask image editing mode.
  • Since the specified object is a sequence frame resource, it is edited frame by frame; the previous frame/next frame controls are used to switch sequence frames as needed, and the direction control is used to switch the orientation of the resource.
  • the action control is used to switch the mount (virtual object) action, the load control is used to reload resources, and the save control is used to save the edited mask image.
  • an embodiment of the present disclosure provides a game object editing device, which provides a graphical user interface through a terminal device, and the graphical user interface includes an object offset control; as shown in FIG. 7 , the device includes:
  • the display module 71 is used to respond to the loading operation for the target resource and display the target sequence frame corresponding to the target resource in the graphical user interface; wherein the target sequence frame includes a virtual object and an associated object of the virtual object, and the target resource also includes attachment information of the virtual object and the associated object;
  • the control module 72 is configured to control the movement of the specified object in the target sequence frame in response to the first operation acting on the object offset control;
  • the specified object includes a virtual object or an associated object;
  • the updating module 73 is configured to update the mounting information of the specified object according to the position of the specified object after the movement.
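Purely as an illustrative sketch of how the three modules could be organized (all class, field, and method names below are assumptions rather than the patented implementation):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class EditableObject:
    name: str
    position: Tuple[int, int]            # current on-screen position in pixels

@dataclass
class TargetFrame:
    objects: Dict[str, EditableObject]   # e.g. {"virtual_object": ..., "associated_object": ...}

@dataclass
class TargetResource:
    frames: List[TargetFrame]
    mount_info: Dict[str, Tuple[int, int]] = field(default_factory=dict)

class DisplayModule:                     # counterpart of display module 71
    def show(self, resource: TargetResource) -> TargetFrame:
        # pick one sequence frame of the loaded resource to display in the GUI
        return resource.frames[0]

class ControlModule:                     # counterpart of control module 72
    STEP = 2                             # pixels moved per trigger of an offset sub-control

    def move(self, frame: TargetFrame, name: str, direction: str) -> Tuple[int, int]:
        dx, dy = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}[direction]
        obj = frame.objects[name]
        obj.position = (obj.position[0] + dx * self.STEP, obj.position[1] + dy * self.STEP)
        return obj.position

class UpdateModule:                      # counterpart of update module 73
    def update(self, resource: TargetResource, name: str, position: Tuple[int, int]) -> None:
        # the moved position becomes the object's new mount (hook) information
        resource.mount_info[name] = position
```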
  • An embodiment of the present disclosure provides an editing device for a game object. In response to a loading operation for a target resource, the target sequence frame corresponding to the target resource is displayed; the target sequence frame includes a virtual object and an associated object of the virtual object, and the target resource also includes attachment information of the virtual object and the associated object. In response to a first operation acting on the object offset control, the specified object in the target sequence frame is controlled to move; the specified object includes the virtual object or the associated object. According to the position of the specified object after the movement, the attachment information of the specified object is updated.
  • This device allows the target sequence frame corresponding to the target resource to be edited directly: the object offset control moves the specified object, the moved target sequence frame is displayed in real time, and the attachment information of the specified object is updated by controlling its movement. The operation is simple and clear, which simplifies the development process of game objects, reduces labor and time costs, and improves the animation effects of game objects.
  • the above-mentioned object offset control includes multiple object offset sub-controls, each object offset sub-control corresponds to an offset direction, and different object offset sub-controls correspond to different offset directions; the above-mentioned control module is also used for : In response to the trigger operation of the first object offset sub-control among the multiple object offset sub-controls, control the movement of the specified object to the first offset direction corresponding to the first object offset sub-control.
  • the above graphical user interface also includes an object saving control; the above update module is also used for: responding to the trigger operation of the object saving control, acquiring the first mounting information corresponding to the position of the designated object after the movement, determining the first mounting information as the attachment information of the specified object in the target sequence frame and as the attachment information of the specified object in the sequence frames of the target resource other than the target sequence frame, and saving the first mounting information to the target resource.
  • the update module is further configured to: obtain first position information corresponding to the position of the designated object after the movement; convert the first position information into pixel position information corresponding to the designated object to obtain the first attachment information.
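As a rough illustration of that conversion (the camera origin, pixels-per-unit scale, and axis convention are assumptions, since the disclosure does not fix them):

```python
def world_to_pixel(world_pos, camera_origin, pixels_per_unit, frame_height):
    """Project a world-space hook position into the pixel space of a rendered sequence frame.

    world_pos       : (x, y) position of the hook point in world units
    camera_origin   : world-space point that maps to the frame's lower-left pixel
    pixels_per_unit : how many pixels one world unit covers in the rendered image
    frame_height    : image height in pixels, used to flip the vertical axis
    """
    px = (world_pos[0] - camera_origin[0]) * pixels_per_unit
    py = (world_pos[1] - camera_origin[1]) * pixels_per_unit
    # image coordinates usually grow downward, so flip the vertical axis
    return round(px), round(frame_height - py)

# usage: a hook point at world (2.5, 1.0) rendered at 64 px per unit into a 512 px tall frame
first_mount_info = world_to_pixel((2.5, 1.0), camera_origin=(0.0, 0.0),
                                  pixels_per_unit=64, frame_height=512)
# -> (160, 448)
```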
  • the above-mentioned graphical user interface also includes a target sequence frame offset control;
  • the above-mentioned device also includes: a second control module, configured to control the movement of the target sequence frame in response to a second operation acting on the target sequence frame offset control;
  • the second update module is configured to update the position information of the target sequence frame according to the position of the target sequence frame after the movement.
  • the above-mentioned target sequence frame offset control includes multiple target sequence frame offset sub-controls, each target sequence frame offset sub-control corresponds to an offset direction, and different target sequence frame offset sub-controls correspond to different offset directions ;
  • the above-mentioned second control module is also used for: in response to the trigger operation of a first target sequence frame offset sub-control among the multiple target sequence frame offset sub-controls, controlling the target sequence frame to move in the first offset direction corresponding to the first target sequence frame offset sub-control, and displaying the moving action picture of the target sequence frame.
  • the above-mentioned graphical user interface also includes a target sequence frame saving control; the above-mentioned second update module is also used for: responding to the trigger operation of the target sequence frame saving control, acquiring the second position information of the current target sequence frame, determining the second position information as the position information of the current target sequence frame and as the position information in all sequence frames corresponding to the target resource, and updating the second position information to the target resource.
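A minimal sketch of that batch update, with hypothetical data structures standing in for the target resource:

```python
def save_sequence_frame_position(resource_frames, current_index):
    """Write the current frame's edited position into every frame of the resource.

    resource_frames : list of dicts, each holding at least a 'position' entry
    current_index   : index of the frame whose position was just adjusted in the editor
    """
    second_position_info = resource_frames[current_index]["position"]
    for frame in resource_frames:
        # the same offset applies to the whole sequence, not only the displayed frame
        frame["position"] = second_position_info
    return second_position_info

# usage: frame 5 was nudged to (12, -30); propagate that position to all frames
frames = [{"position": (0, 0)} for _ in range(24)]
frames[5]["position"] = (12, -30)
save_sequence_frame_position(frames, 5)
```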
  • the above graphical user interface also includes a target sequence frame editing control; the above device also includes: a second display module, configured to display the depth map of the target sequence frame in response to a trigger operation of the target sequence frame editing control, wherein the background picture of the depth map is the mask map of the target sequence frame; and an occlusion relationship determination module, used to determine the occlusion relationship information of the virtual object and the associated object in response to a color editing operation on the depth map.
  • the above device also includes a mask map determining module, configured to: determine the mask map of the virtual object according to the first attachment information of the specified object in the target sequence frame; the mask map includes a first color area and a second color area, and the associated object of the virtual object is located in the middle area between the first color area and the second color area.
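Illustratively, and only as one possible way such a two-band mask could be produced (splitting at the hook point's vertical coordinate is an assumption, not the disclosed algorithm):

```python
def build_mask_map(width, height, hook_y, front_value=0, back_value=255):
    """Build a simple two-band mask for the virtual object.

    Rows above the hook point's y coordinate are treated as the back (second color) area,
    rows at or below it as the front (first color) area; the associated object is then
    layered between the two bands at runtime.
    """
    mask = []
    for y in range(height):
        value = back_value if y < hook_y else front_value
        mask.append([value] * width)
    return mask

mask = build_mask_map(width=8, height=6, hook_y=3)
# rows 0-2 -> 255 (back layer), rows 3-5 -> 0 (front layer)
```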
  • the above-mentioned occlusion relationship determination module is also used to: in response to a first color filling operation for a specified area in the depth map, determine the occlusion relationship information between the virtual object and the associated object in the specified area as the virtual object occluding the associated object; and in response to a second color filling operation for the specified area in the depth map, determine the occlusion relationship information between the virtual object and the associated object in the specified area as the associated object occluding the virtual object.
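A hedged sketch of how the filled colors could be interpreted (the pixel values standing for the first and second colors are assumptions):

```python
BLACK, WHITE = 0, 255   # assumed first and second fill colors

def occlusion_from_depth_map(depth_map):
    """Derive per-pixel occlusion relations from an edited black/white depth map.

    Returns a grid of strings: 'virtual_over_associated' where a region was filled
    with the first color, 'associated_over_virtual' where it was filled with the second.
    """
    relation = []
    for row in depth_map:
        relation.append([
            "virtual_over_associated" if value == BLACK else "associated_over_virtual"
            for value in row
        ])
    return relation

# usage: a 2x3 depth map where the left column was painted black and the rest white
depth_map = [[0, 255, 255],
             [0, 255, 255]]
occlusion = occlusion_from_depth_map(depth_map)
```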
  • the above-mentioned graphical user interface also includes an object saving control; the above-mentioned device also includes an object saving module, configured to: in response to the trigger operation of the object saving control, obtain the spatial position corresponding to the mounting information of the specified object; convert the spatial position corresponding to the mounting information into a material to generate a target material, the middle point of the target material being the spatial position corresponding to the mounting information; assign the produced target material to the virtual object, render the virtual object, and save the rendered virtual object.
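As an assumption-laden sketch of how the hook point's spatial position could be mapped to the middle point of a gradient-style target material (the bounding-box normalization is illustrative, not the disclosed procedure):

```python
def gradient_midpoint_from_hook(hook_pos, bbox_min, bbox_max):
    """Normalize a hook point's spatial position into a 0..1 gradient midpoint.

    A gradient material whose middle stop sits at this value can then be assigned to the
    virtual object before re-rendering, so the rendered frames encode where the hook
    point lies along the object's length.
    """
    span = bbox_max - bbox_min
    if span == 0:
        return 0.5
    midpoint = (hook_pos - bbox_min) / span
    return min(1.0, max(0.0, midpoint))  # clamp into the valid gradient range

# usage: hook point at z = 1.2 on an object spanning z = 0.0 .. 2.0 -> midpoint 0.6
mid = gradient_midpoint_from_hook(1.2, 0.0, 2.0)
```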
  • the above graphical user interface also includes a resource loading control; the above display module is also used to: obtain resource path information of the target resource; respond to the trigger operation of the resource loading control, and display the target sequence frame corresponding to the target resource in the graphical user interface .
  • the above-mentioned graphical user interface also includes a resource path input box and a resource path selection control; the above-mentioned display module is also used for: responding to an input operation acting on the resource path information of the target resource in the path input box, to obtain the resource path information of the target resource ; or, in response to a trigger operation of the resource path selection control, selectable resource path information is displayed, and in response to a selection operation on the resource path information of the target resource in the selectable resource path information, resource path information of the target resource is acquired.
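For illustration only, the two ways of obtaining the path could be reduced to trivial helpers (the function names below are made up):

```python
def resource_path_from_input(input_box_text: str) -> str:
    """Path typed or pasted directly into the resource path input box."""
    return input_box_text.strip()

def resource_path_from_selection(selectable_paths, selected_index: int) -> str:
    """Path chosen from the list shown after triggering the path selection control."""
    return selectable_paths[selected_index]

# usage
path = resource_path_from_input("  art/mounts/horse_01/  ")
path = resource_path_from_selection(["art/mounts/horse_01/", "art/mounts/wolf_02/"], 1)
```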
  • the above-mentioned graphical user interface further includes an animation playback control; the above-mentioned device further includes a playback module, configured to play the action picture corresponding to the target sequence frame in response to a trigger operation of the animation playback control.
  • the above graphical user interface also includes a plurality of background image display controls; the above device also includes a background image display module, configured to: in response to a trigger operation of a first background image display control among the plurality of background image display controls, display the first background image corresponding to the first background image display control in the background area of the target sequence frame.
  • the above graphical user interface further includes a text display control; the above device further includes a text display module, configured to display preset text at a designated position of the graphical user interface in response to a trigger operation of the text display control.
  • the above-mentioned graphical user interface also includes a copy offset control; the above-mentioned device also includes a copy offset module, which is used to: in response to the trigger operation of the copy offset control, obtain the attachment information of the specified object in the current target sequence frame; and in response to the switching operation for the current target sequence frame, save the attachment information of the specified object in the current target sequence frame into the sequence frame corresponding to the switching operation.
  • the game object editing device provided by the embodiment of the present disclosure has the same technical features as the game object editing method provided by the above embodiment, so it can also solve the same technical problem and achieve the same technical effect.
  • This embodiment also provides an electronic device, including a processor and a memory, the memory stores computer-executable instructions that can be executed by the processor, and the processor executes the computer-executable instructions to realize the above-mentioned game object editing method.
  • the electronic device may be a server or a terminal device.
  • the electronic device includes a processor 100 and a memory 101, the memory 101 stores computer-executable instructions that can be executed by the processor 100, and the processor 100 executes the computer-executable instructions to realize the above-mentioned game object editing method.
  • the processor 100 executing the computer-executable instructions may also implement the following steps:
  • the object offset control includes multiple object offset sub-controls, each object offset sub-control corresponds to an offset direction, and different object offset sub-controls correspond to different offset directions; in response to the first operation acting on the object offset control, the step of controlling the movement of the specified object in the target sequence frame includes: in response to the trigger operation of a first object offset sub-control among the multiple object offset sub-controls, controlling the specified object to move in the first offset direction corresponding to the first object offset sub-control.
  • the graphical user interface also includes an object saving control; according to the position of the designated object after the movement, the step of updating the attachment information of the designated object includes: in response to the trigger operation of the object saving control, obtaining the first attachment information corresponding to the position of the designated object after the movement, determining the first attachment information as the attachment information of the specified object in the target sequence frame and as the attachment information of the specified object in the sequence frames of the target resource other than the target sequence frame, and saving the first attachment information to the target resource.
  • the step of obtaining the first mount information corresponding to the position of the designated object after the movement includes: obtaining the first position information corresponding to the position of the designated object after the movement; converting the first position information into pixel position information corresponding to the designated object , get the first mount information.
  • the graphical user interface also includes a target sequence frame offset control; the method further includes: controlling the movement of the target sequence frame in response to the second operation acting on the target sequence frame offset control; updating the target sequence frame according to the position of the target sequence frame after the movement The location information of the sequence frame.
  • the target sequence frame offset control includes multiple target sequence frame offset sub-controls, each target sequence frame offset sub-control corresponds to an offset direction, and different target sequence frame offset sub-controls correspond to different offset directions;
  • in response to the second operation acting on the target sequence frame offset control, the step of controlling the movement of the target sequence frame includes: in response to the trigger operation of a first target sequence frame offset sub-control among the multiple target sequence frame offset sub-controls, controlling the target sequence frame to move in the first offset direction corresponding to the first target sequence frame offset sub-control, and displaying the moving action picture of the target sequence frame.
  • the graphical user interface also includes a target sequence frame saving control; according to the position of the target sequence frame after the movement, the step of updating the position information of the target sequence frame includes: in response to the trigger operation of the target sequence frame saving control, acquiring the second position information of the current target sequence frame, determining the second position information as the position information of the current target sequence frame and as the position information in all sequence frames corresponding to the target resource, and updating the second position information to the target resource.
  • the graphical user interface also includes a target sequence frame editing control; the method further includes: in response to a trigger operation of the target sequence frame editing control, displaying a depth map of the target sequence frame, wherein the background picture of the depth map is the mask map of the target sequence frame; and in response to a color editing operation on the depth map, determining the occlusion relationship information of the virtual object and the associated object.
  • the mask map of the target sequence frame is determined through the following steps: according to the first attachment information of the specified object in the target sequence frame, the mask map of the virtual object is determined; the mask map includes a first color area and a second color area ; Wherein, the associated object of the virtual object is located in the middle area between the first color area and the second color area.
  • in response to the color editing operation on the depth map, the step of determining the occlusion relationship information of the virtual object and the associated object includes: in response to a first color filling operation for a specified area in the depth map, determining that the occlusion relationship information between the virtual object and the associated object in the specified area is that the virtual object occludes the associated object; and in response to a second color filling operation for the specified area in the depth map, determining that the occlusion relationship information between the virtual object and the associated object in the specified area is that the associated object occludes the virtual object.
  • the graphical user interface also includes an object saving control; the method also includes: in response to the trigger operation of the object saving control, obtaining the spatial position corresponding to the mounting information of the specified object; converting the spatial position corresponding to the mounting information into a material to generate a target material, the middle point of the target material being the spatial position corresponding to the mounting information; assigning the created target material to the virtual object, rendering the virtual object, and saving the rendered virtual object.
  • the graphical user interface also includes a resource loading control; in response to the loading operation for the target resource, the step of displaying the target sequence frame corresponding to the target resource in the graphical user interface includes: obtaining the resource path information of the target resource; and in response to the trigger operation of the resource loading control, displaying the target sequence frame corresponding to the target resource in the graphical user interface.
  • the graphical user interface also includes a resource path input box and a resource path selection control; the step of obtaining the resource path information of the target resource includes: in response to an input operation acting on the resource path information of the target resource in the path input box, obtaining the resource path information of the target resource; or, in response to the trigger operation of the resource path selection control, displaying the selectable resource path information, and in response to a selection operation on the resource path information of the target resource in the selectable resource path information, obtaining the resource path information of the target resource.
  • the graphical user interface further includes an animation playback control; the method further includes: in response to a trigger operation of the animation playback control, playing the action picture corresponding to the target sequence frame.
  • the graphical user interface further includes a plurality of background image display controls; the method further includes: in response to a trigger operation of a first background image display control among the plurality of background image display controls, displaying the first background image corresponding to the first background image display control in the background area of the target sequence frame.
  • the graphical user interface further includes a text display control; the method further includes: displaying preset text at a designated position of the graphical user interface in response to a trigger operation of the text display control.
  • the graphical user interface also includes a copy offset control; the method further includes: in response to a trigger operation of the copy offset control, obtaining the attachment information of the specified object in the current target sequence frame; in response to the switching operation for the current target sequence frame, Save the mounting information of the specified object in the current target sequence frame to the sequence frame corresponding to the switching operation.
  • This method can directly edit the target sequence frame corresponding to the target resource: the object offset control moves the specified object, the moved target sequence frame is displayed in real time, and the mounting information of the specified object is updated by controlling its movement. The operation is simple and clear, which simplifies the development process of game objects, reduces labor and time costs, and improves the animation effects of game objects.
  • the electronic device shown in FIG. 8 further includes a bus 102 and a communication interface 103 , and the processor 100 , the communication interface 103 and the memory 101 are connected through the bus 102 .
  • the memory 101 may include a high-speed random access memory (RAM, Random Access Memory), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
  • the communication connection between the system network element and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), and the Internet, wide area network, local network, metropolitan area network, etc. can be used.
  • the bus 102 may be an ISA bus, a PCI bus, or an EISA bus, etc.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of representation, only one double-headed arrow is used in FIG. 8 , but it does not mean that there is only one bus or one type of bus.
  • the processor 100 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above method may be implemented by an integrated logic circuit of hardware in the processor 100 or instructions in the form of software.
  • the above-mentioned processor 100 can be a general-purpose processor, including a central processing unit (Central Processing Unit, referred to as CPU), a network processor (Network Processor, referred to as NP), etc.; it can also be a digital signal processor (Digital Signal Processor, referred to as DSP) ), Application Specific Integrated Circuit (ASIC for short), Field Programmable Gate Array (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, and discrete hardware components.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the method disclosed in the embodiments of the present disclosure can be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a storage medium that is mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101, and completes the steps of the method in the foregoing embodiments in combination with its hardware.
  • This embodiment also provides a computer-readable storage medium. The computer-readable storage medium stores computer-executable instructions. When the computer-executable instructions are called and executed by a processor, they prompt the processor to implement the game object editing method described above.
  • the above-mentioned computer-readable storage medium can also be configured to store computer-executable instructions for performing the following steps:
  • the object offset control includes multiple object offset sub-controls, each object offset sub-control corresponds to an offset direction, and different object offset sub-controls correspond to different offset directions; in response to the first operation acting on the object offset control, the step of controlling the movement of the specified object in the target sequence frame includes: in response to the trigger operation of a first object offset sub-control among the multiple object offset sub-controls, controlling the specified object to move in the first offset direction corresponding to the first object offset sub-control.
  • the graphical user interface also includes an object saving control; according to the position of the designated object after the movement, the step of updating the attachment information of the designated object includes: in response to the trigger operation of the object saving control, obtaining the first attachment information corresponding to the position of the designated object after the movement, determining the first attachment information as the attachment information of the specified object in the target sequence frame and as the attachment information of the specified object in the sequence frames of the target resource other than the target sequence frame, and saving the first attachment information to the target resource.
  • the step of obtaining the first mount information corresponding to the position of the designated object after the movement includes: obtaining the first position information corresponding to the position of the designated object after the movement; converting the first position information into pixel position information corresponding to the designated object , get the first mount information.
  • the graphical user interface also includes a target sequence frame offset control; the method further includes: controlling the movement of the target sequence frame in response to the second operation acting on the target sequence frame offset control; updating the target sequence frame according to the position of the target sequence frame after the movement The location information of the sequence frame.
  • the target sequence frame offset control includes multiple target sequence frame offset sub-controls, each target sequence frame offset sub-control corresponds to an offset direction, and different target sequence frame offset sub-controls correspond to different offset directions;
  • in response to the second operation acting on the target sequence frame offset control, the step of controlling the movement of the target sequence frame includes: in response to the trigger operation of a first target sequence frame offset sub-control among the multiple target sequence frame offset sub-controls, controlling the target sequence frame to move in the first offset direction corresponding to the first target sequence frame offset sub-control, and displaying the moving action picture of the target sequence frame.
  • the graphical user interface also includes a target sequence frame saving control; according to the position of the target sequence frame after the movement, the step of updating the position information of the target sequence frame includes: in response to the trigger operation of the target sequence frame saving control, acquiring the second position information of the current target sequence frame, determining the second position information as the position information of the current target sequence frame and as the position information in all sequence frames corresponding to the target resource, and updating the second position information to the target resource.
  • the graphical user interface also includes a target sequence frame editing control; the method further includes: in response to a trigger operation of the target sequence frame editing control, displaying a depth map of the target sequence frame, wherein the background picture of the depth map is the mask map of the target sequence frame; and in response to a color editing operation on the depth map, determining the occlusion relationship information of the virtual object and the associated object.
  • the mask map of the target sequence frame is determined through the following steps: according to the first attachment information of the specified object in the target sequence frame, the mask map of the virtual object is determined; the mask map includes a first color area and a second color area ; Wherein, the associated object of the virtual object is located in the middle area between the first color area and the second color area.
  • in response to the color editing operation on the depth map, the step of determining the occlusion relationship information of the virtual object and the associated object includes: in response to a first color filling operation for a specified area in the depth map, determining that the occlusion relationship information between the virtual object and the associated object in the specified area is that the virtual object occludes the associated object; and in response to a second color filling operation for the specified area in the depth map, determining that the occlusion relationship information between the virtual object and the associated object in the specified area is that the associated object occludes the virtual object.
  • the graphical user interface also includes an object saving control; the method also includes: in response to the trigger operation of the object saving control, obtaining the spatial position corresponding to the mounting information of the specified object; converting the spatial position corresponding to the mounting information into a material to generate a target material, the middle point of the target material being the spatial position corresponding to the mounting information; assigning the created target material to the virtual object, rendering the virtual object, and saving the rendered virtual object.
  • the graphical user interface also includes a resource loading control; in response to the loading operation for the target resource, the step of displaying the target sequence frame corresponding to the target resource in the graphical user interface includes: obtaining the resource path information of the target resource; and in response to the trigger operation of the resource loading control, displaying the target sequence frame corresponding to the target resource in the graphical user interface.
  • the graphical user interface also includes a resource path input box and a resource path selection control; the step of obtaining the resource path information of the target resource includes: in response to an input operation acting on the resource path information of the target resource in the path input box, obtaining the resource path information of the target resource; or, in response to the trigger operation of the resource path selection control, displaying the selectable resource path information, and in response to a selection operation on the resource path information of the target resource in the selectable resource path information, obtaining the resource path information of the target resource.
  • the graphical user interface further includes an animation playback control; the method further includes: in response to a trigger operation of the animation playback control, playing the action picture corresponding to the target sequence frame.
  • the graphical user interface further includes a plurality of background image display controls; the method further includes: in response to a trigger operation of a first background image display control among the plurality of background image display controls, displaying the first background image corresponding to the first background image display control in the background area of the target sequence frame.
  • the graphical user interface further includes a text display control; the method further includes: displaying preset text at a designated position of the graphical user interface in response to a trigger operation of the text display control.
  • the graphical user interface also includes a copy offset control; the method further includes: in response to a trigger operation of the copy offset control, obtaining the attachment information of the specified object in the current target sequence frame; in response to the switching operation for the current target sequence frame, Save the mounting information of the specified object in the current target sequence frame to the sequence frame corresponding to the switching operation.
  • This method can directly edit the target sequence frame corresponding to the target resource: the object offset control moves the specified object, the moved target sequence frame is displayed in real time, and the mounting information of the specified object is updated by controlling its movement. The operation is simple and clear, which simplifies the development process of game objects, reduces labor and time costs, and improves the animation effects of game objects.
  • The computer program product of the game object editing method, apparatus, electronic device, and storage medium provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the methods described in the preceding method embodiments; for specific implementation, reference may be made to the method embodiments, which will not be repeated here.
  • In the description of the embodiments of the present disclosure, unless otherwise specified and limited, the terms "installation", "connected" and "connection" should be interpreted in a broad sense; for example, it may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediary, or internal communication between two components. Those skilled in the art can understand the specific meanings of the above terms in the present disclosure according to specific situations.
  • If the functions described above are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

本公开提供了一种游戏对象的编辑方法、装置和电子设备,响应针对目标资源的加载操作,显示目标资源对应的目标序列帧,该目标序列帧中包括虚拟对象和虚拟对象的关联对象;目标资源中还包括虚拟对象和关联对象的挂接信息;响应作用于对象偏移控件的第一操作,控制目标序列帧中指定对象移动;指定对象包括虚拟对象或关联对象;根据移动后指定对象的位置,更新指定对象的挂接信息。该方式可以直视的编辑目标资源对应的目标序列帧,通过对象偏移控件控制指定对象移动,实时显示移动后的目标序列帧,通过控制指定对象移动更新指定对象的挂接信息,操作简单清晰,简化了游戏对象的开发过程,减少了人工成本和时间成本,提高了游戏对象的动画效果。

Description

游戏对象的编辑方法、装置和电子设备
本公开要求于2022年01月26日提交中国专利局、申请号为202210092395.7、申请名称为“游戏对象的编辑方法、装置和电子设备”的专利申请的优先权,其全部内容通过引用结合在本公开中。
技术领域
本公开涉及游戏模型开发技术领域,尤其是涉及一种游戏对象的编辑方法、装置和电子设备。
背景技术
三渲二,指的是将三维动画渲染成二维动画。三渲二类的游戏模型的开发通常需要将三维模型的动画渲染成一系列的序列帧,把角色模型通过预先确定的挂点挂接在三维模型上,通过替换序列帧的方式完成三维模型的动态效果。相关技术中,通常需要在相关软件中确定并输出制作完成的游戏模型的美术资源,其中包括角色和序列帧图片以及挂点,然后将输出的游戏模型输入至编辑器中并查看游戏模型是否有问题,如果出现问题,比如游戏模型的挂点不合适,则需要在软件中重新调整挂点位置,通过网格的方式重新对游戏模型进行渲染并输出,如果角色模型种类较多或者游戏模型的材质较复杂时,需要在软件中花费较长时间进行调整和渲染,这种方式操作较为繁琐,人工成本和时间成本较高,且渲染得到的游戏模型动画效果不佳。
发明内容
有鉴于此,本公开的目的在于提供一种游戏对象的编辑方法、装置和电子设备,以简化游戏对象的开发过程,减少人工成本和时间成本,同时提高游戏对象的动画效果。
第一方面,本公开实施例提供了一种游戏对象的编辑方法,通过终端设备提供一图形用户界面,图形用户界面中包括对象偏移控件;方法包括:响应针对目标资源的加载操作,在图形用户界面中显示目标资源对应的目标序列帧;其中,目标序列帧中包括虚拟对象以及虚拟对象的关联对象;目标资源中还包括虚拟对象和关联对象的挂接信息;响应作用于对象偏移控件的第一操作,控制目标序列帧中指定对象移动;指定对象包括虚拟对象或关联对象;根据移动后指定对象的位置,更新指定对象的挂接信息。
对象偏移控件包括多个对象偏移子控件,每个对象偏移子控件对应一个偏移方向,不同的对象偏移子控件对应的偏移方向不同;响应作用于对象偏移控件的第一操作,控制目标序列帧中指定对象移动的步骤,包括:响应于多个对象偏移子控件中第一对象偏移子控件的触发操作,控制指定对象向第一对象偏移子控件对应的第一偏移方向移动。
进一步的,图形用户界面还包括对象保存控件;根据移动后指定对象的位置,更新指定对象的挂接信息的步骤,包括:响应于对象保存控件的触发操作,获取移动后指定对象的位置对应的第一挂接信息,将第一挂接信息确定为目标序列帧中指定对象的挂接信息,以及目标资源中除目标序列帧以外的序列帧中指定对象的挂接信息;将第一挂接信息保存至目标资源。
进一步的,获取移动后指定对象的位置对应的第一挂接信息的步骤,包括:获取移动后指定对象的位置对应的第一位置信息;将第一位置信息转换为指定对象对应的像素位置信息,得到第一挂接信息。
进一步的,图形用户界面还包括目标序列帧偏移控件;方法还包括:响应作用于目标序列帧偏移控件的第二操作,控制目标序列帧移动;根据移动后目标序列帧的位置,更新目标序列帧的位置信息。
进一步的,目标序列帧偏移控件包括多个目标序列帧偏移子控件,每个目标序列帧偏移子控件对应一个偏移方向,不同目标序列帧偏移子控件对应的偏移方向不同;响应作用于目标序列帧偏移控件的第二操作,控制目标序列帧移动的步骤,包括:响应于多个目标序列帧偏移子控件中第一目标序列帧偏移子控件的触发操作,控制目标序列帧向第一目标序列帧偏移子控件对应的第一偏移方向移动,并显示目标序列帧的移动动作画面。
进一步的,图形用户界面还包括目标序列帧保存控件;根据移动后目标序列帧的位置,更新目标序列帧的位置信息的步骤,包括:响应于目标序列帧控件的触发操作,获取当前目标序列帧的第二位置信息,将第二位置信息确定当前目标序列帧的位置信息,以及目标资源对应的所有序列帧中的位置信息;将第二位置信息更新至目标资源。
进一步的,图形用户界面还包括目标序列帧编辑控件;方法还包括:响应于目标序列帧编辑控件的触发操作,显示目标序列帧的深度图;其中,深度图的背景画面为目标序列帧的掩膜图;响应针对深度图的颜色编辑操作,确定虚拟对象和关联对象的遮挡关系信息。
进一步的,目标序列帧的掩膜图通过以下步骤确定:根据目标序列帧中指定对象的第一挂接信息,确定虚拟对象的掩膜图;掩膜图包括第一颜色区域和第二颜色区域;其中,虚拟对象的关联对象位于第一颜色区域和第二颜色区域的中间区域。
进一步的,响应针对深度图的颜色编辑操作,确定虚拟对象和关联对象的遮挡关系信息的步骤,包括:响应针对深度图中指定区域的第一颜色填充操作,确定指定区域中虚拟对象和关联对象的遮挡关系信息为虚拟对象遮挡关联对象;响应针对深度图中指定区域的第二颜色填充操作,确定指定区域中虚拟对象和关联对象的遮挡关系信息为关联对象遮挡虚拟对象。
进一步的,图形用户界面还包括对象保存控件;方法还包括:响应于对象保存控件 的触发操作,获取指定对象的挂接信息对应的空间位置;将挂接信息对应的空间位置转换到材质中,生成一个目标材质;目标材质的中间点为挂接信息对应的空间位置;将制作好的目标材质赋予给虚拟对象,对虚拟对象进行渲染,并保存渲染后的虚拟对象。
进一步的,图形用户界面还包括资源加载控件;响应针对目标资源的加载操作,在图形用户界面中显示目标资源对应的目标序列帧的步骤,包括:获取目标资源的资源路径信息;响应于资源加载控件的触发操作,在图形用户界面中显示目标资源对应的目标序列帧。
进一步的,图形用户界面还包括资源路径输入框和资源路径选择控件;获取目标资源的资源路径信息的步骤,包括:响应作用于路径输入框中目标资源的资源路径信息的输入操作,获取目标资源的资源路径信息;或者,响应于资源路径选择控件的触发操作,显示可选择的资源路径信息,响应针对可选择的资源路径信息中目标资源的资源路径信息的选择操作,获取目标资源的资源路径信息。
进一步的,图形用户界面还包括动画播放控件;方法还包括:响应于动画播放控件的触发操作,播放目标序列帧对应的动作画面。
进一步的,图形用户界面还包括多个背景画面显示控件;方法还包括:响应于多个背景画面显示控件中第一背景画面显示控件的触发操作,在目标序列帧的背景区域显示第一背景画面显示控件对应的第一背景画面。
进一步的,图形用户界面还包括文本显示控件;方法还包括:响应于文本显示控件的触发操作,在图形用户界面的指定位置显示预设文本。
进一步的,图形用户界面还包括复制偏移控件;方法还包括:响应于复制偏移控件的触发操作,获取当前目标序列帧中指定对象的挂接信息;响应针对当前目标序列帧的切换操作,将当前目标序列帧中指定对象的挂接信息保存至切换操作对应的序列帧中。
第二方面,本公开实施例提供了一种游戏对象的编辑装置,通过终端设备提供一图形用户界面,图形用户界面中包括对象偏移控件;装置包括:显示模块,用于响应针对目标资源的加载操作,在图形用户界面中显示目标资源对应的目标序列帧;其中,目标序列帧中包括虚拟对象以及虚拟对象的关联对象;目标资源中还包括虚拟对象和关联对象的挂接信息;控制模块,用于响应作用于对象偏移控件的第一操作,控制目标序列帧中指定对象移动;指定对象包括虚拟对象或关联对象;更新模块,用于根据移动后指定对象的位置,更新指定对象的挂接信息。
第三方面,本公开实施例提供了一种电子设备,包括处理器和存储器,存储器存储有能够被处理器执行的计算机可执行指令,处理器执行计算机可执行指令以实现第一方面任一项的游戏对象的编辑方法。
第四方面,本公开实施例提供了一种计算机可读存储介质,计算机可读存储介质存储有计算机可执行指令,计算机可执行指令在被处理器调用和执行时,计算机可执行指 令促使处理器实现第一方面任一项的游戏对象的编辑方法。
本公开实施例带来了以下有益效果:
本公开提供了一种游戏对象的编辑方法、装置和电子设备,响应针对目标资源的加载操作,显示目标资源对应的目标序列帧,该目标序列帧中包括虚拟对象和虚拟对象的关联对象;目标资源中还包括虚拟对象和关联对象的挂接信息;响应作用于对象偏移控件的第一操作,控制目标序列帧中指定对象移动;指定对象包括虚拟对象或关联对象;根据移动后指定对象的位置,更新指定对象的挂接信息。该方式可以直视的编辑目标资源对应的目标序列帧,通过对象偏移控件控制指定对象移动,实时显示移动后的目标序列帧,通过控制指定对象移动更新指定对象的挂接信息,操作简单清晰,简化了游戏对象的开发过程,减少了人工成本和时间成本,提高了游戏对象的动画效果。
本公开的其他特征和优点将在随后的说明书中阐述,并且,部分地从说明书中变得显而易见,或者通过实施本公开而了解。本公开的目的和其他优点在说明书、权利要求书以及附图中所特别指出的结构来实现和获得。
为使本公开的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。
附图说明
为了更清楚地说明本公开具体实施方式或现有技术中的技术方案,下面将对具体实施方式或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本公开的一些实施方式,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本公开实施例提供的一种游戏对象的编辑方法的流程图;
图2为本公开实施例提供的一种图形用户界面的示意图;
图3为本公开实施例提供的另一种图形用户界面的示意图;
图4为本公开实施例提供的另一种图形用户界面的示意图;
图5为本公开实施例提供的另一种图形用户界面的示意图;
图6为本公开实施例提供的另一种图形用户界面的示意图;
图7为本公开实施例提供的一种游戏对象的编辑装置的结构示意图;
图8为本公开实施例提供的一种电子设备的结构示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合附图对本公开的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
三渲二,指的是将三维动画渲染成二维动画。通常三渲二类的游戏坐骑的开发,需要将三维模型坐骑的动画渲染成一系列的2D序列帧,在渲染序列帧资源的同时,也会渲染一份黑白的深度图,用来区分层级,也就是坐骑和角色之间的遮挡关系,是通过把坐骑拆分成两个分块来实现的。其中一个分块放在角色上层,表示坐骑遮挡角色;其中一个分块放在主角下层,表示角色遮挡坐骑。把角色通过预先确定的挂接点挂接坐骑上,通过替换序列帧图片,完成坐骑动态效果。
相关技术中,通常需要在相关软件,即DCC(Digital Content Create)软件,一种为数字内容创建相关的软件,比如,Maya软件(一种三维建模和动画软件)、3DS Max软件(3D Studio Max软件,一种三维建模渲染和动画制作)等,确定并输出制作完成的游戏坐骑和角色的美术资源,其中包括角色和序列帧图片以及挂点。然后在游戏引擎中通过附加Mesh(即网格)的方法进行挂点调试,具体通过对面片模型进行分UV(为多边形建模中的一个概念,主要是记录模型在三维空间的顶点和二维贴图空间对应关系的数),映射出序列帧图片,调节Mesh上的挂点位置,即可完成对所有序列帧图片的挂接点位移。
最后将输出的目标资源,即坐骑和角色输入至编辑器中并查看游戏坐骑和角色是否有问题,没问题才可提交进行游戏测试。通常查看如下几点:
1.如果角色挂点出现问题,美术人员需要返回至DCC软件重新调整角色挂点并输出,如果角色体型较多,就会出现难以统一的情况,需要调试到一个所有体型都合适的点较难,因此需要花费较长的时间和精力。
2.如果坐骑挂点出现问题,则美术人员需要调整坐骑的root骨骼,重新渲染,如果坐骑的材质较为复杂则需要花费较长的时间进行渲染,如果调整的次数较多可能会花费一整天的时间。
因此上述方式操作较为繁琐,人工成本和时间成本较高,且渲染得到的游戏模型动画效果不佳。基于此,本公开实施例提供的一种游戏对象的编辑方法、装置和电子设备,该技术可以应用于设置有对象编辑器的设备。
为便于对本实施例进行理解,首先对本公开实施例所公开的一种游戏对象的编辑方法进行详细介绍,通过终端设备提供一图形用户界面,图形用户界面中包括对象偏移控件;如图1所示,该方法包括如下步骤:
步骤S102,响应针对目标资源的加载操作,在图形用户界面中显示目标资源对应的目标序列帧;其中,目标序列帧中包括虚拟对象以及虚拟对象的关联对象;目标资源中还包括虚拟对象和关联对象的挂接信息;
上述目标资源通常是指美术制作完成后输出的资源,其中包括游戏对象的序列帧图片以及游戏对象;通常目标资源中包括虚拟对象一系列的序列帧图片,每张序列帧图片挂载有该序列帧图片对应的关联对象,因此上述目标序列帧中包括虚拟对象以及虚拟对 象的关联对象。上述虚拟对象可以是游戏坐骑,也可以是其他与虚拟角色具有挂载关系的对象,比如,虚拟云朵,虚拟车辆等。上述关联对象通常是指虚拟角色,该虚拟角色通常是游戏中预设的人物角色。一种游戏场景中,虚拟角色可以挂载在虚拟坐骑上,以增加虚拟角色的移动速度、技能伤害等。
上述挂接信息包括虚拟对象的挂接信息,还包括关联对象的挂接信息。其中虚拟对象的挂接信息包括每张序列帧中虚拟对象的挂点位置信息;关联对象的挂接信息包括每张序列帧中关联对象的挂点位置信息。上述图形用户界面是指对象编辑器的用户界面,在该界面中可以对上述目标序列帧进行可视的编辑操作。
实际实现时,可以在图形用户界面中选择目标资源,然后点击资源加载控件,可以将目标资源对象的目标序列帧加载到该编辑器中,同时在图形用户界面的指定区域中显示目标资源对应的目标序列帧。该目标序列帧通常从目标资源对应的多张序列帧中随机选择的。实际的加载过程,通常是分别加载虚拟对象、关联对象以及虚拟对象和关联对象的挂接信息,然后基于虚拟对象和关联对象的挂接信息,将关联对象挂接在虚拟对象上,最后作为目标序列帧显示在指定区域。其中,上述虚拟对象为二维图片,上述关联对象为三维模型。如图2所述的图形用户界面,在界面的指定区域显示有目标序列帧,其中包括虚拟对象和关联对象。
步骤S104,响应作用于对象偏移控件的第一操作,控制目标序列帧中指定对象移动;指定对象包括虚拟对象或关联对象;
上述对象偏移控件用于控制目标序列帧中的对象移动,为了简化开发人员的操作,可以预先设置多个固定移动方向的偏移控件,开发人员点击不同方向的偏移控件会控制指定对象向该方向移动,其中移动的距离可以根据实际需要进行预先设置。
由于目标序列帧中包括对象和关联对象,且这两个对象都设置有挂点信息,在进行挂点调试时,需要分别调试每个对象的挂点位置。所以上述对象偏移控件通常包括针对虚拟对象的对象偏移控件,还包括针对关联对象的对象偏移控件。或者,上述对象偏移控件只有一种,可以分别针对不同对象进行偏移控制,该种情况可以预先选定控制的对象,然后通过对象偏移控件对选定的对象进行偏移控制。上述第一操作可以是通过鼠标点击对象偏移控件中不同方向的偏移控件,比如想要控制对象向上移动可以鼠标点击向上方向的偏移控件,还可以是通过键盘输入对象控件中不同方向对应的指令,比如想要控制对象向上移动可以输入键盘中的“↑”。
举例说明,开发人员可以点击针对关联对象的对象偏移控件,点击想要移动的方向对应的控件,比如点击向上移动对应的控件,则可以控制关联对象从初始位置向上移动预设的距离。此时在图形用户界面则会显示该移动过程。另外,开发人员也可以点击针对虚拟对象的对象偏移控件,点击想要移动的方向对应的控件,比如点击向上移动对应的控件,则可以控制虚拟对象从初始位置向上移动预设的距离。此时在图形用户界面则 会显示该移动过程。
可以理解,响应作用于针对虚拟对象的对象偏移控件的第一操作,控制目标序列帧中虚拟对象移动;响应作用于针对关联对象的对象偏移控件的第一操作,控制目标序列帧中关联对象移动。本实施例的目的是通过控制关联对象移动,可以调整目标序列帧中关联对象的挂点位置,其中目标序列帧中关联对象为三维模型,关联对象的挂点位置是指关联对象中与虚拟对象相接触或者相结合的中心点。通过控制虚拟对象移动,可以调整目标序列帧中虚拟对象的挂点位置,其中由于目标序列帧中虚拟对象为二维图片,所以虚拟对象的挂点也可以称为图片K点,表示图片中适合挂接的位置。
步骤S106,根据移动后指定对象的位置,更新指定对象的挂接信息。
上述指定对象的挂接信息包括目标资源中每张序列帧中关联对象的挂点位置以及每张序列中虚拟对象的挂点位置。具体的,当控制指定对象移动后,在图形用户界面显示移动后的指定对象的位置。然后可以根据移动后的指定对象的位置,获取该位置对应的指定对象的挂接信息,包括关联对象的挂点位置,或虚拟对象的挂点位置,或关联对象的挂点位置以及虚拟对象的挂点位置。得到指定对象的挂接信息后,可以根据开发人员的保存操作,把目标资源中的初始挂接信息更新为上述指定对象的挂接信息。
本公开提供了一种游戏对象的编辑方法,响应针对目标资源的加载操作,显示目标资源对应的目标序列帧,该目标序列帧中包括虚拟对象和虚拟对象的关联对象;目标资源中还包括虚拟对象和关联对象的挂接信息;响应作用于对象偏移控件的第一操作,控制目标序列帧中指定对象移动;指定对象包括虚拟对象或关联对象;根据移动后指定对象的位置,更新指定对象的挂接信息。该方式可以直视的编辑目标资源对应的目标序列帧,通过对象偏移控件控制指定对象移动,实时显示移动后的目标序列帧,通过控制指定对象移动更新指定对象的挂接信息,操作简单清晰,简化了游戏对象的开发过程,减少了人工成本和时间成本,提高了游戏对象的动画效果。
上述对象偏移控件包括多个对象偏移子控件,每个对象偏移子控件对应一个偏移方向,不同的对象偏移子控件对应的偏移方向不同;上述对象偏移控件通常可以包括四个对象偏移子控件,其中第一对象偏移子控件对应的偏移方向为向上偏移,第二对象偏移子控件对应的偏移方向为向下偏移,第三对象偏移子控件对应的偏移方向为向左偏移,第四对象偏移子控件对应的偏移方向为向右偏移。
下面具体描述如何控制目标序列帧中指定对象移动,一种可能的实施方式:响应于多个对象偏移子控件中第一对象偏移子控件的触发操作,控制指定对象向第一对象偏移子控件对应的第一偏移方向移动。
具体的,如图3所示,上述对象偏移控件包括两种,其中实线框中的对象偏移控件是关联对象的对象偏移控件,包括四个子控件,虚线框中的对象偏移控件是虚拟对象的对象偏移控件,包括四个子控件。可以理解,响应于关联对象的多个对象偏移子控件中 第一对象偏移子控件的触发操作,控制关联对象向第一对象偏移子控件对应的第一偏移方向移动。响应于虚拟对象的多个对象偏移子控件中第一对象偏移子控件的触发操作,控制虚拟对象向第一对象偏移子控件对应的第一偏移方向移动。
实际实现时,需要先勾选图3中“设置角色偏移”和“调整坐骑偏移”前面的可选框。然后才可以点击对象偏移子控件,控制指定对象偏移。
另外,如图3所示的图形用户界面还包括“镜像朝向也调”的可选框,如果选择该框,则在控制指定对象移动时,同时可以调整镜像目标序列帧中指定对象的移动,如果开发人员想要观看镜像目标序列帧中指定对象的移动位置,可以点击图形用户界面中的“方向”控件,即可显示移动后镜像目标序列帧中指定对象的显示效果。
上述方式中,在图形用户界面设置多个子控件,开发人员可以通过点击子控件即可控制指定对象移动,并实时显示移动画面,以达到预想的挂接效果,从而直观的对指定对象的挂点进行调试,感受挂接效果,操作简单,大大的缩短了制作人员的修改时间,实现了流程优化和成本降低。避免了重新导入DCC软件修改对象挂接点信息后重新输出的繁复操作。同时避免了使用Mesh的方式,对渲染出来的序列帧图片进行批量操作。且避免了更改挂接点需更改原资源中的挂接信息,资源更改了挂接点后重新烘焙序列帧图片,输出渲染资源耗时久,周期长的问题。
下面具体描述如何更新指定对象的挂接信息,一种可能的实施方式:响应于对象保存控件的触发操作,获取移动后指定对象的位置对应的第一挂接信息,将第一挂接信息确定为目标序列帧中指定对象的挂接信息,以及目标资源中除目标序列帧以外的序列帧中指定对象的挂接信息;将第一挂接信息保存至目标资源。
通常上述图形用户界面包括关联对象保存控件和虚拟对象保存控件,例如图3所示的保存偏移控件,即上述关联对象保存控件,保存图片控件,即上述虚拟对象保存控件。
具体的,当开发人员在移动关联对象到满意的位置后,可以点击关联对象保存控件(对应图中的保存偏移控件),终端设备或者终端设备后端即可根据移动后关联对象的位置,获取移动后关联对象的位置对应的第一挂接信息,具体获取移动后关联对象与虚拟对象相结合的中心点位置信息,然后将获取的第一挂接信息确定为目标序列帧中关联对象的挂接信息,以及目标资源中除目标序列帧以外的序列帧中关联对象的挂接信息;最后直接将第一挂接信息保存至目标资源,以更新替换关联对象的初始挂接信息。
另外,当开发人员在移动虚拟对象到满意的位置后,可以点击虚拟对象保存控件(对应图中的保存图片控件),终端设备或者终端设备后端即可根据移动后虚拟对象的位置,获取移动后虚拟对象的位置对应的第一挂接信息,具体获取移动后虚拟对象与关联对象相结合的中心点位置信息,然后将获取的第一挂接信息确定为目标序列帧中虚拟对象的挂接信息,以及目标资源中除目标序列帧以外的序列帧中虚拟对象的挂接信息;最后直接将第一挂接信息保存至目标资源,以更新替换虚拟对象的初始挂接信息。另外,图形 用户界面中还包括“所有方向都存”的可选框,选中该可选框,再点击保存图片即可将多有图片都保存。
上述方式中,调整当前显示的目标序列帧中指定对象的位置的操作后,通过对象保存控件可以在修改目标序列帧中指定对象的挂接信息之后,同时可以直接批量处理所有序列帧中指定对象的挂接信息并保存更新至目标资源,避免了虚拟对象移动后,挂点位置偏移导致需要大量时间修改的问题,上述方式操作简单,大大的缩短了制作人员的修改时间,实现了流程优化和成本降低。
另外,上述取移动后指定对象的位置对应的第一挂接信息的步骤,一种可能的实施方式:获取移动后指定对象的位置对应的第一位置信息;将第一位置信息转换为指定对象对应的像素位置信息,得到第一挂接信息。
首先需要说明的是,实际上目标资源中包括的虚拟对象和关联对象的挂接信息的实现方式,与3D制作方式一样,创建挂接点放到虚拟对象合适的背部位置,做好挂点动画。由于虚拟对象是图片形式的2D对象不支持骨骼输出,就只能让程序读取挂点的位置信息。该挂点信息是取Sphere01的世界空间位置,即上述第一位置信息,再把这个第一位置信息转换成指定对象对应的像素位置信息,即指定对象的图片的像素位置。基于上述挂接信息实现方式,在控制指定对象移动后,获取移动后指定对象的位置对应的第一位置信息,然后把第一位置信息转换为指定对象对应的像素位置信息,得到所述第一挂接信息。其中的挂点动画没有链接关系,有自身的动画。稍微有一点起伏,不会像本身动作这么大,没有旋转。
由于游戏对象在实际的游戏场景中会显示游戏对象以及玩家的身份信息,为了提高在游戏中实际的动画效果,上述图形用户界面还包括文本显示控件;如图4所示的“显示名字称谓”的控件。具体的,响应于文本显示控件的触发操作,在图形用户界面的指定位置显示预设文本。
由于在开发游戏角色时,玩家都具有名字,该名字需要显示在目标序列帧的下方位置,在实际的游戏场景中显示整个目标序列帧和称谓的一个整体预览形式。实际实现时,可以点击文本显示控件,即可在图形用户界面的指定位置显示预设文本,该预设文本通常设置有文本的大小、数量和格式,如图4所示显示有两行文本,分别为“称谓效果预览”和“玩家名字六子”。比如,玩家通常会有自己的昵称(如“酷炫玩家”)这个游戏角色也会有一个部落的名字显示在玩家昵称的上方。需要说明的是,上述预设文本的位置不会发生改变,始终位于初始显示位置。
上述方式中,通过文本显示控件可以使预设文本显示在图形用户界面的指定位置,更加符合真实的动画显示效果。
在图形用户界面中不仅可以控制目标序列帧中虚拟对象和关联对象移动,还可以控制目标序列帧移动;上述图形用户界面还包括目标序列帧偏移控件;上述方法还包括: 响应作用于目标序列帧偏移控件的第二操作,控制目标序列帧移动;根据移动后目标序列帧的位置,更新目标序列帧的位置信息。
由于前述调整挂接信息时会移动虚拟对象或关联对象,因此会改变目标序列帧的位置,由于上述的预设文本的位置不会发生变化,所以改变指定对象后,目标序列帧可能会与预设文本相互重叠。本实施例中,可以通过目标序列帧偏移控件,使目标序列帧移动,实时显示目标序列帧移动后的位置,以使目标序列帧与预设文本之间保持一个合适的位置。
上述目标序列帧偏移控件用于控制目标序列帧移动,为了简化开发人员的操作,可以预先设置多个固定移动方向的偏移控件,开发人员点击不同方向的偏移控件会控制目标序列帧向该方向移动,其中移动的距离可以很具实际需要进行预先设置。上述第二操作可以是通过鼠标点击目标序列帧中不同方向的偏移控件。
实际上,如图4所示,可以在目标序列帧偏移控件的附近位置设置一个“与坐骑偏移同步”的选择框,当点击该选择框后,点击目标序列帧偏移控件即可控制目标序列帧移动,其中的移动操作于控制指定对象的过程相同在此不在赘述。
上述目标序列帧的位置信息通常包括目标序列帧在的位置,还可以包括目标序列帧与预设文本的相对位置。具体的,当控制目标序列帧移动后,在图形用户界面显示移动后的目标序列帧的位置。然后可以根据移动后的目标序列帧的位置,获取该位置的位置信息,包括目标序列帧的位置信息以及目标序列帧与预设文本的相对位置信息。得到位置信息后,可以根据开发人员的保存操作,把目标资源中的初始位置信息更新为上述目标序列帧的位置信息。
上述方式中,在对象编辑器中可直视化编辑目标资源对应的目标序列帧,通过目标序列帧偏移控件控制指目标序列帧移动,实时显示移动后的目标序列帧,进而保存目标序列帧的位置信息,操作简单清晰,简化了游戏对象的开发过程,减少了人工成本和时间成本,提高了游戏对象的动画效果。
上述目标序列帧偏移控件包括多个目标序列帧偏移子控件,每个目标序列帧偏移子控件对应一个偏移方向,不同目标序列帧偏移子控件对应的偏移方向不同;
如图4所示,方框内的目标序列帧偏移控件,上述目标序列帧偏移控件包括四个目标序列帧偏移子控件,其中第一目标序列帧偏移子控件对应的偏移方向为向上偏移,第二目标序列帧偏移子控件对应的偏移方向为向下偏移,第三目标序列帧偏移子控件对应的偏移方向为向左偏移,第四目标序列帧偏移子控件对应的偏移方向为向右偏移。
下面具体描述如何控制目标序列帧移动,一种可能的实施方式:响应于多个目标序列帧偏移子控件中第一目标序列帧偏移子控件的触发操作,控制目标序列帧向第一目标序列帧偏移子控件对应的第一偏移方向移动,并显示目标序列帧的移动动作画面。
实际实现时,需要先选择图4中“与坐骑偏移同步”前面的可选框,然后才点击目 标序列帧子控件,控制目标序列帧偏移。同时可以在图形用户界面显示目标序列帧的移动动作画面,可以使开发人员实时观察移动后的位置。需要说明的是,控制目标序列帧移动并不会改变提前调试好的虚拟对象和关联对象的挂接信息。
上述方式中,为了使目标序列帧与预设文字之间没有遮挡,可以通过目标序列帧偏移控件,控制目标序列帧向指定的方向移动,并实时显示移动画面,以达到想要的位置效果,从而直观的对目标序列帧的位置进行调试,操作简单方便。
上述图形用户界面还包括目标序列帧保存控件;上述根据移动后目标序列帧的位置,更新目标序列帧的位置信息的步骤,一种可能的实施方式:响应于目标序列帧控件的触发操作,获取当前目标序列帧的第二位置信息,将第二位置信息确定当前目标序列帧的位置信息,以及目标资源对应的所有序列帧中的位置信息;将第二位置信息更新至目标资源。
如图4所示的“保存Lua”控件,即上述目标序列帧保存控件。具体的,当开发人员在移动目标序列帧到满意的位置后,可以点击目标序列帧保存控件(对应图中的保存Lua控件),终端设备或者终端设备后端即可根据移动后目标序列帧的位置,获取移动后目标序列帧的第二位置信息,具体获取移动后目标序列帧与预设文本的相对位置信息,然后将获取的目标序列帧的位置信息,以及目标资源中除目标序列帧以外的序列帧的位置信息确定为上述第二位置信息;最后直接将第二位置信息保存至目标资源,以更新替换目标序列帧的初始位置信息。
上述方式中,调整当前显示的目标序列帧的位置后,通过目标序列帧保存控件可以在修改目标序列帧的位置之后,同时可以直接批量处理所有序列帧的位置信息并保存更新至目标资源,该方式操作简单方便。
相关技术中,如果相关软件输出的游戏对象的美术资源中包括的深度图阈值有问题,则需要美术人员通过PS软件(Adobe Photoshop,简称“PS”,一种图像处理软件),手动调整深度图的边缘,手动去掉灰色或者白色像素,调整为纯黑和纯白的状态,确保坐骑图片和角色的前后遮挡关系是完美的。该种调整方式需要PS软件进行批处理,操作过程繁琐。
基于此本实施例提供了一种游戏对象的编辑方法,本实施例在上述实施例的基础上实现,且上述图形用户界面还包括目标序列帧编辑控件;该方法具体包括:响应于目标序列帧编辑控件的触发操作,显示目标序列帧的深度图;其中,深度图的背景画面为目标序列帧的掩膜图;响应针对深度图的颜色编辑操作,确定虚拟对象和关联对象的遮挡关系信息。
上述图形用户界面还包括目标序列帧编辑控件,即图5中的“编辑”控件,上述深度图主要用于表示目标序列帧中虚拟对象和关联对象之间的遮挡关系。当开发人员可以点击目标序列帧编辑控件后,此时在图形用户界面会显示目标序列帧的深度图。其中设 置掩膜图为深度图的背景,可以使开发人员能够直观的根据背景中对象的遮挡关系进行颜色编辑操作。当然如果背景是纯白色的,则会显示一个纯黑色的深度图片,不利于开发人员观察深度图是否有问题。
实际实现时,当当开发人员可以点击目标序列帧编辑控件后,则鼠标的光标即可变成一个笔刷,该笔刷的大小可以根据图5中的“粗细”输入控件进行修改,一般笔刷越细编辑的范围则越精细。然后开发人员可以通过图5中的“白色”和“黑色”选择框,选择笔刷的颜色,然后在需要修改的区域,根据需要将该区域填充为白色或者黑色,填充完成后点击“加载”控件,即可根据当前的颜色确定虚拟对象和关联对象的遮挡关系。
上述方式中,在对象编辑器中可直视化编辑目标资源对应的目标序列帧的深度图,点击编辑控件然后调整笔刷大小,可以直接对深度图进行清理,实时显示编辑后的目标序列帧的掩膜图,进而保存编辑后的遮挡关系信息,操作简单清晰,简化了游戏对象的开发过程,减少了人工成本和时间成本,提高了游戏对象的动画效果。
下面描述如何确定目标序列帧的掩膜图,具体的,根据目标序列帧中指定对象的第一挂接信息,确定虚拟对象的掩膜图;掩膜图包括第一颜色区域和第二颜色区域;其中,虚拟对象的关联对象位于第一颜色区域和第二颜色区域的中间区域。
具体的,当点击目标序列帧编辑控件时,会根据目标序列帧中指定对象的第一挂接信息中的挂接点位置为坐标,自动输出许你对象的前后层,即上第一颜色区域和第二颜色区域,通常第一颜色区域在前层,第二颜色区域在后层,然后将关联对象设置在第一颜色区域和第二颜色区域的中间区域;可以理解为,虚拟对象的第一颜色区域在前,始终挡住关联对象的部分区域。虚拟对象的第二颜色区域在后,始终被关联对象挡住部分区域。该方式减少了计算量,输出现实方便,不需要手动拆分合成技术优化。
其中,美术和程序统一后,虚拟对象进行前后切分,生成虚拟对象前后层资源,程序同学游戏运行时进行结合就可以实现正常角色坐在坐骑上的效果了。不需要再运行时进行计算,则效率比原有提高30%。
下面具体描述如何确定虚拟对象和关联对象的遮挡关系信息,一种可能的实施方式:响应针对深度图中指定区域的第一颜色填充操作,确定指定区域中虚拟对象和关联对象的遮挡关系信息为虚拟对象遮挡关联对象;响应针对深度图中指定区域的第二颜色填充操作,确定指定区域中虚拟对象和关联对象的遮挡关系信息为关联对象遮挡虚拟对象。
上述第一颜色可以是黑色,第二颜色可以是白色。当笔刷为黑色时,对深度图中指定区域填充黑色后,该区域内虚拟对象则会遮挡关联对象,当笔刷为白色时,对深度图中指定区域填充白色后,该区域内关联对象则会遮挡虚拟对象,主要是通过控制笔刷的颜色,控制虚拟对象和关联对象的前后遮挡关系,类似于深度图的原理,判断各个对象之间的位置。具体的,在填充操作完成后,则会显示操作后的目标序列帧的掩膜图和深 度图,根据显示的图获取虚拟对象与关联对象的遮挡关系,当编辑到满意的效果后,可以将该遮挡关系进行保存更新。
上述方式中,深度图可以在编辑器中可视化处理,直接把功能优化到编辑器内,点击编辑按钮调整笔刷大小,可以直接对深度图进行清理。避免了通过PS软件对深度图进行处理,操作方便简单。
另外,上述图形用户界面还包括其他的操作控件,如图5中的“复制、粘贴、部分粘贴、放大、还原、撤销、恢复”等控件,其中,点击复制按钮可以复制当前掩膜图信息,然后点击粘贴按钮复制到下一帧,部分黏贴是一个叠加模式,不会完全更改下一帧图片,而且将现有的更改和原有掩膜图信息累计在一起。其中撤销恢复操作等功能,提高了图像处理操作的效率。
另外,上述图形用户界面还包括对象保存控件;如图5所示的加载控件下方的保存控件。上述方法还包括:响应于对象保存控件的触发操作,获取指定对象的挂接信息对应的空间位置;将挂接信息对应的空间位置转换到材质中,生成一个目标材质;目标材质的中间点为挂接信息对应的空间位置;将制作好的目标材质赋予给虚拟对象,对虚拟对象进行渲染,并保存渲染后的虚拟对象。
具体的,利用脚本语音,比如MAX脚本语言(MaxScritp),获取第一挂接信息中挂接点的空间位置,将该空间位置转换到材质中,生成一个目标材,比如Gradient材质;通常,中间点则是挂接点的空间位置。目标材质制作好后再赋予给虚拟对象,进行渲染,得到渲染后的虚拟对象,并将其保存。注意空间位置和轴向,保证材质中的颜色渐变是从虚拟对象的头到尾的方向渐变即可。
现有技术中,开发人员在制作坐骑等美术资源的过程中,资源(包括角色及序列帧坐骑图片)通常存在于SVN(Subversion,是一个开放源代码的版本控制系统,通过采用分支管理系统的高效管理,简而言之就是用于多个人共同开发同一个项目,实现共享资源,实现最终集中式的管理)工程及外包递交P4V(Perforce Helix Core,一种版本控制软件,跟踪并管理对源代码、数字资产和大型二进制文件的更改,它创建了一个单一的数据来源和协作平台,可以帮助团队更快地推进工作)工程里,开发资源往往要在指定目录下,脚本功能才可生效,切换后就会影响功能引用。当需要对资源进行加载时,需要开发人员手动操作,如果把外包递交的资源拷贝到内部开发SVN工程里,开发人员较多,总是会出现一些问题,比如拷贝不全,只读不可编辑和保存等问题。
基于此,上述图形用户界面还包括资源加载控件;上述响应针对目标资源的加载操作,在图形用户界面中显示目标资源对应的目标序列帧的步骤,一种可能的实施方式:获取目标资源的资源路径信息;响应于资源加载控件的触发操作,在图形用户界面中显示目标资源对应的目标序列帧。
具体可以通过图5虚线框中的控件获取目标资源的资源路径信息,比如直接输入资 源路径信息,或者通过选择控件进行选择,总之不需要打开其他文件进行拷贝操作,即可获取目标资源的资源路径信息。在获取到目标资源的资源路径信息后,可以点击图5中加载控件,即可加载目标资源中的虚拟对象和关联对象以及他们之间的挂接信息,然后根据挂接信息将关联对象挂接在虚拟对象上,并显示在图形用户界面。通常以序列帧的方式显示。
上述方式中,通过在对象编辑器中通过切换路径的方式,可以直接加载资源,避免了手动操作出现的如拷贝不全、只读不可编辑等问题,该方式可以解决资源路径读取复杂,操作步骤多的问题,且该方式容错率低,大大方便了美术人员的工作流程。
上述图形用户界面还包括资源路径输入框和资源路径选择控件;获取目标资源的资源路径信息的步骤,一种可能的实施方式:响应作用于路径输入框中目标资源的资源路径信息的输入操作,获取目标资源的资源路径信息。
如图5所示的路径输入框和资源路径选择控件,实际实现时,开发人员需要先选择“指定目录”的可选框,然后可以在其他文件中获取目标资源的资源路径信息,然后复制粘贴到路径输入框,也可以点击资源路径选择控件,显示多个资源文件,同时可以复制目标资源的资源路径信息,然后粘贴到路径输入框,即可获取目标资源的资源路径信息。
另一种可能的实施方式:响应于资源路径选择控件的触发操作,显示可选择的资源路径信息,响应针对可选择的资源路径信息中目标资源的资源路径信息的选择操作,获取目标资源的资源路径信息。
实际实现时,开发人员可以点击资源路径选择控件,显示多个资源文件,然后点击选择目标资源,路径输入框就会显示目标资源的资源路径信息,即可获取目标资源的资源路径信息。
另外,上述图形用户界面还包括资源路径清空控件,点击该控件即可删除之前选择的路径。需要说明的是,在启动对象编辑器时,该编辑器会默认获取一个预设的资源路径。
上述方式中,开发人员可以在对象编辑器中通过选择的方式或者输入路径的方式直接加载资源,避免了手动操作出现的如拷贝不全、只读不可编辑等问题,该方式可以解决资源路径读取复杂,操作步骤多的问题,且该方式容错率低,大大方便了美术人员的工作流程。
为了使开发人员能够更加直观的观看编辑对象后目标资源对应的动画效果,上述图形用户界面还包括动画播放控件;具体的:响应于动画播放控件的触发操作,播放目标序列帧对应的动作画面。
如图5所示的播放控件,当开发人员点击播放时,即可在图形用户界面播放当前目标序列帧对应的动作画面,通常目标资源中包括虚拟对象和关联对象的多种动作,用户 可以根据动作切换控件,如图5中的“←”和“→”控件,更改正在播放的目标资源对应的动作画面,比如,点击切换下一个动作的控件“→”,即可播放目标资源对应的下一个动作画面。其中,播放的动作是目标资源中预设的动作,具体为播放一系列的序列帧,完成动画播放。
上述方式中,可以在对象编辑器中直接预览最终的动画效果,使开发人员可以更加直观的观看编辑后的动画效果,提高了制作效率。
为了进一步提高动画效果,使动画更直观的相融于场景之中,达到实时预览最终效果;上述图形用户界面还包括多个背景画面显示控件;具体的:响应于多个背景画面显示控件中第一背景画面显示控件的触发操作,在目标序列帧的背景区域显示第一背景画面显示控件对应的第一背景画面。
如图5所示的12个背景画面显示控件,每个背景画面显示控件对应有一种游戏场景画面。实际实现时,在播放动画的过程中,或者在播放动画之前可以点击想要预览的目标序列帧的背景画面,使播放的动画更直观的相融于场景之中,达到实时预览最终画面的效果。
如果开发人员想要单独更新其中一个序列帧的挂接点信息,为了满足开发人员的需求,上述图形用户界面还包括复制偏移控件;具体的:响应于复制偏移控件的触发操作,获取当前目标序列帧中指定对象的挂接信息;响应针对当前目标序列帧的切换操作,将当前目标序列帧中指定对象的挂接信息保存至切换操作对应的序列帧中。
如果开发人员想要调整单个帧中对象的挂点信息,可以在控制当前目标序列帧移动后,点击复制偏移控件,然后获取当前目标序列帧中指定对象的挂接信息,然后点击下一帧就可以将前述获取的目标序列帧的中指定对象的挂接信息保存到下一帧的序列帧上。也就是说在对象编辑器中,开发人员不仅可以批量处理序列帧中对象的挂接信息,还可以单独处理其中一个序列帧中对象的挂接信息。该种编辑和处理方式更加灵活。
另外,在图形用户界面中还包括目标序列帧中对象ID的切换控件,用户可以在对应的输入框中输入想要更换的坐骑ID,点击加载控件即可显示更换后的坐骑,同样的,用户想要为角色加载一个挂件,则可以输入想要更换的挂件ID,点击加载控件即可显示加载的挂件。其他对象ID的加载和更换操作也是同样的过程。
上述方式中,总结来说就是采用挂点和深度图结合处理,达到了2d坐骑能够满足和3d角色匹配,多种时装共存的效果。
参见图6所示的完整的图形用户界面,其中包括很多区域,下面分别介绍每个区域的具体功能,第一区域,显示当前虚拟对象的编号、动作、帧数等;第二区域为遮罩粘贴移动面板,复制粘贴遮罩时,可以调节要粘贴的遮罩的偏移;第三区域为主角参数面板,要加载的关联对象的编号、虚拟对象编号等;第四区域为偏移控制面板,调节挂点、指定对象偏移等;第五区域为遮罩粘贴操作面板,复制粘贴遮罩图;第六区域为资源展 示面板:预览当前指定对象的效果;第七区域为其他信息面板,其中的刷新控件用于刷新当前的修改,编辑控件用于切换到遮罩图编辑模式,由于指定对象是序列帧资源,所以编辑的时候是一帧一帧编辑,上帧/下帧控件用于按需切换序列帧,方向控件用于切换资源的朝向。动作控件用于切换坐骑(虚拟对象)动作,加载控件用于重新加载资源,保存控件用于保存编辑完成的遮罩图。
对应上述的方法实施例,本公开实施例提供了一种游戏对象的编辑装置,通过终端设备提供一图形用户界面,图形用户界面中包括对象偏移控件;如图7所示,该装置包括:
显示模块71,用于响应针对目标资源的加载操作,在图形用户界面中显示目标资源对应的目标序列帧;其中,目标序列帧中包括虚拟对象以及虚拟对象的关联对象;目标资源中还包括虚拟对象和关联对象的挂接信息;
控制模块72,用于响应作用于对象偏移控件的第一操作,控制目标序列帧中指定对象移动;指定对象包括虚拟对象或关联对象;
更新模块73,用于根据移动后指定对象的位置,更新指定对象的挂接信息。
本公开实施例提供了一种游戏对象的编辑装置,响应针对目标资源的加载操作,显示目标资源对应的目标序列帧,该目标序列帧中包括虚拟对象和虚拟对象的关联对象;目标资源中还包括虚拟对象和关联对象的挂接信息;响应作用于对象偏移控件的第一操作,控制目标序列帧中指定对象移动;指定对象包括虚拟对象或关联对象;根据移动后指定对象的位置,更新指定对象的挂接信息。该方式可以直视的编辑目标资源对应的目标序列帧,通过对象偏移控件控制指定对象移动,实时显示移动后的目标序列帧,通过控制指定对象移动更新指定对象的挂接信息,操作简单清晰,简化了游戏对象的开发过程,减少了人工成本和时间成本,提高了游戏对象的动画效果。
进一步的,上述对象偏移控件包括多个对象偏移子控件,每个对象偏移子控件对应一个偏移方向,不同的对象偏移子控件对应的偏移方向不同;上述控制模块还用于:响应于多个对象偏移子控件中第一对象偏移子控件的触发操作,控制指定对象向第一对象偏移子控件对应的第一偏移方向移动。
进一步的,上述图形用户界面还包括对象保存控件;上述更新模块还用于:响应于对象保存控件的触发操作,获取移动后指定对象的位置对应的第一挂接信息,将第一挂接信息确定为目标序列帧中指定对象的挂接信息,以及目标资源中除目标序列帧以外的序列帧中指定对象的挂接信息;将第一挂接信息保存至目标资源。
进一步的,上述更新模块还用于:获取移动后指定对象的位置对应的第一位置信息;将第一位置信息转换为指定对象对应的像素位置信息,得到第一挂接信息。
进一步的,上述图形用户界面还包括目标序列帧偏移控件;上述装置还包括:第二控制模块,用于响应作用于目标序列帧偏移控件的第二操作,控制目标序列帧移动;第 二更新模块,用于根据移动后目标序列帧的位置,更新目标序列帧的位置信息。
进一步的,上述目标序列帧偏移控件包括多个目标序列帧偏移子控件,每个目标序列帧偏移子控件对应一个偏移方向,不同目标序列帧偏移子控件对应的偏移方向不同;上述第二控制模块还用于:响应于多个目标序列帧偏移子控件中第一目标序列帧偏移子控件的触发操作,控制目标序列帧向第一目标序列帧偏移子控件对应的第一偏移方向移动,并显示目标序列帧的移动动作画面。
进一步的,上述图形用户界面还包括目标序列帧保存控件;上述第二更新模块还用于:响应于目标序列帧控件的触发操作,获取当前目标序列帧的第二位置信息,将第二位置信息确定当前目标序列帧的位置信息,以及目标资源对应的所有序列帧中的位置信息;将第二位置信息更新至目标资源。
进一步的,上述图形用户界面还包括目标序列帧编辑控件;上述装置还包括:第二显示模块,用于:响应于目标序列帧编辑控件的触发操作,显示目标序列帧的深度图;其中,深度图的背景画面为目标序列帧的掩膜图;遮挡关系确定模块,用于响应针对深度图的颜色编辑操作,确定虚拟对象和关联对象的遮挡关系信息。
进一步的,上述装置还包括掩膜图确定模块,用于:根据目标序列帧中指定对象的第一挂接信息,确定虚拟对象的掩膜图;掩膜图包括第一颜色区域和第二颜色区域;其中,虚拟对象的关联对象位于第一颜色区域和第二颜色区域的中间区域。
进一步的,上述遮挡关系确定模块,还用于:响应针对深度图中指定区域的第一颜色填充操作,确定指定区域中虚拟对象和关联对象的遮挡关系信息为虚拟对象遮挡关联对象;响应针对深度图中指定区域的第二颜色填充操作,确定指定区域中虚拟对象和关联对象的遮挡关系信息为关联对象遮挡虚拟对象。
进一步的,上述图形用户界面还包括对象保存控件;上述装置还包括对象保存模块,用于:响应于对象保存控件的触发操作,获取指定对象的挂接信息对应的空间位置;将挂接信息对应的空间位置转换到材质中,生成一个目标材质;目标材质的中间点为挂接信息对应的空间位置;将制作好的目标材质赋予给虚拟对象,对虚拟对象进行渲染,并保存渲染后的虚拟对象。
进一步的,上述图形用户界面还包括资源加载控件;上述显示模块还用于:获取目标资源的资源路径信息;响应于资源加载控件的触发操作,在图形用户界面中显示目标资源对应的目标序列帧。
进一步的,上述图形用户界面还包括资源路径输入框和资源路径选择控件;上述显示模块还用于:响应作用于路径输入框中目标资源的资源路径信息的输入操作,获取目标资源的资源路径信息;或者,响应于资源路径选择控件的触发操作,显示可选择的资源路径信息,响应针对可选择的资源路径信息中目标资源的资源路径信息的选择操作,获取目标资源的资源路径信息。
进一步的,上述图形用户界面还包括动画播放控件;上述装置还包括播放模块,用于:响应于动画播放控件的触发操作,播放目标序列帧对应的动作画面。
进一步的,上述图形用户界面还包括多个背景画面显示控件;上述装置还包括背景画面显示模块,用于:响应于多个背景画面显示控件中第一背景画面显示控件的触发操作,在目标序列帧的背景区域显示第一背景画面显示控件对应的第一背景画面。
进一步的,上述图形用户界面还包括文本显示控件;上述装置还包括文本显示模块,用于响应于文本显示控件的触发操作,在图形用户界面的指定位置显示预设文本。
进一步的,上述图形用户界面还包括复制偏移控件;上述装置还包括复制偏移模块,用于:响应于复制偏移控件的触发操作,获取当前目标序列帧中指定对象的挂接信息;响应针对当前目标序列帧的切换操作,将当前目标序列帧中指定对象的挂接信息保存至切换操作对应的序列帧中。
本公开实施例提供的游戏对象的编辑装置,与上述实施例提供的游戏对象的编辑方法具有相同的技术特征,所以也能解决相同的技术问题,达到相同的技术效果。
本实施例还提供一种电子设备,包括处理器和存储器,存储器存储有能够被处理器执行的计算机可执行指令,处理器执行计算机可执行指令以实现上述游戏对象的编辑方法。该电子设备可以是服务器,也可以是终端设备。
参见图8所示,该电子设备包括处理器100和存储器101,该存储器101存储有能够被处理器100执行的计算机可执行指令,该处理器100执行计算机可执行指令以实现上述游戏对象的编辑方法。
上述处理器100执行计算机可执行指令还可以实现以下步骤:
The object offset control includes a plurality of object offset sub-controls, each object offset sub-control corresponds to one offset direction, and different object offset sub-controls correspond to different offset directions. The step of controlling the designated object in the target sequence frame to move in response to the first operation acting on the object offset control includes: in response to a trigger operation on a first object offset sub-control among the plurality of object offset sub-controls, controlling the designated object to move in a first offset direction corresponding to the first object offset sub-control.
Further, the graphical user interface further includes an object save control. The step of updating the attachment information of the designated object according to the position of the designated object after the movement includes: in response to a trigger operation on the object save control, acquiring first attachment information corresponding to the position of the designated object after the movement, determining the first attachment information as the attachment information of the designated object in the target sequence frame and as the attachment information of the designated object in the sequence frames of the target resource other than the target sequence frame; and saving the first attachment information to the target resource.
Further, the step of acquiring the first attachment information corresponding to the position of the designated object after the movement includes: acquiring first position information corresponding to the position of the designated object after the movement; and converting the first position information into pixel position information corresponding to the designated object to obtain the first attachment information.
Further, the graphical user interface further includes a target sequence frame offset control. The method further includes: controlling the target sequence frame to move in response to a second operation acting on the target sequence frame offset control; and updating the position information of the target sequence frame according to the position of the target sequence frame after the movement.
Further, the target sequence frame offset control includes a plurality of target sequence frame offset sub-controls, each target sequence frame offset sub-control corresponds to one offset direction, and different target sequence frame offset sub-controls correspond to different offset directions. The step of controlling the target sequence frame to move in response to the second operation acting on the target sequence frame offset control includes: in response to a trigger operation on a first target sequence frame offset sub-control among the plurality of target sequence frame offset sub-controls, controlling the target sequence frame to move in a first offset direction corresponding to the first target sequence frame offset sub-control, and displaying a moving action picture of the target sequence frame.
Further, the graphical user interface further includes a target sequence frame save control. The step of updating the position information of the target sequence frame according to the position of the target sequence frame after the movement includes: in response to a trigger operation on the target sequence frame save control, acquiring second position information of the current target sequence frame, determining the second position information as the position information of the current target sequence frame and as the position information in all sequence frames corresponding to the target resource; and updating the second position information to the target resource.
Further, the graphical user interface further includes a target sequence frame edit control. The method further includes: displaying a depth map of the target sequence frame in response to a trigger operation on the target sequence frame edit control, where the background picture of the depth map is a mask map of the target sequence frame; and determining occlusion relationship information between the virtual object and the associated object in response to a color editing operation on the depth map.
Further, the mask map of the target sequence frame is determined through the following steps: determining a mask map of the virtual object according to the first attachment information of the designated object in the target sequence frame, where the mask map includes a first color region and a second color region, and the associated object of the virtual object is located in the middle region between the first color region and the second color region.
Further, the step of determining the occlusion relationship information between the virtual object and the associated object in response to the color editing operation on the depth map includes: in response to a first color filling operation on a designated region in the depth map, determining that the occlusion relationship information between the virtual object and the associated object in the designated region is that the virtual object occludes the associated object; and in response to a second color filling operation on the designated region in the depth map, determining that the occlusion relationship information between the virtual object and the associated object in the designated region is that the associated object occludes the virtual object.
Further, the graphical user interface further includes an object save control. The method further includes: in response to a trigger operation on the object save control, acquiring a spatial position corresponding to the attachment information of the designated object; converting the spatial position corresponding to the attachment information into a material to generate a target material, where the middle point of the target material is the spatial position corresponding to the attachment information; and assigning the prepared target material to the virtual object, rendering the virtual object, and saving the rendered virtual object.
Further, the graphical user interface further includes a resource load control. The step of displaying, in response to the loading operation for the target resource, the target sequence frame corresponding to the target resource in the graphical user interface includes: acquiring resource path information of the target resource; and displaying, in response to a trigger operation on the resource load control, the target sequence frame corresponding to the target resource in the graphical user interface.
Further, the graphical user interface further includes a resource path input box and a resource path selection control. The step of acquiring the resource path information of the target resource includes: acquiring the resource path information of the target resource in response to an input operation, acting on the path input box, of the resource path information of the target resource; or, in response to a trigger operation on the resource path selection control, displaying selectable resource path information, and acquiring the resource path information of the target resource in response to a selection operation on the resource path information of the target resource among the selectable resource path information.
Further, the graphical user interface further includes an animation playback control. The method further includes: playing an action picture corresponding to the target sequence frames in response to a trigger operation on the animation playback control.
Further, the graphical user interface further includes a plurality of background picture display controls. The method further includes: in response to a trigger operation on a first background picture display control among the plurality of background picture display controls, displaying, in the background region of the target sequence frame, a first background picture corresponding to the first background picture display control.
Further, the graphical user interface further includes a text display control. The method further includes: displaying preset text at a designated position of the graphical user interface in response to a trigger operation on the text display control.
Further, the graphical user interface further includes a copy offset control. The method further includes: acquiring the attachment information of the designated object in the current target sequence frame in response to a trigger operation on the copy offset control; and, in response to a switching operation on the current target sequence frame, saving the attachment information of the designated object in the current target sequence frame to the sequence frame corresponding to the switching operation.
In this manner, the target sequence frame corresponding to the target resource can be edited intuitively: the designated object is moved through the object offset control, the moved target sequence frame is displayed in real time, and the attachment information of the designated object is updated by controlling the movement of the designated object. The operation is simple and clear, which simplifies the development process of game objects, reduces labor and time costs, and improves the animation effect of game objects.
Further, the electronic device shown in Fig. 8 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103 and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed Random Access Memory (RAM), and may further include a non-volatile memory, for example at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented through at least one communication interface 103 (which may be wired or wireless), and the Internet, a wide area network, a local network, a metropolitan area network, or the like may be used. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one double-headed arrow is used in Fig. 8, but this does not mean that there is only one bus or only one type of bus.
The processor 100 may be an integrated circuit chip with signal processing capability. In an implementation process, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), or the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps and logical block diagrams disclosed in the embodiments of the present disclosure may be implemented or executed by it. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in combination with the embodiments of the present disclosure may be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, completes the steps of the methods of the foregoing embodiments.
This embodiment further provides a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions, when invoked and executed by a processor, cause the processor to implement the above game object editing method.
The computer-readable storage medium may further be configured to store computer-executable instructions for performing the following steps:
The object offset control includes a plurality of object offset sub-controls, each object offset sub-control corresponds to one offset direction, and different object offset sub-controls correspond to different offset directions. The step of controlling the designated object in the target sequence frame to move in response to the first operation acting on the object offset control includes: in response to a trigger operation on a first object offset sub-control among the plurality of object offset sub-controls, controlling the designated object to move in a first offset direction corresponding to the first object offset sub-control.
Further, the graphical user interface further includes an object save control. The step of updating the attachment information of the designated object according to the position of the designated object after the movement includes: in response to a trigger operation on the object save control, acquiring first attachment information corresponding to the position of the designated object after the movement, determining the first attachment information as the attachment information of the designated object in the target sequence frame and as the attachment information of the designated object in the sequence frames of the target resource other than the target sequence frame; and saving the first attachment information to the target resource.
Further, the step of acquiring the first attachment information corresponding to the position of the designated object after the movement includes: acquiring first position information corresponding to the position of the designated object after the movement; and converting the first position information into pixel position information corresponding to the designated object to obtain the first attachment information.
Further, the graphical user interface further includes a target sequence frame offset control. The method further includes: controlling the target sequence frame to move in response to a second operation acting on the target sequence frame offset control; and updating the position information of the target sequence frame according to the position of the target sequence frame after the movement.
Further, the target sequence frame offset control includes a plurality of target sequence frame offset sub-controls, each target sequence frame offset sub-control corresponds to one offset direction, and different target sequence frame offset sub-controls correspond to different offset directions. The step of controlling the target sequence frame to move in response to the second operation acting on the target sequence frame offset control includes: in response to a trigger operation on a first target sequence frame offset sub-control among the plurality of target sequence frame offset sub-controls, controlling the target sequence frame to move in a first offset direction corresponding to the first target sequence frame offset sub-control, and displaying a moving action picture of the target sequence frame.
Further, the graphical user interface further includes a target sequence frame save control. The step of updating the position information of the target sequence frame according to the position of the target sequence frame after the movement includes: in response to a trigger operation on the target sequence frame save control, acquiring second position information of the current target sequence frame, determining the second position information as the position information of the current target sequence frame and as the position information in all sequence frames corresponding to the target resource; and updating the second position information to the target resource.
Further, the graphical user interface further includes a target sequence frame edit control. The method further includes: displaying a depth map of the target sequence frame in response to a trigger operation on the target sequence frame edit control, where the background picture of the depth map is a mask map of the target sequence frame; and determining occlusion relationship information between the virtual object and the associated object in response to a color editing operation on the depth map.
Further, the mask map of the target sequence frame is determined through the following steps: determining a mask map of the virtual object according to the first attachment information of the designated object in the target sequence frame, where the mask map includes a first color region and a second color region, and the associated object of the virtual object is located in the middle region between the first color region and the second color region.
Further, the step of determining the occlusion relationship information between the virtual object and the associated object in response to the color editing operation on the depth map includes: in response to a first color filling operation on a designated region in the depth map, determining that the occlusion relationship information between the virtual object and the associated object in the designated region is that the virtual object occludes the associated object; and in response to a second color filling operation on the designated region in the depth map, determining that the occlusion relationship information between the virtual object and the associated object in the designated region is that the associated object occludes the virtual object.
Further, the graphical user interface further includes an object save control. The method further includes: in response to a trigger operation on the object save control, acquiring a spatial position corresponding to the attachment information of the designated object; converting the spatial position corresponding to the attachment information into a material to generate a target material, where the middle point of the target material is the spatial position corresponding to the attachment information; and assigning the prepared target material to the virtual object, rendering the virtual object, and saving the rendered virtual object.
Further, the graphical user interface further includes a resource load control. The step of displaying, in response to the loading operation for the target resource, the target sequence frame corresponding to the target resource in the graphical user interface includes: acquiring resource path information of the target resource; and displaying, in response to a trigger operation on the resource load control, the target sequence frame corresponding to the target resource in the graphical user interface.
Further, the graphical user interface further includes a resource path input box and a resource path selection control. The step of acquiring the resource path information of the target resource includes: acquiring the resource path information of the target resource in response to an input operation, acting on the path input box, of the resource path information of the target resource; or, in response to a trigger operation on the resource path selection control, displaying selectable resource path information, and acquiring the resource path information of the target resource in response to a selection operation on the resource path information of the target resource among the selectable resource path information.
Further, the graphical user interface further includes an animation playback control. The method further includes: playing an action picture corresponding to the target sequence frames in response to a trigger operation on the animation playback control.
Further, the graphical user interface further includes a plurality of background picture display controls. The method further includes: in response to a trigger operation on a first background picture display control among the plurality of background picture display controls, displaying, in the background region of the target sequence frame, a first background picture corresponding to the first background picture display control.
Further, the graphical user interface further includes a text display control. The method further includes: displaying preset text at a designated position of the graphical user interface in response to a trigger operation on the text display control.
Further, the graphical user interface further includes a copy offset control. The method further includes: acquiring the attachment information of the designated object in the current target sequence frame in response to a trigger operation on the copy offset control; and, in response to a switching operation on the current target sequence frame, saving the attachment information of the designated object in the current target sequence frame to the sequence frame corresponding to the switching operation.
In this manner, the target sequence frame corresponding to the target resource can be edited intuitively: the designated object is moved through the object offset control, the moved target sequence frame is displayed in real time, and the attachment information of the designated object is updated by controlling the movement of the designated object. The operation is simple and clear, which simplifies the development process of game objects, reduces labor and time costs, and improves the animation effect of game objects.
The computer program product of the game object editing method, apparatus, electronic device and storage medium provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the method described in the foregoing method embodiments; for the specific implementation, reference may be made to the method embodiments, which will not be repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
In addition, in the description of the embodiments of the present disclosure, unless otherwise expressly specified and limited, the terms "mounted", "connected" and "coupled" should be understood in a broad sense; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediate medium, or an internal communication between two elements. For those skilled in the art, the specific meanings of the above terms in the present disclosure can be understood according to specific circumstances.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
In the description of the present disclosure, it should be noted that the orientations or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present disclosure and simplifying the description, rather than indicating or implying that the referred apparatus or element must have a specific orientation or be constructed and operated in a specific orientation; therefore, they cannot be understood as limiting the present disclosure. In addition, the terms "first", "second" and "third" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific implementations of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still, within the technical scope disclosed by the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features therein; and these modifications, changes or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (20)

  1. A game object editing method, wherein a graphical user interface is provided through a terminal device, and the graphical user interface includes an object offset control; the method comprising:
    displaying, in response to a loading operation for a target resource, a target sequence frame corresponding to the target resource in the graphical user interface, wherein the target sequence frame comprises a virtual object and an associated object of the virtual object, and the target resource further comprises attachment information of the virtual object and the associated object;
    controlling, in response to a first operation acting on the object offset control, a designated object in the target sequence frame to move, wherein the designated object comprises the virtual object or the associated object; and
    updating the attachment information of the designated object according to a position of the designated object after the movement.
  2. The method according to claim 1, wherein the object offset control comprises a plurality of object offset sub-controls, each of the object offset sub-controls corresponds to one offset direction, and different object offset sub-controls correspond to different offset directions;
    the step of controlling, in response to the first operation acting on the object offset control, the designated object in the target sequence frame to move comprises:
    controlling, in response to a trigger operation on a first object offset sub-control among the plurality of object offset sub-controls, the designated object to move in a first offset direction corresponding to the first object offset sub-control.
  3. The method according to claim 1, wherein the graphical user interface further comprises an object save control;
    the step of updating the attachment information of the designated object according to the position of the designated object after the movement comprises:
    acquiring, in response to a trigger operation on the object save control, first attachment information corresponding to the position of the designated object after the movement, and determining the first attachment information as the attachment information of the designated object in the target sequence frame and as attachment information of the designated object in sequence frames of the target resource other than the target sequence frame; and
    saving the first attachment information to the target resource.
  4. The method according to claim 3, wherein the step of acquiring the first attachment information corresponding to the position of the designated object after the movement comprises:
    acquiring first position information corresponding to the position of the designated object after the movement; and
    converting the first position information into pixel position information corresponding to the designated object to obtain the first attachment information.
  5. The method according to claim 1, wherein the graphical user interface further comprises a target sequence frame offset control; the method further comprises:
    controlling, in response to a second operation acting on the target sequence frame offset control, the target sequence frame to move; and
    updating position information of the target sequence frame according to a position of the target sequence frame after the movement.
  6. The method according to claim 5, wherein the target sequence frame offset control comprises a plurality of target sequence frame offset sub-controls, each of the target sequence frame offset sub-controls corresponds to one offset direction, and different target sequence frame offset sub-controls correspond to different offset directions;
    the step of controlling, in response to the second operation acting on the target sequence frame offset control, the target sequence frame to move comprises:
    controlling, in response to a trigger operation on a first target sequence frame offset sub-control among the plurality of target sequence frame offset sub-controls, the target sequence frame to move in a first offset direction corresponding to the first target sequence frame offset sub-control, and displaying a moving action picture of the target sequence frame.
  7. The method according to claim 5, wherein the graphical user interface further comprises a target sequence frame save control;
    the step of updating the position information of the target sequence frame according to the position of the target sequence frame after the movement comprises:
    acquiring, in response to a trigger operation on the target sequence frame save control, second position information of the current target sequence frame, and determining the second position information as the position information of the current target sequence frame and as position information in all sequence frames corresponding to the target resource; and
    updating the second position information to the target resource.
  8. The method according to claim 1, wherein the graphical user interface further comprises a target sequence frame edit control; the method further comprises:
    displaying, in response to a trigger operation on the target sequence frame edit control, a depth map of the target sequence frame, wherein a background picture of the depth map is a mask map of the target sequence frame; and
    determining, in response to a color editing operation on the depth map, occlusion relationship information between the virtual object and the associated object.
  9. The method according to claim 8, wherein the mask map of the target sequence frame is determined through the following step:
    determining a mask map of the virtual object according to first attachment information of the designated object in the target sequence frame, wherein the mask map comprises a first color region and a second color region, and the associated object of the virtual object is located in a middle region between the first color region and the second color region.
  10. The method according to claim 8, wherein the step of determining, in response to the color editing operation on the depth map, the occlusion relationship information between the virtual object and the associated object comprises:
    determining, in response to a first color filling operation on a designated region in the depth map, that the occlusion relationship information between the virtual object and the associated object in the designated region is that the virtual object occludes the associated object; and
    determining, in response to a second color filling operation on the designated region in the depth map, that the occlusion relationship information between the virtual object and the associated object in the designated region is that the associated object occludes the virtual object.
  11. The method according to claim 1, wherein the graphical user interface further comprises an object save control; the method further comprises:
    acquiring, in response to a trigger operation on the save control, a spatial position corresponding to the attachment information of the designated object;
    converting the spatial position corresponding to the attachment information into a material to generate a target material, wherein a middle point of the target material is the spatial position corresponding to the attachment information; and
    assigning the prepared target material to the virtual object, rendering the virtual object, and saving the rendered virtual object.
  12. The method according to claim 1, wherein the graphical user interface further comprises a resource load control;
    the step of displaying, in response to the loading operation for the target resource, the target sequence frame corresponding to the target resource in the graphical user interface comprises:
    acquiring resource path information of the target resource; and
    displaying, in response to a trigger operation on the resource load control, the target sequence frame corresponding to the target resource in the graphical user interface.
  13. The method according to claim 12, wherein the graphical user interface further comprises a resource path input box and a resource path selection control;
    the step of acquiring the resource path information of the target resource comprises:
    acquiring the resource path information of the target resource in response to an input operation, acting on the path input box, of the resource path information of the target resource; or,
    displaying, in response to a trigger operation on the resource path selection control, selectable resource path information, and acquiring the resource path information of the target resource in response to a selection operation on the resource path information of the target resource among the selectable resource path information.
  14. The method according to claim 1, wherein the graphical user interface further comprises an animation playback control; the method further comprises:
    playing, in response to a trigger operation on the animation playback control, an action picture corresponding to the target sequence frames.
  15. The method according to claim 1, wherein the graphical user interface further comprises a plurality of background picture display controls; the method further comprises:
    displaying, in response to a trigger operation on a first background picture display control among the plurality of background picture display controls, a first background picture corresponding to the first background picture display control in a background region of the target sequence frame.
  16. The method according to claim 1, wherein the graphical user interface further comprises a text display control; the method further comprises:
    displaying, in response to a trigger operation on the text display control, preset text at a designated position of the graphical user interface.
  17. The method according to claim 1, wherein the graphical user interface further comprises a copy offset control; the method further comprises:
    acquiring, in response to a trigger operation on the copy offset control, the attachment information of the designated object in the current target sequence frame; and
    saving, in response to a switching operation on the current target sequence frame, the attachment information of the designated object in the current target sequence frame to a sequence frame corresponding to the switching operation.
  18. A game object editing apparatus, wherein a graphical user interface is provided through a terminal device, and the graphical user interface includes an object offset control; the apparatus comprising:
    a display module, configured to display, in response to a loading operation for a target resource, a target sequence frame corresponding to the target resource in the graphical user interface, wherein the target sequence frame comprises a virtual object and an associated object of the virtual object, and the target resource further comprises attachment information of the virtual object and the associated object;
    a control module, configured to control, in response to a first operation acting on the object offset control, a designated object in the target sequence frame to move, wherein the designated object comprises the virtual object or the associated object; and
    an update module, configured to update the attachment information of the designated object according to a position of the designated object after the movement.
  19. An electronic device, comprising a processor and a memory, wherein the memory stores computer-executable instructions executable by the processor, and the processor executes the computer-executable instructions to implement the game object editing method according to any one of claims 1 to 17.
  20. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions, when invoked and executed by a processor, cause the processor to implement the game object editing method according to any one of claims 1 to 17.
PCT/CN2022/132048 2022-01-26 2022-11-15 Game object editing method and apparatus, and electronic device WO2023142614A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210092395.7A CN114549708A (zh) 2022-01-26 2022-01-26 Game object editing method and apparatus, and electronic device
CN202210092395.7 2022-01-26

Publications (1)

Publication Number Publication Date
WO2023142614A1 true WO2023142614A1 (zh) 2023-08-03

Family

ID=81672950

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/132048 WO2023142614A1 (zh) 2022-01-26 2022-11-15 Game object editing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN114549708A (zh)
WO (1) WO2023142614A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549708A (zh) * 2022-01-26 2022-05-27 网易(杭州)网络有限公司 游戏对象的编辑方法、装置和电子设备
CN115170709A (zh) * 2022-05-30 2022-10-11 网易(杭州)网络有限公司 动态图像的编辑方法、装置和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8366546B1 (en) * 2012-01-23 2013-02-05 Zynga Inc. Gamelets
CN112190948A * 2020-10-15 2021-01-08 网易(杭州)网络有限公司 Game map generation method and apparatus, electronic device, and storage medium
CN112546631A * 2020-12-23 2021-03-26 上海米哈游天命科技有限公司 Character control method and apparatus, device, and storage medium
CN112870704A * 2021-03-18 2021-06-01 腾讯科技(深圳)有限公司 Game data processing method and apparatus, and storage medium
CN114549708A * 2022-01-26 2022-05-27 网易(杭州)网络有限公司 Game object editing method and apparatus, and electronic device


Also Published As

Publication number Publication date
CN114549708A (zh) 2022-05-27


Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22923408

Country of ref document: EP

Kind code of ref document: A1