CN111589151A - Method, device, equipment and storage medium for realizing interactive function - Google Patents

Method, device, equipment and storage medium for realizing interactive function Download PDF

Info

Publication number
CN111589151A
CN111589151A
Authority
CN
China
Prior art keywords
built, interactive, function, target, interactive function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010424564.3A
Other languages
Chinese (zh)
Inventor
韦成龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010424564.3A priority Critical patent/CN111589151A/en
Publication of CN111589151A publication Critical patent/CN111589151A/en
Pending legal-status Critical Current

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/57Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a method, an apparatus, a device, and a storage medium for implementing an interactive function. When interaction with an object to be built is required, a target interactive function for the object to be built is determined according to its object type. The target interactive function may include an interactive function implemented through relatively complex operation, i.e., a first number of operations. To make the interactive function easier to use, a reduced-order operation function entry is provided according to the target interactive function; this entry implements the target interactive function based on a second number of operations, where the second number is smaller than the first. Therefore, when the reduced-order operation function entry is opened and an interactive operation of the user is received, the target interactive function can be executed on the object to be built in the virtual scene even if the operation performed is relatively simple. In this way, an interactive function of very high complexity is achieved with minimal interactive operations, the interactive operations needed to implement the function are simplified, and the convenience of interaction is improved.

Description

Method, device, equipment and storage medium for realizing interactive function
Technical Field
The present application relates to the field of data processing, and in particular, to a method, an apparatus, a device, and a storage medium for implementing an interactive function.
Background
With the rapid development of interactive applications, the types of interactive applications keep increasing. Among them, building-class interactive applications are popular with users, and building-class application scenes have been widely introduced into interactive applications such as three-dimensional (3D) role-playing games.
In a construction-type application scenario, fences, staircases, furniture, houses, and the like are often built from model components based on user operations. In the related art, however, the user often has to perform a large number of redundant operations during construction, making the interaction very inconvenient.
Disclosure of Invention
To solve the above technical problem, the present application provides a method, an apparatus, a device, and a storage medium for implementing an interactive function. An interactive function of very high complexity is implemented with minimal interactive operations, the desired arrangement effect is achieved precisely, the interactive operations needed to implement the function are simplified, and the convenience of interaction is improved.
The embodiment of the application discloses the following technical scheme:
in a first aspect, an embodiment of the present application provides a method for implementing an interactive function, where the method includes:
acquiring the object type of an object to be built in the virtual scene;
determining a target interactive function for the object to be built according to the object type, wherein the target interactive function comprises an interactive function realized based on a first number of operations;
providing a reduced-order operation function entry according to the target interactive function, wherein the reduced-order operation function entry is used for realizing the target interactive function based on a second number of operations, and the second number is smaller than the first number;
and if the reduced order operation function entrance is opened, responding to the interactive operation of a user, and executing a target interactive function on the object to be built in the virtual scene.
In a second aspect, an embodiment of the present application provides an apparatus for implementing an interactive function, where the apparatus includes an obtaining unit, a determining unit, a providing unit, and an executing unit:
the acquisition unit is used for acquiring the object type of the object to be built in the virtual scene;
the determining unit is used for determining a target interactive function aiming at the object to be built according to the object type, wherein the target interactive function comprises an interactive function realized based on a first number of operations;
the providing unit is configured to provide a reduced-order operation function entry according to the target interactive function, where the reduced-order operation function entry is configured to implement the target interactive function based on a second number of operations, and the second number is smaller than the first number;
and the execution unit is used for responding to the interactive operation of a user and executing the target interactive function of the object to be built in the virtual scene if the reduced-order operation function entrance is opened.
In a third aspect, an embodiment of the present application provides an implementation apparatus for an interactive function, where the apparatus includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the method for implementing the interaction function according to the first aspect according to instructions in the program code.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium for storing program codes, where the program codes are used to execute an implementation method of the interaction function according to the first aspect.
According to the technical scheme, when interaction with an object to be built is required in a building-type virtual scene, the object type of the object to be built in the virtual scene can be obtained. Because objects of different types support different interactive functions, a target interactive function for the object to be built is determined according to its object type. The target interactive function may include an interactive function implemented through relatively complex operation, i.e., a first number of operations. To make interactive functions easier to implement, the present application simplifies the interactive operations for various interactive functions: a reduced-order operation function entry is provided according to the target interactive function, and this entry implements the target interactive function based on a second number of operations. Since the second number is smaller than the first, when a relatively complex target interactive function is implemented through the reduced-order operation function entry, the user can achieve it through only one, or a few simple, interactive operations. Therefore, when the reduced-order operation function entry is opened and an interactive operation of the user is received, the target interactive function can be executed on the object to be built in the virtual scene even if the operation is simple.
Because the interactive function is provided according to the object type, an interactive function of high complexity can be realized with minimal interactive operations, the desired arrangement effect is achieved precisely, the interactive operations needed to implement the function are simplified, and the convenience of interaction is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario of a method for implementing an interactive function according to an embodiment of the present application;
fig. 2 is a flowchart of a method for implementing an interactive function according to an embodiment of the present application;
fig. 3 is a schematic diagram of a top view of a virtual scene provided in an embodiment of the present application;
fig. 4 is a schematic view of an interaction interface for interacting with an object to be built according to an embodiment of the present application;
fig. 5 is a schematic diagram of an L-shaped corner before an association connection is established according to an embodiment of the present application;
fig. 6 is a schematic diagram of an L-shaped corner after an association connection is established according to an embodiment of the present application;
fig. 7 is a schematic diagram of a T-shaped corner before an association connection is established according to an embodiment of the present application;
fig. 8 is a schematic diagram of a T-shaped corner after an association connection is established according to an embodiment of the present application;
fig. 9 is a schematic diagram of a cross-shaped corner before an association connection is established according to an embodiment of the present application;
fig. 10 is a schematic diagram of a cross-shaped corner after an association connection is established according to an embodiment of the present application;
fig. 11 is an interface diagram before an association connection is established for a fence model and a fretwork model according to an embodiment of the present application;
fig. 12 is an effect diagram after an association connection is established for a fence model and a fretwork model according to an embodiment of the present application;
fig. 13 is an interface diagram before an association connection is established for a bridge model according to an embodiment of the present application;
fig. 14 is an effect diagram after an association connection is established for a bridge model according to an embodiment of the present application;
fig. 15 is an interface diagram showing a selection control for establishing an association connection according to an embodiment of the present application;
fig. 16 is a schematic diagram of an interaction interface for interacting with an object to be built according to an embodiment of the present application;
fig. 17 is a schematic view of an interaction interface for interacting with an object to be built according to an embodiment of the present application;
fig. 18 is a schematic view of an interaction interface for interacting with an object to be built according to an embodiment of the present application;
fig. 19 is a flowchart of a method for implementing an interactive function according to an embodiment of the present application;
fig. 20 is a structural diagram of an apparatus for implementing an interactive function according to an embodiment of the present application;
fig. 21 is a block diagram of an implementation apparatus for an interactive function according to an embodiment of the present application;
fig. 22 is a block diagram of a server according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
Traditional ways of implementing interactive functions often require the user to perform many redundant operations. For example, the user must long-press a piece of furniture to select it; when building an enclosing wall, the "continuous paving" button must be clicked repeatedly so that single model components are placed and spliced one by one; and when the wall turns a corner, a default orientation is usually chosen for the turn, which is not necessarily the orientation the user wants, so the user may need to adjust it several times to obtain the desired corner. Therefore, in the related art, some complex interactive functions require the user to carry out equally complex interactive operations, making interaction in building games very inconvenient.
To solve the above technical problem, an embodiment of the present application provides a method for implementing an interactive function. Objects to be built of different object types require different interactive functions, so a corresponding reduced-order operation function entry is provided according to the interactive function. Through this entry, an interactive function of higher complexity can be implemented with minimal interactive operations: when the entry is opened and a simple interactive operation of the user is detected, the more complex interactive function is executed and the desired arrangement effect is achieved precisely. This simplifies the interactive operations needed to implement the function and improves the convenience of interaction.
The method for implementing an interactive function provided by the embodiment of the present application can be applied to construction-type game services, mainly embodied in home systems, including home-design games and games with home-design gameplay. For example, a user can build a complete, exclusive large house in a virtual scene using various models and maps; in this process, different, simplified interaction modes are customized for the various types of object models to be built.
It is understood that the method can be applied to a terminal device, such as a smart terminal, a computer, a tablet computer, a Personal Digital Assistant (PDA) device, and the like.
In order to facilitate understanding of the technical solution of the present application, a method for implementing an interactive function provided in the embodiments of the present application is introduced below in combination with an actual application scenario.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a method for implementing an interactive function according to an embodiment of the present application. The application scene includes the terminal device 101, the terminal device 101 runs a game application program with a family design class or a playing method with the family design class, and the terminal device 101 is used for executing the method for implementing the interactive function provided by the embodiment of the application.
When the terminal device 101 executes the method for implementing the interactive function provided by the embodiment of the present application, the object type of the object to be built in the virtual scene may be obtained. The virtual scene may be an environment that enables construction-like interactions and may include an indoor virtual scene, which may be, for example, an arrangement within a house, and an outdoor virtual scene, which may be, for example, an arrangement in a courtyard.
Objects to be built of different object types have different interactive functions; an interactive function means that the object to be built reacts to the user's operation to achieve the effect the user desires. The terminal device 101 can determine a target interactive function for the object to be built according to its object type, the target interactive function including an interactive function realized based on a first number of operations, i.e., an interactive function of higher complexity.
Different target interactive functions have corresponding reduced-order operation function entries, the reduced-order operation function entries are used for realizing the target interactive functions based on second number operation, the second number is smaller than the first number, namely, interactive operation of the interactive functions with higher complexity can be simplified through the reduced-order operation function entries.
If the reduced-order operation function entry is opened, the terminal device 101 may receive an interactive operation of the user, that is, an operation the user performs in order to interact with the object to be built, such as a moving operation or a clicking operation. The terminal device 101 then executes the target interactive function on the object to be built in the virtual scene according to that operation. Because the reduced-order operation function entry is opened, even a simple operation such as a move can realize a more complex target interactive function, thereby simplifying the interactive operations required for an interactive function of very high complexity.
Next, a method for implementing the interactive function provided by the embodiment of the present application will be described with reference to the drawings.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for implementing an interactive function, which may be applied to the terminal device illustrated in fig. 1, and the method includes:
s201, acquiring the object type of the object to be built in the virtual scene.
As a basis of the interactive function implementation method provided in the embodiment of the present application, all objects involved in the virtual scene may be classified by object type, and the embodiment provides a variety of classification methods. The first classification method divides objects by spatial form into three types: points, lines, and surfaces. The "point" type mainly refers to single models of different sizes, such as a guest room model or a bench model; the "line" type mainly refers to models that need continuous splicing in one direction, i.e., one-dimensional connection types, such as an enclosing wall model or a gallery bridge model; and the "surface" type mainly refers to maps or models that need to be laid over a large area, i.e., two-dimensional connection types, such as a lawn model or a pool model.
The second classification method divides objects into four categories according to their characteristics: the common category, i.e., common furniture, such as a table model or a gallery bridge model; the ground category, i.e., ground objects such as lawn models and pool models, characterized in that only one object occupies a given position at a time and such objects replace one another; the scaling category, i.e., scalable furniture that does not deform after scaling and whose stretching is achieved by scaling, such as wall models; and the roof category, indoor-only objects that can be placed only on the roof, such as lamp models.
Different types of objects can implement different interactive functions, and the interactive functions each type of object can implement may be preset. Taking the first classification as an example, "point" objects generally implement basic interactive functions, such as moving and rotating; "line" objects, such as a wall or a gallery bridge, can additionally support special interactive functions such as stretching, continuous paving, absorbing, and inserting; and "surface" objects can support special interactive functions such as pull-frame and brush.
In most cases, point-type objects do not implement special interactive functions. In some special virtual scenes, however, special interactive functions may also be set for point-type objects. For example, if a point-type object is a table model, a conference scene may need several continuously placed table models, and the point-type object can then be set to implement the continuous paving and stretching interactive functions.
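The per-type configuration described above can be sketched as a simple lookup. This is an illustrative assumption of how such a configuration might be stored; none of these identifiers come from the patent:

```python
# Basic interactive functions are shared by every object type; each type may
# additionally support special functions, per the first classification scheme.
BASIC_FUNCTIONS = {"move", "rotate"}

SPECIAL_FUNCTIONS = {
    "point":   set(),  # usually basic only (may be extended per scene)
    "line":    {"stretch", "continuous_paving", "absorb", "insert"},
    "surface": {"pull_frame", "brush"},
}

def supported_functions(object_type: str) -> set:
    """Return every interactive function an object of this type supports."""
    return BASIC_FUNCTIONS | SPECIAL_FUNCTIONS.get(object_type, set())
```

For a wall model (a "line" object), `supported_functions("line")` would then yield the basic functions plus stretching, continuous paving, absorbing, and inserting, matching the example given for the first classification.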
It can be understood that, in the family design game or the game with the family design game method, the user can build a family in the virtual scene, and therefore, the user can place various objects in the virtual scene according to the preference or the requirement of the user to complete the building of the family.
After an object, such as an object to be built, is placed in the virtual scene, the object type of the object to be built can be obtained so as to determine which interactive functions the object to be built may implement, and thus, when the interactive operation is received, the interactive function is implemented according to the interactive operation.
During construction, the virtual scene may be presented to the user from different perspectives, such as a side view, a front view, or a top view. Generally, at the beginning of construction, the view can switch automatically to the top view of the virtual scene so that the user can conveniently interact with various objects; a schematic top view is shown in fig. 3, in which the user sees the top shape of the pavilion.
The objects provided in the game differ in size: some are particularly large and some particularly small. Within the same viewing angle range, a particularly large object appears very close to the user, while a particularly small object appears very far away. In both cases the visual experience is poor and it is inconvenient for the user to perform interactive operations on the object. Therefore, after the object to be built is placed in the virtual scene, its size can be determined so that the viewing angle range of the virtual scene is adjusted to suit the object size.
In general, if the object size indicates that the object to be built is particularly large, the viewing angle range can be expanded so that the object appears at a moderate distance and the user can conveniently interact with it; if the object size indicates that the object is particularly small, the viewing angle range can be reduced to the same effect.
The viewing angle range can be determined by the height of the virtual camera in the virtual scene. Raising the virtual camera expands the viewing angle range, for example up to a height from which the whole virtual scene is visible; lowering it reduces the range, for example down to a height from which only a certain object (such as a gallery bridge) in the virtual scene can be seen.
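One way to realize the camera adjustment just described is to scale the virtual camera height with the object size and clamp it between a "see only this object" floor and a "see the whole scene" ceiling. The function and all numeric parameters below are assumptions for illustration, not values from the patent:

```python
def adjust_camera_height(object_size: float,
                         base_height: float = 10.0,     # height for a reference-sized object
                         reference_size: float = 2.0,   # object size that looks "moderate" at base_height
                         min_height: float = 4.0,       # low enough to see only one object
                         max_height: float = 50.0) -> float:
    """Raise the camera for large objects (wider viewing angle range) and
    lower it for small ones (narrower range), clamped to scene limits."""
    height = base_height * (object_size / reference_size)
    return max(min_height, min(max_height, height))
```

A reference-sized object keeps the default height, a very large object drives the camera to the ceiling (full scene view), and a very small one drives it to the floor (close-up view).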
S202, determining a target interaction function aiming at the object to be built according to the object type.
Since the interactive functions that each object type can implement are preset, once the object type of the object to be built is determined, the target interactive function for it can be determined according to that type. For example, if the object to be built is a wall model, the preset configuration of functions per object type determines that its target interactive function includes stretching, continuous paving, absorbing, inserting, and the like.
The target interactive function includes interactive functions implemented based on a first number of operations; that is, it includes some interactive functions of higher complexity, which the user would otherwise have to carry out through more complicated interactive operations.
And S203, providing a reduced order operation function entrance according to the target interaction function.
The embodiment of the application mainly aims to implement interactive functions of higher complexity through minimal interactive operations, thereby improving the convenience of interaction. Accordingly, a corresponding reduced-order operation function entry can be provided according to the target interactive function. The entry is used to implement the target interactive function based on a second number of operations; since the second number is smaller than the first number, the entry simplifies the interactive operations needed to implement the target interactive function.
If the object to be built is a wall model and the target interactive function is continuous paving, the reduced-order operation function entry provided to the user may, for example, be the "continuous paving" button shown in fig. 4. Through this entry, the target interactive function can be executed on the object to be built with the minimum of interactive operations by the user.
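The relationship between the entry and the target interactive function can be sketched as a thin wrapper: the entry must be opened (e.g., via the "continuous paving" button), after which a single simple interaction executes the full target function. This class and its method names are hypothetical, chosen only to mirror the description above:

```python
class ReducedOrderEntry:
    """Wraps a target interactive function that would normally require a
    first number of separate operations, so that once the entry is opened,
    one (second-number) operation triggers the whole function."""

    def __init__(self, target_function):
        self.target_function = target_function
        self.opened = False

    def open(self):
        # e.g. the user taps the "continuous paving" button in the interface
        self.opened = True

    def on_interaction(self, *args, **kwargs):
        # A single simple interaction executes the full target function;
        # if the entry is not opened, the interaction is ignored here.
        if not self.opened:
            return None
        return self.target_function(*args, **kwargs)
```

For instance, wrapping a paving routine that places `n` wall segments lets one drag replace `n` individual placement operations.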
And S204, if the reduced order operation function entrance is opened, responding to the interactive operation of a user, and executing a target interactive function on the object to be built in the virtual scene.
The interactive operation may be an operation performed by the user for the purpose of interacting with the object to be built, for example a moving operation or a clicking operation. In the game, the interactive operation can be triggered in many ways, for example through a mouse, a control handle, or a finger.
In some possible embodiments, a corresponding controller may be created for each object type, each controller containing the interactive functions that that type of object can implement. Different classification methods lead to different sets of controllers. Under the second classification method, the following controllers may be included: a general controller, implementing target interactive functions for, e.g., table models and gallery bridge models; a ground controller, for, e.g., lawn models and pool models; a scaling controller, for, e.g., wall models; and a rooftop controller, for, e.g., lamp models, which is similar to the general controller except that its reference coordinate is not the ground but the rooftop.
After the object type of the object to be built is determined, the controller corresponding to that type can be invoked to receive the user's interactive operations. The controller is driven by calling various functions. Taking the one-dimensional connection type as an example, the controller can be driven by the following functions: StartWork() indicates that the controller starts working and begins receiving the user's interactive operations; FinishWork() indicates that the controller finishes working, after which other parts of the game handle the user's operations; OnMove() implements the move function; OnCopy() implements the continuous paving function; OnConfirm() confirms the current interactive operation; and OnCancel() cancels it. For other object types, the controller generally involves the StartWork(), FinishWork(), OnConfirm(), and OnCancel() functions, while the remaining functions may be adjusted according to the target interactive functions of that object type, which is not limited in this embodiment.
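A minimal sketch of such a one-dimensional ("line") controller, keeping the function names listed above but with illustrative, assumed bodies (the patent does not specify the internals):

```python
class LineController:
    """Controller for one-dimensional connection objects (e.g. walls)."""

    def __init__(self):
        self.working = False
        self.placed = []    # confirmed segment positions
        self.pending = []   # positions placed during the current gesture
        self.cursor = None

    def StartWork(self):
        # Controller starts working and begins receiving user operations.
        self.working = True
        self.pending = []

    def OnMove(self, position):
        # Move function: track the object being dragged.
        if self.working:
            self.cursor = position

    def OnCopy(self, position):
        # Continuous-paving function: splice one more segment.
        if self.working:
            self.pending.append(position)

    def OnConfirm(self):
        # Confirm the current interactive operation.
        self.placed.extend(self.pending)
        self.pending = []

    def OnCancel(self):
        # Cancel the current interactive operation.
        self.pending = []

    def FinishWork(self):
        # Controller finishes working; other game logic handles input again.
        self.working = False
```

A drag gesture would then translate into StartWork(), a sequence of OnCopy() calls as the drag crosses each segment boundary, and a final OnConfirm() or OnCancel().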
If the reduced-order operation function entry is opened, the interactive operation may be received through the controller. The received operation is generally a simplified one, such as a moving operation, yet a target interactive function of higher complexity can be executed through it. If the object to be built is the enclosure model shown in fig. 4 and the target interactive function is continuous paving, then after the user opens the reduced-order operation function entry through the "continuous paving" button in fig. 4, the interactive operation can be received through the controller identified by the dashed box in fig. 4. If the user follows the operation prompt of dragging the "finger" button in fig. 4, for example dragging the "finger" to keep moving to the right, the target interactive function of continuously paving the enclosure model to the right is executed.
In the prior art, by contrast, one interactive operation places only one enclosure model, so multiple operations are needed to complete the whole enclosure.
According to the technical scheme, when interaction with the object to be built is required in a building-type virtual scene, the object type of the object to be built in the virtual scene can be obtained. Since objects to be built of different object types support different interactive functions, the target interactive function for the object to be built is determined according to the object type. The target interactive function may include an interactive function implemented based on relatively complex operations, such as a first number of operations. To make such interactive functions more convenient, the present application simplifies the interactive operations for various interactive functions: a reduced-order operation function entry is provided according to the target interactive function, the reduced-order operation function entry being used to implement the target interactive function based on a second number of operations. Since the second number is smaller than the first number, when a relatively complex target interactive function is implemented through the reduced-order operation function entry, the user can achieve it with only one, or only relatively simple, interactive operations. Therefore, when the reduced-order operation function entry is opened, if the interactive operation of the user is received, even a simple interactive operation allows the target interactive function for the object to be built to be executed in the virtual scene.
According to the method and the device, the interactive function is provided according to the object type, so that an interactive function of high complexity can be realized through minimal interactive operations and the desired arrangement effect is achieved accurately. This simplifies the interactive operations needed to realize the interactive function and improves the convenience of interaction.
It should be noted that, in the embodiment of the present application, for the objects to be built of different object types, the manner of performing the target interactive function on the objects to be built in the virtual scene in response to the interactive operation of the user in S204 is different.
In the case of classifying the objects by the first classification method, if the object type is the one-dimensional continuous type, the interactive operation is a moving operation. After receiving the interactive operation of the user, the target interactive function of the object to be built is executed in the virtual scene by obtaining a start position and an end position of the moving operation, calculating, based on the start position and the end position, the target number of objects to be built to be placed on the moving track of the moving operation, and continuously placing that target number of objects to be built on the moving track.
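The calculation described above, deriving a target number of objects from the start and end positions and placing them along the track, can be sketched as follows. The function name and the unit_length parameter are illustrative assumptions; the document does not fix a concrete formula.

```python
import math

def plan_continuous_placement(start, end, unit_length):
    """Sketch: how many objects of size unit_length fit along the
    straight track from start to end, and where each one goes."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    track_length = math.hypot(dx, dy)
    count = int(track_length // unit_length)   # target number of objects
    if count == 0:
        return []
    ux, uy = dx / track_length, dy / track_length  # unit direction vector
    return [(start[0] + ux * unit_length * i,
             start[1] + uy * unit_length * i)
            for i in range(count)]
```

For a drag from (0, 0) to (10, 0) with unit length 2, this stages five placements at x = 0, 2, 4, 6, 8; whether an extra object should also sit flush with the end position is a design choice the document leaves open.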
In general, one moving operation places the objects to be built successively along a single direction, i.e., all in the same direction. Of course, in some cases the moving direction of one moving operation may also include multiple directions, for example placing the objects to be built continuously along one direction and then changing direction to continue placing them. In this way, when a direction-changing interactive function needs to be implemented, for example building a fence around a house, the whole fence can be completed with only one interactive operation (for example, a moving operation whose moving track is a rectangle), which further improves the convenience of interaction. Whether the moving direction can change within one moving operation may be preset.
If the moving direction of one moving operation is a single direction, the moving track can be determined from the start position and the end position: it is the straight-line segment between them. In this case, if the direction in which the objects to be built are attached needs to change, the successive placement in the current direction can be completed first, and then the foregoing steps are performed again to complete the successive placement in another direction. For example, as shown in fig. 5, the enclosure wall modules may first be placed continuously in the horizontal direction (the current direction of placement) through the above steps, and "confirm" is clicked to complete the construction in the horizontal direction. The continuous placement of the enclosure wall modules in the vertical direction is then completed through the same steps, thereby constructing an L-shaped enclosure wall.
If the moving direction of one moving operation includes multiple directions, it is difficult to determine the moving track from only the start and end positions; the moving direction must be determined in real time during the operation, and the moving direction determines the connecting direction of the objects to be built. That is, before acquiring the start position and the end position of the moving operation, the connecting direction of the objects to be built may be determined according to changes in the position of the manipulation point of the moving operation, and the moving track is then determined from the connecting direction and the manipulation-point positions.
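Determining the connecting direction from changes in the manipulation-point position can be sketched as follows. This is a simplified illustration that reduces each step to a sign pair; real touch input would be noisy and need filtering, and the function name is an assumption.

```python
def corners_from_points(points):
    """Sketch: given successive manipulation-point positions, return
    the polyline vertices of the moving track: the start, every point
    where the movement direction changes, and the end."""
    def direction(a, b):
        dx, dy = b[0] - a[0], b[1] - a[1]
        return ((dx > 0) - (dx < 0), (dy > 0) - (dy < 0))  # sign pair

    vertices = [points[0]]
    prev_dir = None
    for a, b in zip(points, points[1:]):
        d = direction(a, b)
        if prev_dir is not None and d != prev_dir:
            vertices.append(a)   # direction changed here: record a corner
        prev_dir = d
    vertices.append(points[-1])
    return vertices
```

An L-shaped drag then yields three vertices, and the straight segments between them can each be filled by the one-direction placement step described earlier.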
In some possible embodiments, after S204 is executed, the virtual scene may include multiple objects to be built, some of which have association relationships between them. An association connection between those objects to be built can therefore be constructed according to their association relationships, so that a building effect meeting the user's needs is obtained and the user experience is improved while the user's interactive operations remain simplified.
In this embodiment, after it is determined that the target interactive function for the object to be built is completed, the building of the association connection between the multiple objects to be built may be triggered. For example, after the user clicks the "confirm" button in fig. 5, the building of the association connection between the multiple objects to be built is triggered. This avoids the inconvenience of performing interactive operations on a single object to be built after the association connection has already been built, and improves the user experience.
The association relationships between objects to be built differ, and the association connections built from them may differ as well. The association relationship may be a turning relationship, in which case the association connection may be a corner. For example, as shown in fig. 6, the objects to be built are wall models, and the multiple objects identified by the dashed frame are in a turning relationship; the association connection built between them is therefore a corner, as shown by the dashed frame in fig. 6, here an L-shaped corner. Compared with the L-shaped corner identified by the dashed frame in fig. 5, the corner obtained after the association connection is built is smoother and more attractive, conforms to an actual construction scene, and improves the user experience.
According to different shapes of the constructed enclosing walls, the constructed association connection can be a T-shaped corner, a cross-shaped corner and the like, fig. 7 shows a schematic diagram of the T-shaped corner before the association connection is constructed, fig. 8 shows a schematic diagram of the T-shaped corner after the association connection is constructed, fig. 9 shows a schematic diagram of the cross-shaped corner before the association connection is constructed, and fig. 10 shows a schematic diagram of the cross-shaped corner after the association connection is constructed.
The association relationship between the objects to be built may be a dependency relationship, and the association connection may be an embedded connection. For example, as shown in fig. 11, the objects to be built are an enclosure model and a hollow model. The hollow model is generally located on the enclosure model and is embedded into it to form an enclosure with a hollow, so when the two are determined to have an association relationship, an embedded connection can be built, and the hollow model is automatically embedded into the enclosure model to form the enclosure with a hollow shown in fig. 12.
The association relationship between the objects to be built may be a combination relationship, and the association connection may be a combined connection. For example, as shown in fig. 13, the objects to be built are two gallery bridge models that combine end to end into a complete gallery bridge, so when the two gallery bridge models are determined to have an association relationship, a combined connection can be built and the two models are spliced together to form the complete gallery bridge shown in fig. 14.
By the method, the association connection can be automatically established according to the association relation, and the requirement of establishing the association connection among a plurality of objects to be built by a user can be met without redundant interaction operation of the user, so that the interaction convenience is improved, and the interaction mode is enriched.
In some cases, although an association relationship exists between the objects to be built, the user may have no need to build an association connection between them. Generally, if the positional relationship between the objects to be built satisfies a certain condition, such as a relatively short distance, the user is likely to want the association connection built; otherwise, there may be no such need. Therefore, to further meet the actual needs of the user and avoid false interactions, in a possible implementation the manner of building the association connection according to the association relationship among the multiple objects to be built may be as follows: determine the positional relationship among the multiple objects to be built; if the positional relationship satisfies a first preset condition, indicating that the user is likely to want the association connection, build the association connection among the multiple objects to be built according to their association relationship.
If the positional relationship does not satisfy the first preset condition, the user is less likely to need the association connection, but the need may still exist. In this case, to simplify the interactive operation when the association connection does need to be built, if the positional relationship satisfies a second preset condition but not the first, a selection control for building the association connection may be displayed to the user, who can then choose through it whether to build the connection according to actual needs. For example, the selection control may be the "suction" control shown in fig. 15; when the user clicks it, the two gallery bridge models in fig. 15 automatically build an association connection to form the complete gallery bridge shown in fig. 14.
If the position relationship is a distance between the objects to be built, the first preset condition and the second preset condition may be distance conditions, for example, the first preset condition is that the distance is smaller than a first threshold value, the second preset condition is that the distance is smaller than a second threshold value, and the first threshold value is smaller than the second threshold value.
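The two distance conditions can be sketched as a small decision function. The threshold values and all names here are invented for illustration; the document only requires that the first threshold be smaller than the second.

```python
import math

# Hypothetical thresholds; the text only fixes FIRST < SECOND.
FIRST_THRESHOLD = 0.5    # first preset condition: auto-build the connection
SECOND_THRESHOLD = 2.0   # second preset condition: show the "suction" control

def connection_action(pos_a, pos_b):
    """Decide how to handle two associated objects from their distance."""
    distance = math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
    if distance < FIRST_THRESHOLD:
        return "auto_connect"            # first condition satisfied
    if distance < SECOND_THRESHOLD:
        return "show_selection_control"  # only second condition satisfied
    return "do_nothing"                  # neither condition satisfied
```

The three return values correspond to the three behaviors described above: automatic connection, offering the selection control, and leaving the objects as they are.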
In the case of classifying the objects by the first classification method, if the object type is the two-dimensional continuous type, the interactive operation is a moving operation, and an appropriate method can be selected according to the characteristics of the object to be built to execute the target interactive function of the object to be built in the virtual scene.
In a possible embodiment, after receiving the interactive operation of the user, the target interactive function of the object to be built may be executed in the virtual scene by determining a target area to be replaced in the virtual scene according to the start position and the end position of the moving operation, and replacing the target area with the object to be built. This moving operation may be referred to as a box-drawing operation.
For example, when the ground material is switched (for example, the object to be built is a lawn model), the entire virtual scene may be divided into grid cells, a rectangular area whose diagonal runs from the start position to the end position is determined as the target area, and all grid cells in the rectangular area are replaced with the lawn model, so that a lawn forms in the rectangular area. Referring to fig. 16, the object to be built is identified by the dashed box; when the "finger" button is dragged from the start position to the end position, a rectangular area may be defined with the start position and the end position as the diagonal, as shown by the solid box in fig. 16, and all cells within the rectangular area are replaced with the lawn model, thereby forming the lawn shown in fig. 16 within the rectangular area.
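Enumerating the grid cells of such a diagonal-defined rectangle might look like the following sketch; the cell coordinates and the function name are assumptions for illustration.

```python
def cells_in_box(start_cell, end_cell):
    """Sketch: all grid cells inside the rectangle whose diagonal runs
    from the start cell to the end cell of the box-drawing operation.
    Each returned cell would be replaced with the object to be built
    (e.g. the lawn model)."""
    (x0, y0), (x1, y1) = start_cell, end_cell
    xs = range(min(x0, x1), max(x0, x1) + 1)   # drag may go either way
    ys = range(min(y0, y1), max(y0, y1) + 1)
    return [(x, y) for x in xs for y in ys]
```

Using min/max makes the result independent of which corner the drag starts from, which matches the "diagonal" description above.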
In a possible embodiment, after receiving the interactive operation of the user, the manner of executing the target interactive function of the object to be built in the virtual scene may be to determine a manipulation point position of the moving operation, and if the manipulation point position is not recorded in the interactive operation process, replace the manipulation point position with the object to be built.
For example, when a pool is dug (for example, the object to be built is a pool model), the entire virtual scene may be divided into grid cells, and the manipulation-point positions passed by the moving operation, that is, the grid cells it passes through, are recorded; if the current cell has not yet been recorded, it is replaced with the pool model, so that a pool forms in the cells the moving operation passes through. Referring to fig. 17, the object to be built is identified by the dashed box; when the "finger" button is dragged to change the manipulation-point position, if the grid cells traversed by the moving operation form the shape shown in fig. 17, the pool shown in fig. 17 is formed.
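The pool-digging behavior, replacing a grid cell only if it has not already been recorded during this interactive operation, can be sketched as follows; the function name and cell_size parameter are illustrative assumptions.

```python
def record_traversed_cells(samples, cell_size=1.0):
    """Sketch: record each grid cell the manipulation point passes
    through, skipping cells already recorded, and return them in
    first-visit order. Each returned cell would be replaced with the
    pool model."""
    recorded = set()
    order = []
    for x, y in samples:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in recorded:   # only unrecorded cells are replaced
            recorded.add(cell)
            order.append(cell)
    return order
```

The recorded set is what keeps a drag that doubles back over its own path from replacing the same cell twice.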
In the case of classifying the objects by using the second classification method, the method provided in the above embodiments may still be used for various objects to perform the target interaction function of the object to be built in the virtual scene, and details are not repeated here. If the object to be built is a table model, a gallery bridge model and the like, the method is realized through a common controller so as to execute a target interaction function; if the object to be built is a lawn model, a pool model and the like, the method is realized through a ground controller so as to execute a target interaction function; if the object to be built is a lamp model and the like, the method is realized through the roof controller so as to execute the target interaction function.
Some objects have the characteristic that they do not deform after scaling, i.e., they are zoom-type objects, and for them the target interactive function can be realized by scaling. In this case, upon receiving the interactive operation of the user, the target interactive function for the object to be built may be executed in the virtual scene by scaling the object to be built along the moving direction of the moving operation. Here, the target interactive function can be executed through a zoom controller.
In this embodiment, scaling coordinates may be recorded to indicate the scaling of an object in different coordinate-axis directions (e.g., the X-axis and Y-axis directions). Some objects to be built can be scaled in only one direction, such as a fence model, where scaling along the X-axis indicates a row of fences and scaling along the Y-axis indicates a column of fences. Other objects to be built, such as a floor model, can be scaled in both directions.
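Recording per-axis scaling coordinates, with some objects restricted to one axis, might look like the following sketch; the allowed_axes parameter and all names are assumptions introduced for illustration.

```python
def apply_scaling(base_size, scaling, allowed_axes):
    """Sketch: apply per-axis scaling coordinates to a zoom-type object.
    allowed_axes restricts which axes may scale (a fence model might
    allow only "x"); disallowed axes keep a scale factor of 1."""
    sx = scaling.get("x", 1) if "x" in allowed_axes else 1
    sy = scaling.get("y", 1) if "y" in allowed_axes else 1
    return (base_size[0] * sx, base_size[1] * sy)
```

A fence model would be registered with allowed_axes={"x"} (a row) or {"y"} (a column), while a floor model would allow both.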
It should be noted that in this embodiment, if the reduced-order operation function entry is closed and the interactive operation is a moving operation, only the interactive function of moving the object to be built is implemented, not a target interactive function of higher complexity. Even so, compared with the related art, the object to be built can be selected by clicking rather than by long pressing, which makes the operation more convenient.
In addition, when implementing the interactive function of moving the object to be built, this embodiment supports two modes of interactive operation: "dragging the model" and "dragging the UI", where UI is short for user interface. "Dragging the model" facilitates moving larger objects to be built, primarily from a top-down perspective, while "dragging the UI" facilitates moving smaller objects to be built. For example, as shown in fig. 18, clicking the object to be built selects it, so that it can be controlled to move or rotate, specifically by either "dragging the model" or "dragging the UI". The UI may be the "move" or "shift" button in fig. 18.
Because the user can select a proper mode to implement interactive operation according to the size of the object to be built, the operation of the user is facilitated, and the convenience of interaction is improved.
Next, a method for implementing an interactive function provided in the embodiment of the present application will be described with reference to an actual application scenario. The application scenario is a home design game, in the game, a user wants to build a wall, and in this case, the user can trigger to enter an editing mode, and select a wall model as an object to be built, so as to execute the implementation method of the interactive function provided by the embodiment of the application. The target interaction function which can be realized by the object to be built comprises a continuous paving function, namely a plurality of enclosure wall models form an enclosure wall in a continuous mode. Referring to fig. 19, the method includes:
s1901, the user opens the family design game.
S1902, in response to a user operation, an edit mode is entered.
When entering the editing mode, the editing interface shown in fig. 17 may be displayed, the object to be built is selected to be placed in the virtual scene, and a target interaction function and the like may be executed on the object to be built through an interactive operation.
S1903, the user selects to place the enclosure model in the virtual scene as the object to be built.
S1904, the terminal device selects a controller according to the object type of the object to be built.
S1905, the terminal device determines whether the reduced-order operation function entry is opened; if so, S1906 is executed, and if not, S1907 is executed.
And S1906, calling a function of the controller to realize the continuous paving function by the terminal equipment.
If so, it means that when the user performs an interactive operation, such as a moving operation, on the object to be built, the continuous paving function, rather than the move function, is executed in response to the interactive operation of the user. At this time, the function of the controller representing the continuous paving function, such as the OnCopy() function, can be called, so that the continuous paving function is implemented through the controller.
S1907, the terminal device calls a function of the controller indicating implementation of the mobility function.
If not, it means that when the user performs an interactive operation, such as a moving operation, on the object to be built, only a simple move function is executed in response to the interactive operation of the user. At this time, the function of the controller representing the move function, such as the OnMove() function, can be called, so that the move function is implemented through the controller.
S1908, the terminal device determines whether to confirm the current operation, if so, executes S1909, and if not, executes S1910.
If the terminal device receives the interactive operation of the user clicking the "confirm" button, the current operation is confirmed and the function of the controller for confirming the interactive operation, such as the OnConfirm() function, is called; otherwise, the function of the controller for canceling the interactive operation, such as the OnCancel() function, is called.
S1909, the terminal device calls the function of the controller for indicating confirmation of the current interactive operation.
S1910, the terminal equipment calls a function of the controller for indicating that the interactive operation is cancelled.
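The branch at S1905 through S1910 can be sketched as a dispatch over the controller functions. The stub class below is a minimal stand-in invented for this sketch; only the method roles (OnCopy/OnMove/OnConfirm/OnCancel) come from the text, not the class itself.

```python
class StubController:
    """Minimal stand-in for the controller used in the flow of fig. 19."""
    def __init__(self):
        self.pending = []
    def on_copy(self, positions):      # OnCopy(): continuous paving
        self.pending = list(positions)
    def on_move(self, position):       # OnMove(): plain move
        self.pending = [position]
    def on_confirm(self):              # OnConfirm(): commit and clear
        committed, self.pending = self.pending, []
        return committed
    def on_cancel(self):               # OnCancel(): discard
        self.pending = []
        return []

def handle_drag(controller, entry_open, positions):
    # S1905-S1907: if the reduced-order entry is open, the drag performs
    # continuous paving; otherwise it only moves the enclosure model.
    if entry_open:
        controller.on_copy(positions)
    else:
        controller.on_move(positions[-1])

def handle_result(controller, confirmed):
    # S1908-S1910: confirm commits the staged placements, cancel discards.
    return controller.on_confirm() if confirmed else controller.on_cancel()
```

With the entry open, one drag stages a whole row of enclosure models and "confirm" commits all of them, which is exactly the reduced-order behavior the scenario describes.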
Based on the method for implementing the interactive function provided in the foregoing embodiment, this embodiment provides an apparatus for implementing the interactive function, referring to fig. 20, the apparatus includes an obtaining unit 2001, a determining unit 2002, a providing unit 2003, and an executing unit 2004:
the obtaining unit 2001 is configured to obtain an object type of an object to be built in the virtual scene;
the determining unit 2002 is configured to determine a target interactive function for the object to be built according to the object type, where the target interactive function includes an interactive function implemented based on a first number of operations;
the providing unit 2003 is configured to provide a reduced-order operation function entry according to the target interactive function, where the reduced-order operation function entry is configured to implement the target interactive function based on a second number of operations, and the second number is smaller than the first number;
the execution unit 2004 is configured to, if the reduced-order operation function portal is opened, respond to an interaction operation of a user, and execute a target interaction function for the object to be built in the virtual scene.
In a possible implementation manner, if the object type is a one-dimensional continuous type, the interactive operation is a moving operation, and the execution unit 2004 is specifically configured to:
acquiring a starting position and an ending position of the moving operation;
calculating a target number of the to-be-built objects to be placed on a movement trajectory of the moving operation based on the start position and the end position;
continuously placing the target number of the objects to be built on the moving track.
In a possible implementation manner, the determining unit 2002 is further configured to:
determining the connecting direction of the object to be built according to the change of the position of the control point of the moving operation;
and determining a moving track according to the connecting direction and the position of the control point.
In one possible implementation, the apparatus further includes a construction unit:
the building unit is used for building the association connection among a plurality of types of the objects to be built according to the association relationship among the objects to be built.
In a possible implementation, the building unit is configured to determine a positional relationship between a plurality of the objects to be built;
and if the position relation meets a first preset condition, constructing the association connection among the types of the objects to be built according to the association relation among the objects to be built.
In a possible implementation manner, the constructing unit is further configured to display a selection control for establishing the association connection to the user if the position relationship satisfies a second preset condition and does not satisfy the first preset condition.
In a possible implementation manner, if the object type is a zoom type, the interaction operation is a moving operation, and the execution unit 2004 is specifically configured to zoom the object to be built in a moving direction of the moving operation according to the moving operation.
In a possible implementation manner, if the type of the object is a two-dimensional continuous type, the interactive operation is a moving operation, and the execution unit 2004 is specifically configured to determine a target area to be replaced in the virtual scene according to a starting position and a terminating position of the moving operation;
replacing the target area with the object to be built.
In a possible implementation manner, if the object type is a two-dimensional continuous type, the interactive operation is a moving operation, and the execution unit 2004 is specifically configured to determine a position of a manipulation point of the moving operation;
and if the position of the control point is not recorded in the interactive operation process, replacing the position of the control point with the object to be built.
In one possible implementation, the apparatus further includes an adjusting unit:
the adjusting unit is used for adjusting the view angle range aiming at the virtual scene according to the object size of the object to be built.
The embodiment of the present application further provides an implementation device for an interactive function. The device may specifically be a terminal device, and the terminal device provided in the embodiment of the present application will be introduced from the perspective of hardware implementation.
Referring to fig. 21, fig. 21 is a schematic structural diagram of a terminal device provided in an embodiment of the present application. As shown in fig. 21, for convenience of explanation, only the portions related to the embodiments of the present application are shown; for undisclosed technical details, please refer to the method portion of the embodiments of the present application. The terminal may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like. The following takes a smartphone as an example:
fig. 21 is a block diagram illustrating a partial structure of a smartphone related to a terminal provided in an embodiment of the present application. Referring to fig. 21, the smart phone includes: radio Frequency (RF) circuit 2110, memory 2120, input unit 2130, display 2140, sensor 2150, audio circuit 2160, wireless fidelity (WiFi) module 2170, processor 2180, and power source 2190. Those skilled in the art will appreciate that the smartphone configuration shown in fig. 21 is not limiting and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The memory 2120 may be used for storing software programs and modules, and the processor 2180 executes various functional applications and data processing of the smart phone by running the software programs and modules stored in the memory 2120. The memory 2120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the smartphone, and the like. Additionally, the memory 2120 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 2180 is a control center of the smart phone, connects various parts of the whole smart phone by using various interfaces and lines, and executes various functions and processes data of the smart phone by running or executing software programs and/or modules stored in the memory 2120 and calling data stored in the memory 2120, thereby integrally monitoring the smart phone. Optionally, the processor 2180 may include one or more processing units; preferably, the processor 2180 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 2180.
In the embodiment of the present application, the processor 2180 included in the terminal further has the following functions:
acquiring the object type of an object to be built in the virtual scene;
determining a target interactive function for the object to be built according to the object type, wherein the target interactive function comprises an interactive function realized based on a first number of operations;
providing a reduced-order operation function entry according to the target interactive function, wherein the reduced-order operation function entry is used for realizing the target interactive function based on a second number of operations, and the second number is smaller than the first number;
and if the reduced order operation function entrance is opened, responding to the interactive operation of a user, and executing a target interactive function on the object to be built in the virtual scene.
Referring to fig. 22, fig. 22 is a block diagram of a server 2200 provided in this embodiment. The server 2200 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 2222 (e.g., one or more processors), a memory 2232, and one or more storage media 2230 (e.g., one or more mass storage devices) storing an application 2242 or data 2244. The memory 2232 and the storage medium 2230 may provide transient or persistent storage. The program stored in the storage medium 2230 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processing unit 2222 may be configured to communicate with the storage medium 2230 and execute, on the server 2200, the series of instruction operations in the storage medium 2230.
The server 2200 may also include one or more power supplies 2226, one or more wired or wireless network interfaces 2250, one or more input/output interfaces 2258, and/or one or more operating systems 2241, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 22.
The embodiment of the present application further provides a computer-readable storage medium for storing program code, where the program code is used to execute any one implementation of the method for implementing an interactive function described in the foregoing embodiments.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A method for implementing an interactive function, the method comprising:
acquiring an object type of an object to be built in a virtual scene;
determining a target interactive function for the object to be built according to the object type, wherein the target interactive function comprises an interactive function realized based on a first number of operations;
providing a reduced-order operation function entry according to the target interactive function, wherein the reduced-order operation function entry is used for realizing the target interactive function based on a second number of operations, and the second number is smaller than the first number;
and if the reduced-order operation function entry is opened, executing the target interactive function on the object to be built in the virtual scene in response to an interactive operation of a user.
2. The method according to claim 1, wherein if the object type is a one-dimensional continuous type, the interactive operation is a moving operation, and the performing a target interactive function on the object to be built in the virtual scene in response to the user's interactive operation comprises:
acquiring a starting position and an ending position of the moving operation;
calculating a target number of the to-be-built objects to be placed on a movement trajectory of the moving operation based on the start position and the end position;
continuously placing the target number of the objects to be built on the moving track.
3. The method of claim 2, wherein prior to said obtaining a start position and an end position of said moving operation, said method further comprises:
determining the connecting direction of the object to be built according to the change of the position of the control point of the moving operation;
and determining a moving track according to the connecting direction and the position of the control point.
4. The method according to claim 1, wherein in response to an interactive operation by a user, after performing a target interactive function on the object to be built in the virtual scene, the method further comprises:
and constructing an association connection among a plurality of the objects to be built according to an association relationship among the plurality of the objects to be built.
5. The method according to claim 4, wherein the constructing the association connection among the plurality of the objects to be built according to the association relationship among the plurality of the objects to be built comprises:
determining a positional relationship between a plurality of the objects to be built;
and if the position relation meets a first preset condition, constructing the association connection among the plurality of the objects to be built according to the association relationship among the plurality of the objects to be built.
6. The method of claim 5, further comprising:
and if the position relation meets a second preset condition and does not meet the first preset condition, displaying a selection control for establishing the association connection to the user.
7. The method according to claim 1, wherein if the object type is zoom type, the interactive operation is a move operation, and the performing the target interactive function on the object to be built in the virtual scene in response to the user's interactive operation comprises:
zooming the object to be built in a moving direction of the moving operation according to the moving operation.
8. The method according to claim 1, wherein if the object type is a two-dimensional concatenated type, the interactive operation is a moving operation, and the performing a target interactive function on the object to be built in the virtual scene in response to the user's interactive operation comprises:
determining a target area to be replaced in the virtual scene according to the starting position and the ending position of the moving operation;
replacing the target area with the object to be built.
9. The method according to claim 1, wherein if the object type is a two-dimensional concatenated type, the interactive operation is a moving operation, and the performing a target interactive function on the object to be built in the virtual scene in response to the user's interactive operation comprises:
determining the position of a control point of the mobile operation;
and if the position of the control point is not recorded in the interactive operation process, replacing the position of the control point with the object to be built.
10. The method according to any one of claims 1-9, further comprising:
and adjusting the view angle range aiming at the virtual scene according to the object size of the object to be built.
11. An apparatus for implementing an interactive function, the apparatus comprising an obtaining unit, a determining unit, a providing unit and an executing unit:
the acquisition unit is used for acquiring the object type of the object to be built in the virtual scene;
the determining unit is used for determining a target interactive function aiming at the object to be built according to the object type, wherein the target interactive function comprises an interactive function realized based on a first number of operations;
the providing unit is configured to provide a reduced-order operation function entry according to the target interactive function, where the reduced-order operation function entry is configured to implement the target interactive function based on a second number of operations, and the second number is smaller than the first number;
and the execution unit is used for executing the target interactive function on the object to be built in the virtual scene in response to an interactive operation of a user if the reduced-order operation function entry is opened.
12. The apparatus according to claim 11, wherein if the object type is a one-dimensional continuous type, the interactive operation is a moving operation, and the execution unit is specifically configured to:
acquiring a starting position and an ending position of the moving operation;
calculating a target number of the to-be-built objects to be placed on a movement trajectory of the moving operation based on the start position and the end position;
continuously placing the target number of the objects to be built on the moving track.
13. The apparatus of claim 12, wherein the determining unit is further configured to:
determining the connecting direction of the object to be built according to the change of the position of the control point of the moving operation;
and determining a moving track according to the connecting direction and the position of the control point.
14. An apparatus for implementing interactive functions, the apparatus comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute, according to instructions in the program code, the method for implementing an interactive function according to any one of claims 1 to 10.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium is used for storing program code, and the program code is used for performing the method for implementing an interactive function according to any one of claims 1 to 10.
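The trajectory placement recited in claims 2 and 3 (computing a target number of objects from the start and end positions of a moving operation, then continuously placing them along the connecting direction) can be sketched as follows. This is an illustrative assumption of one possible geometry on a straight trajectory; `placements_on_trajectory` and `object_length` are hypothetical names and a hypothetical per-object size, not identifiers from the claims:

```python
import math

def placements_on_trajectory(start, end, object_length):
    """Compute the positions at which copies of the object to be built are
    continuously placed on the straight movement trajectory from start to end."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return [start]  # degenerate drag: place a single object at the start
    # Target number of objects that fit on the trajectory, end point inclusive.
    target_number = int(dist // object_length) + 1
    ux, uy = dx / dist, dy / dist  # connecting direction (cf. claim 3)
    return [(start[0] + ux * object_length * i,
             start[1] + uy * object_length * i)
            for i in range(target_number)]
```

For a drag from (0, 0) to (10, 0) with an assumed object length of 2, this yields six placement positions spaced 2 units apart along the x-axis.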
CN202010424564.3A 2020-05-19 2020-05-19 Method, device, equipment and storage medium for realizing interactive function Pending CN111589151A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010424564.3A CN111589151A (en) 2020-05-19 2020-05-19 Method, device, equipment and storage medium for realizing interactive function


Publications (1)

Publication Number Publication Date
CN111589151A true CN111589151A (en) 2020-08-28

Family

ID=72179417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010424564.3A Pending CN111589151A (en) 2020-05-19 2020-05-19 Method, device, equipment and storage medium for realizing interactive function

Country Status (1)

Country Link
CN (1) CN111589151A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108245885A (en) * 2017-12-29 2018-07-06 网易(杭州)网络有限公司 Information processing method, device, mobile terminal and storage medium
CN109731329A (en) * 2019-01-31 2019-05-10 网易(杭州)网络有限公司 A kind of determination method and apparatus for the placement location of virtual component in game
CN110262730A (en) * 2019-05-23 2019-09-20 网易(杭州)网络有限公司 Edit methods, device, equipment and the storage medium of game virtual resource


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
佚名 (Anonymous): "Everyone can have a sea-view house: a guide to the home system of the Moonlight Blade mobile game", http://www.gamedog.cn/tymyd/20190729/2618868.html *
天人合一MOONLIGHT: "Automatically snapping a moving object in Unity when it approaches its destination", https://blog.csdn.net/moonlightpeng/article/details/90080026 *
林北是卿桐桐: "[A Dream of Jianghu] mansion decoration item 'Qingwage' tutorial", https://www.bilibili.com/video/BV1NC4Y1X7FP/?p=2&spm_id_from=pagedriver *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024066994A1 (en) * 2022-09-27 2024-04-04 腾讯科技(深圳)有限公司 Batch execution method and apparatus for target operation, and storage medium and electronic device
CN117473138A (en) * 2023-12-26 2024-01-30 江西工业贸易职业技术学院(江西省粮食干部学校、江西省粮食职工中等专业学校) Product display method and system based on virtual reality
CN117473138B (en) * 2023-12-26 2024-03-29 江西工业贸易职业技术学院(江西省粮食干部学校、江西省粮食职工中等专业学校) Product display method and system based on virtual reality

Similar Documents

Publication Publication Date Title
CN103975365B (en) Methods and systems for capturing and moving 3d models and true-scale metadata of real world objects
CN104769640B (en) It is drawn and is positioned using multiple sensors
US10241565B2 (en) Apparatus, system, and method of controlling display, and recording medium
CN110162236B (en) Display method and device between virtual sample boards and computer equipment
CN109242765B (en) Face image processing method and device and storage medium
KR20160055177A (en) Structural modeling using depth sensors
CN110321048A (en) The processing of three-dimensional panorama scene information, exchange method and device
CN112230836B (en) Object moving method and device, storage medium and electronic device
US20240078703A1 (en) Personalized scene image processing method, apparatus and storage medium
US10901613B2 (en) Navigating virtual environments
CN111589151A (en) Method, device, equipment and storage medium for realizing interactive function
CN112927260B (en) Pose generation method and device, computer equipment and storage medium
KR20140095414A (en) Method, system and computer-readable recording medium for creating motion sequence of animation
CN111359201A (en) Jigsaw puzzle type game method, system and equipment
CN114387445A (en) Object key point identification method and device, electronic equipment and storage medium
CN112891943A (en) Lens processing method and device and readable storage medium
WO2018227230A1 (en) System and method of configuring a virtual camera
Sun et al. Enabling participatory design of 3D virtual scenes on mobile devices
CN108089713A (en) A kind of interior decoration method based on virtual reality technology
CN111617475B (en) Interactive object construction method, device, equipment and storage medium
CN112612463A (en) Graphical programming control method, system and device
CN106716501A (en) Visual decoration design method, apparatus therefor, and robot
CN114727090A (en) Entity space scanning method, device, terminal equipment and storage medium
CN114942737A (en) Display method, display device, head-mounted device and storage medium
CN113327329A (en) Indoor projection method, device and system based on three-dimensional model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40027426

Country of ref document: HK