CN112107854B - Game task guide completion method, system and equipment


Info

Publication number
CN112107854B
Authority
CN
China
Prior art keywords
target
guide component
game
task
ray
Prior art date
Legal status
Active
Application number
CN202010905278.9A
Other languages
Chinese (zh)
Other versions
CN112107854A
Inventor
宋大伟
邹黎盛
Current Assignee
Suzhou Qinyou Network Technology Co ltd
Original Assignee
Suzhou Purple Flame Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Purple Flame Network Technology Co ltd
Priority to CN202010905278.9A
Publication of CN112107854A
Application granted
Publication of CN112107854B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1068 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted to detect the point of contact of the player on a surface, e.g. floor mat, touch pad

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a game task guidance completion method, system and equipment, wherein the method comprises the following steps: monitoring a click signal of a user during a game task and obtaining the click coordinates from the click signal; constructing a penetrating ray according to the click coordinates and the coordinates of the camera in Unity3D; detecting the components penetrated by the penetrating ray to obtain a penetration result list; acquiring the guide component list of the game task; comparing the penetration result list with the guide component list to obtain the target guide components; and automatically triggering the script events of the target guide components to complete the game task steps corresponding to the target guide components. The invention expands the control capability of the game guidance system so that it can actively complete game task steps for the user, which reduces the number of user operations, lowers the difficulty of completing game tasks, and greatly improves the user's game experience.

Description

Game task guide completion method, system and equipment
Technical Field
The invention relates to the field of game guidance, in particular to a method, a system and equipment for completing game task guidance.
Background
In the field of gaming, when a new user enters a game or when a user is completing a newly released game task, a game guidance system is needed to help the user become familiar with the game style or the task content. Generally, such a guidance system highlights the operation required by the game task in the game interface and limits the area the user may click, so that the user can only click the fixed area designated by the guide and is thereby led through the game task.
Existing game guidance systems prompt the user by rendering a mask over the interface or by raising the rendering level of the target component. A guidance system of this kind makes a specific, prominent prompt for every single operation of the task, and the user must complete each operation according to the prompt; in other words, one user operation never advances the task by more than one minimal step. This makes the guidance process for a novice or a new task long and tedious and also occupies a large amount of game running resources. Meanwhile, forcing the player to follow the whole guidance process step by step reduces the fun of exploring the game.
Therefore, the existing game guidance approach can be improved to better meet the needs of users.
Disclosure of Invention
The technical problem to be solved by the invention is how to respond to the user's operations more efficiently and more proactively during the guidance of a game task, so that the user can complete the game task more easily.
In order to solve the technical problem, the invention discloses a method, a system and equipment for completing game task guidance. The specific technical scheme is as follows:
in a first aspect, the invention discloses a game task guidance completion method, which comprises the following steps:
monitoring a screen click signal of a user in a progress state of a target game task, and analyzing according to the screen click signal to obtain a screen click coordinate;
constructing at least one penetrating ray according to the screen click coordinate and the coordinate of a camera in the Unity 3D;
detecting the component penetrated by the at least one penetrating ray by using a ray detection event mechanism in the Unity3D to obtain a ray penetration result list;
acquiring a guide component list of the target game task;
comparing the ray penetration result list with the guide component list to obtain a target guide component set;
and acquiring a script event of each target guide component in the target guide component set, automatically triggering the script event of each target guide component, and completing the game task step corresponding to the target guide component set.
Further, the constructing at least one penetrating ray according to the screen click coordinates and the coordinates of the camera in Unity3D includes:
acquiring world coordinates of a camera of the current interface of the target game task in Unity 3D;
and constructing at least one penetrating ray which takes the world coordinate of the camera as a ray endpoint and points to the direction of the screen click coordinate.
Further, the detecting, by using a ray detection event mechanism in Unity3D, a component through which the at least one penetrating ray passes, and obtaining a ray penetration result list includes:
creating a ray penetration result list;
calling the RaycastAll collision detection method in a Unity3D event system mechanism, and detecting whether a component colliding with the at least one penetrating ray exists in the current task interface and/or the lower task interface of the target game task;
and acquiring data returned by the collision detection method, and caching the data in the ray penetration result list.
Further, the method further comprises: and if the data returned by the collision detection method is a null value, ignoring the clicking operation of the user at this time and not triggering or responding to any event.
Further, the comparing the ray penetration result list with the guiding component list to obtain a target guiding component set includes:
and comparing the task name, the task step order, the game object, the trigger event type and/or the response event content corresponding to each component in the ray penetration result list with each guide component in the guide component list, screening out the guide components existing in the ray penetration result list and the guide component list at the same time, and storing the guide components in a target guide component set as target guide components.
Further, the acquiring a script event of each target guide component in the target guide component set and automatically triggering the script event of each target guide component includes:
acquiring the script event type and/or content of each target guide component in the target guide component set, and transmitting the script event type and/or content as a parameter into an event execution method in Unity 3D;
transmitting the game object, task name and/or task step sequence of each target guide component as parameters into the event execution method;
and calling the event execution method to automatically trigger the script event of each target guide component.
Further, the method further comprises:
planning a triggering sequence of each target guide component in the target guide component set according to the script event type and/or content, the game object, the task name and/or the task step sequence of each target guide component;
and calling the event execution method to automatically trigger the script event of each target guide component according to the trigger sequence.
Further, the method further comprises:
in the progress state of the target game task, if a screen click signal of a user is not monitored within a preset time threshold, automatically judging whether a guide component in a guide component list of the target game task exists in a target area of a current game task interface or not;
if such a guide component exists, the guide component is taken as a target guide component and stored in the target guide component set, and the script event of each target guide component is automatically triggered.
In a second aspect, the present invention discloses a game task guidance completion system, which includes:
the monitoring unit is used for monitoring the clicking operation of the user in the progress state of the target game task;
the first acquisition unit is used for acquiring a screen click signal of a user and analyzing the screen click signal to acquire a screen click coordinate;
the ray unit is used for constructing at least one penetrating ray according to the screen click coordinate and the coordinate of the camera in the Unity 3D;
the detection unit is used for detecting the component penetrated by the at least one penetrating ray by utilizing a ray detection event mechanism in the Unity3D to obtain a ray penetration result list;
a second acquisition unit configured to acquire a guidance component list of the target game task;
the comparison unit is used for comparing the ray penetration result list with the guide component list to obtain a target guide component collection;
and the execution unit is used for acquiring the script event of each target guide component in the target guide component set, automatically triggering the script event of each target guide component and completing the game task step corresponding to the target guide component set.
In a third aspect, the present invention discloses a computer device, comprising a processor and a memory, wherein the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded by the processor and executes a game task guidance completion method according to the first aspect.
In a fourth aspect, the present invention discloses a computer storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement a game task guidance completion method according to the first aspect.
By adopting the technical scheme, the method, the system and the equipment for completing the game task guidance have the following beneficial effects:
1) the user's click can be guided without setting a mask or raising the rendering level of a component, which reduces the occupation of game running resources;
2) the area the user may click is not limited, so a click in any area of the interface can be triggered and the subsequent operations of the game task are performed, which increases the freedom of the user's operation and gives the user the fun of exploring the game;
3) compared with forcing the user to complete the whole game task guidance step by step, ray detection can obtain the multiple respondable components that may exist in the area clicked by the user, and the guidance system directly triggers the script events of these respondable components and actively completes the corresponding operation steps of the game task; the user no longer has to click the respondable components one by one, which shortens the time needed to complete the game task and improves the user's game experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart illustrating a method for guiding completion of a game task according to an embodiment of the present invention;
FIG. 2 is a schematic flow diagram of a responsive component utilizing ray detection according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating steps for automatically performing a game task according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a game task guidance completion system according to an embodiment of the present invention;
fig. 5 is a block diagram of a hardware structure of a computer device for executing a game task guidance completion method according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the invention. In describing the present invention, it is to be understood that the terms "first," "second," "third," and "fourth," etc. in the description and claims of the present invention and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic flow chart of a game task guidance completion method provided by an embodiment of the invention, and the description provides the operation steps of the method according to the embodiment or the schematic flow chart, but more or less operation steps can be included based on conventional or non-creative labor. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. In practice, the system or server product may be implemented in a sequential or parallel manner (e.g., parallel processor or multi-threaded environment) according to the embodiments or methods shown in the figures. Specifically, as shown in fig. 1, the game task guidance completion method may include:
s110: and monitoring a screen click signal of a user in the progress state of the target game task, and analyzing and obtaining a screen click coordinate according to the screen click signal.
Generally, a main system of a game acquires operation data of a user, responds to the operation of the user, and feeds back completion to a game guidance system so as to enable the game guidance system to determine task progress and perform subsequent game guidance steps. In the game task guidance completion method provided by the embodiment of the invention, the game guidance system not only has the auxiliary functions of receiving feedback and executing guidance, but also can autonomously manage the acquisition, storage and judgment of data in the game task and the triggering and response of events, so that the operation of the user can be more quickly and accurately identified and executed, and the user can be helped to finish the game task more efficiently.
Specifically, the user may operate in any area of the current game interface, including but not limited to clicking, long-pressing, dragging, sliding and multi-touch zooming. Generally, a user will preferentially click visually recognizable components such as buttons and character maps, so the monitoring of the user's screen clicks may be bound to the main buttons or character maps in the current game interface; after the user's click on the current game interface is monitored, the obtained screen click coordinates are fed back to the game guidance system. Alternatively, the game guidance system may directly monitor the objects in the game interface, so that these objects can respond to the user's operation more quickly.
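As a purely illustrative sketch of the monitoring in step S110 (not the claimed implementation), a Unity MonoBehaviour could watch for clicks and hand the screen click coordinate to the guidance pipeline roughly as follows; the class name, the targetTaskInProgress flag and OnScreenClick are assumptions introduced here for illustration:

    using UnityEngine;

    // Hypothetical listener the guidance system could attach to the scene: it
    // watches for a click (or the first touch) while a target game task is in
    // progress and forwards the screen click coordinate to the later steps.
    public class GuideClickMonitor : MonoBehaviour
    {
        public bool targetTaskInProgress;   // assumed to be set by the task system

        void Update()
        {
            if (!targetTaskInProgress)
                return;

            // GetMouseButtonDown(0) also fires for the first touch on mobile.
            if (Input.GetMouseButtonDown(0))
            {
                Vector2 screenClick = Input.mousePosition;  // screen click coordinate
                OnScreenClick(screenClick);
            }
        }

        void OnScreenClick(Vector2 screenClick)
        {
            // Steps S120-S160 would consume this coordinate.
            Debug.Log($"Screen click at {screenClick}");
        }
    }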
S120: and constructing at least one penetrating ray according to the screen click coordinate and the coordinate of the camera in the Unity 3D.
Specifically, step S120 provided in the embodiment of the present invention may include the following steps:
s210: the world coordinates of the camera of the current interface of the target game task in Unity3D are obtained.
It will be appreciated that most operations in a game are based on world coordinates; the camera in the game provides the observation picture of the game scene in Unity, and the coordinate system based on the camera is the observation coordinate system; screen coordinates are related to the screen resolution and are what is obtained when the mouse position or the finger click position is acquired. Unity provides function interfaces for converting these coordinate systems into one another, so the camera coordinates in the observation coordinate system can be converted into camera coordinates in the world coordinate system by a transformation function. Similarly, the click position coordinates of the screen click coordinate in the world coordinate system can be obtained.
S220: and constructing at least one penetrating ray which takes the world coordinate of the camera as a ray endpoint and points to the direction of the screen click coordinate.
It will be appreciated that the ray is here directed in the direction of the coordinate of the click position after coordinate transformation.
In some possible embodiments, when the user clicks a button of the current game interface, the boundary of the button is used as the region and a plurality of penetrating rays covering the whole region are constructed, which enlarges the ray detection range.
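As a sketch only of steps S210-S220, assuming Unity's standard Camera API (the helper name and the depth used for the conversion are illustrative):

    using UnityEngine;

    public static class GuideRayBuilder
    {
        // Builds a world-space ray that starts at the camera and points toward
        // the world position corresponding to the screen click coordinate.
        public static Ray BuildPenetratingRay(Camera cam, Vector2 screenClick)
        {
            // S210: world coordinate of the camera of the current interface.
            Vector3 camWorld = cam.transform.position;

            // Convert the screen click into a world-space point in front of the
            // camera, then aim the ray from the camera at that point (S220).
            Vector3 clickWorld = cam.ScreenToWorldPoint(
                new Vector3(screenClick.x, screenClick.y, cam.nearClipPlane + 1f));

            return new Ray(camWorld, (clickWorld - camWorld).normalized);
        }
    }

Unity's Camera.ScreenPointToRay would yield an equivalent ray in one call, and several rays spread over a button's rectangle could be built the same way to enlarge the detection range described above.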
S130: and detecting the component penetrated by the at least one penetrating ray by using a ray detection event mechanism in the Unity3D to obtain a ray penetration result list.
In some possible implementations, as shown in fig. 2, the step S130 provided by the embodiment of the present invention may include the following steps:
s310: a list of ray penetration results is created.
Specifically, the creation statement may be expressed as: List<RaycastResult> resultList = new List<RaycastResult>();
S320: calling the RaycastAll collision detection method in a Unity3D event system mechanism, and detecting whether a component colliding with the at least one penetrating ray exists in the current task interface and/or the lower task interface of the target game task;
Specifically, the call statement may be expressed as: EventSystem.current.RaycastAll(data, resultList); where data represents the screen click coordinate data, or the click position coordinate data after coordinate transformation.
In some possible embodiments, the ray detection may be restricted to a preset layer, or may skip a preset layer so that an object on that layer does not respond when the ray collides with it; whether a component is respondable may also be determined from its script events. Taking a button as an example, if a response event is bound in the button's script program, the button is considered respondable and its data is cached in the result list.
It can be understood that "the current task interface and/or the lower task interface" refers to the fact that the game task interface provides a plurality of task windows, which may be displayed to the user simultaneously at different rendering levels, or the next interface may be displayed only after the user completes the previous click. In a game task guidance environment, the click areas the user needs to operate therefore generally coincide, that is, other respondable buttons exist below one respondable button, and respondable buttons that the user cannot see with the naked eye or cannot currently click can still be collected through the ray detection event.
S330: and acquiring data returned by the collision detection method, and caching the data in the ray penetration result list.
It should be understood that, taking buttons as an example, if there are five buttons below the position or area clicked by the user, then after the ray passes through them the five respondable buttons are detected and determined, and the IDs, script events and other data of these five respondable buttons are cached in the list as parameters for the event execution method.
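A minimal sketch of steps S310-S330, assuming the standard UnityEngine.EventSystems API (the surrounding method name is an illustration, not part of the method):

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.EventSystems;

    public static class GuideRaycaster
    {
        // Returns the list of components "penetrated" under the screen click.
        public static List<RaycastResult> DetectPenetratedComponents(Vector2 screenClick)
        {
            // S310: create the ray penetration result list.
            var resultList = new List<RaycastResult>();

            // Package the screen click coordinate as event data.
            var data = new PointerEventData(EventSystem.current)
            {
                position = screenClick
            };

            // S320: RaycastAll collects every raycast-enabled component under
            // the click, across the current interface and the interfaces below it.
            EventSystem.current.RaycastAll(data, resultList);

            // S330: an empty result means nothing respondable was hit; the caller
            // can then ignore the click without triggering any event.
            return resultList;
        }
    }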
In common approaches, the button to be operated in the next task step is displayed by shading, hole digging (a cut-out in a mask) or raising its rendering level, but which buttons are highlighted is fixed in the game, and the user can only complete the game task by following the fixed steps.
In some other possible embodiments, other types of ray detection methods in Unity3D, such as GraphicRaycaster, may be adopted; different types of ray detection methods are suitable for different game scenarios, the types of returned data also differ, and the choice may be adapted to actual development requirements, which is not specifically limited in this embodiment.
Specifically, if the data returned by the collision detection method is a null value, the user's click operation this time is ignored and no event is triggered or responded to.
S140: and acquiring a guide component list of the target game task.
It is understood that the guide component list includes the respondable components configured for the target game task, their IDs, script events and other data, and serves as the absolute path or the relative path for completing the target game task.
S150: and comparing the ray penetration result list with the guide component list to obtain a target guide component set.
It is understood that the guide component list, as the absolute path or relative path for completing the target game task, is used as the reference item in the comparison; the ray penetration result list, as the collection of respondable components detected in the current game interface, is searched for the components lying on the absolute path or relative path for completing the target game task.
Preferably, step S150 provided in the embodiment of the present invention may include:
and comparing the task name, the task step order, the game object, the trigger event type and/or the response event content corresponding to each component in the ray penetration result list with each guide component in the guide component list, screening out the guide components existing in the ray penetration result list and the guide component list at the same time, and storing the guide components serving as target guide components in a target guide component set.
Preferably, any component, control or the like in Unity can be referred to as an object (GameObject), and every object corresponds to a unique instance ID; by comparing instance IDs it can be confirmed whether two objects are the same without comparing other types of data content.
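As an illustrative sketch of the comparison in S150, restricted to the instance-ID matching suggested above (the method may equally compare task names, step orders or event data; the function assumes the guide component list has already been reduced to its GameObjects):

    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;
    using UnityEngine.EventSystems;

    public static class GuideComponentMatcher
    {
        // Intersects the ray penetration result list with the guide component
        // list by GameObject instance ID; objects present in both lists become
        // the target guide components, kept in guide-list (task step) order.
        public static List<GameObject> FindTargetGuideComponents(
            List<RaycastResult> rayResults, List<GameObject> guideComponents)
        {
            var hitIds = new HashSet<int>(
                rayResults.Select(r => r.gameObject.GetInstanceID()));

            return guideComponents
                .Where(g => g != null && hitIds.Contains(g.GetInstanceID()))
                .ToList();
        }
    }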
S160: and acquiring a script event of each target guide component in the target guide component set, automatically triggering the script of each target guide component, and completing the game task step corresponding to the target guide component set.
It can be understood that a guidance system that guides by masking, hole digging or raising the rendering level does not link the guidance itself to the specific game task steps; it still monitors the user's operation on the target guide component and then triggers that component's response event. In the method provided by the embodiment of the invention, however, the guidance system can not only detect and collect the respondable components of the subsequent game task steps, but can also skip the user operation and directly trigger the response events bound to those components, namely their script events, so that the game task steps corresponding to the respondable components are completed without any operation by the user. This avoids the user clicking the respondable components one by one, shortens the time needed to complete the game task, and improves the user's game experience.
In some possible implementations, as shown in fig. 3, step S160 provided in this embodiment of the present invention may include the following steps:
s610: and acquiring the script event type and/or content of each target guide component in the target guide component set, and transmitting the script event type and/or content as a parameter into an event execution method in the Unity 3D.
S620: and transmitting the game object, the task name and/or the task step sequence of each target guide component into the event execution method as parameters.
S630: and calling the event execution method to automatically trigger the script event of each target guide component.
Specifically, in program language the above steps can be expressed as:
ExecuteEvents.Execute(gameObject, data, ExecuteEvents.pointerClickHandler); where ExecuteEvents.Execute is the event execution method, gameObject is the respondable component or object, and ExecuteEvents.pointerClickHandler indicates that the triggering mode for the component is a click. After the event execution is finished, the cache space is released and the monitoring state is re-entered.
Further, the method further comprises:
planning a triggering sequence of each target guide component in the target guide component collection according to the script event type and/or content, the game object, the task name and/or the task step sequence of each target guide component;
and calling the event execution method to automatically trigger the script event of each target guide component according to the trigger sequence.
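Combining S610-S630 with the trigger-sequence planning above, the following is a minimal sketch, assuming each target guide component is supplied together with its task step order (ExecuteEvents is the standard UnityEngine.EventSystems API; everything else is illustrative):

    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;
    using UnityEngine.EventSystems;

    public static class GuideEventExecutor
    {
        // Automatically triggers the click script event of each target guide
        // component in the planned trigger sequence, without user operation.
        public static void TriggerAll(List<(GameObject obj, int stepOrder)> targets)
        {
            // Event data handed to the handlers; the original click position
            // could be copied in here if the bound scripts need it.
            var data = new PointerEventData(EventSystem.current);

            // Plan the trigger sequence (here simply the task step order) and
            // call the Unity event execution method for each target component.
            foreach (var target in targets.OrderBy(t => t.stepOrder))
            {
                ExecuteEvents.Execute(
                    target.obj, data, ExecuteEvents.pointerClickHandler);
            }

            // After execution the cached data can be released and the guidance
            // system re-enters the monitoring state (not shown here).
            targets.Clear();
        }
    }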
An embodiment of the present invention further provides a game task guidance and completion system, as shown in fig. 4, the game task guidance and completion system includes:
the monitoring unit 410 is configured to monitor a click operation of a user in a progress state of the target game task.
The first obtaining unit 420 is configured to obtain a screen click signal of a user, and obtain a screen click coordinate according to the screen click signal.
And the ray unit 430 is used for constructing at least one penetrating ray according to the screen click coordinate and the coordinate of the camera in the Unity 3D.
In addition, the ray unit further comprises a coordinate conversion module for converting the coordinates of the camera and the screen click coordinates into position coordinates in the same coordinate system.
The detecting unit 440 is configured to detect the component traversed by the at least one penetrating ray by using a ray detection event mechanism in Unity3D, and obtain a ray penetration result list.
A second obtaining unit 450, configured to obtain a guide component list of the target game task.
A comparing unit 460, configured to compare the ray penetration result list with the guiding component list, so as to obtain a target guiding component set.
The executing unit 470 is configured to acquire a script event of each target guidance component in the target guidance component set, automatically trigger the script event of each target guidance component, and complete a game task step corresponding to the target guidance component set.
Further, the execution unit may further include an execution sequence planning module, configured to plan a trigger sequence of each target guidance component in the set of target guidance components according to the script event type and/or content, the game object, the task name, and/or the task step order of each target guidance component.
Compared with existing guidance systems, the game task guidance completion system has greater autonomous control. The game guidance system provided by this embodiment not only performs the auxiliary functions of receiving feedback and executing guidance, but can also autonomously manage the acquisition, storage and judgment of data in the game task and the triggering and response of events, so it can identify and execute the user's operations more quickly and accurately, help the user complete the game task more efficiently, and improve the user's game experience.
The game task guidance completion system and method embodiments of the present invention are based on the same inventive concept, and please refer to the method embodiments for details, which are not described herein again.
An embodiment of the present invention further provides a computer device, where the computer device includes: the game system comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to realize a game task guidance completion method according to the embodiment of the invention.
The memory may be used to store software programs and modules, and the processor executes various functional applications by running the software programs and modules stored in the memory. The memory may mainly comprise a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs needed by functions and the like, and the data storage area may store data created according to the use of the apparatus and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The method embodiments provided by the embodiments of the present invention may be executed in a computer terminal, a server, or a similar computing device; that is, the computer device may include a computer terminal, a server, or a similar computing device. Fig. 5 is a block diagram of a hardware structure of a computer device for executing a game task guidance completion method according to an embodiment of the present invention. As shown in fig. 5, the internal structure of the computer device may include, but is not limited to, a processor, a network interface and a memory. The processor, the network interface and the memory in the computer device may be connected by a bus or in other manners; the bus connection shown in fig. 5 is taken as an example in this embodiment of the specification.
The processor (or CPU) is the computing core and control core of the computer device. The network interface may optionally include a standard wired interface or a wireless interface (e.g., WI-FI, a mobile communication interface, etc.). The memory is a storage device in the computer device used to store programs and data. It is understood that the memory here may be a high-speed RAM storage device or a non-volatile storage device (non-volatile memory), such as at least one magnetic disk storage device; optionally, it may also be at least one storage device located remotely from the processor. The memory provides storage space that stores the operating system of the electronic device, which may include, but is not limited to, a Windows system, a Linux system, an Android system, an iOS system, etc., which is not limited in the present invention; also, one or more instructions, which may be one or more computer programs (including program code), are stored in the storage space and are adapted to be loaded and executed by the processor. In this embodiment of the specification, the processor loads and executes the one or more instructions stored in the memory to implement the game task guidance completion method provided in the above method embodiment.
The embodiment of the invention also provides a computer storage medium, wherein at least one instruction or at least one program is stored in the computer storage medium, and the at least one instruction or the at least one program is loaded by the processor and executes the game task guidance completion method according to the embodiment of the invention.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-only memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus, system and server embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A game task guidance completion method, characterized in that the method comprises:
monitoring a screen click signal of a user in a progress state of a target game task, and analyzing according to the screen click signal to obtain a screen click coordinate;
constructing at least one penetrating ray according to the screen click coordinate and the coordinate of a camera in the Unity 3D;
detecting the component penetrated by the at least one penetrating ray by using a ray detection event mechanism in the Unity3D to obtain a ray penetration result list;
acquiring a guide component list of the target game task;
comparing the ray penetration result list with the guide component list to obtain a target guide component set;
and acquiring a script event of each target guide component in the target guide component set, automatically triggering the script event of each target guide component, and completing the game task step corresponding to the target guide component set.
2. The game task guidance completion method of claim 1, wherein the constructing at least one penetrating ray according to the screen click coordinates and the coordinates of a camera in Unity3D comprises:
acquiring world coordinates of a camera of the current interface of the target game task in Unity 3D;
and constructing at least one penetrating ray which takes the world coordinate of the camera as a ray endpoint and points to the direction of the screen click coordinate.
3. The method as claimed in claim 1, wherein the detecting the component passed by the at least one penetrating ray by using a ray detection event mechanism in Unity3D to obtain a list of ray penetration results comprises:
creating a ray penetration result list;
calling the RaycastAll collision detection method in a Unity3D event system mechanism, and detecting whether a component colliding with the at least one penetrating ray exists in the current task interface and/or the lower task interface of the target game task;
and acquiring data returned by the collision detection method, and caching the data in the ray penetration result list.
4. A game task guidance completion method according to claim 3, wherein the method further comprises:
and if the data returned by the collision detection method is a null value, ignoring the clicking operation of the user at this time and not triggering or responding to any event.
5. The method as claimed in claim 1, wherein the step of comparing the ray penetration result list with the guide component list to obtain a target guide component set comprises:
and comparing the task name, the task step order, the game object, the trigger event type and/or the response event content corresponding to each component in the ray penetration result list with each guide component in the guide component list, screening out the guide components existing in the ray penetration result list and the guide component list at the same time, and storing the guide components in a target guide component set as target guide components.
6. The method of claim 1, wherein the obtaining the script event of each target guide component in the set of target guide components and automatically triggering the script event of each target guide component comprises:
acquiring the script event type and/or content of each target guide component in the target guide component set, and transmitting the script event type and/or content serving as a parameter into an event execution method in Unity 3D;
transmitting the game object, task name and/or task step sequence of each target guide component as parameters into the event execution method;
and calling the event execution method to automatically trigger the script event of each target guide component.
7. The game task guidance completion method of claim 6, wherein the method further comprises:
planning a triggering sequence of each target guide component in the target guide component set according to the script event type and/or content, the game object, the task name and/or the task step sequence of each target guide component;
and calling the event execution method to automatically trigger the script event of each target guide component according to the trigger sequence.
8. A game task guidance completion method according to any one of claims 6 or 7, characterized in that the method further comprises:
in the progress state of the target game task, if a screen click signal of a user is not monitored within a preset time threshold, automatically judging whether a guide component in a guide component list of the target game task exists in a target area of a current game task interface or not;
if such a guide component exists, the guide component is taken as a target guide component and stored in the target guide component set, and the script event of each target guide component is automatically triggered.
9. A game task guidance completion system, the system comprising:
the monitoring unit is used for monitoring the clicking operation of the user in the progress state of the target game task;
the first acquisition unit is used for acquiring a screen click signal of a user and analyzing the screen click signal to acquire a screen click coordinate;
the ray unit is used for constructing at least one penetrating ray according to the screen click coordinate and the coordinate of the camera in the Unity 3D;
the detection unit is used for detecting the component penetrated by the at least one penetrating ray by utilizing a ray detection event mechanism in the Unity3D to obtain a ray penetration result list;
a second acquisition unit configured to acquire a guidance component list of the target game task;
the comparison unit is used for comparing the ray penetration result list with the guide component list to obtain a target guide component set;
and the execution unit is used for acquiring the script event of each target guide component in the target guide component set, automatically triggering the script event of each target guide component and completing the game task step corresponding to the target guide component set.
10. A computer device comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and wherein the at least one instruction or the at least one program is loaded by the processor and executes a game task guidance completion method according to any one of claims 1 to 8.
CN202010905278.9A 2020-09-01 2020-09-01 Game task guide completion method, system and equipment Active CN112107854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010905278.9A CN112107854B (en) 2020-09-01 2020-09-01 Game task guide completion method, system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010905278.9A CN112107854B (en) 2020-09-01 2020-09-01 Game task guide completion method, system and equipment

Publications (2)

Publication Number Publication Date
CN112107854A CN112107854A (en) 2020-12-22
CN112107854B true CN112107854B (en) 2021-03-16

Family

ID=73803992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010905278.9A Active CN112107854B (en) 2020-09-01 2020-09-01 Game task guide completion method, system and equipment

Country Status (1)

Country Link
CN (1) CN112107854B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113893540B (en) * 2021-09-30 2023-08-25 腾讯科技(深圳)有限公司 Information prompting method and device, storage medium and electronic equipment


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5676676B2 (en) * 2013-04-08 2015-02-25 株式会社スクウェア・エニックス Video game processing apparatus and video game processing program
WO2019069342A1 (en) * 2017-10-02 2019-04-11 ガンホー・オンライン・エンターテイメント株式会社 Terminal device, program, and method

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9262180B2 (en) * 2012-04-26 2016-02-16 Adobe Systems Incorporated Method and apparatus for recommending product features in a software application in real time
CN106774873A (en) * 2016-12-12 2017-05-31 联想(北京)有限公司 A kind of display methods of virtual interface, electronic equipment and device
CN107145227A (en) * 2017-04-20 2017-09-08 腾讯科技(深圳)有限公司 The exchange method and device of virtual reality scenario
CN109101102A (en) * 2017-06-20 2018-12-28 北京行云时空科技有限公司 Widget interaction method, apparatus and system for VR/AR
CN107362538A (en) * 2017-07-05 2017-11-21 腾讯科技(深圳)有限公司 One kind game auxiliary information methods of exhibiting, device and client
CN107952242A (en) * 2017-10-30 2018-04-24 努比亚技术有限公司 A kind of terminal software experiential method, terminal and computer-readable recording medium
CN108108214A (en) * 2017-12-18 2018-06-01 维沃移动通信有限公司 A kind of guiding method of operating, device and mobile terminal
CN108126342A (en) * 2017-12-28 2018-06-08 珠海市君天电子科技有限公司 A kind of information processing method, device and terminal
CN109656653A (en) * 2018-11-26 2019-04-19 北京字节跳动网络技术有限公司 Mask icon display method and device
CN110597593A (en) * 2019-09-25 2019-12-20 腾讯科技(深圳)有限公司 User guide task processing method and device, computer equipment and storage medium
CN110703974A (en) * 2019-09-26 2020-01-17 珠海市小源科技有限公司 Message interaction method, device and storage medium
CN110865812A (en) * 2019-10-24 2020-03-06 腾讯科技(深圳)有限公司 User interface identification method and device
CN111324409A (en) * 2020-02-14 2020-06-23 腾讯科技(深圳)有限公司 Artificial intelligence-based interaction method and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
全面了解游戏引导:6大引导形式,哪个最好? [A comprehensive look at game guidance: six guidance forms, which is best?];新浪游戏 (Sina Games);《http://games.sina.com.cn/y/n/2020-03-26/imxxsth1913670.shtml》;20200306;pages 1-9 *

Also Published As

Publication number Publication date
CN112107854A (en) 2020-12-22

Similar Documents

Publication Publication Date Title
CN108196930B (en) Application program processing method and device, storage medium and computer equipment
CN107402877B (en) Android-terminal-based APP test method and system
CN109857303B (en) Interaction control method and device
US10810113B2 (en) Method and apparatus for creating reference images for an automated test of software with a graphical user interface
CN108762837A (en) Application program preloads method, apparatus, storage medium and terminal
JP4381436B2 (en) Scenario generation device and scenario generation program
CN105824742B (en) Operating user interface method for recording and device
US20170169122A1 (en) Webpage display method, mobile terminal, intelligent terminal, program and storage medium
US20160232338A1 (en) User verifying method, terminal device, server and storage medium
CN112107854B (en) Game task guide completion method, system and equipment
CN111190826B (en) Testing method, device, storage medium and equipment for virtual reality immersive tracking environment
CN106984044B (en) Method and equipment for starting preset process
CN109032343B (en) Industrial man-machine interaction system and method based on vision and haptic augmented reality
JP3913797B2 (en) Pachinko ball movement trajectory analysis method and movement trajectory analysis apparatus
CN107346197B (en) Information display method and device
CN114489461B (en) Touch response method, device, equipment and storage medium
KR101724143B1 (en) Apparatus, system, method, program for providing searching service
CN105827701B (en) Method and device for controlling controlled terminal based on Internet and Internet of things
KR102439574B1 (en) System for Searching Object Based on Image Search for Robot Process Automation
CN113688345A (en) Page switching method and device and computing equipment
CN113069757B (en) Cloud game automatic acceleration method, cloud game automatic acceleration equipment and computer readable storage medium
CN111061630A (en) Product debugging method, debugging device and readable storage medium
CN113220562A (en) Terminal testing method and device, computer storage medium and electronic equipment
CN110750193A (en) Scene topology determination method and device based on artificial intelligence
CN113842636A (en) Game operation guide completion method, system and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240612

Address after: 12F, Friendship Time Building, No. 68 Qitai Road, Industrial Park, Suzhou City, Jiangsu Province, 215000

Patentee after: SUZHOU QINYOU NETWORK TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: Room 5272, Tianlong building, no.378, Zhujiang South Road, Mudu Town, Wuzhong District, Suzhou City, Jiangsu Province

Patentee before: SUZHOU PURPLE FLAME NETWORK TECHNOLOGY Co.,Ltd.

Country or region before: China