CN111013156B - Scene detection method, device, terminal and medium based on robot

Scene detection method, device, terminal and medium based on robot

Info

Publication number
CN111013156B
Authority
CN
China
Prior art keywords
resource
resources
target scene
redundant
preset
Prior art date
Legal status
Active
Application number
CN201911304382.6A
Other languages
Chinese (zh)
Other versions
CN111013156A (en)
Inventor
张巍
Current Assignee
Shanghai Mihoyo Tianming Technology Co Ltd
Original Assignee
Shanghai Mihoyo Tianming Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Mihoyo Tianming Technology Co Ltd
Priority to CN201911304382.6A
Publication of CN111013156A
Application granted
Publication of CN111013156B
Current legal status: Active
Anticipated expiration

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70 - Game security or game management aspects
    • A63F13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55 - Details of game data or player data management
    • A63F2300/5526 - Game data structure

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Stored Programmes (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot-based scene detection method, device, terminal and medium. The method comprises: when a scene detection operation is triggered, detecting each resource to be used in a target scene based on a preset virtual user, and taking the detected resources as current resources corresponding to the target scene; determining redundant resources corresponding to the target scene based on the current resources and preset resources acquired in advance; and, when a change in the redundant resources is detected, updating the target scene based on the changed redundant resources. The technical scheme solves the problems of the existing approach, in which every resource in the target scene is traversed manually to determine whether redundant resources exist, resulting in high labor cost, low accuracy and an oversized resource installation package. By having the robot automatically detect the resources in the target scene and determine whether redundant resources exist, the efficiency and accuracy of resource detection are improved.

Description

Scene detection method, device, terminal and medium based on robot
Technical Field
Embodiments of the invention relate to the technical field of game development, and in particular to a robot-based scene detection method, device, terminal and medium.
Background
In the game industry, a scene often contains many redundant objects, such as temporary resources, occupied resources and in-game air walls. These objects are needed only while the target scene is being made; they are intermediate products, namely redundant resources.
When a game is downloaded, redundant resources enlarge the installation package and raise the memory requirements on the terminal; during play they serve no practical purpose, may expose scene defects, and therefore degrade the user experience.
At present, to avoid redundant resources in a scene, every resource in the scene can be traversed manually, but manual detection is error-prone, so the problems above persist.
Disclosure of Invention
Embodiments of the invention provide a robot-based scene detection method, device, terminal and medium, so as to improve the efficiency and accuracy of resource detection and thereby the user experience.
In a first aspect, an embodiment of the present invention provides a robot-based scene detection method, applied to a game client and comprising:
when a scene detection operation is triggered, detecting each resource to be used in a target scene based on a preset virtual user, and taking the detected resources as current resources corresponding to the target scene;
determining redundant resources corresponding to the target scene based on the current resources and preset resources acquired in advance, and outputting the redundant resources in a preset format;
and when it is detected that the redundant resources have changed, updating the target scene based on the changed redundant resources.
In a second aspect, an embodiment of the present invention further provides a robot-based scene detection device, configured in a game client and comprising:
a current resource acquisition module, configured to detect, when a scene detection operation is triggered, each resource to be used in the target scene based on a preset virtual user, and take the detected resources as current resources corresponding to the target scene;
a redundant resource acquisition module, configured to determine redundant resources corresponding to the target scene based on the current resources and preset resources acquired in advance, and output the redundant resources in a preset format;
and a target scene resource updating module, configured to update the target scene based on the changed redundant resources when it is detected that the redundant resources have changed.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal includes:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the robot-based scene detection method according to any of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are used to perform a robot-based scene detection method according to any of the embodiments of the present invention.
According to the technical scheme of this embodiment, when a scene detection operation is triggered, all resources to be used in a target scene are detected by a preset virtual user and taken as the current resources corresponding to the target scene; redundant resources corresponding to the target scene are determined from the current resources and preset resources acquired in advance, and are output in a preset format; and, when a change in the redundant resources is detected, the target scene is updated according to the changed redundant resources. This solves the technical problems of the existing approach, in which every resource in the target scene is traversed manually to determine whether redundant resources exist, resulting in high labor cost, low accuracy and an oversized downloaded resource installation package. By having the robot automatically detect the resources in the target scene and determine whether redundant resources exist, the technical effects of improving the efficiency and accuracy of resource detection are achieved.
Drawings
In order to more clearly illustrate the technical solution of the exemplary embodiments of the present invention, a brief description is given below of the drawings required for describing the embodiments. It is obvious that the drawings presented are only drawings of some of the embodiments of the invention to be described, and not all the drawings, and that other drawings can be made according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a scene detection method based on a robot according to a first embodiment of the present invention;
fig. 2 is a schematic flow chart of a scene detection method based on a robot according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a scene detection device based on a robot according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a schematic flow chart of a robot-based scene detection method according to an embodiment of the present invention. The embodiment is applicable to detecting the resources used in a game scene and determining which redundant resources need to be deleted. The method may be performed by a robot-based scene detection device, which may be implemented in software and/or hardware; the hardware may be an electronic device such as a mobile terminal or a PC terminal.
As shown in fig. 1, the method of the present embodiment includes:
and S110, detecting each resource to be used in the target scene based on a preset virtual user when detecting the trigger scene detection operation, and taking the resource to be used as the current resource corresponding to the target scene.
The target scene is a scene in which the game process is located. The resources to be used are resources to be used in each target scene, and optionally, the resources correspond to trees, rivers, tables, chairs and the like. And taking all the resources used in the game process as current resources. The virtual user may be understood as a robot for detecting each resource to be used contained in the target scene.
Specifically, after the project corresponding to the target scene is completed, the resources which are not frequently used in the target scene need to be deleted so as to reduce the data volume in the installation package, and the virtual user can be set to play the game, so that each resource in the target scene is detected in the game process. When detecting the operation of triggering scene detection by the user, optionally, triggering the scene detection control, each resource used in the target scene can be obtained based on the preset virtual user, and each resource detected at this time is used as the current resource.
In order to improve the efficiency of determining the current resources in the target scene, at least one virtual user can be preset to detect the used resources in the target scene in parallel. Optionally, based on at least one preset virtual user, traversing each currently called resource to be used in the target scene in parallel, marking each resource according to a preset mode, and taking each identified resource to be used as the current resource.
Wherein the number of virtual users may be one, two or more. The number of virtual users can be set according to actual conditions, and the number of virtual users is three. When the number of the virtual users is multiple, the target scene can be divided according to the number of the virtual users, and optionally, if the number of the virtual users is three, the target scene can be divided into three areas correspondingly. The preset manner may be to determine resource identifiers corresponding to the respective resources in advance, for example, to set a resource identifier corresponding to a tree to 1, a resource identifier corresponding to a river to 2, and the like. Parallel traversal may be understood as each virtual user traversing a target scene corresponding thereto, e.g., 3 virtual users dividing the target scene into three sub-areas A, B, C, a first virtual user may traverse the resources in sub-area a, a second virtual user may traverse the resources in sub-area B, and a third virtual user may edit the resources in sub-area C.
Specifically, a plurality of virtual users traverse each resource to be used in the target scene in parallel, mark each resource to be used according to a predetermined resource identification method, and take each marked resource to be used as a current resource.
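To make this traversal concrete, the following is a minimal illustrative sketch in Python (not the patent's own code): the names RESOURCE_IDS, split_scene, traverse_sub_area and collect_current_resources are assumptions introduced here, and the scene is assumed to be given as a list of resource names.
# Minimal sketch, assuming a scene represented as a list of resource names.
from concurrent.futures import ThreadPoolExecutor

# Predetermined resource identifiers (the "preset manner" of marking).
RESOURCE_IDS = {"tree": 1, "river": 2, "table": 3, "chair": 4}

def split_scene(scene_resources, num_virtual_users):
    """Divide the target scene's resources into one sub-area per virtual user."""
    return [scene_resources[i::num_virtual_users] for i in range(num_virtual_users)]

def traverse_sub_area(sub_area):
    """One virtual user traverses its sub-area and marks each resource it uses."""
    return {RESOURCE_IDS[name] for name in sub_area if name in RESOURCE_IDS}

def collect_current_resources(scene_resources, num_virtual_users=3):
    """Run the virtual users in parallel and merge their marked resources
    into the set of current resources."""
    sub_areas = split_scene(scene_resources, num_virtual_users)
    with ThreadPoolExecutor(max_workers=num_virtual_users) as pool:
        marked_sets = pool.map(traverse_sub_area, sub_areas)
    current_resources = set()
    for marked in marked_sets:
        current_resources |= marked
    return current_resources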
S120, determining redundant resources corresponding to the target scene based on the current resources and the preset resources acquired in advance, and outputting the redundant resources in a preset format.
The redundant resources are resources that the target scene does not need during play. For example, if the preset resources contain several types of air wall but only one type is used in the game, the air-wall resources whose types differ from the one used in the game are redundant resources. Because a member of staff has to determine what the redundant resources in the target scene are and whether they need to be deleted, the redundant resources can be output in a preset format.
Specifically, the redundant resources in the target scene can be determined from the current resources and the preset resources and output to the display interface in the preset format, so that a worker can decide from the output redundant resource list whether the resources in the list should be deleted.
Before scene detection is triggered, each resource used in making the target scene needs to be acquired and taken as a preset resource. Because far more resources are used in making the target scene than are used during play, the redundant resources to be deleted can be determined from the preset resources and the current resources.
Optionally, each resource in the target scene is acquired through a resource acquisition tool, each resource is marked in a preset manner, and each marked resource is taken as a preset resource.
Here the preset resources are the resources used to make the target scene, the resource acquisition tool may be understood as the game engine, and the preset manner may be understood as the predetermined resource identifier assigned to each resource.
Specifically, each resource used in the process of making the target scene is obtained through a resource obtaining tool, each resource is marked based on a predetermined resource identifier corresponding to each resource, and each marked resource is used as a preset resource.
That is, the preset resources are a set of resources used when making the target scene; the current resource is a collection of individual resources used during the game. The number of elements in the set corresponding to the preset resource is much greater than the number of elements in the set corresponding to the current resource.
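As a companion sketch (again with hypothetical names), the preset resources can be represented as the set of identifiers of every resource exported from the resource acquisition tool:
def collect_preset_resources(editor_resources, resource_ids):
    """Mark every resource used while making the target scene with its
    predetermined identifier (resource_ids, e.g. {"tree": 1, "river": 2})
    and return the marked set as the preset resources."""
    return {resource_ids[name] for name in editor_resources if name in resource_ids}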
In this embodiment, the redundant resources may be determined from the current resources and the preset resources as follows: according to the current resource identifiers in the current resources and the preset resource identifiers in the preset resources, screen out from the preset resources the resources whose identifiers do not appear among the current resource identifiers, and take the screened-out resources as redundant resources.
Here, the identifier of each element in the current resources is taken as a current resource identifier, and the identifier of each element in the preset resources is taken as a preset resource identifier.
In other words, given each current resource identifier in the current resources and each preset resource identifier in the preset resources, every resource whose identifier is not among the current resource identifiers can be screened out from the preset resources; each screened-out resource is a redundant resource and its identifier a redundant identifier.
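A minimal sketch of this screening step, assuming the preset and current resources are held as sets of identifiers as in the earlier sketches (function names hypothetical):
def find_redundant_resources(preset_ids, current_ids):
    """Screen out every preset resource identifier that never appears among
    the current resource identifiers; these are the redundant resources."""
    return preset_ids - current_ids

def format_redundant_report(redundant_ids, id_to_name):
    """Output the redundant resources in a preset (here: tab-separated text)
    format so a worker can decide which entries to delete."""
    lines = ["redundant_id\tresource_name"]
    for rid in sorted(redundant_ids):
        lines.append(f"{rid}\t{id_to_name.get(rid, 'unknown')}")
    return "\n".join(lines)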
S130, when it is detected that the redundant resources have changed, updating the target scene based on the changed redundant resources.
In this embodiment, after the redundant resources are determined, they may be output in the preset format to obtain a redundant resource list. The scene maker can judge from this list whether each entry is content that should exist in the target scene, and thus whether one or more of the redundant resources in the list need to be deleted.
Specifically, when the worker who made the target scene deletes some of the resources in the redundant resource list, the client detects the triggered deletion operation and updates the target scene according to the deletion; that is, the updated target scene no longer contains the deleted redundant resources.
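A corresponding sketch of the update step, assuming the target scene is held as a mapping from resource identifier to resource object (a hypothetical representation):
def update_target_scene(scene_resources, deleted_redundant_ids):
    """Rebuild the target scene without the redundant resources that the
    worker marked for deletion; scene_resources maps identifier -> resource."""
    return {rid: res for rid, res in scene_resources.items()
            if rid not in deleted_redundant_ids}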
According to the technical scheme of this embodiment, when a scene detection operation is triggered, all resources to be used in a target scene are detected by a preset virtual user and taken as the current resources corresponding to the target scene; redundant resources corresponding to the target scene are determined from the current resources and preset resources acquired in advance, and are output in a preset format; and, when a change in the redundant resources is detected, the target scene is updated according to the changed redundant resources. This solves the technical problems of the existing approach, in which every resource in the target scene is traversed manually to determine whether redundant resources exist, resulting in high labor cost, low accuracy and an oversized downloaded resource installation package. By having the robot automatically detect the resources in the target scene and determine whether redundant resources exist, the technical effects of improving the efficiency and accuracy of resource detection are achieved.
Example 2
As a preferred embodiment of the foregoing embodiments, fig. 2 is another flow chart of a robot-based scene detection method according to a second embodiment of the present invention. As shown in fig. 2, the method includes:
S210, a scene data generation module.
The scene data generating module may be understood as a module for acquiring a preset resource.
Specifically, prior to detection of the target scene, various resources used in making the target scene may be acquired by a resource acquisition tool. And determining the identification corresponding to each resource through a scene data generation module.
That is, the preset resource is a set of a plurality of resources.
S220, a robot traversing module.
Wherein the robot may be understood as a virtual user. A robot traversal module may be understood as a module that traverses a target scene based on a robot.
Specifically, the target scene may be divided into as many sub-areas as there are virtual users, and the virtual users traverse the resources used in the target scene in parallel.
That is, scene detection traverses the resources used during play by having the virtual users simulate the game: the target scene is run and tested in parallel, scheduled by an algorithm according to the performance of the test equipment, and the resource content encountered by each virtual user is recorded.
S230, a robot data generation module.
The robot data generation module may be understood as the module that, after the current resources have been acquired, forms the corresponding data from them.
Specifically, according to the results of the robots' test run, the data collected by all virtual users are aggregated to form the current resources, and the current identifier of each resource among them is determined.
S240, a data analysis module.
The data analysis module may be understood as a module for determining redundant resources according to current resources and preset resources.
Specifically, the preset resource identifiers in the preset resources are compared with the current resource identifiers in the current resources; the differing content and the corresponding resources are taken as redundant resources and output in the preset format.
S250, a data deleting module.
The screened redundant resources are output on a display interface so that the worker who made the target scene can determine, from the redundant resource list, whether the resources in the list need to be deleted. If deletion is needed, the worker marks the redundant resources to be deleted, and once triggering of the deletion control is detected, the marked redundant resources are deleted. Correspondingly, each resource in the target scene is updated according to the deleted redundant resources, which achieves the technical effect of reducing the data volume of the installation package.
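Putting the five modules of this embodiment together, an end-to-end sketch might look as follows; all names and example values are hypothetical and chosen only to show the data flow:
def detect_scene(preset_ids, sub_areas, resource_ids, deleted_ids):
    """S210-S250 in miniature: traverse, summarize, analyse and delete."""
    # S220/S230: each virtual user traverses its sub-area; the marked
    # resources are merged into the current resources.
    current_ids = set()
    for area in sub_areas:
        current_ids |= {resource_ids[name] for name in area if name in resource_ids}
    # S240: the redundant resources are the preset resources never used in play.
    redundant_ids = preset_ids - current_ids
    # S250: drop the entries the worker marked for deletion.
    remaining = redundant_ids - deleted_ids
    return current_ids, redundant_ids, remaining

resource_ids = {"tree": 1, "river": 2, "air_wall_v2": 9}
preset = {1, 2, 9}                      # S210: scene data generation
cur, red, rem = detect_scene(preset, [["tree"], ["river"]], resource_ids, deleted_ids={9})
# red == {9}: the unused air wall is flagged; rem == set() once it is deleted.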
According to the technical scheme of this embodiment, when a scene detection operation is triggered, all resources to be used in a target scene are detected by a preset virtual user and taken as the current resources corresponding to the target scene; redundant resources corresponding to the target scene are determined from the current resources and preset resources acquired in advance, and are output in a preset format; and, when a change in the redundant resources is detected, the target scene is updated according to the changed redundant resources. This solves the technical problems of the existing approach, in which every resource in the target scene is traversed manually to determine whether redundant resources exist, resulting in high labor cost, low accuracy and an oversized downloaded resource installation package. By having the robot automatically detect the resources in the target scene and determine whether redundant resources exist, the technical effects of improving the efficiency and accuracy of resource detection are achieved.
Example 3
Fig. 3 is a schematic structural diagram of a scene detection device based on a robot according to a third embodiment of the present invention. As shown in fig. 3, the apparatus includes: a current resource acquisition module 310, a redundant resource acquisition module 320, and a target scenario resource update module 330.
The current resource acquisition module 310 is configured to detect, when a scene detection operation is triggered, each resource to be used in the target scene based on a preset virtual user, and take the detected resources as current resources corresponding to the target scene; the redundant resource acquisition module 320 is configured to determine redundant resources corresponding to the target scene based on the current resources and preset resources acquired in advance, and output the redundant resources in a preset format; and the target scene resource updating module 330 is configured to update the target scene based on the changed redundant resources when it is detected that the redundant resources have changed.
According to the technical scheme of this embodiment, when a scene detection operation is triggered, all resources to be used in a target scene are detected by a preset virtual user and taken as the current resources corresponding to the target scene; redundant resources corresponding to the target scene are determined from the current resources and preset resources acquired in advance, and are output in a preset format; and, when a change in the redundant resources is detected, the target scene is updated according to the changed redundant resources. This solves the technical problems of the existing approach, in which every resource in the target scene is traversed manually to determine whether redundant resources exist, resulting in high labor cost, low accuracy and an oversized downloaded resource installation package. By having the robot automatically detect the resources in the target scene and determine whether redundant resources exist, the technical effects of improving the efficiency and accuracy of resource detection are achieved.
On the basis of the above technical solution, the apparatus further includes a preset resource acquisition module, configured to, before the current resource acquisition module detects each resource in the target scene based on the preset virtual user and takes the detected resources as the current resources corresponding to the target scene:
acquire each resource in the target scene through a resource acquisition tool, mark each resource in a preset manner, and take each marked resource as a preset resource.
On the basis of the above technical solutions, the current resource acquisition module is further configured to:
traverse in parallel, based on at least one preset virtual user, each currently called resource to be used in the target scene, mark each resource to be used in a preset manner, and take each marked resource to be used as a current resource.
On the basis of the above technical solutions, the redundant resource acquisition module is further configured to:
screen out, from the preset resources, the resources whose identifiers differ from the current resource identifiers, according to the current resource identifiers in the current resources and the preset resource identifiers in the preset resources, and take the screened-out resources as redundant resources.
On the basis of the above technical solutions, the target scene resource updating module is further configured to:
when the deleting control in the triggering redundant resource is detected, acquiring the deleted redundant resource, and updating the target scene according to the deleted redundant resource.
On the basis of the above technical solutions, the target scene resource updating module is further configured to:
and updating each resource of the target scene according to the deleted redundant resource and the current resource.
It should be noted that each unit and module included in the above apparatus are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present invention.
Example 4
Fig. 4 is a schematic structural diagram of a terminal according to a fourth embodiment of the present invention. Fig. 4 shows a block diagram of an exemplary terminal 40 suitable for use in implementing the embodiments of the invention. The terminal 40 shown in fig. 4 is only an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present invention.
As shown in fig. 4, the terminal 40 is in the form of a general purpose computing device. The components of terminal 40 may include, but are not limited to: one or more processors or processing units 401, a system memory 402, and a bus 403 that connects the various system components (including the system memory 402 and the processing unit 401).
Bus 403 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Terminal 40 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by terminal 40 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 402 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 404 and/or cache memory 405. The terminal 40 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 406 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard drive"). Although not shown in fig. 4, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 403 through one or more data medium interfaces. Memory 402 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 408 having a set (at least one) of program modules 407 may be stored in, for example, memory 402, such program modules 407 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 407 generally perform the functions and/or methods of the described embodiments of the invention.
The terminal 40 can also communicate with one or more external devices 409 (e.g., keyboard, pointing device, display 410, etc.), one or more devices that enable a user to interact with the terminal 40, and/or any device (e.g., network card, modem, etc.) that enables the terminal 40 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 411. Also, terminal 40 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, via network adapter 412. As shown, network adapter 412 communicates with other modules of terminal 40 over bus 403. It should be appreciated that although not shown in fig. 4, other hardware and/or software modules may be used in connection with terminal 40, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 401 executes various functional applications and data processing by running a program stored in the system memory 402, for example, implements the robot-based scene detection method provided by the embodiment of the present invention.
Example 5
A fifth embodiment of the present invention also provides a storage medium containing computer-executable instructions for performing a robot-based scene detection method when executed by a computer processor.
The method comprises:
when a scene detection operation is triggered, detecting each resource to be used in a target scene based on a preset virtual user, and taking the detected resources as current resources corresponding to the target scene;
determining redundant resources corresponding to the target scene based on the current resources and preset resources acquired in advance, and outputting the redundant resources in a preset format;
and when it is detected that the redundant resources have changed, updating the target scene based on the changed redundant resources.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. A robot-based scene detection method, characterized in that the method is applied to a game client and comprises:
when a scene detection operation is triggered, detecting each resource to be used in a target scene based on a preset virtual user, and taking the detected resources as current resources corresponding to the target scene;
determining redundant resources corresponding to the target scene based on the current resources and preset resources acquired in advance, and outputting the redundant resources in a preset format;
when it is detected that the redundant resources have changed, updating the target scene based on the changed redundant resources;
wherein the virtual user is a robot for detecting each resource to be used contained in the target scene;
wherein detecting each resource to be used in the target scene based on the preset virtual user, and taking the detected resources as the current resources corresponding to the target scene, comprises:
based on at least one preset virtual user, traversing in parallel all currently called resources to be used in the target scene, marking each resource to be used in a preset manner, and taking the marked resources to be used as the current resources;
and wherein determining the redundant resources corresponding to the target scene based on the current resources and the preset resources acquired in advance comprises:
screening out, from the preset resources, the resources whose identifiers differ from the current resource identifiers, according to the current resource identifiers in the current resources and the preset resource identifiers in the preset resources, and taking the screened-out resources as the redundant resources.
2. The scene detection method according to claim 1, further comprising, before detecting each resource in the target scene based on the preset virtual user and taking the detected resources as the current resources corresponding to the target scene when the scene detection operation is triggered:
acquiring each resource in the target scene through a resource acquisition tool, marking each resource in a preset manner, and taking each marked resource as a preset resource.
3. The scene detection method according to claim 1, wherein updating the target scene based on the changed redundant resources when it is detected that the redundant resources have changed comprises:
when triggering of the deletion control for the redundant resources is detected, acquiring the deleted redundant resources, and updating the target scene according to the deleted redundant resources.
4. The scene detection method according to claim 3, wherein updating the target scene according to the deleted redundant resources comprises:
updating each resource of the target scene according to the deleted redundant resources and the current resources.
5. A robot-based scene detection device, configured in a game client and comprising:
a current resource acquisition module, configured to detect, when a scene detection operation is triggered, each resource to be used in a target scene based on a preset virtual user, and take the detected resources as current resources corresponding to the target scene;
a redundant resource acquisition module, configured to determine redundant resources corresponding to the target scene based on the current resources and preset resources acquired in advance, and output the redundant resources in a preset format;
a target scene resource updating module, configured to update the target scene based on the changed redundant resources when it is detected that the redundant resources have changed;
wherein the virtual user is a robot for detecting each resource to be used contained in the target scene;
the current resource acquisition module is further configured to: based on at least one preset virtual user, traverse in parallel all currently called resources to be used in the target scene, mark each resource to be used in a preset manner, and take the marked resources to be used as the current resources;
and the redundant resource acquisition module is further configured to: screen out, from the preset resources, the resources whose identifiers differ from the current resource identifiers, according to the current resource identifiers in the current resources and the preset resource identifiers in the preset resources, and take the screened-out resources as the redundant resources.
6. The scene detection device according to claim 5, further comprising:
the preset resource acquisition module is used for acquiring each resource in the target scene through the resource acquisition tool, marking each resource based on a preset mode, and taking each marked resource as a preset resource.
7. A terminal, the terminal comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the robot-based scene detection method of any of claims 1-4.
8. A storage medium containing computer executable instructions for performing the robot-based scene detection method of any of claims 1-4 when executed by a computer processor.
CN201911304382.6A 2019-12-17 2019-12-17 Scene detection method, device, terminal and medium based on robot Active CN111013156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911304382.6A CN111013156B (en) 2019-12-17 2019-12-17 Scene detection method, device, terminal and medium based on robot

Publications (2)

Publication Number Publication Date
CN111013156A CN111013156A (en) 2020-04-17
CN111013156B true CN111013156B (en) 2023-05-05

Family

ID=70210228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911304382.6A Active CN111013156B (en) 2019-12-17 2019-12-17 Scene detection method, device, terminal and medium based on robot

Country Status (1)

Country Link
CN (1) CN111013156B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677883A (en) * 2016-01-14 2016-06-15 网易(杭州)网络有限公司 Animation resource optimization method and device
CN108280009A (en) * 2017-12-25 2018-07-13 福建天晴数码有限公司 A kind of method and terminal of monitoring gridding resource
CN108434734A (en) * 2018-01-30 2018-08-24 网易(杭州)网络有限公司 Virtual resource processing method, device, terminal and storage medium in scene of game

Also Published As

Publication number Publication date
CN111013156A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN108089893B (en) Method and device for determining redundant resources, terminal equipment and storage medium
US10210076B2 (en) White box testing
CN110287696B (en) Detection method, device and equipment for rebound shell process
CN109471851B (en) Data processing method, device, server and storage medium
WO2016000546A1 (en) Method and device for checking influence of deletion of cache file, and mobile terminal
US20150242380A1 (en) Checking testing coverage
CN110704062A (en) Dependency management method, data acquisition method, device and equipment
CN111124371A (en) Game-based data processing method, device, equipment and storage medium
CN110597704B (en) Pressure test method, device, server and medium for application program
CN114282752A (en) Method and device for generating flow task, electronic equipment and storage medium
CN112040312A (en) Split-screen rendering method, device, equipment and storage medium
CN115048254B (en) Simulation test method, system, equipment and readable medium for data distribution strategy
CN108460161B (en) Hierarchical sampling method and device and computer equipment
CN108845956A (en) The method and apparatus of Application testing
CN111013156B (en) Scene detection method, device, terminal and medium based on robot
CN115022201B (en) Data processing function test method, device, equipment and storage medium
CN110457705B (en) Method, device, equipment and storage medium for processing point of interest data
CN113342632A (en) Simulation data automatic processing method and device, electronic equipment and storage medium
CN109948251B (en) CAD-based data processing method, device, equipment and storage medium
CN113961835A (en) Data processing method and device, electronic equipment and storage medium
CN104021071A (en) Method and system for obtaining process lifecycles
CN109062797B (en) Method and device for generating information
CN111309583B (en) Interface overdrawing detection method, device, medium and computing equipment
CN111552956A (en) Role authority control method and device for background management
CN113127312A (en) Method and device for testing database performance, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant