CN111957040A - Method and device for detecting shielding position, processor and electronic device

Info

Publication number: CN111957040A (granted as CN111957040B)
Application number: CN202010929105.0A
Authority: CN (China)
Inventors: 姜斌, 王嘉恒
Assignee: Netease Hangzhou Network Co Ltd
Original language: Chinese (zh)
Legal status: Granted; Active
Prior art keywords: virtual character, coordinate point, standable, game, virtual

Classifications

    • A63F: Card, board, or roulette games; indoor games using small moving playing bodies; video games; games not otherwise provided for
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/53: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537: Using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5372: Using indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/822: Strategy games; Role-playing games
    • A63F 2300/308: Details of the user interface
    • A63F 2300/807: Role playing or strategy games

Abstract

The invention discloses a method and a device for detecting a shielding (occlusion) position, a processor and an electronic device. The method comprises: obtaining, from a game map, at least one coordinate point at which a virtual character can stand; acquiring an image to be compared corresponding to each standable coordinate point, wherein the image to be compared records the game picture displayed when the virtual character is located at that standable coordinate point; acquiring, as a reference picture, the game picture displayed when the virtual character is located at an unobstructed position in the game scene; and determining, based on the image to be compared and the reference picture, whether the virtual character is occluded at the standable coordinate point. The invention solves the technical problem in the related art that manually traversing a game map to find the positions of obstructions consumes excessive time and labor and easily leads to omissions.

Description

Method and device for detecting shielding position, processor and electronic device
Technical Field
The invention relates to the field of computers, in particular to a method and a device for detecting a shielding position, a processor and an electronic device.
Background
An Action Role-Playing Game (ARPG) is generally a game in which the player controls a specific virtual character and acts within a real or fictitious world, with the actions of the player-controlled virtual character being especially prominent. Quality Assurance (QA) personnel are generally responsible for providing sufficient confidence that an entity can meet quality requirements, carrying out all planned and systematic activities within the quality management system and verifying them as needed.
At present, in the ARPG games provided in the related art, the virtual character is inevitably blocked by obstructions arranged in the game scene while walking. A common solution to such occlusion is to modify the obstruction in the game scene so that it no longer blocks the virtual character while the virtual character walks. With this solution, obstructions need to be modified to different extents so that the game player can observe the virtual character as clearly as possible without it being blocked by obstructions in the game scene. Before any modification can be made, the positions of the obstructions that cause the occlusion need to be found.
A common way to find these occlusion positions is as follows: a tester manually controls a virtual character to traverse every position on the game map, judges by eye which positions cause occlusion, saves each occlusion position to a specific file by taking a screenshot, and then continues to the next position until all positions on the game map have been traversed, at which point the final traversal result is obtained.
However, this mode of operation has obvious drawbacks: it consumes excessive time and labor, and manual searching easily misses occlusion positions, so that an accurate result is difficult to obtain even at great cost. In addition, if the artists make any modification to the game scene, QA staff need to traverse the entire game map again to re-determine the occlusion positions, repeatedly consuming time and labor.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
At least some embodiments of the invention provide a method, a device, a processor and an electronic device for detecting an occlusion position, so as to at least solve the technical problem in the related art that manually traversing a game map to find the positions of obstructions consumes excessive time and labor and easily leads to omissions.
According to an embodiment of the present invention, a method for detecting an occlusion position is provided, in which a terminal device provides a graphical user interface, the graphical user interface at least partially includes a game scene, and the game scene includes a game map and a virtual character, the method including:
obtaining a coordinate point where at least one virtual character can stand from a game map; acquiring images to be compared corresponding to the standable coordinate points, wherein the images to be compared are used for recording game pictures when the virtual character is located at the standable coordinate points; acquiring a game picture when the virtual character is positioned at a non-shielding position in a game scene as a reference; whether the position of the virtual character at the standable coordinate point is occluded is determined based on the image to be compared and the reference picture.
Optionally, obtaining a coordinate point where at least one virtual character can stand from the game map includes: obtaining a size parameter of the game map, a distance parameter, a starting coordinate point and a preset routing map, wherein the distance parameter represents the distance between two adjacent coordinate points in the game map, the starting coordinate point represents the starting position of the virtual character in the game map, and the routing map records the positions where the virtual character can walk in the game map; acquiring a plurality of coordinate points to be selected in the game map by using the size parameter, the distance parameter and the starting coordinate point; and selecting a standable coordinate point from the plurality of coordinate points to be selected based on the routing map.
Optionally, acquiring a plurality of coordinate points to be selected in the game map by using the size parameter, the distance parameter, and the start coordinate point includes: determining a traversal range in the game map by adopting the size parameter and the initial coordinate point; and starting from the initial coordinate point, acquiring a plurality of coordinate points to be selected in the traversal range according to the distance parameter.
Optionally, selecting a standable coordinate point from the plurality of coordinate points to be selected based on the routing map includes: respectively determining whether each of the plurality of coordinate points to be selected is located in a walkable area represented by the routing map; and recording the coordinate points located within the walkable area as standable coordinate points.
Optionally, the starting coordinate point is a standable coordinate point.
Optionally, acquiring an image to be compared corresponding to the standable coordinate point includes: starting from the initial coordinate point, sequentially traversing each standable coordinate point; and sequentially carrying out screenshot on the game picture when the virtual character traverses to each standable coordinate point to obtain an image to be compared corresponding to each standable coordinate point.
Optionally, the obtaining a game screen when the virtual character is located at an unobstructed position in the game scene as a reference includes: hiding other resources except the virtual role in the game scene; and capturing a game picture of the game scene which is subjected to the hiding processing and contains the virtual character to obtain a reference picture.
Optionally, determining whether the position of the virtual character at the standable coordinate point is occluded based on the image to be compared and the reference picture includes: acquiring a first virtual frame surrounding a virtual role in a reference picture; determining the pixel proportion of the virtual character in the first virtual frame in the reference as the reference proportion; acquiring a second virtual frame surrounding the virtual character in the image to be compared, wherein the size of the second virtual frame is the same as that of the first virtual frame; determining the pixel proportion of the virtual character in the second virtual frame in the image to be compared as a target proportion; and determining whether the position of the virtual character at the standable coordinate point is blocked or not according to the target ratio and the reference ratio.
Optionally, determining whether the position of the virtual character at the standable coordinate point is blocked according to the target proportion and the reference proportion includes: determining whether the difference between the reference ratio and the target ratio is less than a preset threshold; if the difference value between the reference occupation ratio and the target occupation ratio is smaller than a preset threshold value, determining that the virtual character is not shielded at the position of the standable coordinate point; and if the difference value between the reference occupation ratio and the target occupation ratio is greater than or equal to a preset threshold value, determining that the virtual character is shielded at the position of the standable coordinate point.
Optionally, determining a pixel proportion of the virtual character in the first virtual frame in reference to the reference proportion includes: rendering the virtual character into a target color; and calculating the ratio of the number of the pixels of the target color in the reference picture to the total number of the pixels contained in the first virtual frame to obtain the reference ratio.
Optionally, determining a pixel proportion of the virtual character in the second virtual frame in the image to be compared, as a target proportion, includes: rendering the virtual character into a target color; and calculating the ratio of the number of the pixels of the target color in the image to be compared to the total number of the pixels contained in the second virtual frame to obtain the target ratio.
According to an embodiment of the present invention, there is also provided an apparatus for detecting an occlusion position, where a terminal device provides a graphical user interface, the graphical user interface at least partially includes a game scene, and the game scene includes a game map and a virtual character, the apparatus including:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a coordinate point where at least one virtual character can stand from a game map; acquiring images to be compared corresponding to the standable coordinate points, wherein the images to be compared are used for recording game pictures when the virtual character is located at the standable coordinate points; acquiring a game picture when the virtual character is positioned at a non-shielding position in a game scene as a reference; and the determining module is used for determining whether the position of the virtual character at the standable coordinate point is blocked or not based on the image to be compared and the reference picture.
Optionally, the obtaining module is configured to obtain a size parameter, a distance parameter, a starting coordinate point, and a preset routing graph of the game map, where the distance parameter is used to represent a distance between two adjacent coordinate points in the game map, the starting coordinate point represents a starting position of the virtual character in the game map, and the routing graph is used to record position information that the virtual character can walk in the game map; acquiring a plurality of coordinate points to be selected in a game map by adopting the size parameter, the distance parameter and the initial coordinate point; a standable coordinate point is selected from a plurality of coordinate points to be selected based on a routing map.
Optionally, the obtaining module is configured to determine a traversal range in the game map by using the size parameter and the starting coordinate point; and starting from the initial coordinate point, acquiring a plurality of coordinate points to be selected in the traversal range according to the distance parameter.
Optionally, the obtaining module is configured to determine whether each coordinate point in the multiple coordinate points to be selected is located in a walkable area represented by the routing graph; coordinate points located within the walkable region are recorded as standable coordinate points.
Optionally, the starting coordinate point is a standable coordinate point.
Optionally, the obtaining module is configured to sequentially traverse each standable coordinate point from the start coordinate point; and sequentially carrying out screenshot on the game picture when the virtual character traverses to each standable coordinate point to obtain an image to be compared corresponding to each standable coordinate point.
Optionally, the obtaining module is configured to hide other resources except the virtual character in the game scene; and capturing a game picture of the game scene which is subjected to the hiding processing and contains the virtual character to obtain a reference picture.
Optionally, the determining module is configured to obtain a first virtual frame surrounding the virtual character in the reference picture; determining the pixel proportion of the virtual character in the first virtual frame in the reference as the reference proportion; acquiring a second virtual frame surrounding the virtual character in the image to be compared, wherein the size of the second virtual frame is the same as that of the first virtual frame; determining the pixel proportion of the virtual character in the second virtual frame in the image to be compared as a target proportion; and determining whether the position of the virtual character at the standable coordinate point is blocked or not according to the target ratio and the reference ratio.
Optionally, the determining module is configured to determine whether a difference between the reference ratio and the target ratio is smaller than a preset threshold; if the difference value between the reference occupation ratio and the target occupation ratio is smaller than a preset threshold value, determining that the virtual character is not shielded at the position of the standable coordinate point; and if the difference value between the reference occupation ratio and the target occupation ratio is greater than or equal to a preset threshold value, determining that the virtual character is shielded at the position of the standable coordinate point.
Optionally, the determining module is configured to render the virtual character into a target color; and calculating the ratio of the number of the pixels of the target color in the reference picture to the total number of the pixels contained in the first virtual frame to obtain the reference ratio.
Optionally, the determining module is configured to render the virtual character into a target color; and calculating the ratio of the number of the pixels of the target color in the image to be compared to the total number of the pixels contained in the second virtual frame to obtain the target ratio.
According to an embodiment of the present invention, there is further provided a non-volatile storage medium, in which a computer program is stored, where the computer program is configured to execute the method for detecting an occlusion position in any one of the above methods when the computer program runs.
There is further provided, according to an embodiment of the present invention, a processor for executing a program, where the program is configured to execute, when running, the method for detecting an occlusion position in any one of the above methods.
There is further provided, according to an embodiment of the present invention, an electronic apparatus including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for detecting an occlusion position in any one of the above.
In at least some embodiments of the invention, at least one coordinate point where the virtual character can stand is obtained from the game map; an image to be compared corresponding to each standable coordinate point is acquired, the image to be compared recording the game picture displayed when the virtual character is located at that standable coordinate point; and the game picture displayed when the virtual character is located at an unobstructed position in the game scene is acquired as a reference picture. Whether the virtual character is occluded at a standable coordinate point is then determined from the image to be compared and the reference picture. The purpose of replacing manual detection of occlusion positions in a game scene with automatic detection is thereby achieved, which reduces the repeated labor of QA personnel manually traversing the game map, reduces the time and labor cost of finding the positions of obstructions, and improves the accuracy and completeness of detecting those positions. This in turn solves the technical problem in the related art that manually traversing a game map to find the positions of obstructions consumes excessive time and labor and easily leads to omissions.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method of detecting an occlusion position according to one embodiment of the invention;
FIG. 2 is a schematic illustration of determining a reference ratio in accordance with an alternative embodiment of the present invention;
FIG. 3 is a schematic illustration of determining an occlusion position in accordance with an alternative embodiment of the present invention;
fig. 4 is a block diagram of an apparatus for detecting an occlusion position according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with one embodiment of the present invention, there is provided an embodiment of a method for detecting an occlusion position, it should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The method embodiments may be performed in a mobile terminal, a computer terminal or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID) or a PAD. The mobile terminal may include one or more processors (which may include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microcontroller Unit (MCU), a Field-Programmable Gate Array (FPGA), a Neural Network Processor (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, etc.) and a memory for storing data. Optionally, the mobile terminal may further include a transmission device, an input/output device and a display device for communication functions. It will be understood by those skilled in the art that the foregoing structural description is only illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than described above, or have a different configuration than described above.
The memory may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the method for detecting an occlusion position in the embodiment of the present invention, and the processor executes various functional applications and data processing by running the computer program stored in the memory, that is, implements the method for detecting an occlusion position described above. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display device may be, for example, a touch screen type Liquid Crystal Display (LCD) and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human-machine interaction function optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable storage media.
In this embodiment, a method for detecting an occlusion position, which may run on the above mobile terminal, is provided. A terminal device provides a graphical user interface, the graphical user interface at least partially includes a game scene, and the game scene includes a game map and a virtual character. Fig. 1 is a flowchart of a method for detecting an occlusion position according to an embodiment of the present invention. As shown in fig. 1, the method includes the following steps:
step S10, obtaining at least one coordinate point where the virtual character can stand from the game map;
Virtual scene resources such as virtual mountains and virtual rivers are usually placed in a game map; these scene resources mainly serve to enrich and beautify the display of the game scene. In addition, a map boundary is usually set in the game map to limit the maximum range of activity of the virtual character. The areas occupied by such scene resources are therefore often areas that the virtual character cannot reach, i.e., not every area on the game map has a passable path. Consequently, when determining occlusion positions, the areas that are unreachable by the virtual character can be excluded first, so that at least one coordinate point where the virtual character can stand is selected within the walkable area of the virtual character. To select the standable coordinate points, the whole game map can be partitioned according to preset parameters to obtain the coordinate points at which the virtual character is allowed to stand. The specific selection of the standable coordinate points is described in further detail in the following optional embodiments.
Step S12, acquiring an image to be compared corresponding to the standable coordinate point, wherein the image to be compared is used for recording a game picture when the virtual character is located at the standable coordinate point;
at each of the standable coordinate points, a game screen (i.e., an image to be compared) displayed in the graphical user interface corresponding to each of the standable coordinate points may be acquired, for example, by a game screen capture method. The coordinate point where the virtual character is currently located is contained in the game picture. If the coordinate point where the virtual character is currently located has a virtual obstruction, the current game picture also contains the virtual obstruction.
Step S14, obtaining the game picture when the virtual character is located at the position without occlusion in the game scene as the reference;
step S16, it is determined whether the virtual character is occluded at the position of the standable coordinate point based on the image to be compared and the reference picture.
In a game scene, there are various situations, such as a virtual character not being blocked by a virtual blocking object, a virtual character being partially blocked by a virtual blocking object, and a virtual character being completely blocked by a virtual blocking object. If the virtual character is completely occluded by the virtual occlusion, the standable coordinate point can be determined to belong to the occlusion position. If the virtual character is partially occluded by the virtual occlusion, it is further determined whether the virtual character is occluded at the position of the standable coordinate point based on the image to be compared and the reference map.
Through the above steps, at least one coordinate point where the virtual character can stand is obtained from the game map; an image to be compared corresponding to each standable coordinate point is acquired, the image to be compared recording the game picture displayed when the virtual character is located at that standable coordinate point; and the game picture displayed when the virtual character is located at an unobstructed position in the game scene is acquired as a reference picture. Whether the virtual character is occluded at a standable coordinate point can then be determined from the image to be compared and the reference picture. This achieves the purpose of replacing manual detection of occlusion positions in the game scene with automatic detection, which reduces the repeated labor of QA personnel manually traversing the game map, reduces the time and labor cost of finding the positions of obstructions, and improves the accuracy and completeness of detecting those positions, thereby solving the technical problem in the related art that manually traversing a game map to find the positions of obstructions consumes excessive time and labor and easily leads to omissions.
Alternatively, in step S10, the step of obtaining a standable coordinate point from the game map may include the steps of:
step S101, acquiring a size parameter, a distance parameter, an initial coordinate point and a preset routing graph of a game map, wherein the distance parameter is used for representing the distance between two adjacent coordinate points in the game map, the initial coordinate point represents the initial position of a virtual character in the game map, and the routing graph is used for recording the walking position information of the virtual character in the game map;
step S102, obtaining a plurality of coordinate points to be selected in a game map by adopting the size parameter, the distance parameter and the initial coordinate point;
step S103, selecting a standable coordinate point from a plurality of coordinate points to be selected based on the routing map.
The distance parameter represents the distance between two adjacent coordinate points in the game map. The smaller the distance between coordinate points, the finer the resulting detection result, but the longer the detection takes. The distance parameter is therefore typically an empirical value; for example, half the width of the virtual character is a suitable choice, so that the virtual character is checked as it moves across the game map in half-body-width steps, which avoids excessive detection time while leaving as few coordinate points unchecked as possible. The starting coordinate point is the coordinate point corresponding to the starting position of the virtual character in the game map (for example, the game birth point). The routing map records the positions where the virtual character can walk in the game map (for example, it is used to control the virtual character to move automatically to a specific coordinate point and to find the best path between two adjacent coordinate points); in other words, it records all the passable-area information in the game map.
Therefore, the size parameter, the distance parameter, the starting coordinate point and the preset routing map of the game map can first be obtained; a plurality of coordinate points to be selected are then obtained in the game map by using the size parameter, the distance parameter and the starting coordinate point; these candidate points are matched and verified against the routing map to obtain a verification result; and finally, the standable coordinate points are selected from the candidate points according to the verification result and recorded.
Alternatively, in step S102, acquiring a plurality of coordinate points to be selected in the game map by using the size parameter, the distance parameter, and the start coordinate point may include performing the steps of:
step S1021, determining a traversal range in the game map by adopting the size parameter and the initial coordinate point;
step S1022, starting from the initial coordinate point, a plurality of coordinate points to be selected are obtained in the traversal range according to the distance parameter.
The size parameter limits the maximum movement range of the virtual character, and the starting coordinate point defines the initial position of the virtual character on the game map. Starting from the initial position, the virtual character can therefore be controlled to traverse coordinate points in the horizontal and vertical directions according to the distance parameter, and a plurality of coordinate points to be selected are obtained through this traversal.
Assuming the game map has width w and height h, the distance parameter is d, and the coordinates of the birth point are (x, y), the minimum coordinate cannot be smaller than (x - w, y - h) and the maximum coordinate cannot be larger than (x + w, y + h). During the traversal, all coordinate points (X, Y) spaced by the distance parameter d are obtained, where X ranges over [x - w, x + w] and Y ranges over [y - h, y + h], so that a plurality of coordinate points to be selected are obtained.
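For illustration only, the following Python sketch shows how the coordinate points to be selected could be enumerated from the map width w, the map height h, the distance parameter d and the birth point (x, y) described above; the function name and the nested-loop structure are assumptions and are not part of the claimed method.

    # Illustrative sketch (assumption): enumerate candidate coordinate points on a grid
    # of spacing d inside the traversal range centred on the birth point (x, y).
    def candidate_points(x, y, w, h, d):
        points = []
        cx = x - w
        while cx <= x + w:
            cy = y - h
            while cy <= y + h:
                points.append((cx, cy))
                cy += d
            cx += d
        return points

For a map of width w and height h, this yields roughly (2w/d + 1) * (2h/d + 1) candidate points, which is why the distance parameter is chosen as a compromise between precision and detection time.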
Alternatively, in step S103, selecting a standable coordinate point from a plurality of coordinate points to be selected based on the routing map may include performing the steps of:
step S1031, respectively determining whether each of the plurality of coordinate points to be selected is located in a walkable area represented by the routing map;
in step S1032, a coordinate point located in the walkable region is recorded as a standable coordinate point.
After the plurality of coordinate points to be selected are obtained through traversal, they need to be matched and verified against the routing map, in order to verify whether each of the coordinate points to be selected is located in the walkable area represented by the routing map. The coordinate points to be selected that are located in the walkable area are then recorded as the standable coordinate points. In an alternative embodiment, the descriptive information (e.g., identification information and position information) of the plurality of standable coordinate points may be stored as a list.
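The matching and recording step can be pictured with the following sketch, in which is_walkable is a hypothetical query on the routing map; the actual interface of the routing map is engine-specific and is not prescribed by this embodiment.

    # Illustrative sketch (assumption): keep only the candidate coordinate points that
    # fall inside the walkable area recorded by the routing map, and store them as a list.
    def standable_points(candidates, routing_map):
        standable = []
        for point in candidates:
            if routing_map.is_walkable(point):  # hypothetical walkable-area query
                standable.append(point)
        return standable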
Alternatively, in step S12, acquiring the image to be compared corresponding to the standable coordinate point may include performing the steps of:
step S121, starting from the initial coordinate point, sequentially traversing each standable coordinate point;
and step S122, sequentially capturing the game pictures when the virtual character traverses to each standable coordinate point, and obtaining the images to be compared corresponding to each standable coordinate point.
The starting coordinate point is itself a standable coordinate point. When acquiring the images to be compared for the plurality of standable coordinate points, the standable coordinate points can be traversed sequentially, starting from the starting coordinate point of the virtual character, according to the descriptive information recorded in the list. A screenshot is taken of the game picture at each standable coordinate point the virtual character traverses, thereby obtaining the image to be compared corresponding to each standable coordinate point. In an alternative embodiment, the virtual character may move from the current coordinate point to the next coordinate point within the duration of a single frame. A screen-capture tool therefore captures the current game picture displayed in the graphical user interface every frame, yielding a game screenshot for each standable coordinate point of the virtual character; from these screenshot results it is determined whether the virtual character is occluded by an obstruction at each standable coordinate point.
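As a rough sketch of the traversal and screenshot procedure, assuming hypothetical move_character_to and take_screenshot helpers exposed by the game client under test:

    # Illustrative sketch (assumption): move the virtual character to each standable
    # coordinate point in turn and capture the game picture displayed there.
    def capture_images_to_compare(game, standable):
        images = {}
        for point in standable:
            game.move_character_to(point)           # hypothetical pathfinding/teleport call
            images[point] = game.take_screenshot()  # hypothetical screen-capture call
        return images                               # image to be compared per coordinate point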
Alternatively, in step S14, the method for acquiring a game screen when the virtual character is located at a non-blocking position in the game scene may include the following steps:
step S141, hiding other resources except the virtual character in the game scene;
step S142, capturing a screenshot of the game screen of the game scene that is hidden and includes the virtual character, to obtain a reference image.
In the graphical user interface, the reference picture can be obtained by hiding all resources in the game scene other than the virtual character and then capturing the game picture of the hidden-processed game scene that still contains the virtual character. At this point, the reference picture contains only the virtual character.
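A minimal sketch of how the reference picture could be captured, again assuming hypothetical helpers on the game client for hiding scene resources and taking screenshots:

    # Illustrative sketch (assumption): hide every scene resource except the virtual
    # character, capture the reference picture, then restore the scene.
    def capture_reference(game):
        game.hide_all_except_character()  # hypothetical call hiding the other resources
        reference = game.take_screenshot()
        game.restore_scene()              # hypothetical call restoring the hidden resources
        return reference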
Alternatively, in step S16, determining whether the position of the virtual character at the standable coordinate point is occluded based on the image to be compared and the reference picture may include performing the steps of:
step S161, acquiring a first virtual frame surrounding the virtual character in the reference picture;
step S162, determining the pixel proportion of the virtual character in the first virtual frame in the reference picture as the reference proportion;
step S163, acquiring a second virtual frame surrounding the virtual character in the image to be compared, wherein the second virtual frame and the first virtual frame have the same size;
step S164, determining the pixel proportion of the virtual character in the second virtual frame in the image to be compared as a target proportion;
and step S165, determining whether the position of the virtual character at the standable coordinate point is shielded or not according to the target ratio and the reference ratio.
A first virtual frame surrounding the virtual character (i.e., its bounding-box outline) can be obtained in the reference picture, and a second virtual frame surrounding the virtual character can be obtained in the image to be compared. The second virtual frame has the same size as the first virtual frame; that is, the same virtual frame is used in both pictures, and it is referred to as the first virtual frame in the reference picture and as the second virtual frame in the image to be compared. The pixel proportion of the virtual character within the first virtual frame in the reference picture (i.e., the reference ratio) and the pixel proportion of the virtual character within the second virtual frame in the image to be compared (i.e., the target ratio) can then be determined respectively. Finally, whether the virtual character is occluded at the standable coordinate point is determined from the target ratio and the reference ratio.
The reference ratio only needs to be calculated once during the detection of occlusion positions in a game map. When calculating the reference ratio, it must be ensured that the virtual character is not occluded by any obstruction; in other words, the proportion of pixels of the target color within the bounding-box rectangle of the virtual character is taken as the normal display proportion of the virtual character. For any standable coordinate point at which the occlusion relationship changes, if the virtual character is occluded by any obstruction, the proportion of pixels of the target color will be smaller than the reference ratio.
The first virtual frame and the second virtual frame may be the bounding-box outline of the virtual character or the boundary of the entire graphical user interface.
Alternatively, in step S165, determining whether the position of the virtual character at the standable coordinate point is occluded according to the target ratio and the reference ratio may include performing the following steps:
step S1651, determining whether the difference between the reference ratio and the target ratio is less than a preset threshold;
step S1652, if the difference between the reference ratio and the target ratio is smaller than the preset threshold, determining that the virtual character is not occluded at the position of the standable coordinate point;
step S1653, if the difference between the reference ratio and the target ratio is greater than or equal to the preset threshold, determining that the virtual character is occluded at the position of the standable coordinate point.
In the process of determining whether the position of the virtual character at the standable coordinate point is blocked according to the target ratio and the reference ratio, firstly, a difference value between the target ratio and the reference ratio needs to be calculated, and then, whether the difference value between the reference ratio and the target ratio is smaller than a preset threshold value is determined. And when the difference value of the reference occupation ratio and the target occupation ratio is smaller than a preset threshold value, determining that the virtual character is not shielded at the position of the standable coordinate point. And when the difference value of the reference occupation ratio and the target occupation ratio is larger than or equal to a preset threshold value, determining that the position of the virtual character at the standable coordinate point is shielded.
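The threshold comparison itself reduces to a single inequality; the sketch below is only an illustration of the decision rule described above, with the threshold supplied as a parameter.

    # Illustrative sketch: a standable coordinate point is treated as an occlusion
    # position when the target ratio falls short of the reference ratio by at least
    # the preset threshold.
    def is_occluded(reference_ratio, target_ratio, threshold):
        return (reference_ratio - target_ratio) >= threshold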
Alternatively, in step S162, determining a pixel proportion of the virtual character in the first virtual frame in the reference image may include the following steps:
step S1621, rendering the virtual character into a target color;
in step S1622, a ratio of the number of pixels of the target color in the reference map to the total number of pixels included in the first virtual frame is calculated to obtain a reference ratio.
Because the virtual character carries its own art textures, its colors are not uniform, which makes it difficult to judge the display position of the virtual character on the graphical user interface directly. For this reason, a target color that is rarely used in the art design of the game scene (e.g., pure red) is deliberately selected, so that when the virtual character is displayed at a standable coordinate point, the whole body of the virtual character can be rendered in the target color (e.g., pure red) through a shader applied to the virtual character. The ratio of the number of pixels of the target color in the reference picture to the total number of pixels contained in the first virtual frame is then calculated to obtain the reference ratio.
Specifically, color statistics can be performed over all pixels in the area (e.g., a rectangular area) occupied in the graphical user interface by the first virtual frame (i.e., the bounding-box outline) surrounding the virtual character in the reference picture. Given the nature of an ARPG game, the virtual character is typically centered in the graphical user interface and remains relatively fixed in position, so the size and position of this rectangular area may be predetermined. The ratio of the number of pixels of the target color in the reference picture to the total number of pixels contained in the first virtual frame then gives the reference ratio.
In an alternative embodiment, fig. 2 is a schematic diagram of determining the reference ratio according to an alternative embodiment of the present invention. As shown in fig. 2, the shader on the virtual character is first modified so that the whole body of the virtual character is displayed in the target color (shown as black in the figure; in practice pure red). Secondly, in order to eliminate interference from the game scene, the game scene resources other than the virtual character are hidden in the graphical user interface to obtain a target game picture. A screenshot of the target game picture is then taken, and the first virtual frame (the dotted box in the figure) at the position of the virtual character is selected to calculate the proportion of target-color pixels, which is used as the reference ratio. For example, dividing the number of pixels contained in the area occupied by the outline of the virtual character by the number of all pixels in the first virtual frame gives a reference ratio of 50%.
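The pixel statistics can be sketched as follows; the rectangle coordinates, the pure-red target color and the PIL-style getpixel access are assumptions used only for illustration. The same helper can be applied to the reference picture with the first virtual frame to obtain the reference ratio, and to each image to be compared with the second virtual frame to obtain the target ratio.

    # Illustrative sketch (assumption): count the target-colour pixels inside a virtual
    # frame of a screenshot and divide by the total number of pixels in that frame.
    def colour_ratio(image, frame, target=(255, 0, 0)):
        left, top, right, bottom = frame            # virtual frame as a pixel rectangle
        matched = 0
        total = (right - left) * (bottom - top)
        for px in range(left, right):
            for py in range(top, bottom):
                if image.getpixel((px, py)) == target:  # e.g. a PIL-style image object
                    matched += 1
        return matched / total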
Alternatively, in step S164, determining the pixel proportion of the virtual character in the second virtual frame in the image to be compared as the target ratio may include the following steps:
step S1641, rendering the virtual character in the target color;
step S1642, calculating the ratio of the number of pixels of the target color in the image to be compared to the total number of pixels contained in the second virtual frame, to obtain the target ratio.
As described above, a target color that is rarely used in the art design of the game scene (for example, pure red) is selected, so that when the virtual character is displayed at a standable coordinate point, the whole body of the virtual character can be rendered in the target color (for example, pure red) through the shader on the virtual character. The ratio of the number of pixels of the target color in the image to be compared to the total number of pixels contained in the second virtual frame is then calculated to obtain the target ratio.
It should be noted that a virtual obstruction in the game scene may happen to be displayed in the same color as the target color. In that case the target ratio calculated above may be skewed, so the detection result for the occlusion position may lack accuracy. To handle this special case, the target color may be updated (for example, to pure yellow), and the ratio of the number of pixels of the new target color in the image to be compared to the total number of pixels contained in the second virtual frame is recalculated to obtain the target ratio again. The difference between the target ratio and the reference ratio is then recalculated and compared with the preset threshold, thereby determining whether the virtual character is occluded at the standable coordinate point.
Fig. 3 is a schematic diagram of determining an occlusion position according to an alternative embodiment of the present invention. As shown in fig. 3, for the screenshot corresponding to each standable coordinate point of the virtual character, the pixel proportion of the virtual character within the second virtual frame (i.e., the target ratio) is calculated. The difference between the reference ratio and the target ratio is then calculated and compared with the preset threshold. If the difference between the reference ratio and the target ratio is smaller than the preset threshold, the virtual character is determined not to be occluded at the standable coordinate point; if the difference is greater than or equal to the preset threshold, the virtual character is determined to be occluded at the standable coordinate point, and the occlusion position to be recorded is thus obtained. In fig. 3, part of the target-color region of the virtual character is blocked by a virtual tree resource. In this case, dividing the number of pixels in the remaining, unblocked part of the area occupied by the outline of the virtual character by the number of all pixels in the rectangular area gives a target ratio of 40%. Subtracting the 40% target ratio from the 50% reference ratio shows that 10% of the virtual character is blocked by the virtual tree resource. If the preset threshold is 10%, the standable coordinate point where the virtual character currently stands is recorded as an occlusion position, and the game screenshot and the coordinate position corresponding to that occlusion position are recorded, which makes secondary confirmation convenient and helps the art staff modify the display content of the game scene. In addition, since the distance parameter and the occlusion threshold parameter can both be controlled parametrically, they can easily be set individually for different types of game maps, thereby striking a balance between accuracy and artistic effect.
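Putting numbers to the example above, a short self-contained check reproduces the decision for this coordinate point; the values 0.50, 0.40 and 0.10 come directly from the scenario of fig. 3.

    # Illustrative worked example: reference ratio 50%, target ratio 40%, threshold 10%.
    reference_ratio = 0.50   # unobstructed proportion from the reference picture
    target_ratio = 0.40      # proportion measured at the current standable coordinate point
    threshold = 0.10         # preset occlusion threshold
    if (reference_ratio - target_ratio) >= threshold:
        # 0.50 - 0.40 = 0.10 >= 0.10, so this point is recorded as an occlusion position
        print("occlusion position: record the screenshot and the coordinate point")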
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a device for detecting a shielding position is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and the description of the device that has been already made is omitted. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 4 is a block diagram of a structure of an apparatus for detecting an occlusion position according to an embodiment of the present invention, in which a terminal device provides a graphical user interface, the graphical user interface at least partially includes a game scene, and the game scene includes a game map and a virtual character, as shown in fig. 4, the apparatus includes: an obtaining module 10, configured to obtain a coordinate point where at least one virtual character can stand from a game map; acquiring images to be compared corresponding to the standable coordinate points, wherein the images to be compared are used for recording game pictures when the virtual character is located at the standable coordinate points; acquiring a game picture when the virtual character is positioned at a non-shielding position in a game scene as a reference; a determination module 20 for determining whether the virtual character is occluded at the position of the standable coordinate point based on the image to be compared and the reference picture.
Optionally, the obtaining module 10 is configured to obtain a size parameter, a distance parameter, a starting coordinate point, and a preset routing map of the game map, wherein the distance parameter is used to represent the distance between two adjacent coordinate points in the game map, the starting coordinate point represents the starting position of the virtual character in the game map, and the routing map is used to record the positions in the game map where the virtual character can walk; obtain a plurality of coordinate points to be selected in the game map by using the size parameter, the distance parameter, and the starting coordinate point; and select the standable coordinate points from the plurality of coordinate points to be selected based on the routing map.
Optionally, the obtaining module 10 is configured to determine a traversal range in the game map by using the size parameter and the starting coordinate point, and, starting from the starting coordinate point, obtain the plurality of coordinate points to be selected within the traversal range according to the distance parameter.
Optionally, the obtaining module 10 is configured to determine, for each of the plurality of coordinate points to be selected, whether the coordinate point is located in the walkable area represented by the routing map, and to record the coordinate points located within the walkable area as standable coordinate points.
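As a rough, non-authoritative sketch of how the obtaining module might enumerate candidate points and then filter them by the routing map, assuming for simplicity that the walkable area can be represented as a set of grid points (all names and numbers below are hypothetical):

```python
from typing import List, Set, Tuple

Point = Tuple[float, float]

def candidate_points(map_size: Tuple[float, float], start: Point, step: float) -> List[Point]:
    """Lay a grid of candidate points over the map, `step` apart, beginning at the
    starting coordinate point (a sketch of the traversal range)."""
    width, height = map_size
    points: List[Point] = []
    y = start[1]
    while y <= height:
        x = start[0]
        while x <= width:
            points.append((x, y))
            x += step
        y += step
    return points

def standable_points(candidates: List[Point], walkable: Set[Point]) -> List[Point]:
    """Keep only the candidates that fall inside the walkable area recorded by the routing map."""
    return [p for p in candidates if p in walkable]

# Tiny usage example with made-up numbers.
grid = candidate_points(map_size=(4.0, 4.0), start=(0.0, 0.0), step=2.0)
print(standable_points(grid, walkable={(0.0, 0.0), (2.0, 2.0)}))  # [(0.0, 0.0), (2.0, 2.0)]
```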
Optionally, the starting coordinate point is a standable coordinate point.
Optionally, the obtaining module 10 is configured to traverse each standable coordinate point in turn, starting from the starting coordinate point, and to capture a screenshot of the game picture when the virtual character reaches each standable coordinate point, thereby obtaining the image to be compared corresponding to each standable coordinate point.
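A minimal sketch of this traversal-and-screenshot loop is given below; because moving the character and capturing a frame depend on the particular game engine, both operations are passed in as hypothetical hooks rather than real engine APIs:

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]

def capture_images_to_compare(
    points: List[Point],
    move_character_to: Callable[[Point], None],
    take_screenshot: Callable[[], bytes],
) -> Dict[Point, bytes]:
    """Move the virtual character to each standable point in turn and record a
    screenshot for it; the two hooks are injected because they are engine-specific."""
    images: Dict[Point, bytes] = {}
    for point in points:
        move_character_to(point)
        images[point] = take_screenshot()
    return images

# Minimal usage with stub hooks, just to show the call shape.
frames = capture_images_to_compare(
    points=[(0.0, 0.0), (2.0, 0.0)],
    move_character_to=lambda p: None,
    take_screenshot=lambda: b"fake-frame-bytes",
)
print(len(frames))  # 2
```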
Optionally, the obtaining module 10 is configured to hide the resources in the game scene other than the virtual character, and to capture the game picture of the hidden-processed game scene containing the virtual character to obtain the reference picture.
Optionally, the determining module 20 is configured to obtain a first virtual frame surrounding the virtual character in the reference picture; determine the pixel proportion of the virtual character within the first virtual frame in the reference picture as the reference proportion; obtain a second virtual frame surrounding the virtual character in the image to be compared, wherein the second virtual frame has the same size as the first virtual frame; determine the pixel proportion of the virtual character within the second virtual frame in the image to be compared as the target proportion; and determine whether the virtual character is occluded at the position of the standable coordinate point according to the target proportion and the reference proportion.
Optionally, the determining module 20 is configured to determine whether the difference between the reference proportion and the target proportion is smaller than a preset threshold; if the difference between the reference proportion and the target proportion is smaller than the preset threshold, determine that the virtual character is not occluded at the position of the standable coordinate point; and if the difference between the reference proportion and the target proportion is greater than or equal to the preset threshold, determine that the virtual character is occluded at the position of the standable coordinate point.
Optionally, the determining module 20 is configured to render the virtual character in a target color, and to calculate the ratio of the number of pixels of the target color in the reference picture to the total number of pixels contained in the first virtual frame to obtain the reference proportion.
Optionally, the determining module 20 is configured to render the virtual character in a target color, and to calculate the ratio of the number of pixels of the target color in the image to be compared to the total number of pixels contained in the second virtual frame to obtain the target proportion.
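Assuming the screenshot has already been cropped to the virtual frame and the character is rendered in a single flat target color, the proportion of target-color pixels could be computed roughly as follows; the magenta color and the function name are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def color_pixel_proportion(frame_rgb: np.ndarray, target_color=(255, 0, 255)) -> float:
    """Proportion of pixels matching the target color within the virtual frame.

    `frame_rgb` is assumed to be an H x W x 3 array already cropped to the virtual
    frame surrounding the character, with the character rendered in the flat target color."""
    mask = np.all(frame_rgb == np.asarray(target_color, dtype=frame_rgb.dtype), axis=-1)
    return float(mask.sum()) / mask.size

# Toy 2x2 frame: one magenta pixel out of four -> proportion 0.25.
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (255, 0, 255)
print(color_pixel_proportion(frame))  # 0.25
```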
It should be noted that the above modules may be implemented by software or hardware. For the latter, they may be implemented in, but are not limited to, the following manner: the modules are all located in the same processor; alternatively, the modules are located in different processors in any combination.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Optionally, in this embodiment, the above-mentioned nonvolatile storage medium may be configured to store a computer program for executing the following steps:
S1, obtaining at least one coordinate point where the virtual character can stand from the game map;
S2, obtaining the image to be compared corresponding to the standable coordinate point, wherein the image to be compared is used to record the game picture when the virtual character is located at the standable coordinate point;
S3, obtaining, as a reference picture, the game picture when the virtual character is located at an unoccluded position in the game scene;
S4, determining whether the virtual character is occluded at the position of the standable coordinate point based on the image to be compared and the reference picture.
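Purely as an illustrative sketch (not the patented implementation), steps S1 to S4 could be wired together as below, with the engine-dependent proportion measurement injected as a hypothetical callable and the proportions expressed in whole percentage points:

```python
from typing import Callable, Iterable, List, Tuple

Point = Tuple[float, float]

def find_occlusion_positions(
    standable_points: Iterable[Point],      # S1: points where the character can stand
    reference_pct: int,                     # S3: proportion from the unoccluded reference picture
    proportion_at: Callable[[Point], int],  # S2: proportion measured from the screenshot at a point
    threshold_pct: int = 10,                # S4: preset threshold, in percentage points
) -> List[Point]:
    """Record every standable point whose drop in character proportion reaches the threshold."""
    return [p for p in standable_points if reference_pct - proportion_at(p) >= threshold_pct]

# Stubbed usage: at the second point the character loses 15 percentage points of pixels.
measured = {(0.0, 0.0): 50, (2.0, 0.0): 35}
print(find_occlusion_positions(measured.keys(), 50, lambda p: measured[p]))  # [(2.0, 0.0)]
```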
Optionally, in this embodiment, the nonvolatile storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute, through a computer program, the following steps:
S1, obtaining at least one coordinate point where the virtual character can stand from the game map;
S2, obtaining the image to be compared corresponding to the standable coordinate point, wherein the image to be compared is used to record the game picture when the virtual character is located at the standable coordinate point;
S3, obtaining, as a reference picture, the game picture when the virtual character is located at an unoccluded position in the game scene;
S4, determining whether the virtual character is occluded at the position of the standable coordinate point based on the image to be compared and the reference picture.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (15)

1. A method for detecting an occlusion position, which provides a graphical user interface through a terminal device, wherein the graphical user interface at least partially includes a game scene, and the game scene includes a game map and a virtual character, the method comprising:
obtaining at least one coordinate point where the virtual character can stand from the game map;
acquiring an image to be compared corresponding to the standable coordinate point, wherein the image to be compared is used for recording a game picture when the virtual character is located at the standable coordinate point;
acquiring, as a reference picture, a game picture when the virtual character is located at an unoccluded position in the game scene;
determining whether the virtual character is occluded at the position of the standable coordinate point based on the image to be compared and the reference picture.
2. The method of claim 1, wherein obtaining at least one coordinate point from the game map on which the virtual character can stand comprises:
obtaining a size parameter, a distance parameter, a starting coordinate point, and a preset routing map of the game map, wherein the distance parameter is used to represent the distance between two adjacent coordinate points in the game map, the starting coordinate point represents the starting position of the virtual character in the game map, and the routing map is used to record the positions in the game map where the virtual character can walk;
acquiring a plurality of coordinate points to be selected in the game map by adopting the size parameter, the distance parameter and the starting coordinate point;
selecting the standable coordinate point from the plurality of coordinate points to be selected based on the routing map.
3. The method of claim 2, wherein using the size parameter, the distance parameter, and the starting coordinate point to obtain the plurality of coordinate points to be selected in the game map comprises:
determining a traversal range in the game map by adopting the size parameter and the starting coordinate point;
and starting from the starting coordinate point, acquiring the plurality of coordinate points to be selected within the traversal range according to the distance parameter.
4. The method of claim 2, wherein selecting the standable coordinate point from the plurality of coordinate points to be selected based on the routing map comprises:
respectively determining whether each coordinate point in the plurality of coordinate points to be selected is located in a walkable area represented by the routing map;
recording a coordinate point located within the walkable area as the standable coordinate point.
5. The method of claim 2, wherein the starting coordinate point is a standable coordinate point.
6. The method of claim 5, wherein obtaining the image to be compared corresponding to the standable coordinate point comprises:
starting from the starting coordinate point, sequentially traversing each standable coordinate point;
and sequentially capturing a screenshot of the game picture when the virtual character traverses to each standable coordinate point, to obtain the image to be compared corresponding to each standable coordinate point.
7. The method according to claim 1, wherein acquiring, as the reference picture, the game picture when the virtual character is located at an unoccluded position in the game scene comprises:
hiding other resources except the virtual character in the game scene;
and capturing a game picture of the game scene which is subjected to the hiding processing and contains the virtual character to obtain a reference picture.
8. The method of claim 7, wherein determining whether the virtual character is occluded at the position of the standable coordinate point based on the image to be compared and the reference picture comprises:
acquiring a first virtual frame surrounding the virtual character in the reference picture;
determining the pixel proportion of the virtual character in the first virtual frame in the reference picture as a reference proportion;
acquiring a second virtual frame surrounding the virtual character in the image to be compared, wherein the second virtual frame and the first virtual frame have the same size;
determining the pixel proportion of the virtual character in the second virtual frame in the image to be compared as a target proportion;
and determining whether the position of the virtual character at the standable coordinate point is blocked or not according to the target proportion and the reference proportion.
9. The method of claim 8, wherein determining whether the virtual character is occluded at the position of the standable coordinate point according to the target proportion and the reference proportion comprises:
determining whether a difference between the reference proportion and the target proportion is less than a preset threshold;
if the difference between the reference proportion and the target proportion is smaller than the preset threshold, determining that the virtual character is not occluded at the position of the standable coordinate point;
and if the difference between the reference proportion and the target proportion is greater than or equal to the preset threshold, determining that the virtual character is occluded at the position of the standable coordinate point.
10. The method of claim 8, wherein determining the pixel proportion of the virtual character in the first virtual frame in the reference picture as the reference proportion comprises:
rendering the virtual character into a target color;
and calculating the ratio of the number of pixels of the target color in the reference picture to the total number of pixels contained in the first virtual frame to obtain the reference proportion.
11. The method according to claim 8, wherein determining a pixel proportion of the virtual character in the second virtual frame in the image to be compared as the target proportion comprises:
rendering the virtual character into a target color;
and calculating the ratio of the number of pixels of the target color in the image to be compared to the total number of pixels contained in the second virtual frame to obtain the target proportion.
12. A device for detecting an occlusion position, which provides a graphical user interface through a terminal device, wherein the graphical user interface at least partially includes a game scene, and the game scene includes a game map and a virtual character, the device comprising:
the acquisition module is used for acquiring at least one coordinate point where the virtual character can stand from the game map; acquiring an image to be compared corresponding to the standable coordinate point, wherein the image to be compared is used for recording a game picture when the virtual character is located at the standable coordinate point; acquiring, as a reference picture, a game picture when the virtual character is positioned at an unoccluded position in the game scene;
a determination module for determining whether the virtual character is occluded at the position of the standable coordinate point based on the image to be compared and the reference picture.
13. A non-volatile storage medium, characterized in that a computer program is stored in the storage medium, wherein the computer program is arranged to execute the method of detecting an occlusion position according to any of claims 1 to 11 when running.
14. A processor configured to run a program, wherein the program is arranged to execute the method of detecting an occlusion position according to any of claims 1 to 11.
15. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for detecting an occlusion position according to any of claims 1 to 11.
CN202010929105.0A 2020-09-07 2020-09-07 Detection method and device for shielding position, processor and electronic device Active CN111957040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010929105.0A CN111957040B (en) 2020-09-07 2020-09-07 Detection method and device for shielding position, processor and electronic device

Publications (2)

Publication Number Publication Date
CN111957040A true CN111957040A (en) 2020-11-20
CN111957040B CN111957040B (en) 2024-02-23

Family

ID=73392446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010929105.0A Active CN111957040B (en) 2020-09-07 2020-09-07 Detection method and device for shielding position, processor and electronic device

Country Status (1)

Country Link
CN (1) CN111957040B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366374A (en) * 2013-07-12 2013-10-23 重庆大学 Fire fighting access obstacle detection method based on image matching
CN104759097A (en) * 2015-04-13 2015-07-08 四川天上友嘉网络科技有限公司 Automatic way-finding method in game
US20180301122A1 (en) * 2017-04-13 2018-10-18 Alpine Electronics, Inc. Display control apparatus, display control method, and camera monitoring system
CN109126134A (en) * 2018-09-29 2019-01-04 北京金山安全软件有限公司 Game role moving method and device and electronic equipment

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112090084A (en) * 2020-11-23 2020-12-18 成都完美时空网络技术有限公司 Object rendering method and device, storage medium and electronic equipment
CN112090084B (en) * 2020-11-23 2021-02-09 成都完美时空网络技术有限公司 Object rendering method and device, storage medium and electronic equipment
CN112807684A (en) * 2020-12-31 2021-05-18 上海米哈游天命科技有限公司 Obstruction information acquisition method, device, equipment and storage medium
CN112843730A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Shooting method, device, equipment and storage medium
CN113177996A (en) * 2021-04-07 2021-07-27 网易(杭州)网络有限公司 Through-mold analysis method and device for virtual model, processor and electronic device
CN113018858A (en) * 2021-04-12 2021-06-25 腾讯科技(深圳)有限公司 Virtual role detection method, computer equipment and readable storage medium
CN113101637A (en) * 2021-04-19 2021-07-13 网易(杭州)网络有限公司 Scene recording method, device, equipment and storage medium in game
CN113101637B (en) * 2021-04-19 2024-02-02 网易(杭州)网络有限公司 Method, device, equipment and storage medium for recording scenes in game
CN113706608A (en) * 2021-08-20 2021-11-26 云往(上海)智能科技有限公司 Pose detection device and method for target object in predetermined area and electronic equipment
CN113706608B (en) * 2021-08-20 2023-11-28 云往(上海)智能科技有限公司 Pose detection device and method of target object in preset area and electronic equipment
WO2023109328A1 (en) * 2021-12-16 2023-06-22 网易(杭州)网络有限公司 Game control method and apparatus

Also Published As

Publication number Publication date
CN111957040B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN111957040B (en) Detection method and device for shielding position, processor and electronic device
US11270497B2 (en) Object loading method and apparatus, storage medium, and electronic device
CN111815755A (en) Method and device for determining shielded area of virtual object and terminal equipment
CN107944420B (en) Illumination processing method and device for face image
CN110765891B (en) Engineering drawing identification method, electronic equipment and related product
CN110084871B (en) Image typesetting method and device and electronic terminal
CN109410295B (en) Color setting method, device, equipment and computer readable storage medium
CN111124888A (en) Method and device for generating recording script and electronic device
CN112991178A (en) Image splicing method, device, equipment and medium
CN113952720A (en) Game scene rendering method and device, electronic equipment and storage medium
US20160012302A1 (en) Image processing apparatus, image processing method and non-transitory computer readable medium
CN111199169A (en) Image processing method and device
CN108960012B (en) Feature point detection method and device and electronic equipment
CN110069125B (en) Virtual object control method and device
CN113975812A (en) Game image processing method, device, equipment and storage medium
CN111986229A (en) Video target detection method, device and computer system
WO2017034419A1 (en) A process, system and apparatus for machine colour characterisation of digital media
CN111475089B (en) Task display method, device, terminal and storage medium
CN112734747A (en) Target detection method and device, electronic equipment and storage medium
CN112231020A (en) Model switching method and device, electronic equipment and storage medium
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN114917590B (en) Virtual reality game system
CN113487697A (en) Method and device for generating simple strokes, electronic equipment and storage medium
CN116310241B (en) Virtual character position control method, device, electronic equipment and storage medium
CN110941974B (en) Control method and device of virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant