CN112827175A - Collision frame determination method and device and computer-readable storage medium - Google Patents


Info

Publication number: CN112827175A
Application number: CN202110219639.9A
Authority: CN (China)
Prior art keywords: sub, collision, area, frame, virtual
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN112827175B (granted publication)
Inventors: 任一杰, 黄沛鑫
Current and original assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Priority and filing: application filed by Tencent Technology Shenzhen Co Ltd, with priority to CN202110219639.9A (the priority date is an assumption and is not a legal conclusion)

Classifications

    • A63F 13/577 (Video games; controlling game characters or objects based on the game progress): simulating properties, behaviour or motion of objects in the game world using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • A63F 13/833 (Video games; special adaptations for a specific game genre or mode): hand-to-hand fighting, e.g. martial arts competition
    • G06T 13/40 (Image data processing; animation): 3D animation of characters, e.g. humans, animals or virtual beings
    • G06T 15/02 (Image data processing; 3D image rendering): non-photorealistic rendering
    • A63F 2300/643 (Features of games using an electronically generated display; processing data by generating or executing the game program): computing dynamical parameters of game objects by determining the impact between objects, e.g. collision detection
    • A63F 2300/6607 (Features of games using an electronically generated display; rendering three-dimensional images): animating game characters, e.g. skeleton kinematics
    • A63F 2300/8029 (Features of games using an electronically generated display; adaptations for a specific type of game): fighting without shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a collision frame determination method, a device, and a computer-readable storage medium. The method includes: acquiring a frame image to be processed corresponding to the animation to be processed, and acquiring candidate collision frames positively correlated with the virtual object region in the frame image to be processed; determining the overlap region between each of the collision frames and the virtual object region; acquiring, for each collision frame, the collision frame effective area ratio of the overlap region to the collision frame, and the virtual object effective area ratio of the overlap region to the virtual object region; determining these two ratios as the area ratio of each collision frame, thereby obtaining the area ratios of all the collision frames; and determining the collision frame with the largest area ratio as the virtual collision frame of the virtual object region. Through this application, the efficiency of collision frame determination can be improved.

Description

Collision frame determination method and device and computer-readable storage medium
Technical Field
The present application relates to image processing technologies in the field of animation rendering, and in particular, to a method and an apparatus for determining a collision frame, and a computer-readable storage medium.
Background
With the rapid development of animation technology, fighting games based on frame-by-frame animation have developed rapidly as well. In a fighting game based on frame-by-frame animation, a collision frame is the frame used to make damage judgments on a virtual object; determining the collision frame is therefore central to a fighting game.
Generally, in a virtual scene rendered with frame-by-frame animation, the collision frame of a virtual object is drawn manually; drawing collision frames by hand, however, is inefficient.
Disclosure of Invention
The embodiments of the present application provide a collision frame determination method, apparatus, device, and computer-readable storage medium, which can improve the efficiency of collision frame determination.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a collision frame determining method, which comprises the following steps:
acquiring a frame image to be processed corresponding to the animation to be processed, and acquiring candidate collision frames positively correlated with a virtual object region in the frame image to be processed;
determining the overlap region between each of the collision frames and the virtual object region;
acquiring, for each collision frame, the collision frame effective area ratio of the overlap region to the collision frame, and the virtual object effective area ratio of the overlap region to the virtual object region;
determining the collision frame effective area ratio and the virtual object effective area ratio as the area ratio of each collision frame, thereby obtaining the area ratios of all the collision frames;
and determining the collision frame with the largest of the area ratios as the virtual collision frame of the virtual object region.
The embodiment of the present application provides a collision frame determination device, including:
an image acquisition module, configured to acquire a frame image to be processed corresponding to the animation to be processed, and acquire candidate collision frames positively correlated with a virtual object region in the frame image to be processed;
a region determining module, configured to determine the overlap region between each of the collision frames and the virtual object region;
a ratio determining module, configured to acquire, for each collision frame, the collision frame effective area ratio of the overlap region to the collision frame, and the virtual object effective area ratio of the overlap region to the virtual object region;
the ratio determining module is further configured to determine the collision frame effective area ratio and the virtual object effective area ratio as the area ratio of each collision frame, thereby obtaining the area ratios of all the collision frames;
and a collision frame determining module, configured to determine the collision frame with the largest of the area ratios as the virtual collision frame of the virtual object region.
In this embodiment of the present application, the collision frame determination device further includes a region dividing module, configured to: divide the virtual object region into at least two sub virtual object regions when the maximum area ratio is smaller than a ratio threshold; acquire the at least two sub virtual collision frames corresponding to the at least two sub virtual object regions, where each sub virtual collision frame is the sub collision frame with the target maximum sub-area ratio for its corresponding target sub virtual object region; and determine the at least two sub virtual collision frames as the virtual collision frame.
In this embodiment of the present application, the region dividing module is further configured to: when the maximum to-be-divided sub-area ratio between a to-be-divided sub virtual object region and its corresponding to-be-divided sub virtual collision frame is smaller than a to-be-divided sub-ratio threshold, iteratively divide the to-be-divided sub virtual object region, ending the division when a division end condition is met, where the to-be-divided sub virtual object region is at least one of the at least two sub virtual object regions; and determine the sub virtual collision frames corresponding to the currently divided sub virtual object regions as the virtual collision frame.
In this embodiment of the present application, the region dividing module is further configured to: adjust the position of a dividing line over the virtual object region, dividing the virtual object region into various region combinations, where each region combination includes at least two initial sub virtual object regions; and acquire a target region combination from the various region combinations, where the target region combination comprises the at least two sub virtual object regions, and the at least two sub virtual collision frames corresponding to the target region combination best approximate the virtual object region.
In this embodiment of the present application, when the at least two initial sub virtual object regions include an initial upper body region and an initial leg region, the region dividing module is further configured to: acquire the sub collision frames corresponding to the initial upper body region in each of the various region combinations; determine the sub-area ratio of each sub collision frame based on the sub overlap region between that sub collision frame and the initial upper body region; obtain the maximum of these sub-area ratios, thereby obtaining the maximum sub-area ratio of each region combination; and determine the region combination with the largest of these maximum sub-area ratios as the target region combination, which includes an upper body region and a leg region, thereby obtaining the at least two sub virtual object regions, where that largest ratio is the target maximum sub-area ratio (a sketch of this dividing-line search follows below).
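The dividing-line search just described can be roughly sketched as follows; the stand-in scoring of the upper part by its tight bounding box, and all names, are assumptions made for brevity, not the patent's procedure.

```python
import numpy as np

def best_split(mask: np.ndarray, y_candidates):
    """Slide a horizontal dividing line over the object mask; score the part
    above the line and keep the best-scoring split."""
    def best_ratio(sub: np.ndarray) -> float:
        # Cheap stand-in for a full per-part box search: score the tight
        # bounding box of the sub-region (object pixels / box pixels).
        ys, xs = np.nonzero(sub)
        if len(ys) == 0:
            return 0.0
        box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
        return float(len(ys)) / float(box_area)
    best_y, best_score = None, -1.0
    for y in y_candidates:
        score = best_ratio(mask[:y, :])   # initial upper body region above the line
        if score > best_score:
            best_y, best_score = y, score
    return best_y, best_score
```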
In this embodiment of the present application, the region dividing module is further configured to: determine the sub collision frame with the largest sub-area ratio as a first sub virtual collision frame corresponding to the upper body region; acquire the leg middle line of the leg region; extend toward both sides, centered on the leg middle line, according to the leg region and an extension parameter; and determine the extended border as a second sub virtual collision frame, thereby obtaining the at least two sub virtual collision frames, namely the first sub virtual collision frame and the second sub virtual collision frame (see the leg-box sketch below).
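A minimal sketch of this leg-box construction, assuming the leg region is available as a boolean mask and `extend` plays the role of the extension parameter (pixels added on each side of the midline); the midline definition used here is one plausible choice, not stated in the patent.

```python
import numpy as np

def leg_box(leg_mask: np.ndarray, extend: int):
    """Second sub virtual collision box: centred on the vertical middle line
    of the leg region and extended `extend` pixels toward both sides."""
    ys, xs = np.nonzero(leg_mask)
    mid_x = int(round(xs.mean()))   # leg middle line (one plausible definition)
    return (mid_x - extend, int(ys.min()), mid_x + extend, int(ys.max()) + 1)
```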
In this embodiment of the present application, when the to-be-divided sub virtual object region is the upper body region, the to-be-divided sub virtual collision frame is the first sub virtual collision frame, and the maximum to-be-divided sub-area ratio is the maximum sub-area ratio; the region dividing module is further configured to divide the upper body region based on the object parts it contains, obtaining a first sub upper body region and a second sub upper body region, where the second sub virtual collision frame, a third sub virtual collision frame corresponding to the first sub upper body region, and a fourth sub virtual collision frame corresponding to the second sub upper body region together constitute the sub virtual collision frames.
In this embodiment of the present application, when the area of each collision frame is equal to the virtual object region, the area ratio is simply the collision frame effective area ratio, or equivalently the virtual object effective area ratio.
In this embodiment of the present application, the ratio determining module is further configured to calculate the collision frame effective area ratio with the overlap region as the numerator and each collision frame as the denominator, and to calculate the virtual object effective area ratio with the overlap region as the numerator and the virtual object region as the denominator.
In this embodiment of the present application, the image acquisition module is further configured to display the frame image sequence of the animation to be processed, and, in response to a frame image selection operation on the frame image sequence, determine the frame image sequence to be processed, where the frame image to be processed is any frame image in that sequence.
In this embodiment of the present application, the collision frame determination device further includes a collision frame processing module, configured to combine each frame image to be processed with the virtual collision frame into a frame image to be rendered, thereby obtaining the frame image sequence to be rendered corresponding to the frame image sequence to be processed; and update the frame image sequence to be processed within the frame image sequence, based on the frame image sequence to be rendered, to obtain the animation to be rendered.
In this embodiment of the present application, the collision frame determination device further includes a threshold acquisition module, configured to display a threshold setting control used to trigger setting of the ratio threshold, and to obtain the ratio threshold in response to a threshold setting operation on that control.
In this embodiment of the present application, the collision frame determination device further includes an information display module, configured to obtain and display the target collision frame effective area ratio and the target virtual object effective area ratio corresponding to the virtual collision frame.
In this embodiment of the present application, the collision frame processing module is further configured to obtain, from the animation to be processed, the target frame images that include the virtual object region, and determine the virtual collision frame as the collision frame of each of those target frame images.
An embodiment of the present application provides a collision frame determination device, including:
a memory for storing executable instructions;
and the processor is used for realizing the collision frame determination method provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer-readable storage medium, which stores executable instructions for causing a processor to execute the method for determining a collision frame provided by the embodiment of the application.
The embodiments of the present application have at least the following beneficial effects: the collision frame effective area ratio and the virtual object effective area ratio are used to measure how well a collision frame approximates the virtual object region, yielding the virtual collision frame that best approximates it; the collision frame of the virtual object is thus determined automatically and intelligently, which improves the efficiency of collision frame determination.
Drawings
FIG. 1 is a schematic diagram of collision box determination in an exemplary skeletal animation;
FIG. 2 is an alternative architecture diagram of a collision frame determination system provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of the server in FIG. 2 according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of an alternative collision frame determination method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of exemplary collision frame determination provided by an embodiment of the present application;
FIG. 6 is a diagram illustrating an exemplary virtual collision box provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of another alternative collision frame determination method provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of another exemplary virtual collision box provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of yet another alternative collision frame determination method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of another exemplary virtual collision box provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of an exemplary combination of regions provided by an embodiment of the present application;
FIG. 12 is a schematic flowchart of a further alternative collision frame determination method provided by an embodiment of the present application;
FIG. 13 is a schematic flowchart of an exemplary collision frame determination provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of an exemplary page for triggering collision box determination according to an embodiment of the present application;
FIG. 15 is a diagram illustrating a result obtained with the exemplary collision frame determination process of FIG. 13 according to an embodiment of the present application;
FIG. 16 is a diagram illustrating another result obtained with the exemplary collision frame determination process of FIG. 13 according to an embodiment of the present application;
FIG. 17 is a schematic diagram of exemplary determination of an animation to be rendered according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, the terms "first/second/third/fourth" are used only to distinguish similar objects and do not denote a particular order or importance; it is to be understood that "first/second/third/fourth" may be interchanged in a particular order or sequence where permissible, so that the embodiments of the present application described herein can be practiced in an order other than that shown or described herein.
Before the embodiments of the present application are described in further detail, the terms and expressions used in the embodiments are explained as follows.
1) Frame-by-frame animation: an animation form that decomposes an action across successive animation frames; different content is drawn frame by frame on a timeline, forming a continuously played animation.
2) Skeletal animation: an animation in which the virtual model corresponding to a virtual object has a skeleton structure of interconnected bones, and animation of the virtual model is generated by changing the orientation and position of the bones.
3) Fighting game (FTG): a genre of action game with obvious action-game features; the picture usually shows players standing side-on to each other and fighting, using fighting skills to defeat the opponent and win. Fighting games feature finely tuned characters and move sets that uphold the principle of fair competition. An action game, broadly, is a game in which action is the main form of expression.
4) Collision boxes: closed frames used for damage determination on virtual objects in a virtual scene; they are generally obtained by approximating the contour of a virtual object's trunk (usually excluding hands and weapons) with as few simple geometric figures (such as rectangles) as possible, and the number of collision boxes is usually capped at three.
5) Operations: means of triggering a device to execute processing, such as a click, a double click, a long press, a slide, a gesture, or a received trigger instruction; an operation in the embodiments of the present application may be a single operation or a collective term for a plurality of operations.
6) "In response to": indicates the condition or state on which an executed process depends; when that condition or state is satisfied, the one or more operations may be executed in real time or with a set delay, and unless otherwise specified, there is no restriction on their execution order.
7) Virtual objects: the images of people and things that can interact in a virtual scene, or movable objects in a virtual scene; a movable object may be a virtual character, a virtual animal, an animation character, and so on, such as the characters, animals, plants, oil drums, walls, and stones displayed in the virtual scene. A virtual object may also be an avatar representing the user in the virtual scene. A virtual scene may include a plurality of virtual objects, each with its own shape and volume, occupying a part of the space in the virtual scene; this occupied space corresponds to the virtual object region in the embodiments of the present application.
It should be noted that, for a virtual object in a 3D (three-dimensional) virtual scene, the animation is a skeletal animation, so each body part carries a clear identifier; to determine a collision frame, one or more bounding boxes can therefore be partitioned in BVH (Bounding Volume Hierarchy) space along multiple axes (for example, the x, y, and z axes), with the character's bone coordinates as points, and the optimal bounding box can be computed by brute force and used as the collision frame of the virtual object.
Referring to FIG. 1, FIG. 1 is a schematic diagram illustrating collision box determination in an exemplary skeletal animation; as shown in FIG. 1, for the virtual object 1-1, bounding box 1-2 and bounding box 1-3 partitioned in BVH space are shown, which together form the collision frame corresponding to the virtual object 1-1.
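For illustration only, the bounding-box step of this skeletal-animation approach can be sketched as follows; the function name and the (x, y, z) bone layout are assumptions rather than details from the patent, and a real BVH partitioning would further split and search this box along the coordinate axes.

```python
def bone_aabb(bones):
    """Minimal sketch: the axis-aligned bounding box of a set of 3D bone
    coordinates, the starting point for a BVH-style split-and-search.
    `bones` is assumed to be an iterable of (x, y, z) joint positions."""
    xs, ys, zs = zip(*bones)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```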
However, the animation type of such a 3D virtual scene is skeletal animation, not frame-by-frame animation; the above method for determining a collision frame therefore does not apply to frame-by-frame animation and cannot automatically determine the collision frame of a virtual object in it.
In addition, a collision frame could be determined via human instance segmentation in image processing: a neural network is trained on a large amount of data using deep learning, the virtual object is segmented from the picture, and the collision frame is then determined from the segmentation. However, most virtual objects have exaggerated features, and their appearance often differs greatly from traditional portraits, so applying human instance segmentation directly to collision frame determination yields low accuracy; moreover, because each virtual object is exaggerated in its own way, the accuracy of the generated collision frame remains low even when the network is trained on prior data of other virtual objects.
Based on this, the embodiments of the present application provide a collision frame determination method, apparatus, device, and computer-readable storage medium, which can improve both the efficiency and the accuracy of collision frame determination. An exemplary application of the collision frame determination device is described below; the device may be implemented as various types of user terminal, such as a notebook computer, tablet computer, desktop computer, set-top box, or mobile device (e.g., a mobile phone, portable music player, personal digital assistant, dedicated messaging device, or portable game device), and may also be implemented as a server. In the following, an exemplary application with the device implemented as a server is described.
Referring to FIG. 2, FIG. 2 is an alternative architecture diagram of the collision frame determination system provided by the embodiment of the present application. As shown in FIG. 2, to support a collision frame determination application, in the collision frame determination system 100, the terminal 400 (terminals 400-1 and 400-2 are shown as examples) is connected to the server 200 (the collision frame determination device) through the network 300, which may be a wide area network, a local area network, or a combination of the two. The collision frame determination system 100 further includes a database 500 that provides data support to the server 200 during collision frame determination.
The terminal 400 is configured to display the frame image sequence of the animation to be processed on a graphical interface and, in response to a selection operation on the frame image sequence, send a collision frame determination request to the server 200 through the network 300; it is also configured to display the virtual collision frame and the corresponding area ratio sent by the server 200 through the network 300.
The server 200 is configured to obtain the frame image to be processed corresponding to the animation to be processed from the collision frame determination request sent by the terminal 400 through the network 300, and to acquire the candidate collision frames positively correlated with the virtual object region in the frame image; determine the overlap region between each collision frame and the virtual object region; acquire, for each collision frame, the collision frame effective area ratio of the overlap region to the collision frame, and the virtual object effective area ratio of the overlap region to the virtual object region; determine the two ratios as the area ratio of each collision frame, thereby obtaining the area ratios of all the collision frames; and determine the collision frame with the largest area ratio as the virtual collision frame of the virtual object region. The server 200 is also configured to send the virtual collision frame and the corresponding area ratio to the terminal 400 through the network 300.
It should be noted that the processing attributed to the terminal 400 may instead be integrated into the server 200, and likewise the processing attributed to the server 200 may be integrated into the terminal 400.
In some embodiments, the server 200 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smartphone, tablet computer, notebook computer, desktop computer, smart speaker, or smart watch. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
Referring to fig. 3, fig. 3 is a schematic diagram of a component structure of a server in fig. 2 according to an embodiment of the present disclosure, where the server 200 shown in fig. 3 includes: at least one processor 210, memory 250, at least one network interface 220, and a user interface 230. The various components in server 200 are coupled together by a bus system 240. It is understood that the bus system 240 is used to enable communications among the components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 240 in fig. 3.
The Processor 210 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 230 includes one or more output devices 231, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 250 optionally includes one or more storage devices physically located remotely from processor 210.
The memory 250 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 250 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 250 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 251 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 252 for communicating with other computing devices via one or more (wired or wireless) network interfaces 220; exemplary network interfaces 220 include Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), and the like;
a presentation module 253 to enable presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 231 (e.g., a display screen, speakers, etc.) associated with the user interface 230;
an input processing module 254 for detecting one or more user inputs or interactions from one of the one or more input devices 232 and translating the detected inputs or interactions.
In some embodiments, the collision frame determination apparatus provided in the embodiments of the present application may be implemented in software, and fig. 3 illustrates the collision frame determination apparatus 255 stored in the memory 250, which may be software in the form of programs and plug-ins, and includes the following software modules: the image acquisition module 2551, the region determination module 2552, the proportion determination module 2553, the collision frame determination module 2554, the region division module 2555, the collision frame processing module 2556, the threshold acquisition module 2557 and the information display module 2558 are logical and thus may be arbitrarily combined or further divided according to the functions implemented. The functions of the respective modules will be explained below.
In other embodiments, the collision box determining apparatus provided in the embodiments of the present Application may be implemented in hardware, and for example, the collision box determining apparatus provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the collision box determining method provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
In the following, the collision frame determination method provided by the embodiment of the present application will be described in conjunction with an exemplary application and implementation of the server provided by the embodiment of the present application.
Referring to FIG. 4, FIG. 4 is an alternative flowchart of the collision frame determination method provided in the embodiment of the present application; the method is described below with reference to the steps shown in FIG. 4.
S401, obtaining a frame image to be processed corresponding to the animation to be processed, and obtaining candidate collision frames positively correlated with the virtual object region in the frame image to be processed.
In the embodiment of the present application, when the collision frame determination device determines a collision frame for a virtual object in the animation to be processed, it first determines a frame image in that animation, namely the frame image to be processed; the frame image to be processed may be a designated frame image in the animation or any frame image in it, which is not specifically limited in this embodiment.
After obtaining the frame image to be processed, the collision frame determination device obtains the area occupied by the virtual object in it, i.e., the virtual object region; it then determines, based on the size of the virtual object region, candidate collision frames of various positions and/or shapes positively correlated with that size, thereby obtaining the candidate collision frames for the frame image to be processed.
It should be noted that the animation to be processed is a frame-by-frame animation. The candidate collision frames may have the same size (for example, the same number of pixels or the same area) but differ in shape and/or position on the frame image to be processed; they may also differ in size; this embodiment does not specifically limit this. In other words, the candidate collision frames may differ in any of size, shape, and position on the frame image. In addition, "positively correlated with the virtual object region" means that the larger the area occupied by the virtual object in the frame image to be processed, the larger each candidate collision frame, and vice versa; a candidate collision frame may be equal to, smaller than, or larger than the virtual object region, which is also not specifically limited in this embodiment.
It should further be noted that, because the frame image to be processed consists of a finite number of pixels, all possible collision frames positively correlated with the virtual object region can be quantified; the candidate collision frames may therefore be all such frames, computed by brute force.
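As a minimal sketch of this brute-force enumeration (the NumPy mask representation, the function name, and the equal-area filtering, which anticipates the simplification discussed after S405, are all illustrative assumptions rather than the patent's implementation):

```python
import numpy as np

def candidate_boxes(mask: np.ndarray, tolerance: float = 0.05):
    """Yield every axis-aligned box (x0, y0, x1, y1) whose pixel count is
    within `tolerance` of the virtual object's pixel count; `mask` is a
    boolean array marking the virtual object region. Exhaustive by design,
    since the pixel grid is finite."""
    target = int(mask.sum())
    h, w = mask.shape
    for y0 in range(h):
        for y1 in range(y0 + 1, h + 1):
            for x0 in range(w):
                for x1 in range(x0 + 1, w + 1):
                    area = (y1 - y0) * (x1 - x0)
                    if abs(area - target) <= tolerance * target:
                        yield (x0, y0, x1, y1)
```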
S402, determining the overlap region between each of the collision frames and the virtual object region.
In the embodiment of the present application, the collision frame determination device traverses the candidate collision frames and, for each one, uses image processing to obtain its overlap region with the virtual object region. Here, the overlap region is the part of the virtual object region contained in the collision frame, or equivalently, the part of the collision frame contained in the virtual object region.
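Assuming, for illustration, that the virtual object region is available as a boolean mask, the overlap computation reduces to restricting the mask to the box's extent (a sketch, not the patent's stated implementation):

```python
import numpy as np

def overlap_pixels(mask: np.ndarray, box) -> int:
    """Number of virtual-object pixels covered by the box: the overlap
    region is the object mask restricted to the box's extent."""
    x0, y0, x1, y1 = box
    return int(mask[y0:y1, x0:x1].sum())
```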
S403, acquiring the collision frame effective area ratio of the overlap region to each collision frame, and the virtual object effective area ratio of the overlap region to the virtual object region.
In the embodiment of the present application, the collision frame determination device measures how well each collision frame approximates the virtual object region with two indices: the collision frame effective area ratio and the virtual object effective area ratio. The former is the ratio of the virtual object area contained in the collision frame to the area of the collision frame; the latter is the ratio of the virtual object area contained in the collision frame to the total effective area of the virtual object.
S404, determining the collision frame effective area ratio and the virtual object effective area ratio as the area ratio of each collision frame, thereby obtaining the area ratios of all the collision frames.
It should be noted that, because the collision frame effective area ratio is the ratio of the contained virtual object area to the collision frame's own area, a large value may simply mean the collision frame is too small; conversely, because the virtual object effective area ratio is the ratio of the contained virtual object area to the whole virtual object's effective area, a large value may simply mean the collision frame is too large. The collision frame determination device therefore judges the degree of approximation between a collision frame and the virtual object region using the two ratios jointly: together they form the area ratio of each collision frame, the index of how well that frame approximates the virtual object region. For example, when both the collision frame effective area ratio and the virtual object effective area ratio exceed 0.8, the collision frame is close to the virtual object region and can be determined as the collision frame of that region.
For example, referring to FIG. 5, FIG. 5 is a schematic diagram of exemplary collision frame determination provided by an embodiment of the present application. As shown in FIG. 5, in the frame image to be processed 5-1, for the virtual object region 5-11: collision frame 5-12 has a small collision frame effective area ratio and a virtual object effective area ratio of 1, because the collision frame is too large; collision frame 5-13 has a collision frame effective area ratio of 1 and a small virtual object effective area ratio, because the collision frame is too small; and collision frame 5-14 has a collision frame effective area ratio of 0.81 and a virtual object effective area ratio of 0.81, both large; compared with collision frames 5-12 and 5-13, collision frame 5-14 therefore approximates the virtual object region 5-11 best.
In the embodiment of the present application, once the collision frame determination device obtains the area ratio of each collision frame, it collects them to obtain the area ratios of all the candidate collision frames; each collision frame corresponds to one area ratio.
S405, determining the collision frame with the largest of the area ratios as the virtual collision frame of the virtual object region.
In the embodiment of the present application, each area ratio comprises a collision frame effective area ratio and a virtual object effective area ratio, and the maximum area ratio is the area ratio for which both component ratios are large; for example, the two ratios are closest to each other and both exceed the ratio threshold.
The virtual collision frame is the collision frame of the virtual object region.
For example, referring to FIG. 6, FIG. 6 is a diagram of an exemplary virtual collision box provided by an embodiment of the present application; FIG. 6 shows the virtual collision frame 6-12 corresponding to the virtual object region 6-11 in the frame image to be processed 6-1, as determined by the collision frame determination method provided in the embodiment of the present application.
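Putting S402 through S405 together, the following is a minimal sketch under the same mask-and-box assumptions as above; scoring each candidate by the smaller of its two ratios is one plausible reading of "both ratios are large", not a formula stated in the patent.

```python
import numpy as np

def best_collision_box(mask: np.ndarray, boxes):
    """Return the candidate box whose worse effective-area ratio is largest,
    together with that score."""
    obj_area = int(mask.sum())
    best, best_score = None, -1.0
    for x0, y0, x1, y1 in boxes:
        overlap = int(mask[y0:y1, x0:x1].sum())        # overlap region (S402)
        box_ratio = overlap / ((y1 - y0) * (x1 - x0))  # collision frame effective area ratio
        obj_ratio = overlap / obj_area                 # virtual object effective area ratio
        score = min(box_ratio, obj_ratio)              # "both ratios are large" (S404)
        if score > best_score:
            best, best_score = (x0, y0, x1, y1), score
    return best, best_score                            # virtual collision frame (S405)
```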
In the embodiment of the present application, when the area of each collision frame is equal to the virtual object region, the area ratio reduces to the collision frame effective area ratio, or equivalently the virtual object effective area ratio.
It should be noted that "equal" here means equal in size: the number of pixels of each collision frame may equal the number of pixels of the virtual object region, or their areas may be equal, and so on; this embodiment does not specifically limit this.
When the areas are equal, the collision frame effective area ratio and the virtual object effective area ratio are identical, so the area ratio of each collision frame can be represented by either one.
It can be understood that constraining each collision frame to have the same area as the virtual object region reduces the two approximation indices to one, turning a complex multi-objective optimization into a single-objective optimization and improving the practicability and efficiency of collision frame determination.
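Restated in symbols (with B the collision frame and V the virtual object region, both viewed as pixel sets):

```latex
% With |B| = |V|, the two effective-area ratios coincide,
% so the two-objective search collapses to a single objective:
\frac{|B \cap V|}{|B|} \;=\; \frac{|B \cap V|}{|V|}
\quad\Longrightarrow\quad
\arg\max_{B}\,\min\!\left(\frac{|B \cap V|}{|B|},\ \frac{|B \cap V|}{|V|}\right)
\;=\; \arg\max_{B}\,\frac{|B \cap V|}{|V|}.
```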
In the embodiment of the present application, S403 may be implemented by S4031 and S4032; that is, the collision frame determination device acquires the collision frame effective area ratio and the virtual object effective area ratio as follows.
S4031, calculating the ratio with the overlap region as the numerator and each collision frame as the denominator, obtaining the collision frame effective area ratio.
The collision frame effective area ratio is thus the ratio of the overlap region to the collision frame. Here, the collision frame determination device may compute it using the pixel counts of the overlap region and the collision frame as numerator and denominator, or using their areas, and so on; this embodiment does not specifically limit this.
S4032, calculating the ratio with the overlap region as the numerator and the virtual object region as the denominator, obtaining the virtual object effective area ratio.
The virtual object effective area ratio is thus the ratio of the overlap region to the virtual object region, likewise computable from pixel counts or from areas; this embodiment does not specifically limit this.
Referring to FIG. 7, FIG. 7 is a schematic flowchart of another alternative collision frame determination method provided by the embodiment of the present application. As shown in FIG. 7, in this embodiment, S404 is followed by S406 to S408; that is, after the collision frame determination device obtains the area ratios of the collision frames, the method further includes S406 to S408, described below.
S406, when the maximum of the area ratios is smaller than the ratio threshold, dividing the virtual object region into at least two sub virtual object regions.
In the embodiment of the present application, a ratio threshold is preset in the collision frame determination device, or acquired by it in real time; the threshold, for example 0.8, is used to judge whether the collision frame with the maximum area ratio approximates the virtual object region closely enough.
It should be noted that "the maximum area ratio is smaller than the ratio threshold" means that the collision frame effective area ratio and/or the virtual object effective area ratio within the maximum area ratio is smaller than the threshold; in that case the best single collision frame approximates the virtual object region poorly, so the collision frame determination device instead determines a plurality of collision frames as the virtual collision frame, based on the virtual object region.
Here, the collision frame determination device divides the virtual object region into at least two sub virtual object regions; for example, when the virtual object region is a virtual human region, the at least two sub virtual object regions may be a leg region and the region above the legs, or a head region, the region below the head and above the legs, and a leg region, and so on. Clearly, the sub virtual object regions together constitute the virtual object region.
It should further be noted that the sub virtual object regions may be independent or may overlap; for example, when they are the head region and the below-head region and the virtual object is bowing its head, the two regions may overlap; when they are the upper body region and the leg region and the virtual object is standing, the two regions do not overlap and are independent.
S407, obtaining the at least two sub virtual collision frames corresponding to the at least two sub virtual object regions.
It should be noted that the at least two sub virtual collision frames are the set formed by the sub virtual collision frame of each sub virtual object region: the sub virtual object regions correspond one-to-one to the sub virtual collision frames.
Each sub virtual collision frame is the sub collision frame with the target maximum sub-area ratio for its target sub virtual object region. Here, the target sub virtual object region is the sub virtual object region corresponding to that sub virtual collision frame; the target maximum sub-area ratio is the maximum sub-area ratio between the sub virtual collision frame and the target sub virtual object region, and may be obtained in the way the maximum area ratio is obtained in S402 to S405, or the sub collision frame best approximating the target sub virtual object region may be obtained by other methods; this embodiment does not specifically limit this.
S408, determining at least two sub virtual collision frames as virtual collision frames.
It should be noted that, when the maximum area occupancy is smaller than the occupancy threshold, the collision frame determination device uses the obtained at least two sub-virtual collision frames as the virtual collision frame of the virtual object area.
Exemplarily, referring to fig. 8, fig. 8 is a schematic diagram of another exemplary virtual collision box provided in the embodiment of the present application; as shown in fig. 8, the virtual collision frames corresponding to the virtual object area 6-11 in the frame image to be processed 6-1, as determined by the collision frame determination method provided in the embodiment of the present application, are the sub-virtual collision frame 8-12 and the sub-virtual collision frame 8-13.
It can be understood that, when the collision frame determination device determines that a single collision frame approximates the virtual object region poorly, it divides the virtual object region and uses the at least two sub-virtual collision frames corresponding to the at least two divided sub-virtual object regions as the virtual collision frame of the virtual object region; since a plurality of sub-virtual collision frames approximate the virtual object region more closely than a single collision frame does, the approximation degree of the virtual collision frame to the virtual object region is improved.
Accordingly, in the embodiment of the present application, S405 includes S4051: and when the maximum area occupation ratio in the area occupation ratios is larger than or equal to the occupation ratio threshold, determining the collision frame corresponding to the maximum area occupation ratio as a virtual collision frame of the virtual object area.
Referring to fig. 9, fig. 9 is a schematic flow chart of another alternative collision frame determining method provided in the embodiment of the present application; as shown in fig. 9, in the embodiment of the present application, S407 is followed by S409 and S410; that is, after the collision frame determination device acquires at least two sub virtual collision frames corresponding to at least two sub virtual object regions, the collision frame determination method further includes S409 and S410, which are described below.
And S409, when the maximum sub-region occupancy to be divided, corresponding to the sub-virtual object region to be divided and the sub-virtual collision frame to be divided, is smaller than the sub-occupancy threshold to be divided, iteratively dividing the sub-virtual object region to be divided until a division end condition is met, and then ending the division, wherein the sub-virtual object region to be divided is at least one of the at least two sub-virtual object regions, and the sub-virtual collision frame to be divided corresponds to the sub-virtual object region to be divided.
It should be noted that different divided regions of the virtual object region correspond to different sub-occupancy thresholds; for example, the sub-occupancy threshold for the head is higher, while the sub-occupancy threshold for the legs is lower. The collision frame determination device determines, based on the corresponding sub-occupancy thresholds, whether the approximation degree of the at least two sub-virtual collision frames to the at least two sub-virtual object regions meets the requirement; the requirement is not met when a corresponding sub-region occupancy is smaller than its sub-occupancy threshold, for example, when the highest sub-region occupancy (the maximum sub-region occupancy to be divided) corresponding to the upper body region (the sub-virtual object region to be divided) is smaller than the upper-body sub-occupancy threshold (the sub-occupancy threshold to be divided). Here, the sub-virtual object region to be divided is a sub-virtual object region that does not meet the requirement, and is one or more of the at least two sub-virtual object regions; the sub-virtual collision frame to be divided belongs to the at least two sub-virtual collision frames and is the at least one sub-collision frame corresponding to the sub-virtual object region to be divided; the sub-occupancy threshold to be divided is the at least one sub-occupancy threshold corresponding to the sub-virtual object region to be divided. Here, a sub-occupancy threshold may be greater than the occupancy threshold.
In the embodiment of the application, the collision frame determination device iteratively divides the sub-virtual object region to be divided, and determines that the division end condition is met when the number of currently divided sub-virtual object regions equals the maximum number threshold, or when the divided maximum sub-region occupancy corresponding to the divided sub-virtual object regions is greater than the corresponding divided sub-occupancy threshold. Here, the currently divided sub-virtual object regions are the sub-virtual object regions into which the virtual object region has been divided; the divided sub-virtual object regions are the plurality of sub-virtual object regions obtained from the sub-virtual object region to be divided.
And S410, determining each sub-virtual collision frame corresponding to each sub-virtual object area which is divided currently as a virtual collision frame.
It should be noted that each currently divided sub-virtual object region is a sub-virtual object region into which the virtual object region has been divided, such as a head region, a torso region, and a leg region. When the maximum sub-region occupancy to be divided is smaller than the sub-occupancy threshold to be divided, the collision frame determination device determines each sub-virtual collision frame as the virtual collision frame of the virtual object region. Here, the number of sub-virtual collision frames in each sub-virtual collision frame is greater than the number of sub-virtual collision frames in the at least two sub-virtual collision frames; and each sub-virtual collision frame and the at least two sub-virtual collision frames may or may not have sub-virtual collision frames in common, which is not specifically limited in this embodiment of the present application.
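For illustration only, the S409-S410 loop might be sketched as follows; the region kinds, the `split_fn` and `best_box_fn` helpers, and the threshold table are assumptions introduced here (with `best_box_fn` standing in for the per-region search of S402-S405), not names from the patent:

```python
def refine_boxes(regions, best_box_fn, split_fn, sub_thresholds, max_regions=3):
    """Iteratively re-divide any sub-region whose best sub-box misses its
    sub-occupancy threshold (S409), then return the boxes of all currently
    divided sub-regions (S410).  Stops when every ratio clears its threshold
    or the region count reaches the maximum number threshold.

    regions:        list of (kind, mask) pairs, e.g. [("upper_body", m1), ("legs", m2)]
    best_box_fn:    mask -> (box, ratio), the fixed-area search of S402-S405
    split_fn:       (kind, mask) -> list of (kind, mask), e.g. upper body -> head + torso
    sub_thresholds: kind -> per-region sub-occupancy threshold
    """
    scored = [(kind, mask, *best_box_fn(mask)) for kind, mask in regions]
    while len(scored) < max_regions:
        weak = [i for i, (kind, _, _, ratio) in enumerate(scored)
                if ratio < sub_thresholds[kind]]
        if not weak:
            break                                  # all sub-boxes are close enough
        kind, mask, _, _ = scored.pop(weak[0])     # re-divide the weakest region
        for part_kind, part_mask in split_fn(kind, mask):
            scored.append((part_kind, part_mask, *best_box_fn(part_mask)))
    return [box for (_, _, box, _) in scored]
```

Seeding this with the upper body and leg regions and a maximum number threshold of 3 reproduces the head/torso/legs outcome traced in the example below.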
Accordingly, in the present embodiment, S408 includes S4081: when the maximum sub-region occupancy to be divided is greater than or equal to the sub-occupancy threshold to be divided, determining the at least two sub-virtual collision frames as the virtual collision frame.
Illustratively, referring to fig. 10, fig. 10 is a schematic diagram of another exemplary virtual collision box provided by the embodiment of the present application; as shown in fig. 10, the virtual collision frames corresponding to the virtual object area 6-11 in the frame image to be processed 6-1, as determined by the collision frame determination method provided in the embodiment of the present application, are the sub-virtual collision frame 10-12, the sub-virtual collision frame 10-13, and the sub-virtual collision frame 10-14.
It should be noted that the virtual collision frame 6-12 in fig. 6 is the collision frame corresponding to the maximum area occupancy; since the occupancy threshold is 0.8 and the maximum area occupancies are 0.76 and 0.76, the maximum area occupancy is smaller than the occupancy threshold, and the virtual object area 6-11 is therefore divided into two parts, the upper body region and the leg region (the at least two sub-virtual object regions); the sub-virtual collision frame 8-12 and the sub-virtual collision frame 8-13 in fig. 8 are the at least two sub-virtual collision frames corresponding to these at least two sub-virtual object regions. Since the sub-occupancy threshold corresponding to the upper body region is 0.81 (the sub-occupancy threshold to be divided) and the maximum sub-region occupancies corresponding to the upper body region (the maximum sub-region occupancy to be divided) are 0.79 and 0.79, the maximum sub-region occupancy to be divided is smaller than the sub-occupancy threshold to be divided, and the upper body region is further divided into a head region and a torso region. The currently divided sub-virtual object regions corresponding to the virtual object area 6-11 then comprise the head region, the torso region, and the leg region, i.e., 3 regions; if the division end condition is that the maximum number threshold is 3, the division ends. The sub-virtual collision frame 10-12, the sub-virtual collision frame 10-13, and the sub-virtual collision frame 10-14 in fig. 10 are the sub-virtual collision frames corresponding to the currently divided sub-virtual object regions.
It can be understood that the approximation degree of the virtual collision box to the virtual object region can be improved by iteratively dividing the region with a lower approximation degree to obtain the virtual collision box including each sub-virtual collision box. It is easy to know that the approximation degree of the sub virtual collision frame 8-12 and the sub virtual collision frame 8-13 in fig. 8 to the virtual object region 6-11 is higher than the approximation degree of the virtual collision frame 6-12 in fig. 6 to the virtual object region 6-11; the approximation degree of the sub virtual collision frame 10-12, the sub virtual collision frame 10-13 and the sub virtual collision frame 10-14 in fig. 10 to the virtual object region 6-11 is higher than the approximation degree of the sub virtual collision frame 8-12 and the sub virtual collision frame 8-13 in fig. 8 to the virtual object region 6-11.
In the embodiment of the present application, S406 may be implemented by S4061 and S4062; that is, the process in which the collision frame determination device divides the virtual object area into at least two sub-virtual object areas includes S4061 and S4062, and each step is described below.
S4061, the dividing line position of the virtual object region is adjusted, and the virtual object region is divided into various region combinations.
It should be noted that the dividing lines are used to divide the virtual object region into a plurality of regions, and each of the various region combinations includes at least two initial sub-virtual object regions; here, when the at least two initial sub-virtual object regions are i regions, there are i-1 dividing lines, where i is an integer not less than 2. The collision frame determination device continually adjusts the positions of the dividing lines, and each distinct dividing-line position corresponds to one division result, i.e., one region combination.
Exemplarily, referring to fig. 11, fig. 11 is a schematic diagram of an exemplary combination of regions provided by an embodiment of the present application; as shown in fig. 11, for the virtual object region 11-11 in the frame image to be processed 11-1, the division line 11-21 corresponds to a region combination including the region 11-111 to the region 11-113, the division line 11-22 corresponds to a region combination including the region 11-114 to the region 11-116, and the division line 11-23 corresponds to a region combination including the region 11-117 to the region 11-119. Here, each of the region 11-111 through the region 11-113, the region 11-114 through the region 11-116, and the region 11-117 through the region 11-119 is at least two initial sub-virtual object regions.
S4062, obtaining a target area combination from the various area combinations, wherein the target area combination comprises at least two sub-virtual object areas.
It should be noted that the target area combination is the combination whose corresponding at least two sub-virtual collision frames most closely approximate the virtual object area; it is the optimal area combination.
It is understood that the collision frame determination apparatus can improve the accuracy of the determined virtual collision frame by adjusting the dividing line position to obtain the best dividing result.
In this embodiment of the application, when the at least two initial sub-virtual object regions include an initial upper body region and an initial leg region, S4062 may be implemented by S40621 to S40624, where the initial upper body region is a region corresponding to an upper body of a virtual object, the initial leg region is a region corresponding to a leg of the virtual object, the initial upper body region and the initial leg region jointly form a virtual object region, and the virtual object is an attacked object rendered in a virtual scene; that is, the collision frame determination apparatus acquires target area combinations including S40621 to S40624 from various area combinations, and the respective steps are explained below.
S40621, obtaining each sub collision frame corresponding to the initial upper body area in each area combination of the area combinations.
It should be noted that the collision frame determination device calculates the maximum sub-area proportion of the initial upper body area in each area combination to obtain the optimal area combination. Here, the obtaining manner of each sub collision frame is similar to the obtaining manner of each collision frame corresponding to the virtual object region, and details of the embodiment of the present application are not repeated here.
S40622, based on the sub-overlapping area of each sub-collision frame in the sub-collision frames and the initial upper body area, determining the proportion of the sub-areas corresponding to the sub-collision frames.
In this embodiment of the present application, an obtaining manner of the sub-overlapping area is similar to that of the overlapping area, and an obtaining manner of the occupation ratio of each sub-area is similar to that of each area, which is not described herein again in this embodiment of the present application.
S40623, acquiring the maximum sub-region occupation ratio from the sub-region occupation ratios, and thus acquiring the maximum sub-region occupation ratios corresponding to various region combinations.
It should be noted that after the collision frame determination device obtains the maximum sub-region occupancy corresponding to the initial upper body region in each region combination, the maximum sub-region occupancies corresponding to the various region combinations are thereby obtained. Here, the various region combinations correspond one-to-one to the respective maximum sub-region occupancies.
S40624, determining the region combination corresponding to the largest of the maximum sub-region occupancies as the target region combination including the upper body region and the leg region, thereby obtaining at least two sub-virtual object regions including the upper body region and the leg region.
It should be noted that the largest maximum sub-region occupation ratio is the target maximum sub-region occupation ratio; the upper body area is the initial upper body area in the area combination corresponding to the largest maximum sub-area occupation ratio, and the leg area is the initial leg area in the area combination corresponding to the largest maximum sub-area occupation ratio.
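A minimal sketch of S4061-S4062 and S40621-S40624 follows, under the assumption that the virtual object region is given as a boolean pixel mask; `best_single_box` is an assumed helper implementing the fixed-area search of S402-S405 (one possible sketch of it appears later, in the example flow), and the row step is an illustrative choice:

```python
import numpy as np

def best_split(mask: np.ndarray, row_step: int = 4):
    """Slide the horizontal dividing line over the virtual object mask,
    score the initial upper body region of each candidate region
    combination with the single-box search, and keep the combination
    whose maximum sub-region occupancy is largest."""
    H = mask.shape[0]
    best = None                                  # (ratio, split_row, upper_box)
    for y in range(row_step, H, row_step):       # candidate dividing-line rows
        upper, legs = mask[:y], mask[y:]
        if not upper.any() or not legs.any():
            continue                             # both parts must be non-empty
        box, ratio = best_single_box(upper)      # maximum sub-region occupancy
        if best is None or ratio > best[0]:
            best = (ratio, y, box)
    ratio, y, upper_box = best
    return y, upper_box, ratio                   # target region combination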
In the embodiment of the present application, S407 may be implemented by S4071 to S4074; that is, the collision frame determination device acquires at least two sub-virtual collision frames corresponding to at least two sub-virtual object regions, including S4071 to S4074, and the following describes each step separately.
S4071, determining the sub collision frame corresponding to the largest maximum sub-region occupation ratio as a first sub virtual collision frame corresponding to the upper body region.
It should be noted that the sub-collision frame corresponding to the largest maximum sub-region occupancy is the sub-virtual collision frame corresponding to the upper body region, and is referred to herein as the first sub-virtual collision frame.
S4072, obtaining a leg midline corresponding to the leg area.
It should be noted that, since the legs may comprise four legs, two legs, and so on, the legs are separated from one another; therefore, the collision frame determination device may determine the sub-virtual collision frame of the leg region based on the maximum sub-region occupancy, or based on the region characteristics of the leg region. When the collision frame determination device determines the sub-virtual collision frame of the leg region based on the region characteristics of the leg region, it first determines the leg midline of the leg region. Here, the leg region may be the region corresponding to the maximum bounding frame of the legs.
S4073, combining the leg region and the extension parameter, extending to both sides with the leg midline as the center.
In the embodiment of the present application, the collision frame determination device can acquire an extension parameter used for determining the degree of extension, such as 1/4. Here, the collision frame determination device determines the degree of extension based on the leg region and the extension parameter, and then extends to both sides with the leg midline as the center according to that degree; the extension amount is, for example, 1/4 of the leg region.
S4074, determining the extended frame as a second sub-virtual collision frame, and thus obtaining at least two sub-virtual collision frames including the first sub-virtual collision frame and the second sub-virtual collision frame.
It should be noted that the extended frame is a sub-virtual collision frame corresponding to the leg region, and is referred to as a second sub-virtual collision frame herein. At this time, the at least two sub virtual collision frames are the sub virtual collision frame corresponding to the upper body region and the sub virtual collision frame corresponding to the leg region, that is, the first sub virtual collision frame and the second sub virtual collision frame.
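S4072-S4074 might be sketched as follows; treating the 1/4 extension parameter as a fraction of the leg region's width is our assumption, since the text leaves the exact measure open:

```python
import numpy as np

def leg_box(leg_mask: np.ndarray, extension: float = 0.25):
    """Locate the leg midline so that the leg area on each side of it is
    equal, then extend symmetrically from the midline to both sides to
    form the second sub-virtual collision box (left, top, w, h)."""
    cols = leg_mask.sum(axis=0)                       # leg pixels per column
    cum = np.cumsum(cols)
    midline = int(np.searchsorted(cum, cum[-1] / 2))  # equal leg area on both sides
    occupied = np.flatnonzero(cols)
    width = occupied[-1] - occupied[0] + 1            # width of the leg region
    x0 = max(0, int(midline - extension * width))
    x1 = min(leg_mask.shape[1], int(midline + extension * width) + 1)
    rows = np.flatnonzero(leg_mask.any(axis=1))
    return (x0, int(rows[0]), x1 - x0, int(rows[-1] - rows[0] + 1))
```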
In the embodiment of the application, when the sub-virtual object region to be divided is the upper body region, the sub-virtual collision frame to be divided is the first sub-virtual collision frame, and the maximum sub-region occupancy to be divided is the largest maximum sub-region occupancy; at this time, the dividing of the sub-virtual object region to be divided by the collision frame determination device in S409 includes S4091, which is described below.
And S4091, dividing the upper body area based on the target part included in the upper body area, and obtaining a first sub upper body area and a second sub upper body area.
It should be noted that the second sub-virtual collision frame, the third sub-virtual collision frame corresponding to the first sub upper body region, and the fourth sub-virtual collision frame corresponding to the second sub upper body region together constitute each sub-virtual collision frame. In addition, the determination manner of the third sub-virtual collision frame and that of the fourth sub-virtual collision frame are both similar to the determination manner of the collision frame corresponding to the maximum area occupancy in S405, and this is not specifically limited in this embodiment of the present application. Here, the upper body region includes target parts such as the head and the torso.
In the embodiment of the present application, when the first sub upper body region is the head region, the third sub virtual collision frame is a rectangular frame, for example, a square. And when the first sub-upper body region is the head region, the second sub-upper body region is the torso region, i.e., a region below the head region and above the leg regions.
Referring to fig. 12, fig. 12 is a schematic flowchart of yet another alternative collision frame determining method provided in the embodiment of the present application; as shown in fig. 12, in the embodiment of the present application, in S401, the collision frame determination device obtains a to-be-processed frame image corresponding to a to-be-processed animation, which includes S4011 and S4012, and the following description describes each step separately.
And S4011, displaying a frame image sequence of the animation to be processed.
It should be noted that the frame image sequence is the frame sequence corresponding to the animation to be processed, and can be obtained through framing processing. By displaying the frame image sequence, the collision frame determination device enables the user to select, from the displayed frame image sequence, the frame images on which collision frame determination is to be performed.
S4012, in response to a frame image selection operation for the frame image sequence, determining a frame image sequence to be processed, wherein the frame image to be processed is any frame image in the frame image sequence to be processed.
In the embodiment of the application, the frame image sequence is displayed in a triggerable manner, or a selection control for triggering the selection of frame images from the frame image sequence is also displayed. When the user triggers the frame image sequence displayed in a triggerable manner, or triggers the selection control, to select the frame images to be subjected to collision frame determination from the displayed frame image sequence, the collision frame determination device receives a frame image selection operation for the frame image sequence; the collision frame determination device responds to the frame image selection operation, and the obtained frame images to be subjected to collision frame determination constitute the frame image sequence to be processed.
With continued reference to fig. 12, after determining the virtual collision frame (i.e., S405 or S408 or S410), the collision frame determination method further includes S411 and S412, which are described below.
S411, combining the frame image to be processed and the virtual collision frame to obtain a frame image to be rendered, so as to obtain a frame image sequence to be rendered corresponding to the frame image sequence to be processed.
In the embodiment of the present application, the collision frame determination device adds the obtained information corresponding to the virtual collision frame to the data corresponding to the frame image to be processed, so that the combination of the frame image to be processed and the virtual collision frame is completed, and the result of the combination is the frame image to be rendered. Here, after the collision frame determination device obtains the corresponding frame image to be rendered for each frame image to be processed in the frame image sequence to be processed, a set of frame images to be rendered corresponding to each frame image to be processed in the frame image sequence to be processed is the frame image sequence to be rendered.
It should be noted that the frame image to be rendered is used for rendering a virtual scene including a virtual object, and the virtual collision box is used for determining whether an operation instruction acting on a page of the virtual scene touches the virtual object.
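For illustration, the run-time use of the stored boxes can be as simple as a point-in-rectangle test; the (left, top, width, height) box format and the function name are assumptions, not from the patent:

```python
def hits_virtual_object(point, collision_boxes):
    """Return True if an operation instruction at `point` = (x, y) on the
    virtual scene page lands inside any of the object's virtual collision
    boxes, each stored as (left, top, width, height) in image coordinates."""
    x, y = point
    return any(l <= x < l + w and t <= y < t + h
               for (l, t, w, h) in collision_boxes)
```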
And S412, updating the frame image sequence to be processed in the frame image sequence based on the frame image sequence to be rendered, and obtaining the animation to be rendered.
It should be noted that the to-be-rendered animation is the to-be-processed animation in which the determination of the collision box is completed. Here, the collision frame determination device may further obtain a virtual collision frame sequence corresponding to the sequence of frame images to be processed based on a determination manner of the virtual collision frame corresponding to the frame images to be processed; and correspondingly adding the virtual collision frame sequence into the frame image sequence to be processed in the frame image sequence, thereby obtaining the animation to be rendered.
In this embodiment of the present application, the manner of obtaining the duty ratio threshold and the sub-duty ratio threshold includes: the collision frame determination equipment displays a threshold setting control, wherein the threshold setting control is used for triggering the setting of the occupation ratio threshold and the sub occupation ratio threshold; in response to a threshold setting operation acting on the threshold setting control, an occupancy threshold and a sub-occupancy threshold are obtained. Here, the threshold setting control may be displayed on the same page as the frame image sequence.
In the embodiment of the present application, after determining the virtual collision frame (i.e., S405, S408, or S410), the collision frame determination method further includes S413 and S414, which are described below.
And S413, acquiring the effective area occupation ratio of the target collision frame and the effective area occupation ratio of the target virtual object corresponding to the virtual collision frame.
It should be noted that, when the virtual collision frame is the collision frame corresponding to the maximum area occupancy, the target collision frame effective area occupancy and the target virtual object effective area occupancy are the maximum area occupancy. When the virtual collision frame is the at least two sub-virtual collision frames, the target collision frame effective area occupancy and the target virtual object effective area occupancy are the maximum sub-region occupancies of the sub-virtual collision frames among the at least two sub-virtual collision frames, or the corresponding average value. When the virtual collision frame is each sub-virtual collision frame, the target collision frame effective area occupancy and the target virtual object effective area occupancy are the maximum sub-region occupancies of the sub-virtual collision frames among each sub-virtual collision frame, or the corresponding average value.
And S414, displaying the effective area occupation ratio of the target collision frame and the effective area occupation ratio of the target virtual object.
It can be understood that, by displaying the target collision frame effective area occupancy and the target virtual object effective area occupancy, the collision frame determination device can, based on the displayed values, perform targeted adjustment on collision frames with a lower degree of approximation, thereby improving the accuracy of the collision frames.
In the embodiment of the present application, after determining the virtual collision frame (i.e., S405, S408, or S410), the collision frame determination method further includes S415 and S416, which are described below.
S415, acquiring a target frame image comprising a virtual object area from the animation to be processed.
It should be noted that, in the animation to be processed, the virtual object regions in multiple frame images may be identical or nearly identical; the frame images whose virtual object regions are identical or nearly identical are the target frame images.
And S416, determining the virtual collision frame as a collision frame of each frame image in the target frame image.
In the embodiment of the application, the collision frame determination device directly determines the obtained virtual collision frame as the collision frame of each frame of image in the target frame of image, so as to improve the determination efficiency of the collision frame of the animation to be processed.
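One possible way to realize this reuse is to key the computed boxes by the exact region mask, as in the following sketch (the hashing choice and names are ours; frames whose regions are merely close, rather than identical, would need a tolerance-based comparison instead):

```python
import hashlib

_box_cache = {}

def boxes_for_frame(mask, compute_boxes):
    """Reuse virtual collision boxes across frames whose virtual object
    regions are identical: key the cache by the region mask's bytes and
    run the (expensive) brute-force determination only on a cache miss."""
    key = hashlib.sha1(mask.tobytes()).hexdigest()
    if key not in _box_cache:
        _box_cache[key] = compute_boxes(mask)
    return _box_cache[key]
```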
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
Referring to fig. 13, fig. 13 is a schematic flowchart illustrating an exemplary process of determining a collision frame according to an embodiment of the present application; as shown in fig. 13, the exemplary process of determining a collision frame includes:
and S1301, starting.
S1302, acquiring a sequence frame image (to-be-processed frame image), a first preset value (occupation ratio threshold) and a second preset value (sub-occupation ratio threshold).
It should be noted that, referring to fig. 14, fig. 14 is a schematic page diagram of an exemplary triggered collision box determination provided in the embodiment of the present application; as shown in FIG. 14, displayed on the page 14-1 are an animation frame sequence 14-11 (frame image sequence) of a frame-by-frame fighting-game animation, a threshold setting control 14-12, and a "generate bump box" button 14-13. When the user clicks on the animation frame sequence 14-11, selects the sequence frame image 14-111 from the animation frame sequence 14-11, inputs the first preset value 0.8 and the second preset value 0.82, and clicks the button 14-13, the sequence frame image 14-111, the first preset value (0.8), and the second preset value (0.82) are obtained.
S1303, the area of the virtual character (virtual object region) in the sequence frame image is determined.
S1304, calculating by brute force the area occupancies of collision frames of different shapes and positions (the area occupancies corresponding to the collision frames), where the area of each collision frame is equal to the area of the virtual character.
It should be noted that the accuracy of a collision frame can be measured by using the following two indexes:
collision frame effective area occupancy = virtual character area within the collision frame / total area of the collision frame
virtual character effective area occupancy = virtual character area within the collision frame / total area of the virtual character
If only the collision frame effective area occupancy is made as large as possible, the collision frame may be too small; if only the virtual character effective area occupancy is made as large as possible, the collision frame may be too large and exceed the virtual character. Therefore, only when both indexes are as large as possible is the accuracy of the collision frame high, i.e., the approximation degree of the collision frame to the virtual character high.
It should be further noted that the area of the collision frame is fixed to be the same as the area of the virtual character (both may be measured as the number of pixels); the collision frame effective area occupancy and the virtual character effective area occupancy are then equal, so the complex multi-objective optimization (over both the collision frame effective area occupancy and the virtual character effective area occupancy) reduces to a single-objective optimization (over either one).
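Stated with explicit notation (B is the collision frame, C is the virtual character region, and |·| counts pixels; the symbols are introduced here, not taken from the patent):

```latex
r_{\mathrm{box}} = \frac{|B \cap C|}{|B|}, \qquad
r_{\mathrm{char}} = \frac{|B \cap C|}{|C|}, \qquad
|B| = |C| = A \;\Longrightarrow\; r_{\mathrm{box}} = r_{\mathrm{char}} = \frac{|B \cap C|}{A}.
```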
Here, the outline of the virtual character is first approximated using a collision box.
And S1305, determining the collision frame corresponding to the maximum area occupancy based on the brute-force calculation result.
It should be noted that the method for determining a single collision frame is as follows: fix the area of the collision frame to be the same as the area of the virtual character, calculate by brute force the collision frame effective area occupancy over different shapes and positions, and take the result with the maximum collision frame effective area occupancy as the optimal solution for approximating the virtual character with one collision frame.
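A minimal sketch of this brute-force search, assuming the virtual character is given as a boolean pixel mask and candidate boxes are axis-aligned rectangles sampled on a coarse grid (the grid step and all names are our assumptions):

```python
import numpy as np

def best_single_box(mask: np.ndarray, step: int = 4):
    """Brute-force search for the axis-aligned box that best approximates
    a character mask, under the constraint that the box area is fixed to
    the character's pixel area; `mask` is a boolean HxW array marking
    virtual-character pixels."""
    H, W = mask.shape
    area = int(mask.sum())                       # character area in pixels
    # summed-area table, padded so rectangle sums need no bounds checks
    integral = np.pad(mask.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

    best_ratio, best_box = 0.0, None
    for h in range(step, min(H, area) + 1, step):
        w = max(1, round(area / h))              # keep box area ~ character area
        if w > W:
            continue
        for top in range(0, H - h + 1, step):
            for left in range(0, W - w + 1, step):
                # character pixels inside the box, via the summed-area table
                overlap = (integral[top + h, left + w] - integral[top, left + w]
                           - integral[top + h, left] + integral[top, left])
                ratio = overlap / area           # equal-area case: both indexes coincide
                if ratio > best_ratio:
                    best_ratio, best_box = ratio, (left, top, w, h)
    return best_box, best_ratio
```

The returned ratio is the shared effective area occupancy; comparing it against the first preset value (0.8 in the example) decides between S1311 and S1307.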
S1306, judging whether the maximum area ratio is larger than or equal to a first preset value; if so, S1311 is performed, otherwise S1307 is performed.
S1307 adjusts the leg height (dividing line height) of the virtual character to obtain the upper body part (upper body region) having the largest region occupation ratio (largest maximum sub-region occupation ratio).
It should be noted that, the maximum area occupation ratio is smaller than the first preset value, which indicates that the approximation degree of a collision box to the virtual character is low; thus, the virtual character is approximated by using two collision frames.
S1308, a collision frame (second sub-virtual collision frame) of the leg (leg region) is acquired.
It should be noted that, because the legs are separated, the corresponding collision frame is determined for the legs based on the leg characteristics. A height is thus determined to separate the leg portion from the upper body portion: the leg height is adjusted so that the area occupancy of the upper body portion is the highest, at which point the collision frame of the upper body portion and the dividing line of the legs are determined, and the leg portion is thereby determined. Since the middle of the legs is blank, the leg midline is determined first so that the leg areas on both sides of the midline are equal; the frame obtained by extending outward from the midline by 1/4 of the leg region on each side is the collision frame of the legs.
S1309, judging whether the maximum area occupation ratio is greater than or equal to a second preset value; if so, S1312 is performed, and if not, S1310 is performed.
It should be noted that, if the maximum area occupation ratio is smaller than the second preset value, it indicates that the approximation degree of the two collision boxes to the virtual character is still low, and at this time, the three collision boxes are adopted to approximate the virtual character.
S1310, the collision frame of the head in the upper body part, the collision frame of the trunk in the upper body part, and the collision frame of the legs are set as the determined collision frames. S1313 is performed.
It should be noted that, the manner of obtaining the collision frame of the head and the collision frame of the torso is similar to the manner of determining the collision frame of the upper body, and the details of the embodiment of the present application are not repeated here.
In addition, the collision frame of the head may be square, the maximum area occupancy of the collision frame of the head is higher than the head occupancy threshold (here, the head occupancy threshold may be the highest threshold), and the collision frame of the head may be located between the upper-left and upper-right portions of the upper body.
S1311, the collision frame corresponding to the maximum area occupation ratio is used as the determined collision frame. S1314 is performed.
S1312 determines the collision frame corresponding to the largest area ratio and the collision frame of the leg as the determined collision frame. S1314 is performed.
S1313, outputs the determined collision frame (virtual collision frame) and the corresponding area ratio (target collision frame effective area ratio and target virtual object effective area ratio).
It should be noted that, when the determined collision frame is one collision frame, the corresponding area occupation ratio is the maximum area occupation ratio; when the determined collision frames are two collision frames, the corresponding area occupation ratio is the maximum area occupation ratio corresponding to the upper body part; when the determined collision frame is three collision frames, the corresponding area occupation ratio is the ratio of the sum of the area of the virtual character in the collision frame of the head and the area of the virtual character in the collision frame of the trunk to the area of the upper body part.
Here, the maximum number of collision frames is set to 3 (maximum number threshold). In addition, when the number of the determined collision frames is 3, one collision frame is used for approximating the trunk of the virtual object, the other collision frame is used for approximating the legs, and the last collision frame is used for approximating the head of the virtual object; and the importance degrees of the three collision frames are decreased in sequence.
And S1314, ending.
Referring to fig. 15 and 16, fig. 15 and 16 are schematic views of results obtained using an exemplary determine collision frame flow corresponding to fig. 13; as shown in fig. 15, in the sequence frame image 15-1, the collision boxes of the virtual character 15-11 are the collision box 15-12, the collision box 15-13, and the collision box 15-14; wherein the leg midline is dashed line 15-15. Similarly, as shown in FIG. 16, in the sequence frame image 16-1, the collision boxes of the virtual character 16-11 are the collision box 16-12, the collision box 16-13, and the collision box 16-14; wherein the leg midline is dashed line 16-15.
In summary, referring to fig. 17, fig. 17 is a schematic diagram of an exemplary determination of an animation to be rendered according to an embodiment of the present application; referring to fig. 17, for the frame-by-frame animation 17-1 of the fighting game, the exemplary process of determining the collision frame corresponding to fig. 13 is adopted to determine the collision frame 17-3 of the sequence frame image 17-2, and the collision frame 17-3 is written into the logical data corresponding to the sequence frame image 17-2 in the frame-by-frame animation 17-1, so that the animation 17-4 to be rendered is obtained.
On one hand, the approximation degree of the collision frame to the virtual object area is determined through the two indexes so as to automatically generate the collision frame, which can improve the production efficiency of virtual objects in frame-by-frame-animation fighting games. On the other hand, since the collision frames are generated based on the same rule, the range of the collision frame corresponding to each virtual object basically follows the same rule, which can improve the rendering effect of virtual objects in virtual scene rendering applications involving operations and micro-operations. On yet another hand, because the approximation degree of the collision frame to the virtual object area is quantified by the two indexes, a collision frame with a low approximation degree can be modified in a targeted manner, for example drawn manually, so that the approximation degree of the collision frame to the virtual object area can be improved.
Continuing with the exemplary structure of the collision frame determination device 255 provided by the embodiments of the present application as software modules, in some embodiments, as shown in fig. 3, the software modules stored in the collision frame determination device 255 of the memory 250 may include:
an image obtaining module 2551, configured to obtain a frame image to be processed corresponding to the animation to be processed, and obtain each collision frame positively correlated to a virtual object region in the frame image to be processed;
a region determining module 2552, configured to determine a coincidence region of each collision frame of the respective collision frames with the virtual object region;
a proportion determining module 2553, configured to obtain a proportion of collision frame effective areas corresponding to the overlapping area and each of the collision frames, and a proportion of virtual object effective areas corresponding to the overlapping area and the virtual object area;
the occupation ratio determining module 2553 is further configured to determine the effective area occupation ratio of the collision frame and the effective area occupation ratio of the virtual object as an area occupation ratio corresponding to each collision frame, so as to obtain each area occupation ratio corresponding to each collision frame;
a collision frame determining module 2554, configured to determine a collision frame corresponding to the largest area ratio of the area ratios as a virtual collision frame of the virtual object area.
In this embodiment of the application, the collision frame determining apparatus 255 further includes a region dividing module 2555, configured to divide the virtual object region into at least two sub virtual object regions when the maximum region occupancy is smaller than a occupancy threshold; acquiring at least two sub-virtual collision frames corresponding to the at least two sub-virtual object areas, wherein each sub-virtual collision frame in the at least two sub-virtual collision frames is a sub-collision frame corresponding to a target maximum sub-area occupation ratio of a target sub-virtual object area corresponding to the at least two sub-virtual object areas; and determining the at least two sub-virtual collision frames as the virtual collision frame.
In this embodiment of the application, the region dividing module 2555 is further configured to, when the maximum sub-region occupancy to be divided, corresponding to the sub-virtual object region to be divided and the sub-virtual collision frame to be divided, is smaller than the sub-occupancy threshold to be divided, iteratively divide the sub-virtual object region to be divided until a division end condition is met, and end the division, where the sub-virtual object region to be divided is at least one of the at least two sub-virtual object regions, and the sub-virtual collision frame to be divided corresponds to the sub-virtual object region to be divided; and determine each sub-virtual collision frame corresponding to each currently divided sub-virtual object region as the virtual collision frame.
In this embodiment of the application, the region dividing module 2555 is further configured to adjust a dividing line position of the virtual object region, and divide the virtual object region into various region combinations, where each of the various region combinations includes at least two initial sub-virtual object regions; and acquiring a target area combination from the various area combinations, wherein the target area combination comprises the at least two sub-virtual object areas, and the at least two sub-virtual collision frames corresponding to the target area combination are most approximate to the virtual object area.
In this embodiment of the application, when the at least two initial sub-virtual object regions include an initial upper body region and an initial leg region, the region dividing module 2555 is further configured to obtain each sub-collision frame corresponding to the initial upper body region in each of the various region combinations; determining each sub-region occupation ratio corresponding to each sub-collision frame based on a sub-coincidence region of each sub-collision frame in the sub-collision frames and the initial upper body region; obtaining the maximum sub-region occupation ratio from the sub-region occupation ratios so as to obtain the maximum sub-region occupation ratios corresponding to the various region combinations; determining a region combination corresponding to a largest sub-region occupation ratio among the largest sub-region occupation ratios as the target region combination including an upper body region and a leg region, thereby obtaining the at least two sub-virtual object regions including the upper body region and the leg region, wherein the largest sub-region occupation ratio is the target largest sub-region occupation ratio.
In this embodiment of the application, the region dividing module 2555 is further configured to determine the sub-collision frame corresponding to the largest maximum sub-region occupancy as a first sub-virtual collision frame corresponding to the upper body region; acquire a leg midline corresponding to the leg region; extend to both sides with the leg midline as the center, in combination with the leg region and the extension parameter; and determine the extended frame as a second sub-virtual collision frame, thereby obtaining the at least two sub-virtual collision frames including the first sub-virtual collision frame and the second sub-virtual collision frame.
In this embodiment of the application, when the sub-virtual object region to be divided is the upper body region, the sub-virtual collision frame to be divided is the first sub-virtual collision frame, and the maximum sub-region occupation ratio to be divided is the maximum sub-region occupation ratio; the region dividing module 2555 is further configured to divide the upper body region based on the object portion included in the upper body region to obtain a first sub upper body region and a second sub upper body region, where the second sub virtual collision frame, a third sub virtual collision frame corresponding to the first sub upper body region, and a fourth sub virtual collision frame corresponding to the second sub upper body region are the sub virtual collision frames.
In this embodiment of the application, when the area corresponding to each collision frame is equal to the virtual object area, the area occupation ratio is the collision frame effective area occupation ratio or the virtual object effective area occupation ratio.
In this embodiment of the application, the occupation ratio determining module 2553 is further configured to calculate a ratio by taking the overlapping area as a numerator and each collision frame as a denominator, and obtain the effective area occupation ratio of the collision frame; and taking the superposed region as a numerator and the virtual object region as a denominator, and calculating a ratio to obtain the virtual object effective region ratio.
In this embodiment of the application, the image obtaining module 2551 is further configured to display a frame image sequence of the animation to be processed; and responding to a frame image selection operation aiming at the frame image sequence, and determining a frame image sequence to be processed, wherein the frame image to be processed is any frame image in the frame image sequence to be processed.
In this embodiment of the application, the collision frame determining device 255 further includes a collision frame processing module 2556, configured to combine the frame image to be processed and the virtual collision frame to obtain a frame image to be rendered, so as to obtain a frame image sequence to be rendered corresponding to the frame image sequence to be processed; and updating the frame image sequence to be processed in the frame image sequence based on the frame image sequence to be rendered to obtain the animation to be rendered.
In this embodiment of the present application, the collision frame determining apparatus 255 further includes a threshold obtaining module 2557, configured to display a threshold setting control, where the threshold setting control is used to trigger the setting of the occupancy threshold; and obtain the occupancy threshold in response to a threshold setting operation acting on the threshold setting control.
In this embodiment of the application, the collision frame determining device 255 further includes an information display module 2558, configured to obtain a target collision frame effective area ratio and a target virtual object effective area ratio corresponding to the virtual collision frame; and displaying the effective area occupation ratio of the target collision frame and the effective area occupation ratio of the target virtual object.
In this embodiment of the application, the collision frame processing module 2556 is further configured to obtain a target frame image including the virtual object region from the animation to be processed; determining the virtual collision frame as a collision frame of each frame image in the target frame images.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the collision frame determination method described above in the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, cause the processor to perform a collision box determination method provided by embodiments of the present application, for example, a collision box determination method as shown in fig. 4.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiment of the application, on one hand, the approximation degree of the collision frame to the virtual object area is determined through the two indexes so as to automatically generate the collision frame, which can improve the production efficiency of virtual objects in frame-by-frame-animation fighting games; on the other hand, since the collision frames are generated based on the same rule, the range of the collision frame corresponding to each virtual object basically follows the same rule, which can improve the rendering effect of virtual objects in virtual scene rendering applications involving operations and micro-operations; on yet another hand, because the approximation degree of the collision frame to the virtual object area is quantified by the two indexes, a collision frame with a low approximation degree can be modified in a targeted manner, for example drawn manually, so that the approximation degree of the collision frame to the virtual object area can be improved.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A collision frame determination method, comprising:
acquiring a frame image to be processed corresponding to the animation to be processed, and acquiring each collision frame positively correlated with a virtual object area in the frame image to be processed;
determining a coincidence region of each collision frame in the collision frames and the virtual object region;
acquiring the effective area occupation ratio of the collision frame corresponding to the coincidence area and each collision frame, and the effective area occupation ratio of the virtual object corresponding to the coincidence area and the virtual object area;
determining the effective area ratio of the collision frame and the effective area ratio of the virtual object as the area ratio corresponding to each collision frame, so as to obtain each area ratio corresponding to each collision frame;
and determining the collision frame corresponding to the maximum area occupation ratio in the area occupation ratios as the virtual collision frame of the virtual object area.
2. The method of claim 1, wherein after obtaining respective area fractions corresponding to the respective collision boxes, the method further comprises:
when the maximum area ratio is smaller than a ratio threshold, dividing the virtual object area into at least two sub-virtual object areas;
acquiring at least two sub-virtual collision frames corresponding to the at least two sub-virtual object areas, wherein each sub-virtual collision frame in the at least two sub-virtual collision frames is a sub-collision frame corresponding to a target maximum sub-area occupation ratio of a target sub-virtual object area corresponding to the at least two sub-virtual object areas;
and determining the at least two sub-virtual collision frames as the virtual collision frame.
3. The method according to claim 2, wherein after obtaining at least two sub-virtual collision boxes corresponding to the at least two sub-virtual object regions, the method further comprises:
when the maximum sub-area ratio to be divided, corresponding to the sub-virtual object area to be divided and the sub-virtual collision frame to be divided, is smaller than the sub-ratio threshold to be divided, iteratively dividing the sub-virtual object area to be divided until a division ending condition is met, and ending the division, wherein,
the sub virtual object area to be divided is at least one of the at least two sub virtual object areas, and the sub-virtual collision frame to be divided corresponds to the sub virtual object area to be divided;
and determining each sub-virtual collision frame corresponding to each sub-virtual object area divided currently as the virtual collision frame.
4. The method of claim 3, wherein the dividing the virtual object region into at least two sub-virtual object regions comprises:
adjusting the dividing line position of the virtual object area, and dividing the virtual object area into various area combinations, wherein each area combination in the various area combinations comprises at least two initial sub-virtual object areas;
and acquiring a target area combination from the various area combinations, wherein the target area combination comprises the at least two sub-virtual object areas, and the at least two sub-virtual collision frames corresponding to the target area combination are most approximate to the virtual object area.
5. The method of claim 4, wherein when the at least two initial sub-virtual object regions include an initial upper body region and an initial leg region, the obtaining a target region combination from the various region combinations comprises:
acquiring each sub-collision frame corresponding to the initial upper body area in each area combination of the area combinations;
determining each sub-region occupation ratio corresponding to each sub-collision frame based on a sub-coincidence region of each sub-collision frame in the sub-collision frames and the initial upper body region;
obtaining the maximum sub-region occupation ratio from the sub-region occupation ratios so as to obtain the maximum sub-region occupation ratios corresponding to the various region combinations;
determining a region combination corresponding to a largest sub-region occupation ratio among the largest sub-region occupation ratios as the target region combination including an upper body region and a leg region, thereby obtaining the at least two sub-virtual object regions including the upper body region and the leg region, wherein the largest sub-region occupation ratio is the target largest sub-region occupation ratio.
6. The method according to claim 5, wherein the obtaining at least two sub-virtual collision boxes corresponding to the at least two sub-virtual object regions comprises:
determining the sub-collision frame corresponding to the maximum sub-region occupation ratio as a first sub-virtual collision frame corresponding to the upper body region;
acquiring a leg middle line corresponding to the leg region;
combining the leg region and the extension parameter, and extending towards two sides by taking the leg middle line as a center;
determining the extended border as a second sub-virtual collision box, thereby obtaining the at least two sub-virtual collision boxes including the first sub-virtual collision box and the second sub-virtual collision box.
7. The method according to claim 6, wherein when the to-be-divided sub-virtual object region is the upper body region, the to-be-divided sub-virtual collision frame is the first sub-virtual collision frame, and the to-be-divided maximum sub-region occupation ratio is the target maximum sub-region occupation ratio;
and the dividing the to-be-divided sub-virtual object region comprises:
dividing the upper body region based on a target part included in the upper body region, to obtain a first sub-upper body region and a second sub-upper body region,
wherein the second sub-virtual collision frame, a third sub-virtual collision frame corresponding to the first sub-upper body region, and a fourth sub-virtual collision frame corresponding to the second sub-upper body region are each one of the sub-virtual collision frames.
8. The method according to any one of claims 1 to 7, wherein when the area of each collision frame is equal to the area of the virtual object region, the area occupation ratio is the collision frame effective area ratio or, equivalently, the virtual object effective area ratio.
9. The method according to any one of claims 1 to 7, wherein the obtaining of the collision frame effective area ratio of the coincidence region to each collision frame and the virtual object effective area ratio of the coincidence region to the virtual object region comprises:
calculating the ratio with the area of the coincidence region as the numerator and the area of each collision frame as the denominator, to obtain the collision frame effective area ratio;
and calculating the ratio with the area of the coincidence region as the numerator and the area of the virtual object region as the denominator, to obtain the virtual object effective area ratio.
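
Claim 9 defines the two ratios as plain quotients, which the following helper makes concrete (the names are illustrative, and the coincidence area is assumed to be precomputed):

    def effective_ratios(coincidence_area, frame_area, object_area):
        """Collision frame and virtual object effective area ratios."""
        return coincidence_area / frame_area, coincidence_area / object_area

For example, effective_ratios(80, 100, 120) yields 0.8 for the collision frame and roughly 0.667 for the virtual object region.
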
10. The method according to any one of claims 1 to 7, wherein the obtaining of the to-be-processed frame image corresponding to the to-be-processed animation comprises:
displaying the frame image sequence of the to-be-processed animation;
and determining a to-be-processed frame image sequence in response to a frame image selection operation on the frame image sequence, wherein the to-be-processed frame image is any frame image in the to-be-processed frame image sequence;
after the collision frame corresponding to the maximum area occupation ratio among the area occupation ratios is determined as the virtual collision frame of the virtual object region, the method further comprises:
combining the to-be-processed frame image with the virtual collision frame to obtain a to-be-rendered frame image, so as to obtain a to-be-rendered frame image sequence corresponding to the to-be-processed frame image sequence;
and updating the to-be-processed frame image sequence in the frame image sequence based on the to-be-rendered frame image sequence, to obtain the to-be-rendered animation.
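
The rendering flow of claim 10 is essentially a splice: pair the selected frame images with the chosen virtual collision frame, then write them back into the full sequence. A hypothetical sketch of that bookkeeping, with all names invented:

    def build_render_sequence(selected_frames, collision_frame):
        """Combine each to-be-processed frame image with the collision frame."""
        return [(frame, collision_frame) for frame in selected_frames]

    def update_animation(all_frames, selected_indices, render_sequence):
        """Replace the selected frames to obtain the to-be-rendered animation."""
        updated = list(all_frames)
        for i, rendered in zip(selected_indices, render_sequence):
            updated[i] = rendered
        return updated
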
11. The method of claim 10, wherein after the frame image sequence of the to-be-processed animation is displayed, the method further comprises:
displaying a threshold setting control, wherein the threshold setting control is used for triggering the setting of the occupation-ratio threshold;
and obtaining the occupation-ratio threshold in response to a threshold setting operation acting on the threshold setting control.
12. The method according to any one of claims 1 to 7, wherein after the collision frame corresponding to the largest area occupation ratio among the area occupation ratios is determined as the virtual collision frame of the virtual object region, the method further comprises:
acquiring a target collision frame effective area ratio and a target virtual object effective area ratio corresponding to the virtual collision frame;
and displaying the target collision frame effective area ratio and the target virtual object effective area ratio.
13. The method according to any one of claims 1 to 7, wherein after the collision frame corresponding to the largest area occupation ratio among the area occupation ratios is determined as the virtual collision frame of the virtual object region, the method further comprises:
acquiring target frame images comprising the virtual object region from the to-be-processed animation;
and determining the virtual collision frame as the collision frame of each of the target frame images.
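
Claim 13 reuses the computed frame rather than recomputing it per image. A hypothetical sketch, where contains_region is an assumed predicate over frame images:

    def propagate(animation_frames, contains_region, collision_frame):
        """Assign the collision frame to every image showing the same region."""
        return {i: collision_frame
                for i, frame in enumerate(animation_frames)
                if contains_region(frame)}
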
14. A collision frame determination apparatus, characterized by comprising:
a memory for storing executable instructions;
a processor for implementing the method of any one of claims 1 to 13 when executing executable instructions stored in the memory.
15. A computer-readable storage medium having stored thereon executable instructions for, when executed by a processor, implementing the method of any one of claims 1 to 13.
CN202110219639.9A 2021-02-26 2021-02-26 Collision frame determination method and device and computer readable storage medium Active CN112827175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110219639.9A CN112827175B (en) 2021-02-26 2021-02-26 Collision frame determination method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112827175A (en) 2021-05-25
CN112827175B (en) 2022-07-29

Family

ID=75933981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110219639.9A Active CN112827175B (en) 2021-02-26 2021-02-26 Collision frame determination method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112827175B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6128018A (en) * 1996-02-20 2000-10-03 Namco, Ltd. Simulation apparatus and information storage medium
CN1828670A (en) * 2005-03-02 2006-09-06 任天堂株式会社 Collision determination program and collision determination apparatus
CN108714303A (en) * 2018-05-16 2018-10-30 深圳市腾讯网络信息技术有限公司 Collision checking method, equipment and computer readable storage medium
CN110262729A (en) * 2019-05-20 2019-09-20 联想(上海)信息技术有限公司 A kind of object processing method and equipment
CN112179602A (en) * 2020-08-28 2021-01-05 北京邮电大学 Mechanical arm collision detection method

Also Published As

Publication number Publication date
CN112827175B (en) 2022-07-29

Similar Documents

Publication Publication Date Title
EP4300430A2 (en) Device, method, and graphical user interface for composing cgr files
JP6576874B2 (en) Image processing apparatus, image processing system, and image processing method
DE112020002268T5 (en) DEVICE, METHOD AND COMPUTER READABLE MEDIA FOR REPRESENTING COMPUTER GENERATED REALITY FILES
CN112156464B (en) Two-dimensional image display method, device and equipment of virtual object and storage medium
WO2015051269A2 (en) Generating augmented reality content for unknown objects
TW201143866A (en) Tracking groups of users in motion capture system
CN104200506A (en) Method and device for rendering three-dimensional GIS mass vector data
JP2020523691A (en) Delayed lighting optimization, foveal adaptation of particles, and simulation model in foveal rendering system
US20230405452A1 (en) Method for controlling game display, non-transitory computer-readable storage medium and electronic device
CN110322571B (en) Page processing method, device and medium
WO2022142626A1 (en) Adaptive display method and apparatus for virtual scene, and electronic device, storage medium and computer program product
US20060139358A1 (en) 3D graphic engine and method of providing graphics in mobile communication terminal
CN114067042A (en) Image rendering method, device, equipment, storage medium and program product
US10839587B2 (en) Image processing methods and devices for moving a target object by using a target ripple
CN114359458A (en) Image rendering method, device, equipment, storage medium and program product
CN112827175B (en) Collision frame determination method and device and computer readable storage medium
CN112700541A (en) Model updating method, device, equipment and computer readable storage medium
KR20170013539A (en) Augmented reality based game system and method
McDermott Creating 3D Game Art for the iPhone with Unity: Featuring modo and Blender pipelines
WO2017002483A1 (en) Program, information processing device, depth definition method, and recording medium
CN116958344A (en) Animation generation method and device for virtual image, computer equipment and storage medium
JP2012200288A (en) Device and program for preparing character
US20240161640A1 (en) Control method for history-based coding education system
CN107945100A (en) Methods of exhibiting, virtual reality device and the system of virtual reality scenario
CN114712853A (en) Game map loading and displaying method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40044184
Country of ref document: HK

GR01 Patent grant