CN108525304B - Image analysis method and device, storage medium and electronic device - Google Patents

Image analysis method and device, storage medium and electronic device

Info

Publication number
CN108525304B
CN108525304B · CN201810339544.9A · CN201810339544A
Authority
CN
China
Prior art keywords
image
frame
analysis
image information
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810339544.9A
Other languages
Chinese (zh)
Other versions
CN108525304A (en)
Inventor
梁蕴锋
巫振棠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201810339544.9A priority Critical patent/CN108525304B/en
Publication of CN108525304A publication Critical patent/CN108525304A/en
Application granted granted Critical
Publication of CN108525304B publication Critical patent/CN108525304B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A — HUMAN NECESSITIES
        • A63 — SPORTS; GAMES; AMUSEMENTS
            • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
                • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
                    • A63F13/60 — Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
                    • A63F13/50 — Controlling the output signals based on the game progress
                        • A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
                • A63F2300/00 — Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
                    • A63F2300/50 — characterized by details of game servers
                        • A63F2300/53 — details of basic data processing
                            • A63F2300/538 — details of basic data processing for performing operations on behalf of the game client, e.g. rendering
                    • A63F2300/60 — Methods for processing data by generating or executing the game program
                        • A63F2300/6009 — for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
                        • A63F2300/66 — for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an image analysis method, an image analysis device, a storage medium and an electronic device. The method comprises the following steps: in the process of drawing the image of the current frame, acquiring image information in the image that meets a predetermined condition; analyzing the acquired image information; and outputting an analysis result. The invention solves the problems in the related art that image analysis based on the complete picture is complicated and that interference causes information loss, and enables effective analysis of the image while it is being drawn, thereby reducing the difficulty of image processing, simplifying operation and facilitating analysis.

Description

Image analysis method and device, storage medium and electronic device
Technical Field
The present invention relates to image processing technologies, and in particular, to an image analysis method, an image analysis apparatus, a storage medium, and an electronic apparatus.
Background
In the prior art, a 3D graphics rendering API (Application Programming Interface) engine (hereinafter referred to as an engine) is a dedicated, fully functional software module that provides a complete set of 3D drawing program interfaces. These interfaces can be used to draw complex 3D scenes, and the technology is widely used in fields such as game picture development and data visualization.
The engine is an essential component of any simulator with a graphical interface. A simulator is dedicated software running on a PC platform that lets users experience programs written for other platforms, such as an android simulator. Fig. 1 is a schematic structural diagram of a prior-art android simulator running on a PC platform.
The android simulator enables a user to run android programs on a PC. It takes full advantage of the good performance and convenient operation of a PC and is mainly used in the field of android games. In addition, because of the characteristics of the simulator architecture, targeted picture analysis and auxiliary-tool development can be performed for a game. At present, third-party game picture analysis and auxiliary-tool development in the simulator industry are still at an early stage, and there are few successful cases. In the prior art, the simulator obtains its picture input by a screen-capture operation based on the complete picture, i.e. the picture seen by the end user, and analyzes that picture using digital image processing techniques.
The pictures on which screen analysis and tool development for current third-party applications are based tend to be complex in content and structure, for example:
1. different interface elements occlude one another;
2. occlusion between interface elements often involves a semi-transparent effect, or other more complex effects;
3. the same interface element may have different geometry or effects under different user configurations, such as the common map in a game, or changes in the position of a control wheel;
4. dynamic changes of the picture caused by user operations are very frequent, and in a multiplayer network game in particular, the situations above change constantly.
The above situations make analysis and processing very difficult: they often require sophisticated image processing techniques and depend on complicated parameter tuning and experience. In some cases, information is lost because of problems such as occlusion, and effective analysis is essentially impossible.
Disclosure of Invention
The embodiments of the invention provide an image analysis method, an image analysis device, a storage medium and an electronic device, so as to at least solve the problems in the related art that image analysis based on the complete picture is complicated and is susceptible to interference that causes information loss.
According to an embodiment of the present invention, there is provided an image analysis method including: in the process of drawing the image in the current frame, acquiring image information which accords with a preset condition in the image; analyzing the acquired image information; and outputting an analysis result.
Optionally, the acquiring of the image information meeting the predetermined condition in the image includes: and acquiring image information corresponding to the pre-acquired data resources in the image.
Optionally, the data resource comprises at least one of: vertex data, texture data.
Optionally, in the process of drawing the image in the current frame, acquiring image information in the image that meets a predetermined condition includes: respectively acquiring image information meeting preset conditions in the image drawn by each layer in the process of drawing the image in the current frame in a layered manner; and summarizing image information meeting the preset conditions in the respectively obtained images drawn on each layer on a preset drawing board.
Optionally, before acquiring image information meeting a predetermined condition in the image, the method further includes: receiving a frame control command; and determining the information of the frame start and the information of the frame end of the image to be analyzed according to the frame control command.
According to another embodiment of the present invention, there is provided an image analysis apparatus including: the acquisition module is used for acquiring image information meeting a preset condition in an image in the process of drawing the image in the current frame; the analysis module is used for analyzing the acquired image information; and the output module is used for outputting the analysis result.
Optionally, the obtaining module includes: the first acquisition unit is used for acquiring image information corresponding to the pre-acquired data resources in the image.
Optionally, the obtaining module includes: the second acquisition unit is used for respectively acquiring image information meeting preset conditions in the image drawn in each layer in the process of drawing the image in the current frame in a layered manner; and the drawing unit is used for summarizing the image information which meets the preset conditions in the respectively obtained image drawn by each layer on a preset drawing board.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, for the image being drawn in the current frame, the acquired image information that meets the predetermined condition is analyzed and the result is output, which avoids the interference that arises when the final, complex, complete picture is analyzed. Therefore, the problems in the related art that image analysis based on the complete picture is complicated, susceptible to interference and prone to information loss can be solved, effective analysis of the image being drawn can be achieved, the difficulty of image processing is reduced, and the beneficial effects of simple operation and easy analysis are obtained.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a prior art android simulator operating on a PC platform;
FIG. 2 is a flow chart of a method of image analysis according to an embodiment of the present invention;
FIG. 3 is a control flow diagram of a rendering process provided according to an embodiment of the invention;
FIG. 4 is a flowchart illustrating an operation of an image analysis method according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of a game interface provided in accordance with an embodiment of the present invention;
FIG. 6 is a schematic view of a wild point on the map indicator provided in accordance with an embodiment of the invention;
FIG. 7 is a schematic diagram of the map indicator after a wild point has been killed, in accordance with an embodiment of the present invention;
FIG. 8 is a schematic view of a game interface in which a player avatar occludes a wild point, provided in accordance with an embodiment of the present invention;
FIG. 9 is a schematic view of a game interface provided according to an embodiment of the present invention with the entire interface occluded;
FIG. 10 is a schematic diagram of a texture of a captured map provided in accordance with an embodiment of the present invention;
FIG. 11 is a schematic illustration of a map element texture provided in accordance with an embodiment of the present invention;
FIG. 12(a) is a schematic diagram of the map texture extracted from one frame of the game interface according to an embodiment of the present invention;
FIG. 12(b) is a schematic view of wild-point occlusion in the extracted frame of the game interface according to the embodiment of the present invention;
FIG. 12(c) is a schematic view of the wild points in the extracted frame of the game interface according to the embodiment of the invention;
FIG. 12(d) is a first schematic diagram of hierarchical segmentation performed on the wild-point picture of the extracted frame of the game interface according to the embodiment of the present invention;
FIG. 12(e) is a second schematic diagram of hierarchical segmentation performed on the wild-point picture of the extracted frame of the game interface according to the embodiment of the present invention;
fig. 13 is a block diagram of a configuration of an image analysis apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The embodiments of the invention are based on simulator development. Through close interaction with the simulator's engine, the picture of a third-party program running on the simulator can be analyzed effectively, which in turn facilitates the development of auxiliary tools. The embodiments of the invention provide an original picture analysis method that can analyze pictures effectively and can solve the problems caused by occlusion, special effects and change.
Example 1
In the present embodiment an image analysis method is provided. Fig. 2 is a flowchart of an image analysis method according to an embodiment of the present invention; as shown in fig. 2, the flow includes the following steps:
step S202, in the process of drawing the image in the current frame, acquiring image information meeting the preset conditions in the image;
step S204, analyzing the acquired image information;
and step S206, outputting an analysis result.
The main body for executing the above operations may be a PC, but is not limited thereto. Note that the PC described in this specification refers to a personal computer in general, and not to a personal computer of a Windows operating system in particular.
Through the above steps, for the image currently being drawn in the current frame, the acquired image information that meets the predetermined condition is analyzed and the result is output, which avoids the interference that arises when the final, complex, complete picture is used for analysis. Therefore, the problems in the related art that image analysis based on the complete picture is complicated and susceptible to interference causing information loss can be solved, effective analysis of the image being drawn can be achieved, the difficulty of image processing is reduced, and the beneficial effects of simple operation and easy analysis are obtained.
In an optional embodiment, acquiring image information in the image that meets the predetermined condition includes: acquiring image information in the image that corresponds to pre-collected data resources. In this embodiment, during the drawing of the image of the current frame, the original complex picture is divided into a number of simpler stroke diagrams. (The term "stroke diagram" is introduced only to explain the process of the invention: when the game engine draws a game picture, each frame is built up step by step, much as a painter paints a picture, and each step in drawing a frame can be called a stroke.) The image information that meets the predetermined condition corresponds to the "interesting" strokes mentioned in this embodiment; by analyzing these interesting stroke diagrams, the analysis target or the auxiliary-tool development can be completed. The interesting strokes are located through data resources, which are what allow the auxiliary tool to find them. Taking the game shown in fig. 5 as an example, some key data resources, such as the map texture data and the wild-point texture data, need to be extracted in advance; when the engine renders a picture, if a map stroke or a wild-point stroke is to be drawn, the engine inevitably uses the related texture data. The pre-extracted texture data is compared with the texture data actually used; if they match, an interesting stroke is confirmed and extracted, thereby obtaining the image information in the image that corresponds to the pre-collected data resources.
In an optional embodiment, the data resources include at least one of: vertex data, texture data. In this embodiment, a data resource is any piece of data needed to render the image, such as vertex data or texture data. In practice, texture data is usually the most useful. Texture data (also called map data, a rendering term) is the image data used in the game, and it is one of the most effective data for locating strokes. Other data can also be used for locating strokes, such as the model vertex data used for rendering (a rendering term: the coordinate data that defines object shapes) or the shader programs used (a rendering term: programs that define the rendering operations). These data resources are used to locate the interesting strokes in the same way as texture data, namely by extracting them in advance and comparing them at draw time. This process is performed by the data resource location module.
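The comparison between pre-collected data resources and the resources carried by the data preparation command stream can be pictured with a minimal sketch such as the one below. It assumes, purely for illustration, that resources are byte strings identified by a content hash; the class and method names are not taken from the patent.

```python
import hashlib

class DataResourceLocator:
    """Marks draw-time resources that match the pre-collected resources of interest."""

    def __init__(self, collected_textures):
        # Pre-collected texture images of interest (e.g. the map texture and the
        # map-element textures), stored as content hashes for cheap comparison.
        self.interesting = {hashlib.md5(t).hexdigest() for t in collected_textures}
        self.marked_ids = set()   # resource ids marked during the current frame

    def on_frame_start(self):
        # Marks from the previous frame have no bearing on the new frame.
        self.marked_ids.clear()

    def on_data_prepare(self, resource_id, resource_bytes):
        # Called for every data-preparation command (e.g. a texture upload).
        if hashlib.md5(resource_bytes).hexdigest() in self.interesting:
            self.marked_ids.add(resource_id)

    def is_marked(self, resource_id):
        return resource_id in self.marked_ids
```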
In an optional embodiment, during the process of drawing the image of the current frame, acquiring image information in the image that meets the predetermined condition includes: during the layer-by-layer drawing of the image of the current frame, respectively acquiring the image information that meets the predetermined condition in the image drawn at each layer; and gathering, on a predetermined drawing board, the image information that meets the predetermined condition in the image drawn at each layer. In this embodiment, while the image of the current frame is being drawn, the image of the frame is divided into a number of relatively simple stroke diagrams. By analyzing the instruction sequence issued when the game engine draws, the sequence can be divided into groups, each group corresponding to one drawing step (i.e. one stroke). Each stroke changes the picture to some extent, and when all strokes have been drawn the final frame picture is formed. The image information that meets the predetermined condition in each layer is then gathered on a predetermined drawing board (the predetermined drawing board here is simply a display surface on which an image can be drawn).
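The splitting of one frame's instruction sequence into strokes can be sketched as follows. The command representation (each command as a (kind, payload) tuple) is an illustrative assumption, not the engine's real instruction format; each stroke produced this way can then be redrawn on the predetermined drawing board if it turns out to be of interest.

```python
def split_into_strokes(frame_commands):
    """Group one frame's command stream into strokes.

    Each command is assumed to be a (kind, payload) tuple, where kind is
    'prepare' or 'draw'; a stroke is the run of data-preparation commands
    leading up to, and including, one draw command.
    """
    strokes, current = [], []
    for cmd in frame_commands:
        current.append(cmd)
        if cmd[0] == 'draw':        # a draw call closes one stroke
            strokes.append(current)
            current = []
    return strokes
```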
In an optional embodiment, before acquiring the image information in the image that meets the predetermined condition, the method further includes: receiving a frame control command; and determining, according to the frame control command, the frame-start information and the frame-end information of the image to be analyzed. In this embodiment, the rendering engine executes rendering instructions as an instruction stream to complete the drawing of one frame. When a frame is to be analyzed it must first be located; the beginning and end of the frame are usually marked to indicate its boundary, which makes real-time, multi-frame analysis possible. At the beginning of a frame the matching flags of all resources are cleared, i.e. the frame is initialized using its start information. The frame is initialized at its start, the extracted strokes are analyzed at its end, and the result of the frame is output; at the beginning of the next frame the above operations are repeated, so that real-time analysis is completed.
The core of the scheme provided by the embodiments of the invention is an analysis approach built on the engine's commands. By exercising complete control over the engine and the rendering process, the scheme introduces the concept of rendering strokes, which can be summarized as: 1) delimiting the rendering frame boundary; 2) confirming the data resources of interest; 3) locating the strokes of interest; 4) extracting the strokes; 5) analyzing the strokes.
The following is further described with reference to the flowchart of the rendering process provided by the embodiment of the present invention (i.e., fig. 3 described below).
Fig. 3 is a control flow diagram of a rendering process provided according to an embodiment of the present invention. As shown in fig. 3:
a) The engine is the subject of the rendering work; it accomplishes drawing and refreshing of the graphics through the commands it constantly sends. While completing the drawing task, the engine also notifies the frame location module (corresponding to S304 in fig. 3) of the frame control commands, and sends the drawing commands, according to their function, to the data resource location module (corresponding to S306 in fig. 3) and to the stroke location and extraction module (corresponding to S307 in fig. 3) for further processing. In this embodiment, when a render drawing command (corresponding to S301 in fig. 3) is initiated, a frame control command (corresponding to S302 in fig. 3) and a drawing command (corresponding to S303 in fig. 3) are issued; the frame control command sends a frame command stream to the frame location module (corresponding to S304 in fig. 3), and the drawing command, according to its function, sends a data preparation command stream to the data resource location module (corresponding to S306 in fig. 3) and a drawing command stream to the stroke location and extraction module (corresponding to S307 in fig. 3).
b) The data resource collection module (corresponding to S305 in fig. 3) is a relatively independent module whose main function is to collect the data resources of interest. Data resources are the various pieces of data needed for drawing, including vertex data, texture data, shaders and the instructions themselves. These data are passed to the data resource location module via drawing commands. Which resources to collect depends on the specific image analysis task. For example, to analyze a skill icon on the game screen, the icon's texture information can be collected; to locate a fixed model on the game screen, its vertex data can be collected, and so on. In practical applications, texture features are usually the most useful. Collecting data resources usually requires a full review of the render drawing commands to determine the corresponding data characteristics; this work is often done manually and only needs to be done once. The collected data then serves as the basis for data resource location. In this embodiment, the data resource location module is mainly responsible for comparing the data resources of interest (provided by the data resource collection module) with the data resources carried by the data preparation command stream, and for marking the resources in the data preparation command stream according to the comparison result. The data resource location module needs to store all the data resources of interest, but it does not need to store the resources in the data preparation command stream; after marking them, it passes the resources on to the stroke location and extraction module.
According to the invention, the data resource collection module collects and stores the data resources that can locate the strokes of interest. This relies on the fact that, as the application renders frame by frame, some of the resources it must use for a given element are fixed, so they only need to be collected once and can then be used for later comparison. For example, the map of a game remains unchanged for the whole game. As a further alternative, strokes that produce a particular effect on the display can be located by the particular rendering program (shader) they use. However, not all of the resources used for an element remain unchanged. Therefore, when collecting data resources, the application must be run in advance and the commands of a frame examined to determine which unchanged data resources are usable; this needs to be done manually.
c) The main function of the frame location module (corresponding to S304 in fig. 3) is to delimit the boundary of a frame according to the received frame control commands and to send the frame-start and frame-end information to the other modules. In this embodiment, after receiving the frame command stream, the frame location module determines the boundary of the frame, for example by sending the frame-start information to the data resource location module and the frame-start/frame-end information to the stroke analysis module. The rendering engine executes rendering instructions as an instruction stream to complete the drawing of one frame. When a frame is to be analyzed it must first be located, which is the function of the frame location module. By marking the start and end of a frame (often a particular instruction or label), the other modules can be informed of the frame's boundary, which makes real-time, multi-frame analysis possible. The data resource location module needs to know the start of the frame, because at the start of a frame all resource-matching marks must be cleared for initialization. The stroke analysis module also needs to know the frame boundary: it initializes at the start of the frame, analyzes the extracted strokes at the end of the frame and outputs the result for that frame; it is re-initialized at the start of the next frame, so the real-time analysis proceeds cyclically.
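A minimal sketch of the frame location module's role is given below. It assumes the end of a frame is signalled by a hypothetical 'swap_buffers' command; the real engine may use a different instruction or label, as noted above, and the collaborating objects are the module sketches from the surrounding paragraphs.

```python
class FrameLocator:
    """Delimits frame boundaries and notifies the other modules."""

    def __init__(self, resource_locator, stroke_analyzer):
        self.resource_locator = resource_locator
        self.stroke_analyzer = stroke_analyzer

    def on_frame_command(self, cmd):
        # 'swap_buffers' stands in for whatever instruction or label marks
        # the end of one frame (and therefore the start of the next).
        if cmd == 'swap_buffers':
            result = self.stroke_analyzer.on_frame_end()   # analyze this frame
            self.resource_locator.on_frame_start()         # clear resource marks
            self.stroke_analyzer.on_frame_start()          # re-initialize
            return result                                  # the frame's output
        return None
```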
d) The main function of the data resource location module (corresponding to S306 in fig. 3) is to locate specific data resources. It takes the data collected by the data resource collection module as input and matches it against the commands in the received data preparation command stream. If a match succeeds, the resource corresponding to that data preparation command is marked. The module also performs some initialization, such as clearing all marks, according to the start and end state of the current frame. In this embodiment, when the data resource location module receives the frame-start message, the resource marks of the previous frame are cleared, because whether a resource was matched in the previous frame has no bearing on the current frame.
e) The main function of the stroke location and extraction module (corresponding to S307 in fig. 3) is to extract the current stroke. When the drawing of a stroke uses at least one resource marked by the data resource location module, the stroke is considered one that needs to be extracted. When the engine finishes drawing such a stroke, a new drawing board (corresponding to the predetermined drawing board) is created internally and the stroke is redrawn on it, which completes the stroke extraction. The output of stroke extraction includes not only the stroke of interest but also the resource information attached to it. In this embodiment, redrawing a stroke means drawing it once more, on a blank drawing board, using all of the resources that were used to draw it. For each frame there are often several strokes of interest, and the task is completed by analyzing them together; therefore several data resources may need to be prepared so that all the strokes of interest can be extracted. Each such item is a data resource.
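The extraction step could be sketched as follows, under the assumption that each stroke object can report the resource ids it uses and that the renderer exposes an off-screen "drawing board" it can redraw onto; the interfaces used here (texture_ids, new_blank_board, redraw) are illustrative, not part of any real engine API.

```python
class StrokeExtractor:
    """Extracts strokes whose drawing uses at least one marked resource."""

    def __init__(self, locator, renderer):
        self.locator = locator      # e.g. the DataResourceLocator sketched above
        self.renderer = renderer    # wraps the engine's drawing facilities
        self.extracted = []         # (stroke image, attached resource ids)

    def on_draw(self, stroke):
        used = set(stroke.texture_ids())                      # resources this stroke uses
        marked = {r for r in used if self.locator.is_marked(r)}
        if marked:
            # Redraw the stroke alone on a blank drawing board (an off-screen
            # target), so the extracted image is free of occlusion and effects
            # contributed by other strokes.
            board = self.renderer.new_blank_board()
            image = self.renderer.redraw(stroke, target=board)
            # Keep the attached resource info so the stroke analysis module
            # knows which element of interest this stroke belongs to.
            self.extracted.append((image, marked))
```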
f) The main function of the stroke analysis module (corresponding to S308 in fig. 3) is the collection and analysis of strokes. With the support of the frame location module, it collects and sorts all the strokes of interest within one frame and analyzes them according to the strokes and their attached information. The analysis work depends on the specific analysis objective. In this embodiment, the attached information includes the marked resources, which identify which stroke of interest each extracted stroke is (when several strokes of interest are to be analyzed). According to the invention, the image of a frame is segmented into a number of simpler stroke pictures. The input to the stroke analysis module is the complete set of strokes of interest, so no further screening is needed; the screening is in fact completed in the data resource location module. The analysis module only needs to sort the strokes of interest and analyze them according to the specific objective; the analysis process depends on that objective.
Fig. 4 is a flowchart illustrating the operation of an image analysis method according to an embodiment of the present invention. As shown in fig. 4, this is a brief, typical workflow of the invention, and it proceeds as follows:
s401: a render command is received. The final drawing of the graphics is performed by the engine continually sending commands.
S402: and judging the type of the drawing command. When the data preparation command stream is determined, sending the data preparation command stream to a data resource positioning module; when the drawing command stream is determined, sending the drawing command stream to a stroke positioning and extracting module; and when the frame positioning command stream is determined, sending the frame positioning command stream to a frame positioning module.
S403: it is determined whether the resource is of interest. The data resource location module determines the data resources of the strokes of interest based on the received data preparation command stream.
S404: and marking the resources. After determining that the data resources are the data resources of interest, the data resource locating module marks all the data resources of interest.
S405: it is determined whether the drawing command uses the marked resource. The data resource positioning module marks frame start information, and in the drawing process of a certain stroke, the stroke positioning and extracting module only extracts the stroke using the marked data resource.
S406: and extracting strokes and recording. And the stroke positioning and extracting module extracts and records the strokes.
S407: the frame location module updates the information. The frame positioning module determines the frame start/end information to divide the boundary of a certain frame according to the received frame positioning command. Initializing when the frame starts, extracting strokes for analysis when the frame ends, outputting a frame result, and circularly and repeatedly completing real-time analysis.
S408: a new frame start is determined. The data resource location module needs to know the start information of the frame.
S409: and clearing the resource mark. When the data resource positioning module determines the start information of the frame, the resource mark is cleared at the beginning of each frame for initialization.
S410: all interesting strokes are sorted and analyzed.
S411: it is determined whether the analysis can be completed. If yes, performing stroke analysis after the final frame positioning module, the data resource positioning module and the stroke positioning and extracting module determine the final extracted stroke; if not, the process returns to S401.
The scheme provided by the invention is explained in detail by combining a specific embodiment as follows:
1) the embodiment of the invention takes the game shown in fig. 5 as an example.
The game shown in fig. 5 is a game in which a 5v5 match is played on a map of fixed size. Wild points are distributed over the map, and a player can gain income by killing the wild monsters there. After a wild monster is killed, it regenerates after a fixed time. The player can tell from the in-game map whether a wild monster is currently present. FIG. 5 is a schematic view of a game interface provided by an embodiment of the present invention; as shown in fig. 5, the map indicator is in the top-left corner. Fig. 6 is a schematic view of a wild point on the map indicator provided by this embodiment of the invention; as shown in fig. 6, wild points are represented by map element images. Fig. 7 is a schematic diagram of the map indicator after a wild point has been killed; after the wild point is killed, it appears as shown in fig. 7.
The wild-timing function works as follows: when a wild monster is killed, a second-by-second countdown based on the fixed respawn interval is displayed for the player at that wild point on the map, so that the player can arrange an action strategy conveniently and sensibly.
2) Problems with the existing approach based on the rendered picture.
The game shown in fig. 5 is a third-party application, and the wild-timing function is difficult to implement using analysis of the final rendered image alone. Several problems must be considered:
a) Map location. Because the map of the game shown in fig. 5 can be enlarged or reduced and can be placed on the left or right side of the interface according to preference, the position of the map on the game screen must be segmented and located. The game's map area contains many elements (wild points, avatars, signal indicators, towers) and changes quickly; in addition, the game map is drawn semi-transparently, so this task usually requires a complex combination of several digital image processing techniques, such as edge extraction or SIFT (Scale-invariant feature transform) feature matching, and because of parameter selection the results are not necessarily ideal.
b) Judging the state of a wild point. Suppose that map location has been handled by image processing; the position of a wild point is then easy to handle, since its coordinates within the map area are fixed. However, fig. 8 is a schematic view of a game interface in which a player avatar occludes a wild point, as shown in fig. 8; and fig. 9 is a schematic view of a game interface in which the entire interface is occluded, which happens when the player opens certain game screens. In such cases not only can the wild-point state not be determined, but the map cannot be located either. These situations cause information loss, and the wild-timing function simply cannot be accomplished.
3) Engine stroke based scheme.
To address the problems encountered by the aforementioned scheme based on the rendered picture, the stroke-based scheme collects data and extracts strokes as follows:
a) Texture data extraction. In view of the characteristics of the strokes of interest, the texture of the map and the textures of the map elements are first collected as the basis for comparison. This step is completed by the data resource collection module, and the collected texture images are shown in the figures below: fig. 10 is a schematic diagram of the collected map texture provided in an embodiment of the present invention, and fig. 11 is a schematic diagram of a map element texture provided in accordance with an embodiment of the present invention.
b) Stroke location and extraction using the textures above. For a given frame, as shown in fig. 12, a group of diagrams obtained by hierarchically segmenting the picture can be produced and the relevant strokes extracted. FIG. 12(a) is a schematic diagram of the map texture extracted from one frame of the game interface according to an embodiment of the present invention; FIG. 12(b) is a schematic view of wild-point occlusion in the extracted frame; FIG. 12(c) is a schematic view of the wild points in the extracted frame; FIG. 12(d) and FIG. 12(e) are a first and a second schematic diagram of hierarchical segmentation performed on the wild-point picture of the extracted frame. The problems of complex picture structure and occlusion are thereby solved effectively.
c) Image analysis. The stroke diagrams can be processed effectively in a simple way; the specific steps are as follows:
First, the extent of the map is determined from the map base-map stroke, and only strokes within that extent are considered afterwards. Next, the coordinates of each wild point are obtained from the map extent and the pre-computed percentage coordinates. Then, for all the other collected stroke diagrams, the color values at the wild-point coordinates are checked, from which the current state of each wild point is easily obtained. Finally, based on this state and in combination with the game logic, the in-game wild-timing auxiliary tool can be completed, as sketched below.
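The wild-point check in step c) amounts to converting pre-measured percentage coordinates into pixel positions inside the map extent and sampling the extracted element strokes there. The sketch below assumes, for illustration only, that each element stroke is available as a {(x, y): RGBA} pixel dictionary and that a visible (non-transparent) pixel means the wild monster is present; all names and the alpha test are assumptions, not taken from the patent.

```python
def wild_point_states(map_bbox, wild_point_fractions, element_strokes):
    """Judge the state of each wild point from the extracted strokes.

    map_bbox:             (x, y, w, h) of the map base-map stroke on screen.
    wild_point_fractions: {name: (fx, fy)} pre-computed positions of each wild
                          point as fractions of the map width/height.
    element_strokes:      extracted map-element strokes, each a {(px, py): rgba} dict.
    """
    x, y, w, h = map_bbox
    states = {}
    for name, (fx, fy) in wild_point_fractions.items():
        px, py = int(x + fx * w), int(y + fy * h)        # wild-point pixel coordinates
        # The wild point counts as present if any element stroke drew a visible
        # (non-transparent) pixel at its position.
        present = any(s.get((px, py), (0, 0, 0, 0))[3] > 0 for s in element_strokes)
        states[name] = 'present' if present else 'killed'
    return states
```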
With the above method the original complex picture can be divided into a number of simpler stroke diagrams. Compared with the original picture, the stroke diagrams have the following advantages:
1. their content is simple and easy to analyze;
2. different elements belong to different strokes, so there is no mutual interference such as occlusion or special-effect interference;
3. different strokes that use the same data resource can be attributed to the same element on the screen.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
In this embodiment, an image analysis apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and the description of the apparatus is omitted for brevity. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 13 is a block diagram of a configuration of an image analysis apparatus according to an embodiment of the present invention, as shown in fig. 13, the apparatus including: an obtaining module 132, configured to obtain image information meeting a predetermined condition in an image during a process of drawing the image in the current frame; an analysis module 134, connected to the obtaining module 132, for analyzing the obtained image information; and an output module 136 connected to the analysis module 134 for outputting the analysis result.
Optionally, the obtaining module 132 includes: the first acquisition unit is used for acquiring image information corresponding to the pre-acquired data resources in the image.
Optionally, the data resource includes at least one of: vertex data, texture data.
Optionally, the obtaining module 132 further includes: the second acquisition unit is used for respectively acquiring image information meeting preset conditions in the image drawn in each layer in the process of drawing the image in the current frame in a layered manner; and the drawing unit is used for summarizing image information meeting the preset conditions in the respectively obtained image drawn by each layer on a preset drawing board.
In an alternative embodiment, the apparatus is further configured to: receiving a frame control command before acquiring image information meeting a preset condition in the image; and determining the information of the frame start and the information of the frame end of the image to be analyzed according to the frame control command.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 3
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image analysis method, wherein the image analysis method is based on processing work of a simulator, and performs effective analysis processing on a picture of a third-party program running on the simulator by interacting with an engine of the simulator, wherein the image analysis method comprises:
in the process of drawing the image in the current frame, acquiring image information which accords with a preset condition in the image;
analyzing the acquired image information;
and outputting an analysis result.
2. The method of claim 1, wherein obtaining image information in the image that meets a predetermined condition comprises:
and acquiring image information corresponding to the pre-acquired data resources in the image.
3. The method of claim 2, wherein the data resources comprise at least one of:
vertex data, texture data.
4. The method of claim 1, wherein during the process of rendering the image in the current frame, acquiring image information meeting a predetermined condition in the image comprises:
respectively acquiring image information meeting preset conditions in the image drawn by each layer in the process of drawing the image in the current frame in a layered manner;
and summarizing image information meeting the preset conditions in the respectively obtained images drawn on each layer on a preset drawing board.
5. The method according to claim 1, wherein before acquiring image information meeting a predetermined condition in the image, the method further comprises:
receiving a frame control command;
and determining the information of the frame start and the information of the frame end of the image to be analyzed according to the frame control command.
6. An image analysis apparatus that performs processing based on a simulator and performs effective analysis processing on a screen of a third-party program running on the simulator by interacting with an engine of the simulator, the image analysis apparatus comprising:
the acquisition module is used for acquiring image information meeting a preset condition in an image in the process of drawing the image in the current frame;
the analysis module is used for analyzing the acquired image information;
and the output module is used for outputting the analysis result.
7. The apparatus of claim 6, wherein the obtaining module comprises:
the first acquisition unit is used for acquiring image information corresponding to the pre-acquired data resources in the image.
8. The apparatus of claim 6, wherein the obtaining module comprises:
the second acquisition unit is used for respectively acquiring image information meeting preset conditions in the image drawn in each layer in the process of drawing the image in the current frame in a layered manner;
and the drawing unit is used for summarizing the image information which meets the preset conditions in the respectively obtained image drawn by each layer on a preset drawing board.
9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 5 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 5.
CN201810339544.9A 2018-04-16 2018-04-16 Image analysis method and device, storage medium and electronic device Active CN108525304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810339544.9A CN108525304B (en) 2018-04-16 2018-04-16 Image analysis method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810339544.9A CN108525304B (en) 2018-04-16 2018-04-16 Image analysis method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN108525304A CN108525304A (en) 2018-09-14
CN108525304B true CN108525304B (en) 2021-06-22

Family

ID=63481199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810339544.9A Active CN108525304B (en) 2018-04-16 2018-04-16 Image analysis method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN108525304B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109316747B (en) * 2018-09-28 2022-02-25 珠海豹趣科技有限公司 Game auxiliary information prompting method and device and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106345118A (en) * 2016-08-24 2017-01-25 网易(杭州)网络有限公司 Rendering method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7529410B2 (en) * 2004-01-07 2009-05-05 Microsoft Corporation Local localization using fast image match
US9821224B2 (en) * 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Driving simulator control with virtual skeleton
EP2608529B1 (en) * 2011-12-22 2015-06-03 Axis AB Camera and method for optimizing the exposure of an image frame in a sequence of image frames capturing a scene based on level of motion in the scene
JP5902229B2 (en) * 2013-07-09 2016-04-13 エヌエイチエヌ エンターテインメント コーポレーションNHN Entertainment Corporation Simulation method and system
CN104978117B (en) * 2014-04-11 2018-11-09 阿里巴巴集团控股有限公司 A kind of method and apparatus for realizing screenshotss

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106345118A (en) * 2016-08-24 2017-01-25 网易(杭州)网络有限公司 Rendering method and device

Also Published As

Publication number Publication date
CN108525304A (en) 2018-09-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant