CN116308996A - Graphics display method, apparatus, device, storage medium and program product - Google Patents

Graphics display method, apparatus, device, storage medium and program product

Info

Publication number
CN116308996A
Authority
CN
China
Prior art keywords
rendering rate
target
roi
determining
rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310288005.8A
Other languages
Chinese (zh)
Inventor
张天为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd filed Critical Spreadtrum Communications Shanghai Co Ltd
Priority to CN202310288005.8A priority Critical patent/CN116308996A/en
Publication of CN116308996A publication Critical patent/CN116308996A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The application provides a graphics display method, apparatus, device, storage medium and program product. The method includes: acquiring a current image, and predicting the region of interest (ROI) in the current image and the characteristic parameters of the target in the ROI through a prediction model obtained by pre-training, so that the region of interest is finer. A target rendering rate is then determined according to the characteristic parameters, the ROI is graphically displayed according to the target rendering rate, and the non-ROI in the current image is graphically displayed according to a preset rendering rate, where the preset rendering rate is greater than the target rendering rate. The ROI and the non-ROI are thus rendered at different rendering rates, which can reduce the load on the GPU.

Description

Graphics display method, apparatus, device, storage medium and program product
Technical Field
The present disclosure relates to the field of graphics display technologies, and in particular, to a graphics display method, apparatus, device, storage medium, and program product.
Background
The parallel computing power of a graphics processing unit (Graphics Processing Unit, GPU) enables it to quickly compute and display graphics results across all pixels of the screen. On mobile devices, the graphics workload with the highest load and the most demanding visual requirements is mainly game scenes; improving game picture quality has become a requirement, which places ever higher demands on the overall performance of the mobile device's GPU.
However, how to reduce the burden on the GPU while improving the rendering effect of graphics is a problem that urgently needs to be solved.
Disclosure of Invention
The application provides a graphics display method, apparatus, device, storage medium and program product, which are used to reduce the load on the graphics processing unit (GPU) while improving the rendering effect of graphics.
In a first aspect, the present application provides a graphic display method, including:
acquiring a current image;
predicting a region of interest (ROI) in the current image and characteristic parameters of a target in the ROI through a pre-trained prediction model;
determining a target rendering rate according to the characteristic parameters;
and according to the target rendering rate, graphically displaying the ROI, and according to a preset rendering rate, graphically displaying the non-ROI in the current image, wherein the preset rendering rate is greater than the target rendering rate.
In one possible implementation, the characteristic parameters include motion speed and depth of field;
the determining the target rendering rate according to the characteristic parameters comprises the following steps:
determining a first rendering rate of the target in the horizontal direction and a second rendering rate of the target in the vertical direction according to the motion speed and the depth of field;
And determining the target rendering rate according to the first rendering rate and the second rendering rate.
In one possible implementation, the determining the target rendering rate according to the first rendering rate and the second rendering rate includes:
acquiring a first motion speed of a target in the ROI in a first image in a horizontal direction and a second motion speed of the target in a vertical direction, wherein the first image is a previous frame image of the current image;
acquiring a third movement speed of a target in the ROI in a horizontal direction and a fourth movement speed of the target in a vertical direction in a second image, wherein the second image is a previous frame image of the first image;
and determining the target rendering rate according to the first movement speed, the second movement speed, the third movement speed and the fourth movement speed.
In one possible implementation, the determining the target rendering rate according to the first motion speed, the second motion speed, the third motion speed, and the fourth motion speed includes:
correcting the first rendering rate according to the first movement speed and the third movement speed to obtain a corrected first rendering rate;
correcting the second rendering rate according to the second movement speed and the fourth movement speed to obtain a corrected second rendering rate;
and determining the target rendering rate according to the corrected first rendering rate and the corrected second rendering rate.
In one possible implementation, the determining the target rendering rate according to the first rendering rate and the second rendering rate includes:
if the first rendering rate and the second rendering rate are each smaller than a first threshold, determining that the target rendering rate is P×P;
if the first rendering rate is smaller than the first threshold and the second rendering rate is between the first threshold and a second threshold, determining that the target rendering rate is P×Q;
if the first rendering rate is between the first threshold and the second threshold and the second rendering rate is smaller than the first threshold, determining that the target rendering rate is Q×P;
if the first rendering rate is between the first threshold and the second threshold and the second rendering rate is greater than or equal to the second threshold, determining that the target rendering rate is Q×L;
if the first rendering rate is greater than or equal to the second threshold and the second rendering rate is between the first threshold and the second threshold, determining that the target rendering rate is L×Q;
wherein P is smaller than Q, Q is smaller than L, and P, Q and L are each integers greater than or equal to 1.
In one possible implementation, the predictive model includes a feature extraction network, a first branch network, a second branch network, a third branch network, and a fourth branch network;
the predicting the feature parameters of the region of interest ROI in the current image and the target in the ROI by the pre-trained prediction model comprises:
inputting the current image into the prediction model, and obtaining a feature map of the current image through the feature extraction network;
processing the feature map through the first branch network and the second branch network to obtain the ROI;
and processing the feature map through the third branch network and the fourth branch network to obtain the feature parameters.
In one possible implementation, the processing the feature map through the first branch network and the second branch network to obtain the ROI includes:
Processing the feature map through the first branch network to obtain a local shape feature map;
processing the feature map through the second branch network to obtain a global saliency map;
the ROI is determined from the local shape feature map and the global saliency map.
In one possible implementation manner, the processing the feature map through the third branch network and the fourth branch network to obtain the feature parameter includes:
processing the feature map through the third branch network to obtain a movement speed;
and processing the feature map through the fourth branch network to obtain depth of field, wherein the feature parameters comprise the motion speed and the depth of field.
In a second aspect, the present application provides a graphic display device comprising:
the acquisition module is used for acquiring the current image;
the prediction module is used for predicting a region of interest (ROI) in the current image and characteristic parameters of a target in the ROI through a pre-trained prediction model;
the determining module is used for determining a target rendering rate according to the characteristic parameters;
the display module is used for graphically displaying the ROI according to the target rendering rate and graphically displaying the non-ROI in the current image according to a preset rendering rate, where the preset rendering rate is greater than the target rendering rate.
In one possible implementation, the characteristic parameters include motion speed and depth of field; the determining module is specifically configured to:
determining a first rendering rate of the target in the horizontal direction and a second rendering rate of the target in the vertical direction according to the motion speed and the depth of field;
and determining the target rendering rate according to the first rendering rate and the second rendering rate.
In one possible implementation, the determining module is specifically configured to:
acquiring a first motion speed of a target in the ROI in a first image in a horizontal direction and a second motion speed of the target in a vertical direction, wherein the first image is a previous frame image of the current image;
acquiring a third movement speed of a target in the ROI in a horizontal direction and a fourth movement speed of the target in a vertical direction in a second image, wherein the second image is a previous frame image of the first image;
and determining the target rendering rate according to the first movement speed, the second movement speed, the third movement speed and the fourth movement speed.
In one possible implementation, the determining module is specifically configured to:
Correcting the first rendering rate according to the first movement speed and the third movement speed to obtain a corrected first rendering rate;
correcting the second rendering rate according to the second movement speed and the fourth movement speed to obtain a corrected second rendering rate;
and determining the target rendering rate according to the corrected first rendering rate and the corrected second rendering rate.
In one possible implementation, the determining module is specifically configured to:
if the first rendering rate and the second rendering rate are each smaller than a first threshold, determining that the target rendering rate is P×P;
if the first rendering rate is smaller than the first threshold and the second rendering rate is between the first threshold and a second threshold, determining that the target rendering rate is P×Q;
if the first rendering rate is between the first threshold and the second threshold and the second rendering rate is smaller than the first threshold, determining that the target rendering rate is Q×P;
if the first rendering rate is between the first threshold and the second threshold and the second rendering rate is greater than or equal to the second threshold, determining that the target rendering rate is Q×L;
if the first rendering rate is greater than or equal to the second threshold and the second rendering rate is between the first threshold and the second threshold, determining that the target rendering rate is L×Q;
wherein P is smaller than Q, Q is smaller than L, and P, Q and L are each integers greater than or equal to 1.
In one possible implementation, the predictive model includes a feature extraction network, a first branch network, a second branch network, a third branch network, and a fourth branch network; the prediction module is specifically configured to:
inputting the current image into the prediction model, and obtaining a feature map of the current image through the feature extraction network;
processing the feature map through the first branch network and the second branch network to obtain the ROI;
and processing the feature map through the third branch network and the fourth branch network to obtain the feature parameters.
In one possible implementation, the prediction module is specifically configured to:
processing the feature map through the first branch network to obtain a local shape feature map;
Processing the feature map through the second branch network to obtain a global saliency map;
the ROI is determined from the local shape feature map and the global saliency map.
In one possible implementation, the prediction module is specifically configured to:
processing the feature map through the third branch network to obtain a movement speed;
and processing the feature map through the fourth branch network to obtain depth of field, wherein the feature parameters comprise the motion speed and the depth of field.
In a third aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the graphical display method as described in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions for implementing the graphical display method according to the first aspect when executed by a computer.
In a fifth aspect, the present application provides a computer program product comprising a computer program for implementing the graphical display method of the first aspect when the computer program is executed by a computer.
In a sixth aspect, embodiments of the present application provide a chip on which a computer program is stored, where the computer program when executed by the chip causes the graphics display method of the first aspect to be performed.
In one possible embodiment, the chip is a chip in a chip module.
In a seventh aspect, embodiments of the present application provide a module apparatus, where the module apparatus includes a power module, a storage module, and a chip module;
the power supply module is used for providing electric energy for the module equipment;
the storage module is used for storing data and instructions;
the chip module is used for executing the graphic display method in the first aspect.
According to the graphics display method, apparatus, device, storage medium and program product provided by the application, the region of interest (Region of Interest, ROI) in the current image and the characteristic parameters of the target in the ROI are predicted by acquiring the current image and then using a prediction model obtained by pre-training, so that the region of interest is finer. A target rendering rate is then determined according to the characteristic parameters, the ROI is graphically displayed according to the target rendering rate, and the non-ROI in the current image is graphically displayed according to a preset rendering rate, where the preset rendering rate is greater than the target rendering rate. The ROI and the non-ROI are thus rendered at different rendering rates, which can reduce the load on the GPU.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flowchart of a graphic display method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an ROI illustrated in an embodiment of the present application;
FIG. 3 is a flowchart of another graphic display method according to the second embodiment of the present application;
FIG. 4 is a schematic diagram of a target rendering rate as exemplified herein;
FIG. 5 is a flowchart of another graphic display method according to the third embodiment of the present application;
FIG. 6 is a schematic diagram of a prediction model of the examples of the present application;
FIG. 7 is a schematic illustration of a feature map of an example of the present application;
FIG. 8 is a flowchart of another graphic display method according to the fourth embodiment of the present application;
fig. 9 is a schematic structural diagram of a graphics display device according to a fifth embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
For a clearer description of the present application, the terms involved in the embodiments of the present application are first explained:
ROI: in the fields of machine vision, image processing and the like, a region to be processed is outlined from a processed image in a manner of a square frame, a circle, an ellipse, an irregular polygon and the like, namely, a certain specific region is interested, namely, a region of interest, and the region generally comprises people or objects, namely, a non-background region.
Rendering rate: in the embodiment of the present application, the color rate is the number of pixel shader operations that are invoked for each pixel when rendering an image. Higher shading rates may increase the accuracy of the rendered image, but higher demands on hardware (e.g., graphics cards) may result in performance loss, while lower shading rates may come at the cost of image quality to increase performance. With the rendering rate, a single pixel shader operation can be made applicable to a block of pixels, e.g., a rendering rate of 4 x 4, meaning that it is only one operation to render a 4 x 4 block of pixels, rather than sixteen separate operations.
On mobile devices, the graphics computation with the highest load and the most demanding visual requirements is mainly game scenes. As mobile games become increasingly mainstream, improving game picture quality has become a requirement, which severely tests the overall performance of the mobile GPU.
However, how to reduce the burden on the GPU while improving the rendering effect of graphics is a problem that urgently needs to be solved.
Therefore, the present application uses a prediction model obtained by pre-training to predict the ROI of the current image and the characteristic parameters of the targets in the ROI, obtaining a finer region of interest, and determines the target rendering rate of the ROI according to the characteristic parameters, so that the ROI is rendered at the target rendering rate and the non-ROI at the preset rendering rate, which can reduce the burden on the GPU.
Rendering different areas of an image at different rendering rates may be referred to as variable rate shading (Variable Rate Shading, VRS), i.e., having a single shading operation apply to multiple pixels.
The application scene of the method of the present application may be a game scene on a terminal device, a movie playing scene on a terminal device, a video playing scene, or another application scene with dynamic picture playback.
In this embodiment of the present application, the terminal device may also be referred to as a user equipment (UE). Some examples of the terminal device are: a mobile phone (Mobile Phone), a tablet (Pad), a computer with wireless transceiving functions (such as a notebook or palmtop computer), a mobile internet device (Mobile Internet Device, MID), a virtual reality (Virtual Reality, VR) device, an augmented reality (Augmented Reality, AR) device, an extended reality (Extended Reality, XR) device, a wireless terminal in industrial control (Industrial Control), a wireless terminal in self driving (Self Driving), a wireless terminal in remote medical (Remote Medical), a wireless terminal in a smart grid (Smart Grid), a wireless terminal in transportation safety (Transportation Safety), a wireless terminal in a smart city (Smart City), a wireless terminal in a smart home (Smart Home), and the like.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following specific embodiments may exist alone or in combination with one another, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a graphics display method provided in an embodiment of the present application. The method may be performed by a terminal device, or by a graphics display apparatus provided in the terminal device; the apparatus may be a chip, a chip module, or an integrated development environment (IDE), etc. Referring to fig. 1, the method includes the following steps:
S101, acquiring a current image.
Before displaying the current image, the terminal device may acquire an image to be displayed currently, where the current image is a current frame image in the video stream.
The current image may be in a joint photographic experts group (Joint Photographic Experts Group, JPEG) format, a portable network graphics (Portable Network Graphics, PNG) format, or a red-green-blue (RGB) format, for example; it may also be in other formats, which is not limited in this application.
S102, predicting the ROI in the current image and the characteristic parameters of the target in the ROI through a pre-trained prediction model.
After the current image is acquired, the terminal device may use the current image as the input of the pre-trained prediction model, i.e., input the current image into the prediction model, to predict the ROI in the current image and the characteristic parameters of the target in the ROI.
In this embodiment of the present application, the ROI may be a region delimited by an irregular polygon outlining the region of interest, so that the region of interest is determined more precisely. For example, fig. 2 is a schematic diagram of an ROI illustrated in this embodiment of the application, and the range within the dashed box is the ROI.
It is understood that the current image may contain multiple ROIs, and one ROI includes at least one target, which may be, for example, a person or an object. As can be seen from fig. 2, the region covered by the ROI highly overlaps the region occupied by the targets within it, and the characteristic parameters of the targets in the ROI can be used as the characteristic parameters of the ROI.
S103, determining a target rendering rate according to the characteristic parameters.
After predicting the feature parameters of the target in the ROI, determining the target rendering rate according to the feature parameters.
For example, the terminal device may determine the target rendering rate according to the motion speed and the depth of field of the target in the characteristic parameters; that is, the depth of field of the target is also considered when determining the target rendering rate, which makes the rendering rate more accurate.
S104, performing graphic display on the ROI according to a target rendering rate, and performing graphic display on the non-ROI in the current image according to a preset rendering rate, wherein the preset rendering rate is larger than the target rendering rate.
After determining the target rendering rate, the terminal device may render the ROI of the current image according to the target rendering rate, and graphically display the non-ROI in the current image according to the preset rendering rate.
It will be appreciated that, compared with the picture in the non-ROI, the picture in the ROI is the main picture in the current image, i.e., the region to which the user's eye pays significant attention, and the rendering requirements for this region are higher (i.e., the picture needs to be clearer). Since the ROI needs to be rendered more clearly than the non-ROI, the rendering rate of the ROI is finer (lower) than that of the non-ROI; among ROIs, the rendering rate is further subdivided by the target's motion speed and depth of field.
For example, for an ROI in which there is high-speed motion or a large depth of field, the human eye is not particularly sensitive to the rendering precision of the target, so such an ROI may be rendered at a coarser target rendering rate (still finer than the non-ROI rendering rate). It can be understood that the rendering rate of an ROI with high-speed motion or a large depth of field is coarser than that of an ROI with low-speed motion or a small depth of field.
In this embodiment, the terminal device predicts the ROI in the current image and the characteristic parameters of the target in the ROI by acquiring the current image and then using the prediction model obtained by pre-training, so that the region of interest is finer. A target rendering rate is then determined according to the characteristic parameters, the ROI is graphically displayed according to the target rendering rate, and the non-ROI in the current image is graphically displayed according to a preset rendering rate, where the preset rendering rate is greater than the target rendering rate. The ROI and the non-ROI are thus rendered at different rendering rates, which can reduce the load on the GPU.
Next, another graphic display method provided in the present application will be described by way of example two.
Fig. 3 is a flow chart of another graphic display method provided in the second embodiment of the present application, where the method may be executed by a terminal device, or may be executed by a graphic display apparatus provided in the terminal device, and the apparatus may be a chip, or may be a chip module, or may be an IDE, etc., and referring to fig. 3, the method includes the following steps:
S301, acquiring a current image.
S302, predicting a region of interest (ROI) in a current image and characteristic parameters of a target in the ROI through a pre-trained prediction model, wherein the characteristic parameters comprise a motion speed and depth of field.
S303, determining a first rendering rate of the target in the horizontal direction and a second rendering rate of the target in the vertical direction according to the motion speed and the depth of field.
After predicting the motion speed and depth of field of the target in the ROI, the terminal device may determine the first rendering rate of the target in the horizontal direction and the second rendering rate of the target in the vertical direction according to the motion speed and the depth of field.
The motion speed may include a speed in the horizontal direction and a speed in the vertical direction. Specifically, the terminal device may determine the first rendering rate through formula (1) and the second rendering rate through formula (2):

V_x(shader) = a × V_x + b × D (1)

V_y(shader) = a × V_y + b × D (2)

where V_x(shader) is the first rendering rate, i.e., the rendering rate of the shader in the horizontal direction, and V_y(shader) is the second rendering rate, i.e., the rendering rate of the shader in the vertical direction. V_x is the speed of the target in the ROI in the horizontal direction, V_y is the speed of the target in the ROI in the vertical direction, D is the depth of field at the position of the target in the ROI, a is the speed weight, and b is the depth-of-field weight. For example, a may be set to 1 and b to 0.5, meaning that the current scene focuses more on fast-moving targets; of course, the weights a and b may be set according to the actual application scene, which is not limited in this application.
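Illustratively, formulas (1) and (2) may be sketched as follows, assuming the example weights a = 1 and b = 0.5 from the text; the function name and signature are illustrative only:

```python
# Minimal sketch of formulas (1) and (2): per-axis rendering rates from
# the target's motion speed and depth of field.
def shader_rates(v_x: float, v_y: float, depth: float,
                 a: float = 1.0, b: float = 0.5) -> tuple[float, float]:
    """Return (first, second) rendering rates for the target in the ROI.

    v_x, v_y -- horizontal / vertical motion speed of the target
    depth    -- depth of field D at the target's position
    """
    first_rate = a * v_x + b * depth    # formula (1): V_x(shader)
    second_rate = a * v_y + b * depth   # formula (2): V_y(shader)
    return first_rate, second_rate
```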
S304, determining a target rendering rate according to the first rendering rate and the second rendering rate.
The terminal device may determine the target rendering rate in the following two ways.
Mode 1
The terminal device may determine a target rendering rate according to the first rendering rate and the second rendering rate.
Specifically, the terminal device may determine the target rendering rate according to the following manner:
case 1: if the first rendering rate and the second rendering rate are respectively smaller than the first threshold, the target rendering rate is determined to be p×p, for example, P may be 1, and for example, 1×1 may be taken as the original rendering rate, i.e. one pixel performs a coloring operation.
Case 2: if the first rendering rate is smaller than the first threshold and the second rendering rate is between the first threshold and the second threshold, which indicates that the rendering rate of the target in the vertical direction is higher and the rendering rate of the target in the horizontal direction is lower, the target rendering rate may be determined to be p×q, and P may be 1, Q may be 2, that is, the target rendering rate may be 1×2, that is, a coloring operation is performed on two pixels of the target in the horizontal direction and a coloring operation is performed on one pixel of the target in the vertical direction.
Case 3: if the first rendering rate is between the first threshold and the second threshold, and the second rendering rate is smaller than the first threshold, which indicates that the rendering rate of the target in the horizontal direction is higher and the rate of the target in the vertical direction is lower, it may be determined that the target rendering rate is qxp, for example, P may be 1, Q may be 2, that is, the target rendering rate may be 2×1, that is, one coloring operation is performed on two pixels of the target in the horizontal direction and one coloring operation is performed on one pixel of the target in the vertical direction.
Case 4: if the first rendering rate is between the first threshold and the second threshold, and the second rendering rate is also between the first threshold and the second threshold, the rendering rate of the target in the vertical direction is higher, and the rate of the target in the horizontal direction is also higher, it may be determined that the target rendering rate is qxq, for example, Q may be 2, that is, the target rendering rate is 2 x 2, that is, a coloring operation is performed on two pixels of the target in the vertical direction and the horizontal direction, respectively.
Case 5: if the first rendering rate is greater than or equal to the second threshold, the second rendering rate is between the first threshold and the second threshold, which indicates that the rendering rate of the target in the horizontal direction is higher, and the rate of the target in the vertical direction is lower, the target rendering rate may be determined to be lxq, and L may be 4, and Q may be 2, that is, the target rendering rate is 4×2, that is, a coloring operation is performed on two pixels of the target in the horizontal direction, and a coloring operation is performed on four pixels of the target in the vertical direction.
Case 6: if the first rendering rate is greater than or equal to the second threshold, the second rendering rate is less than the first threshold, which indicates that the rendering rate of the target in the horizontal direction is higher, and the rate of the target in the vertical direction is lower, it may be determined that the target rendering rate is lxp, L may be 4, P may be 1, that is, the target rendering rate is 4×1, that is, one coloring operation is performed on one pixel of the target in the horizontal direction, and one coloring operation is performed on four pixels of the target in the vertical direction.
Case 7: if the second rendering rate is greater than or equal to the second threshold, the first rendering rate is between the first threshold and the second threshold, the rendering rate of the target in the vertical direction is higher, and the rate of the target in the horizontal direction is lower, and it may be determined that the target rendering rate is qxl, and illustratively, L may be 4, Q may be 2, that is, the target rendering rate is 2×4, that is, a coloring operation is performed on four pixels of the target in the horizontal direction, and a coloring operation is performed on two pixels of the target in the vertical direction.
Case 8: if the second rendering rate is greater than or equal to the second threshold, the first rendering rate is less than the first threshold, which indicates that the rendering rate of the target in the horizontal direction is higher, and the rate of the target in the vertical direction is lower, it may be determined that the target rendering rate is p×l, L may be 4, P may be 1, that is, the target rendering rate is 1×4, that is, one coloring operation is performed on four pixels of the target in the horizontal direction, and one coloring operation is performed on one pixel of the target in the vertical direction.
Case 9: if the second rendering rate is greater than or equal to the second threshold value and the first rendering rate is greater than or equal to the second threshold value, the rendering rate of the target in the vertical direction is higher, and the rate of the target in the horizontal direction is also higher, the target rendering rate may be determined to be lxl, and illustratively, L may be 4, that is, the target rendering rate is 4×4, that is, one coloring operation is performed on four pixels of the target in the horizontal direction and one coloring operation is performed on four pixels of the target in the vertical direction. In the above case, the rendering rate is between the first threshold and the second threshold, which may be expressed as the rendering rate being greater than or equal to the first threshold and less than the second threshold.
For the target rendering rates (1×1, 1×2, 1×4, 2×1, 2×2, 2×4, 4×1, 4×2, 4×4) in the above example, reference may be made to fig. 4.
It should be noted that, in order to reduce the burden on the GPU, the finest rendering rate in the above examples is set to 1×1; however, 1×1 is not the finest possible rendering rate, and the minimum may also be set to a higher-precision rendering rate, i.e., a rate finer than 1×1, according to actual requirements, which is not limited in this application.
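Illustratively, the nine cases amount to thresholding each axis independently; the following sketch uses the example values P = 1, Q = 2, L = 4, while the numeric thresholds t1 and t2 are assumptions the application does not fix:

```python
# Hedged sketch of cases 1-9: map each axis's rendering rate to P, Q or L.
P, Q, L = 1, 2, 4

def bucket(rate: float, t1: float, t2: float) -> int:
    """Map one axis's rendering rate to P, Q or L using thresholds t1 < t2."""
    if rate < t1:
        return P          # low rate: fine shading on this axis
    if rate < t2:
        return Q          # medium rate
    return L              # high rate: coarse shading on this axis

def target_rendering_rate(first_rate: float, second_rate: float,
                          t1: float, t2: float) -> tuple[int, int]:
    # e.g. (1, 1) for case 1, (2, 4) for case 7, (4, 4) for case 9
    return bucket(first_rate, t1, t2), bucket(second_rate, t1, t2)
```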
Mode 2
In one possible implementation, when determining the target rendering rate, since the target's motion has some acceleration or deceleration, the direction and magnitude of its speed are unlikely to change greatly within the short duration of a frame interval. To avoid large deviations between the speed detected in the current image (i.e., the current frame) and the speed and direction in the two frames preceding the current image, the detected motion speed of the target in the current image may be corrected, so that the terminal device can determine the target rendering rate according to the corrected first rendering rate and the corrected second rendering rate.
Optionally, the terminal device may determine whether the detected motion speed of the target in the current image needs to be corrected according to the motion speed of the target in the ROI in the previous frame image of the current image and in the next frame image of the current image: for example, determine whether the difference between the motion speed of the target in the previous frame image and that in the current image is greater than a preset value, and/or whether the difference between the motion speed of the target in the next frame image and that in the current image is greater than a preset value. If so, the motion speed of the target in the current image is abnormal and is corrected. The motion speed of the target in the next frame image may be determined by performing the same operations of this embodiment on that image, which is not repeated here.
Specifically, the terminal device may determine the target rendering rate as follows:
the terminal device may acquire the first motion speed of the target in the ROI in the first image in the horizontal direction and the second motion speed of the target in the vertical direction, where the first image is the previous frame image of the current image, and acquire the third motion speed in the horizontal direction and the fourth motion speed in the vertical direction of the target in the ROI in the second image, where the second image is the previous frame image of the first image.
The target rendering rate is then determined according to the first motion speed, the second motion speed, the third motion speed and the fourth motion speed, which is specifically implemented as follows:
the first rendering rate is corrected according to the first motion speed and the third motion speed to obtain the corrected first rendering rate, and the second rendering rate is corrected according to the second motion speed and the fourth motion speed to obtain the corrected second rendering rate, so that the target rendering rate can be determined according to the corrected first rendering rate and the corrected second rendering rate.
Illustratively, the terminal device may correct the first rendering rate through formula (3) to obtain the corrected first rendering rate:

V′_x = V_x^(1) + a_x × t (3)

where V′_x represents the corrected first rendering rate of the target in the ROI in the current image in the horizontal direction, V_x^(1) represents the speed of the target in the first image in the horizontal direction, i.e., the first motion speed, a_x represents the acceleration of the target in the horizontal direction, and t is the frame interval duration; for example, t is 16.7 milliseconds (ms) at a refresh rate of 60 Hertz (Hz).

The acceleration a_x is:

a_x = (V_x^(1) − V_x^(2)) / t (4)

where V_x^(2) represents the speed of the target in the second image in the horizontal direction, i.e., the third motion speed.
Illustratively, the terminal device may correct the second rendering rate through formula (5) to obtain the corrected second rendering rate:

V′_y = V_y^(1) + a_y × t (5)

where V′_y represents the corrected second rendering rate of the target in the ROI in the current image in the vertical direction, V_y^(1) represents the speed of the target in the first image in the vertical direction, i.e., the second motion speed, a_y represents the acceleration of the target in the vertical direction, and t is the frame interval duration; for example, t is 16.7 milliseconds (ms) at a refresh rate of 60 Hertz (Hz).

The acceleration a_y is:

a_y = (V_y^(1) − V_y^(2)) / t (6)

where V_y^(2) represents the speed of the target in the second image in the vertical direction, i.e., the fourth motion speed.
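Illustratively, the correction in formulas (3) to (6) is a linear extrapolation under a constant-acceleration assumption and may be sketched per axis as follows (function and variable names are illustrative):

```python
# Sketch of formulas (3)-(6): extrapolate the corrected per-axis value
# from the two previous frames, assuming constant acceleration.
def corrected_speed(v_prev: float, v_prev2: float, t: float) -> float:
    """v_prev:  speed in the first image (previous frame);
    v_prev2: speed in the second image (frame before the first image);
    t:       frame interval, e.g. 0.0167 s at a 60 Hz refresh rate."""
    accel = (v_prev - v_prev2) / t    # formula (4) / (6)
    return v_prev + accel * t         # formula (3) / (5), i.e. 2*v_prev - v_prev2
```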
After obtaining the corrected first rendering rate and the corrected second rendering rate, the terminal device may determine the target rendering rate according to them. Specifically, the corrected first rendering rate and the corrected second rendering rate are each compared against the first threshold and the second threshold to determine the corresponding target rendering rate, in the same way as the cases described in mode 1, which is not repeated here.
S305, performing graphic display on the ROI according to the target rendering rate, and performing graphic display on the non-ROI in the current image according to the preset rendering rate.
After determining the target rendering rate, the terminal device may render the ROI of the current image according to the target rendering rate, and simultaneously perform graphics display on the non-ROI in the current image according to the preset rendering rate.
Illustratively, the rendering rate of a non-ROI may be 4×4, 8×8, or 16×16, which is not limited in this application.
In this embodiment, the terminal device predicts the ROI in the current image and the motion speed and depth of field of the target in the ROI by acquiring the current image and then using the prediction model obtained by pre-training, so that the region of interest is finer. A first rendering rate of the target in the horizontal direction and a second rendering rate in the vertical direction are then determined according to the motion speed and the depth of field, and the target rendering rate is determined from them. Because the target rendering rate is determined based on both depth of field and motion speed, its accuracy is improved. The terminal device can then graphically display the ROI according to the target rendering rate and the non-ROI in the current image according to the preset rendering rate, where the preset rendering rate is greater than the target rendering rate; rendering the ROI and the non-ROI at different rendering rates both improves the rendering effect and reduces the load on the GPU.
Next, another graphic display method provided in the present application will be described by way of example three.
Fig. 5 is a flowchart of another graphics display method provided in the third embodiment of the present application, where the method may be performed by a terminal device, or may be performed by a graphics display apparatus provided in the terminal device, and the apparatus may be a chip, or may be a chip module, or may be an IDE, etc., and referring to fig. 5, the method includes the following steps:
S501, acquiring a current image.
S502, inputting the current image into a prediction model, and obtaining a feature map of the current image through a feature extraction network.
In the embodiment of the application, the prediction model is a neural network constructed with a deep learning method. It can learn high-dimensional features of each pixel and uses multiple branches: one branch converts the region (mask) detection problem into a classification problem of whether a pixel belongs to a target, and the other branches use regression to predict the motion speed and depth of field of the target.
The prediction model includes a feature extraction network, a first branch network, a second branch network, a third branch network, and a fourth branch network, and fig. 6 is an exemplary schematic structural diagram of the prediction model.
The feature extraction network may use any convolutional neural network as the backbone to extract high-dimensional image features and obtain the feature map, for example a Visual Geometry Group (VGG) convolutional neural network. The feature extraction network is composed of convolution layers, activation function layers (e.g., ReLU), pooling layers, and fully connected layers.
The terminal device may input the current image into the feature extraction network. For example, the current image may be represented as (X, Y, 3), where 3 is the number of channels, i.e., the current image is in a 3-channel format such as RGB, X is the number of pixels of the current image in the horizontal direction, and Y is the number of pixels in the vertical direction. After the current image passes through the convolution, activation, pooling and fully connected operations of the feature extraction network, the resulting feature map has dimensions W×H×Z, where W is the width of the feature map, H its height, and Z the number of channels.
S503, the feature map is processed through the first branch network and the second branch network, and the ROI is obtained.
After the feature map is obtained, the feature map may be input to the first branch network, the second branch network, the third branch network, and the fourth branch network, respectively, and the feature map may be processed through the first branch network and the second branch network to obtain the ROI.
Specifically, the first branch network focuses on exploiting the deep, high-level semantic information in the image and can extract a clear local shape of the target, so the feature map can be processed through the first branch network to obtain a local shape feature map. The front part of the first branch network outputs a feature map of size W×H×2, namely the size of the target; another part of the first branch network then extracts a shape feature map of size W×H×S^2, meaning that on a W×H map, each pixel position holds a 1×1×S^2 vector.
After the W×H×S^2 feature map is obtained, the center point is determined as (x_0, y_0), and the shape at each pixel in the feature map may be expressed as F(x, y). For each pixel, the corresponding 1×1×S^2 vector is first reshaped to size S×S; the extracted size (i.e., W×H×2) is then mapped onto W×H according to the S×S patches, as illustrated in fig. 7, and the local shape feature map based on deep semantics is determined from the W×H results of the individual pixel points.
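Illustratively, the reshape step may be sketched with numpy as follows; the sizes and array names are assumptions used only to show turning one pixel's 1×1×S^2 vector into an S×S patch:

```python
# Assumed numpy sketch of the per-pixel reshape in the first branch network.
import numpy as np

S, W, H = 4, 32, 32
branch_out = np.random.rand(H, W, S * S)   # stands in for the W x H x S^2 map
x0, y0 = 10, 12                            # center point of one target pixel
patch = branch_out[y0, x0].reshape(S, S)   # S x S local shape patch at (x0, y0)
```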
The above local shape feature map represents only the local shape result of a single target object. Since the shape is predicted with a fixed-length vector (S×S) and only a relatively coarse region can be obtained, a global saliency map is also needed to further improve accuracy.
The feature map may be processed through the second branch network to obtain a global saliency map, whose size is W×H×1; the single channel represents the probability that each pixel belongs to a target.
After the local shape feature map and the global saliency map are obtained, the ROI can be determined according to the local shape feature map and the global saliency map. That is, the local information and the global information of the current image are integrated, specifically:
denoting the local shape feature map as L_k ∈ R^(H×W), the local shape feature map determines a corresponding point in the global saliency map; around the position of that point, a rectangle of the same size as the local shape feature map is cropped out of the global saliency map and denoted G_k ∈ R^(H×W). An activation function (e.g., sigmoid) is used to map both into the interval (0, 1), and finally the Hadamard product of the two is taken, i.e., the elements at the same positions of the two matrices are multiplied, as in formula (7):

M_k = L_k × G_k (7)

where M_k is the predicted ROI, L_k is the matrix corresponding to the local shape feature map, G_k is the matrix corresponding to the cropped saliency map, and k denotes the k-th target.
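Illustratively, formula (7) may be sketched with numpy as follows, assuming the sigmoid activation mentioned above (helper names are illustrative):

```python
# Sketch of formula (7): fuse the local shape map L_k with the cropped
# global saliency patch G_k via a sigmoid and an element-wise product.
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def fuse_roi(L_k: np.ndarray, G_k: np.ndarray) -> np.ndarray:
    """L_k, G_k: H x W matrices; returns M_k, the predicted ROI mask in (0, 1)."""
    return sigmoid(L_k) * sigmoid(G_k)   # Hadamard product, formula (7)
```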
S504, the feature map is processed through the third branch network and the fourth branch network to obtain feature parameters.
The feature map extracted in S502 may be input into a third branch network, and the feature map is processed through the third branch network to obtain a movement speed, where the movement speed includes a speed in a horizontal direction and a speed in a vertical direction.
The feature map extracted in S502 may be input to a fourth branch network, and the feature map is processed through the fourth branch network to obtain depth of field.
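Illustratively, the multi-branch topology described above may be sketched in PyTorch as follows; the toy backbone and all layer sizes are assumptions for illustration only, as the application itself only requires some convolutional backbone such as VGG:

```python
# Hedged PyTorch sketch of the described topology: a shared feature
# extraction backbone plus four heads (local shape, global saliency,
# motion speed, depth of field).
import torch
import torch.nn as nn

class PredictionModel(nn.Module):
    def __init__(self, channels: int = 64, s: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(          # stands in for e.g. a VGG trunk
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.shape_head = nn.Conv2d(channels, s * s, 1)   # W x H x S^2 shape map
        self.saliency_head = nn.Conv2d(channels, 1, 1)    # W x H x 1 saliency
        self.speed_head = nn.Conv2d(channels, 2, 1)       # (v_x, v_y) per pixel
        self.depth_head = nn.Conv2d(channels, 1, 1)       # depth of field

    def forward(self, image: torch.Tensor):
        feat = self.backbone(image)             # shared feature map
        return (self.shape_head(feat), self.saliency_head(feat),
                self.speed_head(feat), self.depth_head(feat))
```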
S505, determining a target rendering rate according to the characteristic parameters.
S506, performing graphic display on the ROI according to the target rendering rate, and performing graphic display on the non-ROI in the current image according to the preset rendering rate, wherein the preset rendering rate is larger than the target rendering rate.
S505 and S506 may refer to the above embodiments, and are not described herein.
In this embodiment, the terminal device obtains the feature map of the current image by acquiring the current image and inputting it into the prediction model, processes the feature map through the first branch network and the second branch network to obtain the ROI, and processes it through the third branch network and the fourth branch network to obtain the characteristic parameters, so that the region of interest is finer. A target rendering rate is then determined according to the characteristic parameters, the ROI is graphically displayed according to the target rendering rate, and the non-ROI in the current image is graphically displayed according to a preset rendering rate, where the preset rendering rate is greater than the target rendering rate. The ROI and the non-ROI are thus rendered at different rendering rates, which can reduce the load on the GPU.
The prediction model in the above embodiments may be obtained by training a preset model with training data. Specifically, another graphics display method provided in the present application is described below through Embodiment 4.
Fig. 8 is a flowchart of another graphics display method provided in the fourth embodiment of the present application, where the method may be performed by a terminal device, or may be performed by a graphics display apparatus provided in the terminal device, and the apparatus may be a chip, or may be a chip module, or may be an IDE, etc., and referring to fig. 8, the method includes the following steps:
S801, acquiring training data.
The training data includes a plurality of historical images preceding the current image, and a data tag, wherein the data tag includes an ROI in each historical image, a motion speed and a depth of field of the target in each ROI.
Specifically, the terminal device may acquire a plurality of history images, which may be consecutive video images in a certain game scene, and then perform data preprocessing on them, for example contrast enhancement processing.
The terminal device may then determine the data tag based on the preprocessed plurality of historical images.
Specifically, the terminal device may process the plurality of historical images through an object detection model to determine the ROI in each historical image; alternatively, the terminal device may obtain manually labeled ROIs. The motion speed of a target is split into a speed in the horizontal direction and a speed in the vertical direction, and the two speeds of the target in the ROI of the currently processed historical image can be determined as follows:
Determine the horizontal displacement between the target in the ROI of the historical image and the same target in the next frame of the historical image, and then obtain the horizontal speed (in pixels per second, pixel/s) as the ratio of that displacement to the frame interval duration.

Determine the vertical displacement between the target in the ROI of the historical image and the same target in the next frame of the historical image, and then obtain the vertical speed as the ratio of that displacement to the frame interval duration.
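A minimal sketch of this label computation, assuming the target's center coordinates in two consecutive frames and the frame interval in seconds are available; the function and variable names are illustrative.

```python
def motion_speed(center_prev, center_next, frame_interval_s):
    # center_prev / center_next: (x, y) pixel coordinates of the target in
    # the historical frame and in the next frame.
    speed_h = (center_next[0] - center_prev[0]) / frame_interval_s  # pixel/s
    speed_v = (center_next[1] - center_prev[1]) / frame_interval_s  # pixel/s
    return speed_h, speed_v
```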
The depth of field of the target in the ROI of the historical image can be obtained from manual annotation; its value lies in the interval (0, 1), and a larger value indicates that the target is farther from the observer of the image.
S802, training the preset model through training data to obtain a prediction model.
After the training data is obtained, the preset model can be trained for multiple iterations with the training data to obtain the prediction model. The structure of the preset model is identical to the network structure of the prediction model; the difference is that the parameters in the preset model are all initialization parameters. Training the preset model can be understood as iterating over these initialization parameters to determine suitable values, which become the parameters of the prediction model.
In the training process, the loss function of the preset model is as follows:
Loss = λ1·Loss_mask + λ2·Loss_speed + λ3·Loss_depth  (8)

where

Loss_mask = (1/N) Σ_{k=1}^{N} bce(M_k, T_k)

Loss_speed = (1/N) Σ_{k=1}^{N} |S′_k − S_k|

Loss_depth = (1/N) Σ_{k=1}^{N} |D′_k − D_k|

Here N is the number of targets and bce denotes the pixel-wise binary cross entropy; M_k is the ROI predicted by the prediction model for an image in the test set, and T_k is the true ROI labeled in that image. The motion speed loss and the depth-of-field loss use the mean absolute error: S′_k is the predicted motion speed of the target in the ROI, D′_k is the predicted depth of field of that target, and S_k and D_k are the labeled true values (i.e., ground truth). When the three terms are combined linearly, the weights of the overall loss function may be set to λ1 = 1, λ2 = 0.1, and λ3 = 0.1.
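A minimal PyTorch sketch of the combined loss in formula (8), under the per-term definitions above; the tensor shapes and reduction choices (averaging over targets and pixels) are assumptions, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred_mask, true_mask, pred_speed, true_speed,
                  pred_depth, true_depth, lam1=1.0, lam2=0.1, lam3=0.1):
    # Loss_mask: pixel-wise binary cross entropy between the predicted ROI
    # masks M_k (values in (0, 1)) and the labeled ROI masks T_k.
    loss_mask = F.binary_cross_entropy(pred_mask, true_mask)
    # Loss_speed / Loss_depth: mean absolute error against the labels.
    loss_speed = F.l1_loss(pred_speed, true_speed)
    loss_depth = F.l1_loss(pred_depth, true_depth)
    return lam1 * loss_mask + lam2 * loss_speed + lam3 * loss_depth
```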
It will be appreciated that if the value of the loss function in the current training round is sufficiently small, e.g., less than a preset threshold, the currently trained model may be taken as the final prediction model. During training, the model may be iteratively updated using an adaptive moment estimation (Adaptive Moment Estimation, Adam) optimizer.
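Putting the pieces together, a minimal training-loop sketch under the same assumptions; PredictionModel (sketched in the apparatus embodiment below), loader, and the threshold value are illustrative stand-ins for the preset model, the training data, and the preset stopping threshold.

```python
model = PredictionModel()   # preset model with initialization parameters
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_threshold = 1e-3       # preset threshold, illustrative value

for images, masks, speeds, depths in loader:  # loader yields training data
    pred_mask, pred_speed, pred_depth = model(images)
    loss = combined_loss(pred_mask, masks, pred_speed, speeds,
                         pred_depth, depths)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() < loss_threshold:  # stop once the loss is small enough
        break
```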
In this embodiment, the terminal device may acquire training data, and then train the preset model through the training data to obtain a prediction model, so that the terminal device may process the current image through the prediction model to predict the ROI in the current image and the feature parameters of the target in the ROI, so that the region of interest is finer.
Fig. 9 is a schematic structural diagram of a graphics display device according to a fifth embodiment of the present application. Referring to fig. 9, the apparatus 90 includes: an acquisition module 901, a prediction module 902, a determination module 903 and a display module 904.
An acquisition module 901, configured to acquire a current image.
A prediction module 902, configured to predict, by using a pre-trained prediction model, a region of interest ROI in a current image and a feature parameter of a target in the ROI.
A determining module 903, configured to determine a target rendering rate according to the feature parameter.
The display module 904 is configured to graphically display the ROI according to the target rendering rate, and to graphically display the non-ROI in the current image according to a preset rendering rate, where the preset rendering rate is greater than the target rendering rate.
In one possible implementation, the characteristic parameters include motion speed and depth of field.
The determining module 903 is specifically configured to:
and determining a first rendering rate of the target in the horizontal direction and a second rendering rate of the target in the vertical direction according to the motion speed and the depth of field.
And determining a target rendering rate according to the first rendering rate and the second rendering rate.
In one possible implementation, the determining module 903 is specifically configured to:
and acquiring a first movement speed of a target in the ROI in the first image in the horizontal direction and a second movement speed of the target in the vertical direction, wherein the first image is a previous frame image of the current image.
And acquiring a third movement speed of the target in the ROI in the horizontal direction and a fourth movement speed of the target in the vertical direction in a second image, wherein the second image is a previous frame image of the first image.
And determining a target rendering rate according to the first movement speed, the second movement speed, the third movement speed and the fourth movement speed.
In one possible implementation, the determining module 903 is specifically configured to:
and correcting the first rendering rate according to the first motion speed and the third motion speed to obtain a corrected first rendering rate.
And correcting the second rendering rate according to the second movement speed and the fourth movement speed to obtain a corrected second rendering rate.
And determining a target rendering rate according to the corrected first rendering rate and the corrected second rendering rate.
In one possible implementation, the determining module 903 is specifically configured to:
And if the first rendering rate and the second rendering rate are each less than the first threshold, determining the target rendering rate as P×P.

If the first rendering rate is less than the first threshold and the second rendering rate is between the first threshold and the second threshold, determining the target rendering rate as P×Q.

If the first rendering rate is between the first threshold and the second threshold and the second rendering rate is less than the first threshold, determining the target rendering rate as Q×P.

And if the first rendering rate is between the first threshold and the second threshold and the second rendering rate is greater than or equal to the second threshold, determining the target rendering rate as Q×L. Here P is less than Q, Q is less than L, and P, Q, and L are each integers greater than or equal to 1.
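A minimal sketch of this threshold logic; the threshold values t1 and t2 and the default sizes for P, Q, and L are assumptions, and the last fallback covers combinations the description does not enumerate.

```python
def target_rendering_rate(r1, r2, t1, t2, P=1, Q=2, L=4):
    # r1 / r2: the first (horizontal) and second (vertical) rendering rates.
    if r1 < t1 and r2 < t1:
        return (P, P)
    if r1 < t1 and t1 <= r2 < t2:
        return (P, Q)
    if t1 <= r1 < t2 and r2 < t1:
        return (Q, P)
    if t1 <= r1 < t2 and r2 >= t2:
        return (Q, L)
    if r1 >= t2 and t1 <= r2 < t2:
        return (L, Q)   # the symmetric case listed in claim 5
    return (Q, Q)       # fallback for combinations not enumerated above
```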
In one possible implementation, the predictive model includes a feature extraction network, a first branch network, a second branch network, a third branch network, and a fourth branch network.
The prediction module 902 is specifically configured to:
and inputting the current image into a prediction model, and obtaining a feature map of the current image through a feature extraction network.
And processing the feature map through the first branch network and the second branch network to obtain the ROI.
And processing the feature map through a third branch network and a fourth branch network to obtain feature parameters.
In one possible implementation, the prediction module 902 is specifically configured to:
and processing the feature map through a first branch network to obtain a local shape feature map.
And processing the feature map through a second branch network to obtain a global saliency map.
The ROI is determined from the local shape feature map and the global saliency map.
In one possible implementation, the prediction module 902 is specifically configured to:
and processing the feature map through a third branch network to obtain the movement speed.
And processing the feature map through a fourth branch network to obtain depth of field, wherein the feature parameters comprise motion speed and depth of field.
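To make the branch layout concrete, below is a minimal PyTorch sketch of one shared feature-extraction backbone feeding the four branch heads. All layer choices are illustrative assumptions, since the patent specifies the branch structure but not the concrete layers, and the saliency-cropping step of formula (7) is simplified to same-size maps.

```python
import torch
import torch.nn as nn

class PredictionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature extraction network, shared by all four branches.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.branch1 = nn.Conv2d(64, 1, 1)  # local shape feature map
        self.branch2 = nn.Conv2d(64, 1, 1)  # global saliency map
        self.branch3 = nn.Sequential(       # motion speed (horizontal, vertical)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))
        self.branch4 = nn.Sequential(       # depth of field
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x):
        f = self.backbone(x)
        local_shape = torch.sigmoid(self.branch1(f))
        saliency = torch.sigmoid(self.branch2(f))
        roi = local_shape * saliency            # Hadamard product, formula (7)
        speed = self.branch3(f)                 # motion speed, pixel/s
        depth = torch.sigmoid(self.branch4(f))  # depth of field in (0, 1)
        return roi, speed, depth
```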
The device of the present embodiment may be used to execute the technical solutions of the foregoing method embodiments, and the specific implementation manner and the technical effects are similar, and are not repeated herein.
Fig. 10 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present application, and as shown in fig. 10, the electronic device 100 may include: at least one processor 1001 and memory 1002.
Memory 1002 for storing programs. In particular, the program may include program code including computer-executable instructions.
The memory 1002 may include random access memory (Random Access Memory, RAM) and may also include non-volatile memory (Non-volatile Memory), such as at least one disk storage.
The processor 1001 is configured to execute computer-executable instructions stored in the memory 1002 to implement the methods described in the foregoing method embodiments. The processor 1001 may be a central processing unit (Central Processing Unit, CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
Optionally, the electronic device 100 may further include: a communication interface 1003. In a specific implementation, if the communication interface 1003, the memory 1002, and the processor 1001 are implemented independently, they may be connected to one another through a bus and communicate with one another. The bus may be an industry standard architecture (Industry Standard Architecture, ISA) bus, a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The bus may be divided into an address bus, a data bus, a control bus, and so on, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the communication interface 1003, the memory 1002, and the processor 1001 are implemented integrally on one chip, the communication interface 1003, the memory 1002, and the processor 1001 may complete communication through internal interfaces.
The electronic device 100 may be a chip, a chip module, an IDE, a terminal device, or the like.
The electronic device of the present embodiment may be used to execute the technical solutions of the foregoing method embodiments, and the specific implementation manner and the technical effects are similar, and are not repeated herein.
A seventh embodiment of the present application provides a computer-readable storage medium, which may include various media capable of storing computer-executable instructions, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a RAM, a magnetic disk, or an optical disc. Specifically, computer-executable instructions are stored in the computer-readable storage medium; when executed by a computer, they carry out the technical solutions shown in the foregoing method embodiments. The specific implementation manner and technical effects are similar and are not repeated herein.
An eighth embodiment of the present application provides a computer program product, which includes a computer program, and when the computer program is executed by a computer, the technical solution shown in the foregoing method embodiment is executed, and the specific implementation manner and the technical effect are similar, and are not repeated herein.
A ninth embodiment of the present application provides a chip, where a computer program is stored on the chip, and when the computer program is executed by the chip, the technical solution shown in the foregoing method embodiment is executed.
In one possible implementation, the chip may also be a chip module.
The chip of this embodiment may be used to execute the technical solutions shown in the foregoing method embodiments; the specific implementation manner and technical effects are similar and are not repeated here.
A tenth embodiment of the present application provides a module device, which includes a power module, a storage module, and a chip module.
The power supply module is used for providing electric energy for the module equipment.
The storage module is used for storing data and instructions.
The chip module of the embodiment may be used to execute the technical solution shown in the foregoing method embodiment, and the specific implementation manner and the technical effect are similar, and are not repeated here.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
In this application, "and/or" is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In this context, the character "/" indicates that the front and rear associated objects are an "or" relationship.
"at least one (item) below" or the like, refers to any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, a and b, a and c, b and c, or a, b and c, wherein each of a, b, c may itself be an element, or may be a collection comprising one or more elements.
The term "at least one" in this application means one or more. "plurality" means two or more. The first, second, etc. descriptions in the embodiments of the present application are only used for illustrating and distinguishing the description objects, and no order division is used, nor does it indicate that the number of the devices in the embodiments of the present application is particularly limited, and no limitation on the embodiments of the present application should be construed. For example, the first threshold and the second threshold are merely for distinguishing between different thresholds, and are not intended to represent differences in the size, priority, importance, or the like of the two thresholds.
In this application, "exemplary," "in some embodiments," "in other embodiments," etc. are used to indicate an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the term use of an example is intended to present concepts in a concrete fashion.
"of", corresponding "and" associated "in this application may be sometimes used in combination, and it should be noted that the meaning of the expression is consistent when the distinction is not emphasized. Communication, transmission may sometimes be mixed in embodiments of the present application, it should be noted that the meaning expressed is consistent with the de-emphasis. For example, a transmission may include sending and/or receiving, either nouns or verbs.
In this application, "equal to" may be used in conjunction with "less than" or "greater than" but not in conjunction with "less than" and "greater than" at the same time. When the combination of the 'equal' and the 'less' is adopted, the method is applicable to the technical scheme adopted by the 'less'. When being used with 'equal to' and 'greater than', the method is applicable to the technical scheme adopted by 'greater than'.

Claims (12)

1. A graphical display method, comprising:
acquiring a current image;
predicting a region of interest (ROI) in the current image and characteristic parameters of a target in the ROI through a pre-trained prediction model;
determining a target rendering rate according to the characteristic parameters;
and performing graphic display on the ROI according to the target rendering rate, and performing graphic display on the non-ROI in the current image according to a preset rendering rate, wherein the preset rendering rate is larger than the target rendering rate.
2. The method of claim 1, wherein the characteristic parameters include a motion speed and a depth of field;
the determining the target rendering rate according to the characteristic parameters comprises the following steps:
determining a first rendering rate of the target in the horizontal direction and a second rendering rate of the target in the vertical direction according to the motion speed and the depth of field;
and determining the target rendering rate according to the first rendering rate and the second rendering rate.
3. The method of claim 2, wherein the determining the target rendering rate from the first rendering rate and the second rendering rate comprises:
acquiring a first motion speed of a target in the ROI in a first image in a horizontal direction and a second motion speed of the target in a vertical direction, wherein the first image is a previous frame image of the current image;
acquiring a third movement speed of a target in the ROI in a horizontal direction and a fourth movement speed of the target in a vertical direction in a second image, wherein the second image is a previous frame image of the first image;
and determining the target rendering rate according to the first movement speed, the second movement speed, the third movement speed and the fourth movement speed.
4. A method according to claim 3, wherein said determining said target rendering rate from said first, second, third and fourth motion speeds comprises:
correcting the first rendering rate according to the first movement speed and the third movement speed to obtain a corrected first rendering rate;
correcting the second rendering rate according to the second movement speed and the fourth movement speed to obtain a corrected second rendering rate;
And determining the target rendering rate according to the corrected first rendering rate and the corrected second rendering rate.
5. The method of any one of claims 2-4, wherein the determining the target rendering rate from the first rendering rate and the second rendering rate comprises:
if the first rendering rate and the second rendering rate are each less than a first threshold, determining that the target rendering rate is P×P;

if the first rendering rate is less than the first threshold and the second rendering rate is between the first threshold and a second threshold, determining that the target rendering rate is P×Q;

if the first rendering rate is between the first threshold and the second threshold and the second rendering rate is less than the first threshold, determining that the target rendering rate is Q×P;

if the first rendering rate is between the first threshold and the second threshold and the second rendering rate is greater than or equal to the second threshold, determining that the target rendering rate is Q×L;

if the first rendering rate is greater than or equal to the second threshold and the second rendering rate is between the first threshold and the second threshold, determining that the target rendering rate is L×Q; wherein P is less than Q, Q is less than L, and P, Q, and L are each integers greater than or equal to 1.
6. The method of any of claims 1-5, wherein the predictive model includes a feature extraction network, a first branch network, a second branch network, a third branch network, and a fourth branch network;
the predicting the feature parameters of the region of interest ROI in the current image and the target in the ROI by the pre-trained prediction model comprises:
inputting the current image into the prediction model, and obtaining a feature map of the current image through the feature extraction network;
processing the feature map through the first branch network and the second branch network to obtain the ROI;
and processing the feature map through the third branch network and the fourth branch network to obtain the feature parameters.
7. The method of claim 6, wherein the processing the feature map through the first and second branch networks to obtain the ROI comprises:
processing the feature map through the first branch network to obtain a local shape feature map;
processing the feature map through the second branch network to obtain a global saliency map;
and determining the ROI according to the local shape feature map and the global saliency map.
8. The method according to claim 6, wherein the processing the feature map through the third branch network and the fourth branch network to obtain the feature parameters includes:
processing the feature map through the third branch network to obtain a movement speed;
and processing the feature map through the fourth branch network to obtain depth of field, wherein the feature parameters comprise the motion speed and the depth of field.
9. A graphic display device, comprising:
the acquisition module is used for acquiring the current image;
the prediction module is used for predicting a region of interest (ROI) in the current image and characteristic parameters of a target in the ROI through a pre-trained prediction model;
the determining module is used for determining a target rendering rate according to the characteristic parameters;
the display module is used for graphically displaying the ROI according to the target rendering rate and graphically displaying the non-ROI in the current image according to a preset rendering rate, wherein the preset rendering rate is greater than the target rendering rate.
10. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the graphical display method of any of claims 1-8.
11. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to implement a graphical display method as claimed in any one of claims 1 to 8.
12. A computer program product comprising a computer program which, when executed by a processor, implements the graphical display method of any of claims 1-8.