CN113192173B - Image processing method and device of three-dimensional scene and electronic equipment - Google Patents


Info

Publication number
CN113192173B
Authority
CN
China
Prior art keywords
image
scene
processing
pixel
channel
Prior art date
Legal status
Active
Application number
CN202110528438.7A
Other languages
Chinese (zh)
Other versions
CN113192173A (en)
Inventor
袁佳平
Current Assignee
Tencent Technology Chengdu Co Ltd
Original Assignee
Tencent Technology Chengdu Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Chengdu Co Ltd filed Critical Tencent Technology Chengdu Co Ltd
Priority to CN202110528438.7A priority Critical patent/CN113192173B/en
Publication of CN113192173A publication Critical patent/CN113192173A/en
Application granted granted Critical
Publication of CN113192173B publication Critical patent/CN113192173B/en


Classifications

    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/02: Non-photorealistic rendering
    • G06T 15/04: Texture mapping
    • G06T 15/50: Lighting effects
    • G06T 15/80: Shading
    • G06T 15/83: Phong shading
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 2300/308: Details of the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an image processing method and apparatus for a three-dimensional scene, an electronic device, and a computer-readable storage medium. The method includes: performing image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene; storing the scene image into an image buffer of a memory; shading, by at least one shading component, the scene image in the image buffer to update the scene image to a target scene image having target visual features; and rendering the target scene image in the image buffer to present the target scene image on the human-computer interaction interface. With the method and apparatus, target visual features can be added to the three-dimensional scene accurately and rapidly.

Description

Image processing method and device of three-dimensional scene and electronic equipment
Technical Field
The present application relates to computer technology, and in particular, to a method and apparatus for processing an image of a three-dimensional scene, an electronic device, and a computer readable storage medium.
Background
With the rapid development of computer technology, three-dimensional modeling has emerged in fields such as game production, animation production, and Virtual Reality (VR). Here, three-dimensional (3D) refers to a coordinate system formed by adding a third direction vector to a planar two-dimensional coordinate system. Three-dimensional modeling yields scenes with a sense of depth and realism, and therefore a good presentation effect.
Real-world services built on three-dimensional scenes often raise complex and changing requirements, such as adding a specific visual feature (i.e., a visual effect) to the scene. In the related art, a modeler typically re-models the three-dimensional scene, i.e., re-makes it manually, according to the visual feature to be added. This approach, however, consumes considerable time and labor and cannot keep up with complex and changing business requirements.
Disclosure of Invention
The embodiments of the present application provide an image processing method and apparatus for a three-dimensional scene, an electronic device, and a computer-readable storage medium, which can accurately and rapidly add target visual features to the three-dimensional scene and adapt to complex and changing business requirements.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an image processing method of a three-dimensional scene, which comprises the following steps:
performing image acquisition processing on a three-dimensional scene to obtain a scene image of the three-dimensional scene;
storing the scene image into an image buffer area of a memory;
shading the scene image in the image buffer by at least one shading component to update the scene image to a target scene image having target visual features;
Rendering the target scene image in the image buffer area so as to present the target scene image on a human-computer interaction interface.
An embodiment of the present application provides an image processing apparatus for a three-dimensional scene, including:
the acquisition module is used for carrying out image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene;
the storage module is used for storing the scene image into an image buffer area of the memory;
a shading module for shading the scene image in the image buffer by at least one shading component to update the scene image to a target scene image having target visual features;
and the rendering module is used for rendering the target scene image in the image buffer area so as to present the target scene image on a human-computer interaction interface.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the image processing method of the three-dimensional scene provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores executable instructions for causing a processor to execute, thereby realizing the image processing method of the three-dimensional scene provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
the three-dimensional scene is subjected to image acquisition processing to obtain a scene image, the scene image is shaded in the image buffer by a shading component, and the target scene image obtained from the shading is presented. On the one hand, the target visual feature can be accurately added to the three-dimensional scene (the scene image) by the shading component, and if the added target visual feature needs to be changed, only the shading component needs to be adjusted accordingly, so that complex and changing business requirements can be met. On the other hand, compared with the solution provided by the related art, the embodiments of the present application reduce user operations and at the same time reduce the consumption of computing resources of the electronic device.
Drawings
FIG. 1 is a schematic architecture diagram of an image processing system for three-dimensional scenes provided by an embodiment of the present application;
fig. 2 is a schematic architecture diagram of a terminal device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a three-dimensional engine provided by an embodiment of the present application;
fig. 4A is a flowchart of an image processing method of a three-dimensional scene according to an embodiment of the present application;
fig. 4B is a flowchart of an image processing method of a three-dimensional scene according to an embodiment of the present application;
FIG. 4C is a flowchart of a pixel shading process according to an embodiment of the present application;
fig. 4D is a flowchart of an image processing method of a three-dimensional scene according to an embodiment of the present application;
FIG. 5A is a schematic diagram of a frame of a scene image of a three-dimensional scene provided by an embodiment of the application;
FIG. 5B is a schematic diagram of a target scene image with an added screen glitch effect provided by an embodiment of the present application;
FIG. 6 is a schematic illustration of post-processing provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a post-processing chain provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of an image processed by an RGB color conversion shader according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a target scene image processed by an RGB color conversion shader according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a target scene image processed by a noise shader according to an embodiment of the present application;
fig. 11 is a flowchart of an image processing method of a three-dimensional scene according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", and the like are merely used to distinguish between similar objects and do not represent a particular ordering of the objects, it being understood that the "first", "second", or the like may be interchanged with one another, if permitted, to enable embodiments of the application described herein to be practiced otherwise than as illustrated or described herein. In the following description, the term "plurality" refers to at least two.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before describing embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application will be described, and the terms and terminology involved in the embodiments of the present application will be used in the following explanation.
1) Three-dimensional scene: the three-dimensional space is constructed based on the three-dimensional modeling technology, and objects (i.e., three-dimensional objects) in the three-dimensional space can be described by three-dimensional coordinates, wherein the three-dimensional coordinates can refer to coordinates in a three-dimensional coordinate system including an x-axis, a y-axis and a z-axis. In some embodiments, the three-dimensional scene may be a virtual scene (or three-dimensional virtual scene), which is a scene output by an electronic device and distinguished from the real world, and the visual perception of the virtual scene can be formed by naked eyes or assistance of a specific device. The virtual scene may be a simulation environment for the real world, a semi-simulation and semi-fictional virtual environment, or a pure fictional virtual environment.
2) Three-dimensional engine: a set of codes (instructions) designed for an electronic device outputting a three-dimensional scene that can be recognized by the electronic device is used to control how the three-dimensional scene is produced and output. From another perspective, the three-dimensional engine may refer to a three-dimensional scene development environment that encapsulates hardware operations and image algorithms. In embodiments of the present application, image processing may be implemented using a camera component and a shading component in a three-dimensional engine.
3) Shading component: also called a shader, an editable program used to shade an image and perform 3D-graphics-related computation. Because the shading component is editable, various visual features (visual effects) can be added to an image through it without being limited by the fixed rendering pipeline of the graphics card. In the embodiments of the present application, the shading components may include a vertex shading component, which is mainly responsible for operations such as the geometric relationships of vertices, and a pixel shading component, which is mainly responsible for computations such as pixel colors. The shading component may run on a Graphics Processing Unit (GPU) of the electronic device; the GPU, also called a display core, visual processor, or display chip, is a processor dedicated to graphics-related operations in electronic devices such as personal computers, workstations, game consoles, tablet computers, and smartphones.
4) Post-processing (Post-Processing): the processing performed on an image before it is presented on the human-computer interaction interface, during which the image in the image buffer of the memory can be shaded so as to apply the target visual features that meet the business requirements to the image. The image buffer is a buffer partitioned in the memory for the three-dimensional scene and used to store images acquired from the three-dimensional scene (i.e., scene images). The embodiments of the present application do not limit the type of the target visual feature, which may be, for example, a specific depth of field, glow, film grain, various types of antialiasing, and the like.
5) Rendering (Render): in the embodiment of the application, the image is rendered to a man-machine interaction interface (such as a man-machine interaction interface provided by a browser) so as to present the image in the man-machine interaction interface.
6) Policy: the electronic device can parse a policy according to set logic and perform the corresponding operations to implement the corresponding function; for example, the electronic device can perform pixel shading processing according to a pixel shading policy to implement the pixel shading function. In the embodiments of the present application, the pixel shading policy may be set manually by relevant personnel or automatically through Artificial Intelligence (AI). The embodiments of the present application do not limit the specific form of the pixel shading policy; it may be, for example, code directly recognized and executed by the electronic device.
7) Pixel: an element of an image that cannot be further subdivided.
8) Image channel: used to describe the pixels in an image. In the embodiments of the present application, the image channels may include at least one of color channels and a transparency channel (also called the Alpha channel). Color channels describe the color of a pixel and differ across color spaces (also known as color modes); for example, the color channels of the RGB color space are the Red (R), Green (G), and Blue (B) channels, while the color channels of the HSL color space are the Hue (H), Saturation (S), and Lightness (L) channels. The transparency channel describes the transparency of a pixel.
The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a computer readable storage medium for a three-dimensional scene, which can accurately and rapidly add target visual characteristics into the three-dimensional scene and adapt to complex and changeable business requirements. An exemplary application of the electronic device provided by the embodiment of the present application is described below, where the electronic device provided by the embodiment of the present application may be implemented as various types of terminal devices, or may be implemented as a server.
Referring to fig. 1, fig. 1 is a schematic architecture diagram of an image processing system 100 for a three-dimensional scene according to an embodiment of the present application, where a terminal device 400 is connected to a server 200 through a network 300, and the server 200 is connected to a database 500, where the network 300 may be a wide area network or a local area network, or a combination of the two.
In some embodiments, taking an example that the electronic device is a terminal device, the image processing method of the three-dimensional scene provided by the embodiment of the application may be implemented by the terminal device. For example, the terminal device 400 may calculate data required for display by a graphics computing hardware (e.g., GPU), and complete loading, parsing, and rendering of display data (e.g., a target scene image), and output an image capable of forming a visual perception of a three-dimensional scene by means of a graphics output hardware (e.g., screen), for example, rendering the target scene image on a display screen of a smartphone.
For example, the terminal device 400 performs image acquisition processing on the three-dimensional scene to obtain a scene image, and stores the scene image in an image buffer area included in a memory of the terminal device 400, where relevant data of the three-dimensional scene may be acquired by the terminal device 400 from the outside (such as the server 200, the database 500, or the blockchain, or the like), or may be generated in the terminal device 400. The terminal device 400 performs a shading process on the scene image in the image buffer by at least one shading component to update the scene image to a target scene image having target visual characteristics. Finally, the terminal device 400 performs rendering processing on the target scene image in the image buffer area to present the target scene image at the human-computer interaction interface.
In some embodiments, taking an electronic device as an example of a server, the image processing method for a three-dimensional scene provided in the embodiments of the present application may also be implemented by cooperation of a terminal device and the server. For example, the server 200 performs computation of three-dimensional scene-related display data and transmits the same to the terminal device 400, and the terminal device 400 performs loading, parsing and rendering of the display data depending on the graphic computation hardware and outputs an image depending on the graphic output hardware to form visual perception.
For example, the server 200 performs image acquisition processing on the three-dimensional scene to obtain a scene image, where the server 200 may acquire relevant data of the three-dimensional scene from the terminal device 400, the database 500, the distributed file system or the blockchain of the server 200 itself, and the like, which is not limited. The server 200 stores the scene image in an image buffer included in a memory of the server 200 and performs a shading process on the scene image in the image buffer by at least one shading component to update the scene image to a target scene image having a target visual characteristic. The server 200 then transmits the target scene image to the terminal device 400, and the terminal device 400 may perform rendering processing on the received target scene image to present the target scene image at the human-computer interaction interface.
In some embodiments, the electronic device may perform image acquisition processing on the three-dimensional scene when the three-dimensional scene satisfies the visual change condition, to obtain a scene image of the three-dimensional scene. Here, the visual change condition is not limited, and for example, a trigger operation for a three-dimensional scene may be received in the human-computer interaction interface. For example, as shown in fig. 1, a three-dimensional scene and a visual change option are being presented in the man-machine interaction interface of the terminal device 400, when the terminal device 400 receives a trigger operation for the visual change option, the trigger operation is used as a trigger operation for the three-dimensional scene being presented, in which case, the terminal device 400 may perform image acquisition processing on the three-dimensional scene being presented, or may send related data of the three-dimensional scene being presented to the server 200, so that the server 200 performs image acquisition processing on the three-dimensional scene being presented. The type of the triggering operation is not limited, and may be, for example, a touch operation, such as a click operation or a long press operation; and may be, for example, a non-contact operation such as a voice input operation or a gesture input operation. After a series of processes performed by the terminal device 400 or the server 200, finally, the terminal device 400 presents a target scene image on the human-computer interaction interface, where the target scene image has a target visual characteristic, that is, a visual change is achieved. Therefore, the correctness and rationality of the image processing time can be improved in a man-machine interaction mode, and the user requirements can be fully met.
In some embodiments, various results involved in the image processing process (such as a three-dimensional scene, a scene image, a target scene image, etc.) can be stored in the blockchain, and the accuracy of the data in the blockchain can be ensured because the blockchain has the characteristic of non-falsification. The electronic device may send a query request to the blockchain to query data stored in the blockchain, e.g., when a target scene image needs to be presented, the terminal device may query the target scene image stored in the blockchain and perform rendering processing.
In some embodiments, the terminal device 400 or the server 200 may implement the image processing method of the three-dimensional scene provided by the embodiment of the present application by running a computer program, such as the client 410 shown in fig. 1. For example, the computer program may be a native program or a software module in an operating system; a Native Application (APP), i.e. a program that needs to be installed in an operating system to run, such as a browser Application, a short video Application, a military simulation program, or a game Application; the method can also be an applet, namely a program which can be run only by being downloaded into a browser environment; but also an applet that can be embedded in any APP, which applet can be run or shut down by the user control. In general, the computer programs described above may be any form of application, module or plug-in. For the game application, it may be any one of a First-Person shooter (FPS) game, a Third-Person shooter (TPS) game, a multiplayer online tactical competition (Multiplayer Online Battle Arena, MOBA) game, and a multiplayer warfare survival game, which are not limited thereto.
In some embodiments, the server (such as the server 200 shown in fig. 1) may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDNs), and basic cloud computing services such as big data and artificial intelligence platforms, where the cloud services may be image processing services for a terminal device to call. The terminal device (such as the terminal device 400 shown in fig. 1) may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart television, a smart watch, etc., but is not limited thereto. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
In some embodiments, a database (such as database 500 shown in FIG. 1) and a server (such as server 200 shown in FIG. 1) may be provided independently. In some embodiments, the database and the server may also be integrated together, i.e., the database may be considered to exist inside the server, integrated with the server, and the server may provide data management functions of the database.
Taking the electronic device provided by the embodiment of the present application as an example of a terminal device, it can be understood that, in the case where the electronic device is a server, portions (such as a user interface, a presentation module, and an input processing module) in the structure shown in fig. 2 may be default. Referring to fig. 2, fig. 2 is a schematic structural diagram of a terminal device 400 provided in an embodiment of the present application, and the terminal device 400 shown in fig. 2 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in terminal device 400 are coupled together by bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 440.
The processor 410 may be an integrated circuit chip with signal processing capability, for example, a general-purpose processor (such as a microprocessor or any conventional processor), a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 410.
Memory 450 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a random access Memory (RAM, random Access Memory). The memory 450 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs, e.g., framework layer, core library layer, driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for accessing other electronic devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), and the like;
a presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the image processing apparatus for a three-dimensional scene provided by the embodiments of the present application may be implemented in software. FIG. 2 shows an image processing apparatus 455 for a three-dimensional scene stored in the memory 450, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: an acquisition module 4551, a storage module 4552, a shading module 4553, and a rendering module 4554. These modules are logical and thus may be arbitrarily combined or further split according to the functions implemented. The functions of the respective modules are described below.
Referring to fig. 3, fig. 3 is a schematic diagram of a three-dimensional engine according to an embodiment of the present application, where the three-dimensional scene is a game virtual scene, the three-dimensional engine may be a game engine. As shown in fig. 3, the three-dimensional engine includes, but is not limited to, a rendering component (e.g., a renderer), an editing component (e.g., an editor for editing/producing a three-dimensional scene), an underlying algorithm, scene management (for managing different three-dimensional scenes), sound effects (for managing audio corresponding to a three-dimensional scene), a script engine, a camera component, and a shading component, wherein the shading component may include a vertex shading component and a pixel shading component. The image processing method of the three-dimensional scene provided by the embodiment of the present application may be implemented by each module in the image processing apparatus 455 of the three-dimensional scene shown in fig. 2 invoking the related components of the three-dimensional engine shown in fig. 3, and is described below by way of example.
For example, the acquisition module 4551 is configured to invoke a camera component in the three-dimensional engine to perform image acquisition processing on a three-dimensional scene, so as to obtain a scene image of the three-dimensional scene; the storage module 4552 is configured to invoke a camera component in the three-dimensional engine to store a scene image in an image buffer corresponding to the camera component, where a correspondence between the camera component and the image buffer may be pre-established; the shading module 4553 is configured to invoke a shading component in the three-dimensional engine to perform shading processing on the scene image in the image buffer, thereby updating the scene image to a target scene image having a target visual characteristic; the rendering module 4554 is configured to invoke a rendering component in the three-dimensional engine to render the target scene image in the image buffer, so as to present the target scene image on the human-computer interaction interface.
Of course, the above examples are not limited to the embodiments of the present application, and the calling relationships of the components included in the three-dimensional engine and the modules in the image processing apparatus 455 of the three-dimensional scene to the components in the three-dimensional engine may be adjusted according to the actual application scenario.
The image processing method of the three-dimensional scene provided by the embodiment of the application will be described in connection with the exemplary application and implementation of the electronic device provided by the embodiment of the application.
Referring to fig. 4A, fig. 4A is a flowchart of an image processing method of a three-dimensional scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 4A.
In step 101, an image acquisition process is performed on a three-dimensional scene to obtain a scene image of the three-dimensional scene.
For example, image acquisition processing may be performed on a three-dimensional scene being presented in a human-computer interaction interface to obtain a scene image in a current field of view (referred to as a field of view in presentation) or a specific field of view; for example, when the three-dimensional scene is not presented, the three-dimensional scene may be subjected to image acquisition processing according to a specific view range to obtain a scene image, where the view range used in the image acquisition processing may be preset or may be automatically determined.
The embodiments of the present application do not limit the manner of the image acquisition processing; for example, it may be implemented by screen capture, or by a camera component of a three-dimensional engine.
In some embodiments, the three-dimensional scene includes at least one three-dimensional object, and the above image acquisition processing of the three-dimensional scene to obtain a scene image may be implemented as follows: determining, according to the coordinates of the at least one three-dimensional object in the three-dimensional scene, a minimum space that simultaneously contains the at least one three-dimensional object; determining a field of view of the camera component based on the minimum space; and acquiring, by the camera component, a scene image within the field of view in the three-dimensional scene.
In the embodiment of the application, the visual field range for image acquisition processing can be automatically determined. For example, if the three-dimensional scene includes at least one three-dimensional object, in order to ensure that each three-dimensional object can be acquired during the image acquisition process, coordinates of each three-dimensional object may be determined by the three-dimensional engine, and a minimum space (also referred to as a minimum three-dimensional space) including all three-dimensional objects in the three-dimensional scene is determined according to the coordinates of each three-dimensional object.
The field of view of the camera component of the three-dimensional engine is then adjusted according to the determined minimum space so that the adjusted field of view covers at least the minimum space, and a scene image within the adjusted field of view of the three-dimensional scene is acquired by the camera component. In this way, the accuracy of the determined field of view is improved and the acquired scene image is guaranteed to contain all three-dimensional objects; at the same time, the field of view does not need to be set and tested manually, which saves labor cost and avoids the waste of computing resources caused by repeatedly testing the field of view.
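The patent does not name a concrete engine; assuming a Three.js-style web engine, the bounding-box fitting described above might look like the following sketch. The helper name fitCameraToObjects and the distance formula are illustrative, not taken from the patent.

```typescript
import * as THREE from 'three';

// Illustrative sketch only: computes the minimum box that contains every
// three-dimensional object, then backs the camera away far enough to cover it.
function fitCameraToObjects(camera: THREE.PerspectiveCamera, objects: THREE.Object3D[]): void {
  const bounds = new THREE.Box3();
  for (const object of objects) {
    bounds.union(new THREE.Box3().setFromObject(object)); // expand to include each object
  }
  const size = bounds.getSize(new THREE.Vector3());
  const center = bounds.getCenter(new THREE.Vector3());

  // Distance at which the field of view just covers the box height (or width).
  const halfFov = THREE.MathUtils.degToRad(camera.fov) / 2;
  const distance = Math.max(size.y, size.x / camera.aspect) / (2 * Math.tan(halfFov));

  camera.position.set(center.x, center.y, center.z + size.z / 2 + distance);
  camera.lookAt(center);
  camera.updateProjectionMatrix();
}
```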
In some embodiments, the above-mentioned image acquisition processing of the three-dimensional scene may be implemented in such a manner that a scene image of the three-dimensional scene is obtained: when the three-dimensional scene meets the vision change condition, performing image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene; wherein the visual change condition comprises at least one of: receiving triggering operation aiming at a three-dimensional scene at a human-computer interaction interface; the presentation duration of the three-dimensional scene meets the duration condition; the audio characteristic variation amplitude of the audio corresponding to the three-dimensional scene meets the amplitude condition.
In the embodiment of the application, the visual change condition set for the three-dimensional scene can be obtained, and the three-dimensional scene is triggered to be subjected to image acquisition processing when the three-dimensional scene meets the visual change condition. The conditions for changing the vision are not limited, and may include at least one of the following three, for example, and will be described separately.
1) And receiving triggering operation aiming at the three-dimensional scene in the human-computer interaction interface. For example, in the process of presenting a three-dimensional scene through a human-computer interaction interface, a triggering operation for a certain area of the human-computer interaction interface is received, wherein the area can be any one area or a preset certain area.
2) The presentation duration of the three-dimensional scene satisfies the duration condition. For example, since a three-dimensional scene has three-dimensional properties, it is generally necessary to present the three-dimensional scene from different angles (field of view), similar to the process of video playback, and in the process of presenting the three-dimensional scene, when the presentation duration that has been presented satisfies the duration condition, image acquisition processing is triggered for the three-dimensional scene being presented. The time length condition may be set according to an actual application scenario, for example, the time length of the presented presentation accords with a certain time point (for example, 30 th second), and for example, the time length of the presented presentation falls into a certain time period (for example, a time period between 30 th second and 40 th second).
3) The change amplitude of an audio feature of the audio corresponding to the three-dimensional scene satisfies the amplitude condition. To improve the presentation effect of the three-dimensional scene, audio corresponding to the scene is configured in some cases to serve as background music. In view of this, while a three-dimensional scene is being presented, image acquisition processing is triggered on the scene when the change amplitude of an audio feature of the audio being played is detected to satisfy the amplitude condition. The audio feature may be any feature, such as frequency or decibel level, and the amplitude condition may be set according to the characteristics of that feature. For example, when the audio feature is the decibel level, the amplitude condition may be that the decibel change between the current moment and the previous moment is greater than an amplitude threshold.
In these ways, the flexibility of image processing is improved, good interaction with the user is achieved, and the human-computer interaction experience is enhanced.
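A minimal sketch of how the three visual-change conditions could be evaluated is shown below; the SceneState fields and the thresholds (a 30 to 40 second presentation window, a 10 dB jump) are hypothetical values chosen only for illustration.

```typescript
// Sketch of the three visual-change conditions from this section; the concrete
// thresholds are hypothetical, not taken from the patent.
interface SceneState {
  triggered: boolean;        // a trigger operation was received on the interface
  presentedSeconds: number;  // how long the scene has been presented
  decibelNow: number;        // audio level at the current moment
  decibelPrev: number;       // audio level at the previous moment
}

function shouldCaptureScene(state: SceneState): boolean {
  const durationOk = state.presentedSeconds >= 30 && state.presentedSeconds <= 40;
  const amplitudeOk = Math.abs(state.decibelNow - state.decibelPrev) > 10;
  return state.triggered || durationOk || amplitudeOk;
}
```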
In some embodiments, prior to step 101, further comprising: any one of the following processes is performed: acquiring a created three-dimensional scene comprising three-dimensional objects; acquiring multimedia content to be processed, and creating a three-dimensional object corresponding to the image to be processed in the newly created three-dimensional scene; wherein the image to be processed is any one image in the multimedia content.
In the embodiment of the application, the created three-dimensional scene comprising the three-dimensional object can be acquired, and the image processing is carried out on the three-dimensional scene.
In addition, multimedia content to be processed may also be acquired. Multimedia content refers to content that includes at least one media form, such as text, sound, or images; in the embodiments of the present application, the multimedia content includes at least an image, for example, it may be an image or a video. For such multimedia content, a three-dimensional scene may be created, and a three-dimensional object corresponding to the image to be processed may be created in that scene. The image to be processed is any image in the multimedia content, for example, each frame of a video; it can be added to the surface of the three-dimensional object in the form of a texture (map), so that it is captured in the subsequent image acquisition processing. In this way, the scope of application of the embodiments of the present application is broadened: they apply not only to three-dimensional scenes but also to multimedia content that is not a three-dimensional scene.
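Assuming a Three.js-style engine, wrapping an ordinary image as a texture on a three-dimensional object in a newly created scene might look like this sketch; wrapImageInScene is a hypothetical helper name.

```typescript
import * as THREE from 'three';

// Hedged sketch: wrap one frame of the multimedia content as a texture on a plane
// inside a newly created scene, so the same capture/shade/render pipeline applies.
function wrapImageInScene(imageUrl: string): THREE.Scene {
  const scene = new THREE.Scene();
  const texture = new THREE.TextureLoader().load(imageUrl);       // image to be processed
  const material = new THREE.MeshBasicMaterial({ map: texture });
  const plane = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material);
  scene.add(plane);
  return scene;
}
```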
In step 102, the scene image is stored to an image buffer of the memory.
Here, the image buffer is a buffer area divided in the memory, and is used for storing (e.g. temporarily storing) the scene image obtained by the image acquisition processing. In the embodiment of the application, the memory space occupied by the image buffer area can be preset, and the image buffer area is divided in the memory in advance according to the set memory space; or, the required memory space can be determined in real time according to the acquired scene image, and the image buffer area can be divided in real time according to the required memory space in the memory, which will be described later.
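In a WebGL/Three.js setting, the image buffer of the memory can be realized as an off-screen render target; the sketch below captures the scene image into such a buffer instead of the screen (steps 101 and 102). This is an assumed mapping, and captureSceneImage is an illustrative helper name.

```typescript
import * as THREE from 'three';

// Hedged sketch: allocate an off-screen buffer sized to the viewport and draw the
// scene into it. The buffer lives in memory and is never shown to the user directly.
const renderer = new THREE.WebGLRenderer();
const imageBuffer = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);

function captureSceneImage(scene: THREE.Scene, camera: THREE.Camera): void {
  renderer.setRenderTarget(imageBuffer);  // draw into the buffer instead of the screen
  renderer.render(scene, camera);
  renderer.setRenderTarget(null);         // restore the default (screen) target
}
```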
In step 103, the scene image in the image buffer is rendered by at least one rendering component to update the scene image to a target scene image having target visual characteristics.
For example, the three-dimensional engine includes at least one shading component for adding the target visual feature to the scene image. The embodiments of the present application do not limit the type of the target visual feature, which may be, for example, a specific depth of field, glow, film grain, various types of antialiasing, or a screen glitch effect, and may be determined according to actual business requirements. According to the preset target visual feature, the at least one shading component can be edited accordingly, so that the edited shading component(s) can add the target visual feature.
Here, the scene image in the image buffer is shaded by the at least one shading component (i.e., the at least one edited shading component) to update the scene image to a target scene image having the target visual feature. It should be noted that, since the image buffer is located in the memory, the shading process is invisible to the user, i.e., it can be carried out implicitly and quickly.
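One possible realization of step 103, assuming Three.js post-processing, is an EffectComposer chain in which each ShaderPass acts as one shading component; rgbShiftLikeShader below is a placeholder for any edited shader object and is not an API defined by the patent.

```typescript
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';

// Hedged sketch of step 103: each ShaderPass plays the role of one shading component.
// rgbShiftLikeShader stands for any edited { uniforms, vertexShader, fragmentShader }
// object that adds the target visual feature; the name is illustrative only.
declare const rgbShiftLikeShader: { uniforms: object; vertexShader: string; fragmentShader: string };

function buildComposer(renderer: THREE.WebGLRenderer, scene: THREE.Scene, camera: THREE.Camera) {
  const composer = new EffectComposer(renderer);
  composer.addPass(new RenderPass(scene, camera));       // produces the scene image
  composer.addPass(new ShaderPass(rgbShiftLikeShader));  // shades it in the off-screen buffer
  return composer;
}
// composer.render() later draws the shaded result to the screen (step 104).
```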
In step 104, a rendering process is performed on the target scene image in the image buffer to present the target scene image at the human-machine interface.
For example, the target scene image in the image buffer area is rendered to the man-machine interaction interface so as to present the target scene image with the target visual characteristics in the man-machine interaction interface, and compared with the original scene image, the target scene image is presented, so that the presentation effect can be improved, and the actual business requirement is met.
It should be noted that, after the rendering process, the target scene image in the image buffer may be deleted immediately, or the target scene image in the image buffer may be deleted after waiting for a set period of time (e.g. 1 minute), so as to reduce the waste of memory resources.
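A small sketch of the deferred clean-up mentioned above, assuming the buffer is a Three.js render target; the 60-second delay is the example value given in the text.

```typescript
import * as THREE from 'three';

// Hedged sketch: release the off-screen buffer some time after presentation
// to reduce the waste of memory resources.
function releaseBufferLater(buffer: THREE.WebGLRenderTarget, delayMs = 60_000): void {
  setTimeout(() => buffer.dispose(), delayMs);  // frees the buffer's resources
}
```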
As shown in fig. 4A, in the embodiments of the present application, the shading process is performed by at least one shading component, so that the target visual feature can be added to the scene image quickly and accurately; if the added target visual feature needs to be changed, only the shading component needs to be adjusted accordingly, so the method can adapt to complex and changing business requirements. At the same time, the embodiments of the present application enable automatic image processing, reduce user operations, and avoid the waste of computing resources caused by re-producing the three-dimensional scene.
In some embodiments, referring to fig. 4B, fig. 4B is a flowchart of a method for processing an image of a three-dimensional scene according to an embodiment of the present application, and step 103 shown in fig. 4A may be implemented by steps 201 to 202, which will be described in connection with the steps.
In step 201, vertex information in the three-dimensional scene is obtained through the vertex shading components in the traversed shading components, and vertex shading processing is performed on the scene image in the image buffer according to the vertex information, so as to obtain a plurality of restored vertices.
In the embodiments of the present application, the at least one shading component may be traversed in order, and the shading processing is performed by the currently traversed shading component; of course, when there is only one shading component, the shading processing may be performed directly by that component.
The shading components may include vertex shading components and pixel shading components, and accordingly the shading processing may include vertex shading and pixel shading. During the traversal, vertex information in the three-dimensional scene is acquired by the vertex shading component among the traversed shading components, and vertex shading processing is performed on the scene image in the image buffer according to the vertex information to obtain a plurality of restored vertices. A vertex is an element that describes a three-dimensional object in the coordinate system of the three-dimensional scene; the vertex information may include at least one of the coordinates of the vertex and the color information of the vertex, and may also include other information. The vertex shading processing restores the plurality of vertices of the three-dimensional scene in the scene image, i.e., it realizes the mapping of the vertices.
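For a full-screen post-processing pass, the vertex shading component is typically a simple pass-through that forwards positions and texture coordinates; the GLSL sketch below (embedded as a string, with Three.js built-in attributes assumed) is one such minimal vertex shading component.

```typescript
// Hedged sketch of a vertex shading component for a full-screen post-processing pass:
// it forwards each vertex position and its texture coordinate so that the pixel
// (fragment) stage can later look up the scene image stored in the buffer.
const vertexShader = /* glsl */ `
  varying vec2 vUv;          // texture coordinate handed to the pixel stage
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;
```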
In step 202, a plurality of vertices are rasterized by a pixel shading component of the traversed shading components, and pixels resulting from the rasterization are pixel shaded according to a pixel shading policy to update the scene image in the image buffer.
After the plurality of restored vertices are obtained through the vertex shading component among the traversed shading components, and since the scene image is described not by vertices but by pixels as its indivisible elements, the restored vertices are rasterized by the pixel shading component among the traversed shading components to obtain the pixels that need pixel shading processing. It should be noted that the rasterization process may also be implemented by the vertex shading component among the traversed shading components.
For the pixels obtained by rasterization, pixel shading processing is performed by the pixel shading component among the traversed shading components; this pixel shading processing is also the process of updating the scene image in the image buffer. The pixel shading policy used in the pixel shading processing can be deployed in each pixel shading component in advance, and the pixel shading policy corresponds to the target visual feature, i.e., it is used to add the target visual feature.
It should be noted that the pixel shading policies deployed in different pixel shading components may be the same or different, depending on the actual application scenario.
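A skeleton of a pixel shading component is sketched below; tDiffuse is the uniform name conventionally used by Three.js post-processing passes for the image of the previous pass (an assumption here), and the commented line marks where a concrete pixel shading policy would modify the color.

```typescript
// Hedged skeleton of a pixel shading component: the body of main() is where a
// pixel shading policy goes; everything else just reads and writes the pixel.
const fragmentShader = /* glsl */ `
  uniform sampler2D tDiffuse;  // scene image from the image buffer
  varying vec2 vUv;
  void main() {
    vec4 color = texture2D(tDiffuse, vUv);  // read the pixel produced by rasterization
    // ...pixel shading policy: modify `color` to add the target visual feature...
    gl_FragColor = color;
  }
`;
```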
In some embodiments, the rasterizing of multiple vertices by the pixel shading component of the traversed shading component described above may be implemented in such a way: performing the following processing by the pixel shading component of the traversed shading components: constructing a vertex region in the scene image according to the restored multiple vertices; pixels in the scene image that are covered by the vertex regions are determined as pixels that result from the rasterization process.
For the plurality of restored vertices in the scene image, the pixel shading component among the traversed shading components constructs vertex regions in the scene image from the restored vertices, and the pixels covered by the vertex regions in the scene image are taken as the pixels obtained by rasterization.
For example, the plurality of restored vertices may be sequentially read in units of a set number, where the number of vertices read at a time is equal to the set number. And connecting the vertexes read each time to obtain a vertex region, and taking pixels covered by the vertex region in the scene image as pixels obtained through rasterization. The set number is an integer greater than 2, and may be set according to an actual application scenario, for example, the set number may be 3, and after 3 vertices read each time are connected, a triangle vertex area may be obtained.
When determining whether a pixel in the scene image is covered by a vertex region, an algorithm such as a Linear Expression Evaluation (LEE) algorithm or a scan-line algorithm may be used, without limitation. In this way, effective rasterization can be achieved, i.e., the vertices in the three-dimensional scene can be accurately converted into pixels in the scene image.
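To make "pixels covered by the vertex region" concrete, the sketch below tests coverage of a triangular vertex region with edge functions; this is one alternative to the LEE and scan-line algorithms named above, and in practice this step is normally performed by the GPU's fixed-function rasterizer.

```typescript
// Hedged sketch of the coverage decision for a triangular vertex region.
type Point = { x: number; y: number };

// Signed area test: positive on one side of edge a->b, negative on the other.
function edge(a: Point, b: Point, p: Point): number {
  return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

function pixelCoveredByTriangle(p: Point, v0: Point, v1: Point, v2: Point): boolean {
  const e0 = edge(v0, v1, p);
  const e1 = edge(v1, v2, p);
  const e2 = edge(v2, v0, p);
  // Covered when the pixel lies on the same side of all three edges.
  return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
}
```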
As shown in fig. 4B, in the embodiment of the present application, the vertex shading component performs vertex shading processing, and the pixel shading component performs pixel shading processing, so that the target visual feature can be quickly and accurately added to the scene image.
In some embodiments, referring to fig. 4C, which is a schematic flowchart of a pixel shading process according to an embodiment of the present application, the pixel shading processing shown in fig. 4B may be implemented by at least one of steps 301 and 302, which are described below.
In step 301, a numerical offset process is performed on the channel value of the image channel corresponding to any one pixel.
For example, when the target visual features to be added include a screen glitch effect, the channel value of the image channel corresponding to each pixel obtained by rasterization may be subjected to numerical offset processing, where the image channel may include at least one of a color channel and a transparency channel.
Step 301 shown in fig. 4C may be implemented by steps 401 to 403, which are described below.
In step 401, cosine processing is performed on the random angle to obtain a cosine value, sine processing is performed on the random angle to obtain a sine value, and offset coordinates are constructed according to the cosine value and the sine value.
The embodiments of the present application provide an exemplary manner of numerical offset processing. First, a random angle is generated, for example, as a random value in the angle range [0, 2π]; of course, the angle range is not limited to this and may be set according to the actual application scenario. Cosine processing is then performed on the generated random angle to obtain a cosine value, sine processing is performed on it to obtain a sine value, and the offset coordinates are constructed from the cosine value and the sine value.
In some embodiments, to further enhance randomness, the cosine value may be weighted by a weighting parameter to obtain a weighted cosine value, the sine value may be weighted by the same parameter to obtain a weighted sine value, and the offset coordinates may be constructed from the weighted cosine value and the weighted sine value. The weighting parameter may be set according to the actual application scenario, for example, as a random value generated in the range [0, 1].
In step 402, coordinate shift processing is performed on any one pixel according to the shift coordinates, and an offset pixel is obtained.
For convenience of explanation, a pixel to be subjected to the numerical shift processing is named as a pixel P1, and the pixel P1 may be subjected to the coordinate shift processing based on the shift coordinates to obtain a shift pixel P2.
For example, the offset coordinates and the coordinates of the pixel P1 may be subjected to a superimposition process (addition process), and a pixel corresponding to the coordinates obtained by the superimposition process may be used as the offset pixel P2; alternatively, the offset coordinates may be subtracted from the coordinates of the pixel P1, and the pixel corresponding to the obtained coordinates may be referred to as the offset pixel P2. Of course, the manner of the coordinate shift processing is not limited thereto.
In step 403, updating the channel value of the target color channel corresponding to any one pixel according to the channel value of the target color channel corresponding to the offset pixel; wherein the target color channel comprises at least one of a plurality of color channels.
Here, the object of the numerical offset processing may be at least one of a plurality of color channels, for example, a color channel in an RGB color space includes a red channel, a green channel, and a blue channel, and the target color channel may include at least one of the red channel, the green channel, and the blue channel, which may be set according to an actual application scenario.
After obtaining the offset pixel P2, the channel value of the target color channel corresponding to the pixel P1 may be replaced with the channel value of the target color channel corresponding to the offset pixel P2, so as to implement the update processing of the channel value. In this way, the effect that the pixels are shifted can be visually formed, that is, the screen failure special effect is added.
It is to be noted that, when the target color channel includes a plurality of channels, the update processing of the channel value may be performed separately for each target color channel. For example, in the case where the target color channel includes a red channel and a blue channel, the channel value of the red channel corresponding to the pixel P1 may be replaced with the channel value of the red channel corresponding to the offset pixel P2, and the channel value of the blue channel corresponding to the pixel P1 may be replaced with the channel value of the blue channel corresponding to the offset pixel P2.
In some embodiments, the target color channel includes a red channel and a blue channel. In this case, the above coordinate offset processing performed on any one pixel according to the offset coordinates to obtain an offset pixel may be realized as follows: performing coordinate offset processing in a first direction on the pixel according to the offset coordinates to obtain a first offset pixel; and performing coordinate offset processing in a second direction on the pixel according to the offset coordinates to obtain a second offset pixel, where the first direction is opposite to the second direction. Accordingly, the above updating of the channel value of the target color channel corresponding to the pixel according to the channel value of the target color channel corresponding to the offset pixel may be realized as follows: updating the channel value of the red channel corresponding to the pixel according to the channel value of the red channel corresponding to the first offset pixel; and updating the channel value of the blue channel corresponding to the pixel according to the channel value of the blue channel corresponding to the second offset pixel.
For example, the coordinate shift processing in the first direction may refer to superposition processing of coordinates, and the coordinate shift processing in the second direction may refer to subtraction processing of coordinates; alternatively, the coordinate shift processing in the first direction may refer to subtraction processing of coordinates, and the coordinate shift processing in the second direction may refer to superposition processing of coordinates. The former case is exemplified for ease of understanding.
For example, the offset coordinates and the coordinates of the pixel P1 are superimposed, and the pixel corresponding to the coordinates obtained by the superimposition processing is taken as the first offset pixel P2₁; meanwhile, the offset coordinates are subtracted from the coordinates of the pixel P1, and the pixel corresponding to the resulting coordinates is taken as the second offset pixel P2₂.
In the case that the target color channel includes a red channel and a blue channel, the channel value of the red channel corresponding to the pixel P1 is replaced with the channel value of the red channel corresponding to the first offset pixel P2₁, and the channel value of the blue channel corresponding to the pixel P1 is replaced with the channel value of the blue channel corresponding to the second offset pixel P2₂. In this way, in the scene image obtained through the numerical offset processing, an effect is formed in which red appears to move toward the upper right corner and blue toward the lower left corner, that is, the screen fault special effect is added.
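For illustration only, the numerical offset processing of steps 401 to 403 could be sketched on a CPU-side ImageData buffer as follows (the patent applies the same idea inside a pixel shader, and the corresponding GLSL listing is given further below); the function name, the amount parameter and the clamping behaviour are assumptions:

function rgbShift(imageData, amount) {
    const { width, height, data } = imageData;
    const src = new Uint8ClampedArray(data);             // unmodified copy to sample from
    const angle = Math.random() * 2 * Math.PI;            // random angle in [0, 2π]
    const dx = Math.round(amount * Math.cos(angle));      // offset coordinates built from the
    const dy = Math.round(amount * Math.sin(angle));      // cosine value and the sine value
    const clamp = (v, max) => Math.min(Math.max(v, 0), max);
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            const i = (y * width + x) * 4;
            // The first direction (addition) supplies the red channel value ...
            const iR = (clamp(y + dy, height - 1) * width + clamp(x + dx, width - 1)) * 4;
            // ... the opposite direction (subtraction) supplies the blue channel value.
            const iB = (clamp(y - dy, height - 1) * width + clamp(x - dx, width - 1)) * 4;
            data[i] = src[iR];           // update the red channel from the first offset pixel
            data[i + 2] = src[iB + 2];   // update the blue channel from the second offset pixel
        }
    }
    return imageData;
}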
In step 302 shown in fig. 4C, noise addition processing is performed on the channel value of the image channel corresponding to any one pixel.
For example, when the target visual feature to be added includes a snowflake noise special effect, for each pixel obtained through the rasterization processing, the noise adding processing may be performed on the channel value of the image channel corresponding to the pixel, where the image channel may also include at least one of a color channel and a transparency channel.
In fig. 4C, the illustrated step 302 may be implemented by steps 404 to 405, which will be described in connection with each step.
In step 404, random noise generation processing is performed on any one pixel, so as to obtain a noise channel value of the corresponding image channel.
Also taking the pixel P1 as an example, random noise generation processing can be performed on the pixel P1 to obtain a noise channel value corresponding to the image channel. For example, a random value can be generated in the value range corresponding to the image channel to be used as the noise channel value; for another example, a random operation process may be performed according to the coordinates of the pixel P1 to obtain a noise channel value of the corresponding image channel.
The image channel corresponding to the noise adding processing is not limited; for example, it may correspond to only one image channel (such as the transparency channel), or to a plurality of image channels (such as the transparency channel and all the color channels). In the latter case, a noise channel value is generated for each image channel to which the noise adding processing corresponds.
In step 405, the channel value of the image channel corresponding to any one pixel is overlapped with the noise channel value.
For example, for each image channel to which the noise addition processing corresponds, the channel value of the image channel to which the pixel P1 corresponds and the noise channel value of the image channel are subjected to the superimposition processing (addition processing) to update the channel value of the image channel to which the pixel P1 corresponds. Therefore, the purpose of adding the snowflake noise special effect can be achieved.
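For illustration only, the noise adding processing of steps 404 and 405 could be sketched on a CPU-side ImageData buffer as follows; the intensity parameter and the choice of affected channels are assumptions:

function addSnowNoise(imageData, intensity) {
    const data = imageData.data;
    for (let i = 0; i < data.length; i += 4) {
        const noise = Math.random() * intensity;            // noise channel value generated for this pixel
        data[i] = Math.min(255, data[i] + noise);            // superimpose on the red channel
        data[i + 1] = Math.min(255, data[i + 1] + noise);    // superimpose on the green channel
        data[i + 2] = Math.min(255, data[i + 2] + noise);    // superimpose on the blue channel
        // the transparency channel (data[i + 3]) could be treated in the same way if desired
    }
    return imageData;
}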
It should be noted that, in the embodiment of the present application, a screen failure special effect and a snowflake noise special effect may also be added at the same time.
As shown in fig. 4C, the embodiment of the present application provides an example manner of the numerical offset processing and the noise adding processing, which can quickly and accurately add the corresponding target visual features.
In some embodiments, referring to fig. 4D, fig. 4D is a flowchart of an image processing method of a three-dimensional scene provided in an embodiment of the present application, and before step 102 shown in fig. 4A, the information amount of the scene image may also be determined in step 501.
In the embodiment of the present application, the image buffer may be divided in real time according to the acquired scene image. First, the information amount of the acquired scene image is determined; the embodiment of the present application does not limit the type of the information amount, which may be, for example, the actual size of the scene image, that is, the number of bytes it actually occupies.
In step 502, the required memory space for the scene image is determined based on the amount of information.
Here, the required memory space of the scene image is positively correlated with the information amount of the scene image, and the specific positive correlation relationship may be set according to the actual application scene, for example, may be a positive correlation function. After the information quantity of the scene image is determined, the required memory space corresponding to the information quantity can be determined according to the set positive correlation.
In step 503, when the required memory space is less than or equal to the available memory space in the memory, an image buffer is created in the memory according to the required memory space.
Here, when the required memory space is smaller than or equal to the available memory space in the memory, the image buffer may be created directly in the memory according to the required memory space, that is, the memory space occupied by the image buffer is equal to the required memory space.
In step 504, when the required memory space is greater than the available memory space in the memory, performing image degradation processing on the scene image according to the memory space difference to obtain a degraded scene image; wherein the memory space difference represents a difference between the required memory space and the available memory space; the manner of image degradation processing includes at least one of truncation processing and sharpness degradation processing.
When the required memory space is larger than the available memory space in the memory, if the image buffer were created in the memory directly according to the available memory space, the image buffer would likely be unable to support the image processing of the scene image, that is, the image processing would be liable to be slow or even to fail. Therefore, in the embodiment of the present application, the memory space difference is obtained by subtracting the available memory space from the required memory space, and the degraded scene image is obtained by performing image degradation processing on the scene image according to the memory space difference, where the image degradation processing includes at least one of truncation processing and sharpness degradation processing.
It should be noted that the degree of the image degradation processing is positively correlated with the memory space difference, so as to ensure that the required memory space of the subsequently obtained degraded scene image is smaller than or equal to the available memory space. Taking the truncation processing mode as an example, when the memory space difference is a first difference, a 2/3 area of the scene image is intercepted as the degraded scene image; when the memory space difference is a second difference larger than the first difference, a 1/3 area of the scene image is intercepted as the degraded scene image. Taking the sharpness degradation processing mode as an example, if the original resolution of the scene image is 1080P, the resolution of the scene image is reduced from 1080P to 720P to obtain the degraded scene image when the memory space difference is the first difference, and from 1080P to 480P when the memory space difference is the second difference, the second difference being larger than the first difference.
In step 505, an amount of degradation information for degrading the scene image is determined.
Here, the information amount of the degraded scene image is determined in the same principle as in step 501, and the information amount determined here is named as degraded information amount for the convenience of distinction.
In step 506, determining a degradation-required memory space for degrading the scene image according to the degradation information amount, and creating an image buffer in memory according to the degradation-required memory space; wherein the demotion required memory space is less than or equal to the available memory space.
Here, the same principle as step 502, the required memory space of the degraded scene image is determined according to the amount of degradation information, and for convenience of distinction, the determined required memory space is named as a degraded required memory space, where the degraded required memory space is smaller than or equal to the available memory space. Then, an image buffer is created in the memory according to the degradation required memory space, i.e. the memory space occupied by the image buffer is equal to the degradation required memory space.
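For illustration only, the buffer-creation logic of steps 501 to 506 could be sketched as follows; the byte-count formula, the square-root scaling rule and the downscale helper are assumptions, since the patent only requires the required memory space to be positively correlated with the information amount and the degradation degree to be positively correlated with the memory space difference:

function prepareImageBuffer(sceneImage, availableBytes) {
    const requiredBytes = sceneImage.width * sceneImage.height * 4;   // RGBA, one byte per channel
    if (requiredBytes <= availableBytes) {
        // Required memory space fits: create the image buffer directly.
        return { buffer: new ArrayBuffer(requiredBytes), image: sceneImage };
    }
    // Degrade the scene image: scale its resolution down in proportion to the shortfall.
    const scale = Math.sqrt(availableBytes / requiredBytes);
    const degraded = downscale(sceneImage,                             // hypothetical resampling helper
                               Math.floor(sceneImage.width * scale),
                               Math.floor(sceneImage.height * scale));
    const degradedBytes = degraded.width * degraded.height * 4;        // degradation-required memory space
    return { buffer: new ArrayBuffer(degradedBytes), image: degraded };
}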
In fig. 4D, step 102 shown in fig. 4A may be implemented by step 507 or step 508.
In step 507, the scene image is stored to an image buffer created from the required memory space.
When an image buffer is created by step 503, the scene image may be stored to an image buffer created from the required memory space.
In step 508, the degraded scene image is stored to an image buffer created from the degraded required memory space.
When the image buffer is created in step 506, the degraded scene image may be stored in the image buffer created according to the memory space of the degradation requirement, and in the subsequent step, the image processing is performed on the degraded scene image, so that the success rate of the image processing can be ensured.
It should be noted that, in the case that the image buffer is divided in real time, after the rendering process is performed on the target scene image, the image buffer storing the target scene image may be deleted immediately, or the image buffer storing the target scene image may be deleted after waiting for a set period of time (e.g. 1 minute), so as to reduce the waste of memory resources. Of course, in the embodiment of the present application, the image buffer may also be pre-divided and exist continuously.
As shown in fig. 4D, the embodiment of the present application may perform image degradation processing on a scene image when the information amount of the scene image is too large, and create an image buffer corresponding to the obtained degraded scene image, so as to improve the success rate of image processing.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described. The embodiment of the application can add a specific visual effect (corresponding to the target visual characteristics) in the 3D scene so as to meet the complex and changeable business requirements, wherein the visual effect is not limited, and can be, for example, a specific depth of field, luminescence, film particles, various types of anti-aliasing and the like, and in order to facilitate understanding, a screen failure special effect is taken as an example. By way of example, embodiments of the present application provide a schematic representation of a frame of a scene image of a 3D scene as shown in fig. 5A, the 3D scene including 3D objects of various shapes. By the image processing scheme provided by the embodiment of the application, the screen fault special effect can be added in the scene image shown in fig. 5A to obtain the target scene image shown in fig. 5B, so that the presentation effect in the human-computer interaction interface can be improved, and the human-computer interaction experience is improved.
Next, the process of adding a screen failure special effect in a 3D scene will be described from the perspective of the underlying implementation; for ease of understanding, the description is divided into the following parts.
1) A 3D scene is prepared.
Here, a created 3D scene including 3D objects may be acquired. For the case where a screen failure special effect needs to be added to multimedia content (e.g., a video or an image), a 3D scene may be created, and an image in the multimedia content may be used as the texture (i.e., the map of the outer surface) of a 3D object in the 3D scene.
2) Post-processing.
The principle of post-processing is to store the image acquired by the camera (camera component) into an image buffer, apply specific visual effects (such as luminescence, color change, distortion, etc.) to the image in the image buffer through one or more shaders, and finally render the image in the image buffer to the human-computer interaction interface.
In embodiments of the present application, post-processing may be implemented through multiple post-processing channels (Pass). Wherein at least part of the post-processing channels comprise shaders, which refer to functions written in a graphics library shader language (Graphics Library Shader Language, GLSL), which may run in the GPU of the graphics card.
For ease of understanding, post-processing through a web graphic library (Web Graphics Library, webGL) is illustrated. WebGL is a 3D drawing protocol, through which hardware 3D accelerated rendering can be provided for Canvas, which is a part of the hypertext markup language (Hyper Text Markup Language, HTML) 5 allowing the scripting language to dynamically render bitmap images, facilitating a user to more smoothly present 3D scenes in a browser (human-machine interaction interface) with the help of a graphics card of an electronic device.
Embodiments of the present application provide a schematic diagram of post-processing by WebGL as shown in fig. 6. Fig. 6 shows a plurality of shaders, each of which may include a vertex shader and a fragment shader (corresponding to the pixel shader above). The vertex shader is used to describe vertex information (e.g., coordinates, colors, etc.), i.e., to restore the image to its original form, and may be invoked once for each vertex in the image; the fragment shader is used to perform pixel shading processing on the pixels obtained by the rasterization processing, and may be invoked once for each pixel in the image.
During post-processing, WebGL invokes the vertex shader to process each vertex in the image and invokes the fragment shader to process each pixel in the image. When the last shader has finished processing, the image may be rendered into the human-computer interaction interface through WebGL.
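For reference, a minimal post-processing shader pair of the kind described above might look like the following sketch, written as GLSL strings in the way three.js shader passes embed them; the effect applied here (color inversion) and the exact code are illustrative assumptions, while tDiffuse is the conventional uniform name used later in this description:

const vertexShader = `
    varying vec2 vUv;
    void main() {
        vUv = uv;                                          // pass the texture coordinate to the fragment shader
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
`;

const fragmentShader = `
    uniform sampler2D tDiffuse;                            // the scene image stored in the image buffer
    varying vec2 vUv;
    void main() {
        vec4 color = texture2D(tDiffuse, vUv);             // read the pixel from the buffered image
        gl_FragColor = vec4(1.0 - color.rgb, color.a);     // apply a simple effect (color inversion)
    }
`;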
3) An effect synthesizer.
The effect synthesizer (EffectComposer) is a class in three.js for realizing post-processing effects; it manages the post-processing process chain used to generate the final visual effect. three.js is a 3D engine running in the browser; it is a 3D graphics library formed by packaging and simplifying the WebGL interfaces, contains various objects such as cameras, lights and shadows, and materials, and can be used to create various 3D scenes in the Web.
The embodiment of the application provides a schematic diagram of a post-processing process chain as shown in fig. 7, wherein the post-processing process chain comprises a plurality of post-processing channels, the post-processing channels are traversed according to the sequence of the post-processing channels in the post-processing process of an original image, and corresponding processing is performed according to the traversed post-processing channels. When the last post-processing channel processing is completed, the obtained result (i.e. the new image in fig. 7) is rendered to the human-computer interaction interface. All the post-processing channels are processed in the image buffer area, so that implicit image processing can be performed, namely, a user cannot see the post-processing process and only can see the image finally rendered into the human-computer interaction interface, and user experience can be further improved.
4) RenderPass post-processing channel.
Here, the RenderPass post-processing channel may be used as a first post-processing channel of the post-processing process chain, where the RenderPass post-processing channel is used to copy the scene image in the field of view of the camera to the image buffer of the memory for use by subsequent other post-processing channels.
To facilitate understanding, the following pseudocode is provided:
Creating a WebGL renderer;
creating an effect synthesizer, and transmitting a WebGL renderer as a parameter into the effect synthesizer;
creating a RenderPass post-processing channel, and passing the 3D scene and the camera as parameters into the RenderPass post-processing channel;
adding the RenderPass post-processing channel into the post-processing chain;
creating a GlitchPass post-processing channel, wherein each GlitchPass post-processing channel includes a shader;
adding the GlitchPass post-processing channel into the post-processing chain.
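For reference, the pseudocode above might be realized in three.js roughly as in the following sketch. This is an illustrative assumption rather than the patent's own code: the import paths follow the "examples/jsm" layout of recent three.js versions and may differ by version, and the scene and camera objects are assumed to have been created elsewhere.

import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { GlitchPass } from 'three/examples/jsm/postprocessing/GlitchPass.js';

const renderer = new THREE.WebGLRenderer();        // WebGL renderer
const composer = new EffectComposer(renderer);     // effect synthesizer managing the post-processing chain
composer.addPass(new RenderPass(scene, camera));   // copies the scene image in the camera's field of view into the image buffer
composer.addPass(new GlitchPass());                // applies the screen fault / noise shaders to the buffered image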
5) GlitchPass post-processing channel.
Here, the GlitchPass post-processing channel is used to add the screen fault special effect, for example, applying the screen fault special effect to the scene image stored in the image buffer by the RenderPass post-processing channel.
The fragment shader in the GlitchPass post-processing channel may include an RGB color conversion shader, which is configured to perform, for each pixel in the image, numerical offset processing on the channel value of the pixel corresponding to the target color channel. The type of the target color channel is not limited and may include, for example, a red channel and a blue channel. For example, for each pixel in the image, the RGB color conversion shader may offset the channel value of the red channel and the channel value of the blue channel of the pixel in two opposite random directions, respectively.
It should be noted that, the color channel refers to a channel that stores color information, and the image may include one or more color channels, and the number of color channels depends on the color mode of the application, for example, the RGB color mode includes three color channels, which are a red color channel, a green color channel, and a blue color channel, respectively. For a pixel in an image, the channel values of the pixel corresponding to all color channels are overlapped and mixed, so that the true color of the pixel can be obtained.
For ease of understanding, an RGB color mode will be described below as an example, in which the range of values of each color channel is [0, 255]. In the embodiment of the application, the channel value of the pixel corresponding to the red channel and the channel value of the pixel corresponding to the blue channel can be changed through the RGB color conversion shader, so that the effect of enabling the image to have color ghost, namely, the screen fault special effect is realized. The embodiment of the present application provides an image processed by an RGB color conversion shader as shown in fig. 8, in which red is shifted in the direction of the upper right corner and blue is shifted in the direction of the lower left corner, where "shift" here does not mean that a pixel is actually shifted, but means that the color change of the pixel visually looks like the pixel has been shifted after the RGB color conversion shader processing.
In addition, the embodiment of the present application also provides a target scene image processed by the RGB color conversion shader as shown in fig. 9. It can be seen from fig. 9 that, after processing by the RGB color conversion shader, the colors of the pixels can be changed more drastically and chaotically.
In an embodiment of the present application, the fragment shader in the GlitchPass post-processing channel may also include a noise shader, which is used to append a white noise special effect after the RGB color conversion shader processing is completed. Noise refers to random changes of brightness or color information in the image; it looks similar to snowflakes and can be used to simulate the snow effect of a television. As an example, the embodiment of the present application provides a target scene image processed by the noise shader as shown in fig. 10.
For ease of understanding, the following processing formulas of the RGB color conversion shader are provided:
{
    vec2 offset = amount * vec2(cos(angle), sin(angle));
    vec4 cr = texture2D(tDiffuse, p + offset);
    vec4 cga = texture2D(tDiffuse, p);
    vec4 cb = texture2D(tDiffuse, p - offset);
    gl_FragColor = vec4(cr.r, cga.g, cb.b, cga.a);
}
where vec2 represents binary data (a two-component vector) and vec4 represents quaternary data (a four-component vector); angle corresponds to the random angle above, and its value range may be [0, 2π]; amount is the weighting parameter, which may be set according to the actual application scenario, for example a random value in the value range [0, 1]; offset corresponds to the offset coordinates above; tDiffuse represents the image to be processed, and p represents the coordinates of a certain pixel in the image to be processed, which is also binary data; the texture2D function extracts data from its first parameter according to its second parameter, for example, texture2D(tDiffuse, p + offset) represents the channel values, for each image channel, of the pixel in tDiffuse whose coordinates correspond to p + offset; .r represents the channel value corresponding to the red channel, .g the channel value corresponding to the green channel, .b the channel value corresponding to the blue channel, and .a the channel value corresponding to the transparency channel; gl_FragColor is a built-in variable of the fragment shader, representing the channel value of each image channel of the pixel after processing by the RGB color conversion shader.
For ease of understanding, the following processing formulas for the noise shader are also provided:
{
    vec4 snow = 200. * amount * vec4(rand(vec2(xs * seed, ys * seed * 50.)) * 0.2);
    gl_FragColor = gl_FragColor + snow;
}
where xs represents the abscissa of the pixel and ys represents the ordinate of the pixel; seed is a random bias value, which may be, for example, a random value within the value range [0, 1]; snow represents the noise channel value corresponding to each image channel.
It should be noted that the rand function is a user-defined random function, whose structure may be as follows:
float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(s1, s2))) * s3);
}
wherein the fract function is a function built into the shader (it returns the fractional part of its argument) and is used here to produce a pseudo-random value; dot represents the dot product function; co.xy represents the abscissa and ordinate of the pixel; s1, s2 and s3 are predetermined parameters, and may be, for example, numbers greater than 0.
After snow is obtained, snow and gl_FragColor are subjected to addition processing, and the value of gl_FragColor is updated according to the result of the addition processing. The principle of the addition processing may be A(a1, a2, a3, a4) + B(b1, b2, b3, b4) = C(a1 + b1, a2 + b2, a3 + b3, a4 + b4), that is, the addition of channel values is performed separately for each image channel.
The embodiment of the application also provides a flow diagram of an image processing method of the three-dimensional scene shown in fig. 11, and the flow diagram will be described with reference to each step shown in fig. 11.
1) Creating a WebGL renderer, and executing image processing once by the WebGL renderer for each frame of scene image in the 3D scene presentation process.
2) Creating an effect synthesizer, and taking the WebGL renderer as a parameter to be input into the effect synthesizer. The effect synthesizer is used for managing a plurality of post-processing channels.
3) Adding a RenderPass post-processing channel, which is used to capture the scene image within the field of view of the camera and store the captured scene image into the image buffer of the memory.
4) Adding a GlitchPass post-processing channel, which is used to apply the RGB color conversion shader and the noise shader to the scene image in the image buffer. The RGB color conversion shader is used to change the channel value of the pixel corresponding to the red channel and the channel value of the pixel corresponding to the blue channel, that is, to add the screen fault special effect; the noise shader is used to change the channel value of the pixel corresponding to each image channel, that is, to add the snowflake noise special effect.
5) After processing by all the post-processing channels, the target scene image in the image buffer is rendered to the human-computer interaction interface, so that the screen fault special effect and the noise special effect can be added to the 3D scene in real time.
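For reference, the per-frame flow described in steps 1) to 5) might be driven by a render loop such as the following sketch (an assumption for illustration; composer is the effect synthesizer created above):

function animate() {
    requestAnimationFrame(animate);
    composer.render();   // runs the RenderPass and then the GlitchPass, and outputs the result to the canvas
}
animate();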
It is worth noting that, in the embodiment of the present application, a visual change condition may be set for the 3D scene, and when the presented 3D scene meets the visual change condition, a scene image is captured for image processing, which can improve the user's enthusiasm for participating in the interaction and the visual experience. The visual change condition is not limited; for example, it may include at least one of: a trigger operation for the 3D scene is received at the human-computer interaction interface; the presentation duration of the 3D scene meets a duration condition; the audio feature variation amplitude of the audio corresponding to the 3D scene meets an amplitude condition.
The embodiment of the application can at least realize the following technical effects: 1) automatic image processing can be realized for the 3D scene, reducing the dependence on art production and the related manpower investment, with a low learning cost; 2) the application range is wide, for example, the method is applicable not only to 3D scenes but also to multimedia content (such as videos or images); 3) by virtue of the editability of the shader, image processing can be performed when the visual change condition is met, enriching the interactive expression with the user; 4) the method occupies few computing resources and can be applied to various types of electronic devices, such as smart phones.
Continuing with the description below of an exemplary architecture of the software modules implemented as the image processing device 455 of a three-dimensional scene provided by embodiments of the present application, in some embodiments, as shown in fig. 2, the software modules stored in the image processing device 455 of a three-dimensional scene of a memory 450 may include: the acquisition module 4551 is used for performing image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene; a storage module 4552, configured to store a scene image into an image buffer of a memory; a shading module 4553 for shading the scene image in the image buffer by at least one shading component to update the scene image to a target scene image having a target visual characteristic; and the rendering module 4554 is configured to perform rendering processing on the target scene image in the image buffer, so as to present the target scene image on the human-computer interaction interface.
In some embodiments, the shading components include a vertex shading component and a pixel shading component, the pixel shading component deploying a pixel shading policy corresponding to the target visual feature; coloring module 4553, also for: traversing at least one shading component and performing the following processing for the traversed shading component: obtaining vertex information in a three-dimensional scene through a vertex coloring assembly in the traversed coloring assembly, and performing vertex coloring treatment on a scene image in an image buffer area according to the vertex information to obtain a plurality of restored vertices; and rasterizing a plurality of vertexes through the pixel shading components in the traversed shading components, and performing pixel shading on pixels obtained through the rasterizing according to a pixel shading strategy so as to update the scene image in the image buffer.
In some embodiments, the coloring module 4553 is further to: for any one of the pixels obtained by the rasterization processing, at least one of the following processing modes is performed: carrying out numerical value offset processing on channel values of image channels corresponding to any one pixel; and carrying out noise adding processing on the channel value of the image channel corresponding to any pixel.
In some embodiments, the image channels comprise a plurality of color channels; coloring module 4553, also for: cosine processing is carried out on the random angle to obtain a cosine value, sine processing is carried out on the random angle to obtain a sine value, and offset coordinates are constructed according to the cosine value and the sine value; carrying out coordinate offset processing on any pixel according to the offset coordinates to obtain offset pixels; updating the channel value of the target color channel corresponding to any pixel according to the channel value of the target color channel corresponding to the offset pixel; wherein the target color channel comprises at least one of a plurality of color channels.
In some embodiments, the target color channel includes a red channel and a blue channel; coloring module 4553, also for: carrying out coordinate offset processing in a first direction on any pixel according to the offset coordinates to obtain a first offset pixel; carrying out coordinate offset processing in a second direction on any pixel according to the offset coordinates to obtain a second offset pixel; wherein the first direction is opposite to the second direction; updating the channel value of the red channel corresponding to any pixel according to the channel value of the red channel corresponding to the first offset pixel; and updating the channel value of the blue channel corresponding to any pixel according to the channel value of the blue channel corresponding to the second offset pixel.
In some embodiments, the coloring module 4553 is further to: random noise generation processing is carried out on any pixel, and a noise channel value corresponding to the image channel is obtained; and superposing the channel value of the image channel corresponding to any one pixel with the noise channel value.
In some embodiments, the coloring module 4553 is further to: performing the following processing by the pixel shading component of the traversed shading components: constructing a vertex region in the scene image according to the restored multiple vertices; pixels in the scene image that are covered by the vertex regions are determined as pixels that result from the rasterization process.
In some embodiments, the image processing apparatus 455 further comprises a creation module for: determining the information quantity of a scene image; determining a required memory space of a scene image according to the information quantity, and creating an image buffer area in a memory according to the required memory space; wherein, the required memory space is positively correlated with the information amount.
In some embodiments, the creation module is further to: when the required memory space is larger than the available memory space in the memory, performing image degradation processing on the scene image according to the memory space difference to obtain a degraded scene image; wherein the memory space difference represents a difference between the required memory space and the available memory space; the manner of image degradation processing includes at least one of truncation processing and sharpness degradation processing; determining a degradation information amount of the degraded scene image; determining a degradation required memory space of the degradation scene image according to the degradation information amount, and creating an image buffer area in a memory according to the degradation required memory space; wherein the demotion demand memory space is less than or equal to the available memory space; the storage module 4552 is further configured to: the degraded scene image is stored to an image buffer created from the degraded required memory space.
In some embodiments, the three-dimensional scene includes at least one three-dimensional object; the acquisition module 4551 is further configured to: determining a minimum space simultaneously comprising at least one three-dimensional object in the three-dimensional scene according to the coordinates of the at least one three-dimensional object in the three-dimensional scene; determining a field of view of the camera assembly based on the minimum space; scene images in the field of view in the three-dimensional scene are acquired by the camera assembly.
In some embodiments, the acquisition module 4551 is further configured to: when the three-dimensional scene meets the vision change condition, performing image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene; wherein the visual change condition comprises at least one of: receiving triggering operation aiming at a three-dimensional scene at a human-computer interaction interface; the presentation duration of the three-dimensional scene meets the duration condition; the audio characteristic variation amplitude of the audio corresponding to the three-dimensional scene meets the amplitude condition.
In some embodiments, the acquisition module 4551 is further configured to: any one of the following processes is performed: acquiring a created three-dimensional scene comprising three-dimensional objects; acquiring multimedia content to be processed, and creating a three-dimensional object corresponding to the image to be processed in the newly created three-dimensional scene; wherein the image to be processed is any one image in the multimedia content.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions (i.e., executable instructions) stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the electronic device executes the image processing method of the three-dimensional scene according to the embodiment of the application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed by a processor, cause the processor to perform a method provided by embodiments of the present application, for example, an image processing method of a three-dimensional scene as shown in fig. 4A, 4B, and 4D.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; but may be a variety of devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or, alternatively, on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
The above is merely an example of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (14)

1. A method of image processing of a three-dimensional scene, the method comprising:
performing image acquisition processing on a three-dimensional scene to obtain a scene image of the three-dimensional scene;
Storing the scene image into an image buffer area of a memory;
traversing at least one shading component, the shading component comprising a vertex shading component and a pixel shading component, the pixel shading component deploying a pixel shading policy corresponding to a target visual feature;
obtaining vertex information in the three-dimensional scene through the vertex coloring assembly in the traversed coloring assembly, and performing vertex coloring processing on the scene image in the image buffer area according to the vertex information to obtain a plurality of restored vertices;
rasterizing the plurality of vertexes by using a pixel shading component in the traversed shading components, and performing pixel shading on pixels obtained by the rasterizing according to the pixel shading strategy so as to update the scene image in the image buffer into a target scene image with the target visual characteristics;
rendering the target scene image in the image buffer area so as to present the target scene image on a human-computer interaction interface.
2. The method of claim 1, wherein said performing pixel shading processing on pixels obtained by said rasterization processing in accordance with said pixel shading policy comprises:
For any one of the pixels obtained by the rasterization processing, at least one of the following processing modes is performed:
performing numerical offset processing on the channel value of the image channel corresponding to any pixel;
and performing noise adding processing on the channel value of the image channel corresponding to any pixel.
3. The method of claim 2, wherein the image channel comprises a plurality of color channels; the performing a numerical offset process on the channel value of the image channel corresponding to the arbitrary pixel includes:
cosine processing is carried out on the random angle to obtain a cosine value, sine processing is carried out on the random angle to obtain a sine value, and offset coordinates are constructed according to the cosine value and the sine value;
performing coordinate offset processing on any pixel according to the offset coordinates to obtain offset pixels;
updating the channel value of the target color channel corresponding to any pixel according to the channel value of the target color channel corresponding to the offset pixel; wherein the target color channel comprises at least one of the plurality of color channels.
4. A method according to claim 3, wherein the target color channel comprises a red channel and a blue channel; and performing coordinate offset processing on any one pixel according to the offset coordinates to obtain offset pixels, including:
Carrying out coordinate offset processing in a first direction on any pixel according to the offset coordinates to obtain a first offset pixel;
carrying out coordinate offset processing in a second direction on any pixel according to the offset coordinates to obtain a second offset pixel; wherein the first direction is opposite to the second direction;
the updating the channel value of the target color channel corresponding to the arbitrary pixel according to the channel value of the target color channel corresponding to the offset pixel includes:
updating the channel value of the red channel corresponding to any pixel according to the channel value of the red channel corresponding to the first offset pixel;
and updating the channel value of the blue channel corresponding to any pixel according to the channel value of the blue channel corresponding to the second offset pixel.
5. The method according to claim 2, wherein the performing noise adding processing on the channel value of the image channel corresponding to the arbitrary pixel includes:
carrying out random noise generation processing on any pixel to obtain a noise channel value corresponding to the image channel;
And superposing the channel value of the image channel corresponding to any one pixel with the noise channel value.
6. The method of claim 1, wherein rasterizing the plurality of vertices by a pixel shading component of the traversed shading components comprises:
performing the following processing by the pixel shading component in the traversed shading components:
constructing a vertex region in the scene image according to the restored plurality of vertices;
and determining pixels covered by the vertex region in the scene image as pixels obtained through rasterization processing.
7. The method of any one of claims 1 to 6, wherein prior to storing the scene image in the image buffer of the memory, the method further comprises:
determining an information amount of the scene image;
determining a required memory space of the scene image according to the information quantity, and creating an image buffer area in the memory according to the required memory space;
wherein the required memory space is positively correlated with the information amount.
8. The method of claim 7, wherein after determining the required memory space for the scene image based on the information amount, the method further comprises:
When the required memory space is larger than the available memory space in the memory, performing image degradation processing on the scene image according to the memory space difference to obtain a degraded scene image;
wherein the memory space difference represents a difference between the required memory space and the available memory space; the image degradation processing mode comprises at least one of interception processing and definition degradation processing;
determining an amount of degradation information for the degraded scene image;
determining a degradation required memory space of the degradation scene image according to the degradation information amount, and creating an image buffer area in the memory according to the degradation required memory space; wherein the demotion required memory space is less than or equal to the available memory space;
the storing the scene image in the image buffer area of the memory includes:
and storing the degraded scene image into an image buffer area created according to the memory space required by degradation.
9. The method according to any one of claims 1 to 6, wherein the three-dimensional scene comprises at least one three-dimensional object; the image acquisition processing is performed on the three-dimensional scene to obtain a scene image of the three-dimensional scene, which comprises the following steps:
Determining a minimum space including the at least one three-dimensional object at the same time in the three-dimensional scene according to coordinates of the at least one three-dimensional object in the three-dimensional scene;
determining a field of view of the camera assembly from the minimum space;
and acquiring scene images in the field of view in the three-dimensional scene through the camera assembly.
10. The method according to any one of claims 1 to 6, wherein the performing image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene includes:
when the three-dimensional scene meets the visual change condition, performing image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene;
wherein the visual change condition comprises at least one of:
receiving triggering operation aiming at the three-dimensional scene at the man-machine interaction interface;
the presentation duration of the three-dimensional scene meets a duration condition;
the audio characteristic change amplitude of the audio corresponding to the three-dimensional scene meets the amplitude condition.
11. The method of any one of claims 1 to 6, wherein prior to the image acquisition processing of the three-dimensional scene, the method further comprises:
Any one of the following processes is performed:
acquiring a created three-dimensional scene comprising three-dimensional objects;
acquiring multimedia content to be processed, and creating a three-dimensional object corresponding to the image to be processed in the newly created three-dimensional scene; wherein the image to be processed is any one image of the multimedia content.
12. An image processing apparatus of a three-dimensional scene, the apparatus comprising:
the acquisition module is used for carrying out image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene;
the storage module is used for storing the scene image into an image buffer area of the memory;
a shading module for traversing at least one shading component, the shading component comprising a vertex shading component and a pixel shading component, the pixel shading component deploying a pixel shading policy corresponding to a target visual feature;
obtaining vertex information in the three-dimensional scene through the vertex coloring assembly in the traversed coloring assembly, and performing vertex coloring processing on the scene image in the image buffer area according to the vertex information to obtain a plurality of restored vertices;
rasterizing the plurality of vertexes by using a pixel shading component in the traversed shading components, and performing pixel shading on pixels obtained by the rasterizing according to the pixel shading strategy so as to update the scene image in the image buffer into a target scene image with the target visual characteristics;
And the rendering module is used for rendering the target scene image in the image buffer area so as to present the target scene image on a human-computer interaction interface.
13. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the image processing method of a three-dimensional scene according to any one of claims 1 to 11 when executing executable instructions stored in said memory.
14. A computer readable storage medium storing executable instructions for implementing the method of image processing of a three-dimensional scene according to any one of claims 1 to 11 when executed by a processor.
CN202110528438.7A 2021-05-14 2021-05-14 Image processing method and device of three-dimensional scene and electronic equipment Active CN113192173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110528438.7A CN113192173B (en) 2021-05-14 2021-05-14 Image processing method and device of three-dimensional scene and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110528438.7A CN113192173B (en) 2021-05-14 2021-05-14 Image processing method and device of three-dimensional scene and electronic equipment

Publications (2)

Publication Number Publication Date
CN113192173A CN113192173A (en) 2021-07-30
CN113192173B true CN113192173B (en) 2023-09-19

Family

ID=76981870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110528438.7A Active CN113192173B (en) 2021-05-14 2021-05-14 Image processing method and device of three-dimensional scene and electronic equipment

Country Status (1)

Country Link
CN (1) CN113192173B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703689B (en) * 2022-09-06 2024-03-29 荣耀终端有限公司 Method and device for generating shader program and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9928637B1 (en) * 2016-03-08 2018-03-27 Amazon Technologies, Inc. Managing rendering targets for graphics processing units
CN110152291A (en) * 2018-12-13 2019-08-23 腾讯科技(深圳)有限公司 Rendering method, device, terminal and the storage medium of game picture
CN110211218A (en) * 2019-05-17 2019-09-06 腾讯科技(深圳)有限公司 Picture rendering method and device, storage medium and electronic device
CN112381918A (en) * 2020-12-03 2021-02-19 腾讯科技(深圳)有限公司 Image rendering method and device, computer equipment and storage medium
CN112529995A (en) * 2020-12-28 2021-03-19 Oppo(重庆)智能科技有限公司 Image rendering calculation method and device, storage medium and terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11132829B2 (en) * 2017-07-31 2021-09-28 Ecole polytechnique fédérale de Lausanne (EPFL) Method for voxel ray-casting of scenes on a whole screen

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9928637B1 (en) * 2016-03-08 2018-03-27 Amazon Technologies, Inc. Managing rendering targets for graphics processing units
CN110152291A (en) * 2018-12-13 2019-08-23 腾讯科技(深圳)有限公司 Rendering method, device, terminal and the storage medium of game picture
CN110211218A (en) * 2019-05-17 2019-09-06 腾讯科技(深圳)有限公司 Picture rendering method and device, storage medium and electronic device
CN112381918A (en) * 2020-12-03 2021-02-19 腾讯科技(深圳)有限公司 Image rendering method and device, computer equipment and storage medium
CN112529995A (en) * 2020-12-28 2021-03-19 Oppo(重庆)智能科技有限公司 Image rendering calculation method and device, storage medium and terminal

Also Published As

Publication number Publication date
CN113192173A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
CN107358649B (en) Processing method and device of terrain file
CN112381918A (en) Image rendering method and device, computer equipment and storage medium
CN113076152B (en) Rendering method and device, electronic equipment and computer readable storage medium
CN110866967B (en) Water ripple rendering method, device, equipment and storage medium
GB2546720A (en) Method of and apparatus for graphics processing
CN117390322A (en) Virtual space construction method and device, electronic equipment and nonvolatile storage medium
CN113192173B (en) Image processing method and device of three-dimensional scene and electronic equipment
CN114494024B (en) Image rendering method, device and equipment and storage medium
CN113470153B (en) Virtual scene rendering method and device and electronic equipment
CN115937389A (en) Shadow rendering method, device, storage medium and electronic equipment
CN115965731A (en) Rendering interaction method, device, terminal, server, storage medium and product
CN113132799B (en) Video playing processing method and device, electronic equipment and storage medium
CN117372602B (en) Heterogeneous three-dimensional multi-object fusion rendering method, equipment and system
US20230316626A1 (en) Image rendering method and apparatus, computer device, and computer-readable storage medium
CN117707676A (en) Window rendering method, device, equipment, storage medium and program product
CN117611703A (en) Barrage character rendering method, barrage character rendering device, barrage character rendering equipment, storage medium and program product
CN115686202A (en) Three-dimensional model interactive rendering method across Unity/Optix platform
CN114049425B (en) Illumination simulation method, device, equipment and storage medium in image
CN113487708B (en) Flow animation implementation method based on graphics, storage medium and terminal equipment
CN111882639B (en) Picture rendering method, device, equipment and medium
CN117876568A (en) Custom data injection method and device for streaming transmission process
CN117710180A (en) Image rendering method and related equipment
CN115845363A (en) Rendering method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40047939

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant