CN113192173A - Image processing method and device for three-dimensional scene and electronic equipment - Google Patents

Image processing method and device for three-dimensional scene and electronic equipment

Info

Publication number
CN113192173A
Authority
CN
China
Prior art keywords
image
scene
processing
pixel
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110528438.7A
Other languages
Chinese (zh)
Other versions
CN113192173B (en)
Inventor
Yuan Jiaping (袁佳平)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Chengdu Co Ltd
Original Assignee
Tencent Technology Chengdu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Chengdu Co Ltd
Priority to CN202110528438.7A
Publication of CN113192173A
Application granted
Publication of CN113192173B
Legal status: Active

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 15/00: 3D [Three Dimensional] image rendering
            • G06T 15/02: Non-photorealistic rendering
            • G06T 15/04: Texture mapping
            • G06T 15/50: Lighting effects
              • G06T 15/80: Shading
                • G06T 15/83: Phong shading
          • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A: HUMAN NECESSITIES
      • A63: SPORTS; GAMES; AMUSEMENTS
        • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
          • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
            • A63F 13/50: Controlling the output signals based on the game progress
              • A63F 13/52: Controlling the output signals based on the game progress, involving aspects of the displayed game scene
            • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
          • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
            • A63F 2300/30: Features of games characterized by output arrangements for receiving control signals generated by the game device
              • A63F 2300/308: Details of the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an image processing method and apparatus for a three-dimensional scene, an electronic device, and a computer-readable storage medium; the method includes: performing image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene; storing the scene image in an image buffer of a memory; performing shading processing on the scene image in the image buffer through at least one shading component, so as to update the scene image to a target scene image having a target visual feature; and rendering the target scene image in the image buffer, so as to present the target scene image on a human-computer interaction interface. Through the method and apparatus, the target visual feature can be added to the three-dimensional scene accurately and quickly.

Description

Image processing method and device for three-dimensional scene and electronic equipment
Technical Field
The present disclosure relates to computer technologies, and in particular, to an image processing method and apparatus for a three-dimensional scene, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of computer technology, three-dimensional modeling has emerged in fields such as game production, animation production, and Virtual Reality (VR). Three-dimensional (3D) space is obtained by adding a further direction vector to a planar two-dimensional coordinate system, forming a three-dimensional coordinate system. Through three-dimensional modeling, a three-dimensional scene with a stereoscopic, realistic appearance can be obtained, which gives a good presentation effect.
Complex and variable business requirements arise in real businesses involving three-dimensional scenes, such as adding a specific visual feature (i.e., a visual effect) to the three-dimensional scene. In the solutions provided in the related art, this is usually handled by re-modeling, that is, manually re-creating the three-dimensional scene with the required visual features added by modelers. However, this approach consumes a large amount of time and labor and cannot adapt to complex and variable business requirements.
Disclosure of Invention
The embodiment of the application provides an image processing method and device for a three-dimensional scene, electronic equipment and a computer readable storage medium, which can accurately and quickly add target visual features into the three-dimensional scene and adapt to complex and variable business requirements.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an image processing method of a three-dimensional scene, which comprises the following steps:
carrying out image acquisition processing on a three-dimensional scene to obtain a scene image of the three-dimensional scene;
storing the scene image to an image buffer area of a memory;
performing, by at least one shading component, shading processing on the scene image in the image buffer to update the scene image to a target scene image having a target visual feature;
rendering the target scene image in the image buffer area so as to present the target scene image on a human-computer interaction interface.
An embodiment of the present application provides an image processing apparatus for a three-dimensional scene, including:
the acquisition module is used for acquiring and processing images of a three-dimensional scene to obtain a scene image of the three-dimensional scene;
the storage module is used for storing the scene image to an image buffer area of a memory;
a shading module, configured to perform shading processing on the scene image in the image buffer through at least one shading component, so as to update the scene image to a target scene image having a target visual feature;
and the rendering module is used for rendering the target scene image in the image buffer area so as to present the target scene image on a human-computer interaction interface.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
a processor, configured to implement the image processing method for a three-dimensional scene provided in the embodiments of the present application when executing the executable instructions stored in the memory.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the image processing method for a three-dimensional scene provided in the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
on one hand, the target visual feature can be accurately added to the three-dimensional scene (that is, to its scene image) through the shading components, and if the target visual feature to be added changes, only the shading components need to be adjusted accordingly, so that complex and variable business requirements can be met; on the other hand, compared with the solutions provided by the related art, the embodiments of the present application reduce user operations and also reduce the consumption of computing resources of the electronic device.
Drawings
FIG. 1 is a block diagram of an architecture of an image processing system for a three-dimensional scene according to an embodiment of the present disclosure;
fig. 2 is a schematic architecture diagram of a terminal device provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a three-dimensional engine provided by an embodiment of the present application;
fig. 4A is a schematic flowchart of an image processing method for a three-dimensional scene according to an embodiment of the present application;
fig. 4B is a schematic flowchart of an image processing method of a three-dimensional scene according to an embodiment of the present application;
FIG. 4C is a schematic flow chart illustrating a pixel shading process according to an embodiment of the present disclosure;
fig. 4D is a schematic flowchart of an image processing method for a three-dimensional scene according to an embodiment of the present application;
FIG. 5A is a schematic diagram of a scene image of a three-dimensional scene provided in an embodiment of the present application;
FIG. 5B is a schematic diagram of a target scene image to which a screen glitch effect has been added according to an embodiment of the present application;
FIG. 6 is a schematic diagram of post-processing provided by embodiments of the present application;
FIG. 7 is a schematic diagram of a post-processing chain provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of an image processed by an RGB color conversion shader according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a target scene image processed by an RGB color conversion shader according to an embodiment of the present application;
FIG. 10 is a diagram illustrating an image of a target scene processed by a noise shader according to an embodiment of the present disclosure;
fig. 11 is a schematic flowchart of an image processing method of a three-dimensional scene according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and the like are used only to distinguish similar objects and do not denote a particular order; where permitted, the order they imply may be interchanged, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein. In the following description, the term "plurality" means at least two.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Three-dimensional scene: a scene in three-dimensional space constructed by three-dimensional modeling technology. An object in the three-dimensional scene (i.e., a three-dimensional object) can be described by three-dimensional coordinates, that is, coordinates in a three-dimensional coordinate system with an x-axis, a y-axis, and a z-axis. In some embodiments, the three-dimensional scene may be a virtual scene (also called a three-dimensional virtual scene), i.e., a scene output by an electronic device that is distinct from the real world; a visual perception of the virtual scene can be formed with the naked eye or with the assistance of a specific device. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment.
2) Three-dimensional engine: a set of code (instructions) that can be recognized by an electronic device which outputs a three-dimensional scene, and that controls how the three-dimensional scene is produced and output. From another perspective, a three-dimensional engine is a development environment for three-dimensional scenes that encapsulates hardware operations and image algorithms. In the embodiments of the present application, image processing can be implemented using the camera component and the shading components of a three-dimensional engine.
3) Shading component: also called a shader, an editable program that performs shading processing on an image and implements computations related to 3D graphics. Because the shading component is editable, it is not constrained by the fixed rendering pipeline of the graphics card, and various visual features (visual effects) can be added to an image through it. In the embodiments of the present application, the shading components may include a vertex shading component and a pixel shading component, where the vertex shading component is mainly responsible for operations such as the geometric relationships of vertices, and the pixel shading component is mainly responsible for calculations such as pixel colors. In the embodiments of the present application, the shading component may run in the Graphics Processing Unit (GPU) of an electronic device; the GPU, also called the display core, visual processor, or display chip, is a processor dedicated to graphics-related operations in electronic devices such as personal computers, workstations, game consoles, tablet computers, and smartphones.
4) Post-Processing: processing performed before an image is presented on the human-computer interaction interface. During this process, the image in an image buffer of the memory can be shaded so as to apply a target visual feature that meets the business requirement to the image. The image buffer is a buffer area set aside in the memory for the three-dimensional scene and is used to store the image acquired from the three-dimensional scene (i.e., the scene image). The embodiments of the present application do not limit the type of the target visual feature; examples include specific depth of field, glow, film grain, and various types of anti-aliasing.
5) Rendering (Render): in the embodiments of the present application, this refers to the process of drawing an image onto a human-computer interaction interface (such as one provided by a browser) so that the image is presented on that interface.
6) Policy: the electronic device can parse a policy according to set logic and perform corresponding operations to implement a corresponding function; for example, the electronic device can perform pixel shading processing according to a pixel shading policy to implement the pixel shading function. In the embodiments of the present application, the pixel shading policy may be set manually by relevant personnel or set automatically through Artificial Intelligence (AI). The embodiments of the present application do not limit the specific form of the pixel shading policy; for example, it may be code that the electronic device directly recognizes and executes.
7) Pixel: the indivisible element of an image.
8) Image channel: used to describe the pixels in an image. In the embodiments of the present application, the image channels may include at least one of a color channel and a transparency channel (also called the Alpha channel). The color channels describe the colors of a pixel and differ between color spaces (also called color modes); for example, the color channels of the RGB color space are the red channel, the green channel, and the blue channel, while the color channels of the HSL color space are the hue channel, the saturation channel, and the lightness channel. The transparency channel describes the transparency of a pixel.
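As a concrete illustration, the sketch below (TypeScript, assuming a browser-side ImageData object obtained from a 2D canvas, which is one possible representation of a scene image and not one prescribed by the patent) reads the four channel values of a single pixel:

    // Read the red, green, blue and alpha (transparency) channel values of the pixel
    // at column x, row y; ImageData stores four channel values per pixel in row order.
    function readPixelChannels(image: ImageData, x: number, y: number) {
      const i = (y * image.width + x) * 4;
      const [r, g, b, a] = image.data.slice(i, i + 4);
      return { red: r, green: g, blue: b, alpha: a };
    }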
The embodiment of the application provides an image processing method and device for a three-dimensional scene, electronic equipment and a computer readable storage medium, which can accurately and quickly add target visual features into the three-dimensional scene and adapt to complex and variable business requirements. An exemplary application of the electronic device provided in the embodiment of the present application is described below, and the electronic device provided in the embodiment of the present application may be implemented as various types of terminal devices, and may also be implemented as a server.
Referring to FIG. 1, FIG. 1 is an architecture diagram of an image processing system 100 for a three-dimensional scene provided in an embodiment of the present application. A terminal device 400 is connected to a server 200 through a network 300, and the server 200 is connected to a database 500; the network 300 may be a wide area network or a local area network, or a combination of the two.
In some embodiments, taking the electronic device as a terminal device as an example, the image processing method for a three-dimensional scene provided in the embodiments of the present application may be implemented by the terminal device. For example, the terminal device 400 may calculate data required for display through a graphics computing hardware (e.g., GPU), complete loading, parsing and rendering of display data (e.g., target scene image), and output an image capable of forming a visual perception of a three-dimensional scene through a graphics output hardware (e.g., screen), for example, render the target scene image on a display screen of a smartphone.
For example, the terminal device 400 performs image acquisition processing on a three-dimensional scene to obtain a scene image, and stores the scene image in an image buffer included in the memory of the terminal device 400, where the relevant data of the three-dimensional scene may be obtained by the terminal device 400 from the outside (such as the server 200, the database 500, or a blockchain) or may be generated on the terminal device 400 itself. The terminal device 400 performs shading processing on the scene image in the image buffer through at least one shading component to update the scene image to a target scene image with the target visual feature. Finally, the terminal device 400 renders the target scene image in the image buffer to present the target scene image on the human-computer interaction interface.
In some embodiments, taking the electronic device as a server as an example, the image processing method for a three-dimensional scene provided in the embodiments of the present application may also be cooperatively implemented by a terminal device and the server. For example, the server 200 performs calculation of three-dimensional scene-related display data and transmits the same to the terminal device 400, and the terminal device 400 relies on graphics computing hardware to complete loading, parsing and rendering of the display data and relies on graphics output hardware to output images to form visual perception.
For example, the server 200 performs image acquisition processing on a three-dimensional scene to obtain a scene image, where the server 200 may obtain the relevant data of the three-dimensional scene from the terminal device 400, the database 500, a distributed file system or blockchain of the server 200 itself, and the like, which is not limited here. The server 200 stores the scene image in an image buffer included in the memory of the server 200, and performs shading processing on the scene image in the image buffer through at least one shading component, so as to update the scene image to a target scene image with the target visual feature. Then, the server 200 sends the target scene image to the terminal device 400, and the terminal device 400 renders the received target scene image to present it on the human-computer interaction interface.
In some embodiments, the electronic device may perform image acquisition processing on the three-dimensional scene when the three-dimensional scene satisfies the visual change condition, to obtain a scene image of the three-dimensional scene. Here, the visual change condition is not limited, and for example, a trigger operation for a three-dimensional scene may be received in the human-computer interaction interface. For example, as shown in fig. 1, a three-dimensional scene and a visual change option are being presented in a human-computer interaction interface of the terminal device 400, and when the terminal device 400 receives a trigger operation for the visual change option, the trigger operation is taken as a trigger operation for the three-dimensional scene being presented, in this case, the terminal device 400 may perform image capture processing on the three-dimensional scene being presented, or may transmit related data of the three-dimensional scene being presented to the server 200, so that the server 200 performs image capture processing on the three-dimensional scene being presented. The type of the trigger operation is not limited, and may be, for example, a contact operation, such as a click operation or a long-time press operation; also for example, the operation may be a non-contact operation such as a voice input operation or a gesture input operation. After a series of processing is performed by the terminal device 400 or the server 200, finally, the terminal device 400 presents a target scene image on the human-computer interaction interface, where the target scene image has a target visual characteristic, that is, a visual change is realized. Therefore, the image processing method can improve the appropriateness and the reasonability of the image processing time in a man-machine interaction mode and can fully meet the requirements of users.
In some embodiments, various results involved in the image processing process (such as the three-dimensional scene, the scene image, and the target scene image) may be stored in a blockchain; since the blockchain is tamper-resistant, the accuracy of the data in the blockchain can be ensured. The electronic device may send a query request to the blockchain to query the data stored in it; for example, when a target scene image needs to be presented, the terminal device may query the target scene image stored in the blockchain and render it.
In some embodiments, the terminal device 400 or the server 200 may implement the image processing method for a three-dimensional scene provided by the embodiment of the present application by running a computer program, such as the client 410 shown in fig. 1. For example, the computer program may be a native program or a software module in an operating system; can be a local (Native) Application (APP), i.e. a program that needs to be installed in an operating system to run, such as a browser Application, a short video Application, a military simulation program or a game Application; or may be an applet, i.e. a program that can be run only by downloading it to the browser environment; but also an applet that can be embedded in any APP, which applet can be run or shut down by user control. In general, the computer programs described above may be any form of application, module or plug-in. As for the game application, it may be any one of First-Person shooter (FPS) game, Third-Person shooter (TPS) game, Multiplayer Online Battle Arena (MOBA) game, and Multiplayer gunfight live game, and the like, which is not limited in this respect.
In some embodiments, a server (e.g., the server 200 shown in fig. 1) may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform, where the cloud service may be an image processing service for a terminal device to call. The terminal device (such as the terminal device 400 shown in fig. 1) may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart television, a smart watch, and the like, but is not limited thereto. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited.
In some embodiments, the database (e.g., database 500 shown in FIG. 1) and the server (e.g., server 200 shown in FIG. 1) may be provided independently. In some embodiments, the database and the server may also be integrated, that is, the database may be regarded as existing inside the server, and integrated with the server, and the server may provide the data management function of the database.
The following takes the case where the electronic device provided in the embodiments of the present application is a terminal device as an example; it can be understood that, where the electronic device is a server, some parts of the structure shown in FIG. 2 (such as the user interface, the presentation module, and the input processing module) may be omitted. Referring to FIG. 2, FIG. 2 is a schematic structural diagram of a terminal device 400 provided in an embodiment of the present application. The terminal device 400 shown in FIG. 2 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal device 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable communication among these components. In addition to a data bus, the bus system 440 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 440 in FIG. 2.
The Processor 410 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 450 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for communicating to other electronic devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the image processing apparatus for a three-dimensional scene provided by the embodiments of the present application may be implemented in software, and fig. 2 illustrates the image processing apparatus 455 for a three-dimensional scene stored in the memory 450, which may be software in the form of programs and plug-ins, and includes the following software modules: an acquisition module 4551, a storage module 4552, a shading module 4553 and a rendering module 4554, which are logical and thus may be arbitrarily combined or further split depending on the functions implemented. The functions of the respective modules will be explained below.
Referring to FIG. 3, FIG. 3 is a schematic diagram of a three-dimensional engine provided in an embodiment of the present application; when the three-dimensional scene is a game virtual scene, the three-dimensional engine may be a game engine. As shown in FIG. 3, the three-dimensional engine includes, but is not limited to, a rendering component (e.g., a renderer), an editing component (e.g., an editor for editing/producing a three-dimensional scene), underlying algorithms, scene management (for managing different three-dimensional scenes), sound effects (for managing the audio corresponding to a three-dimensional scene), a script engine, a camera component, and shading components, where the shading components may include a vertex shading component and a pixel shading component. The image processing method for a three-dimensional scene provided in the embodiments of the present application may be implemented by the modules in the image processing apparatus 455 for a three-dimensional scene shown in FIG. 2 invoking the relevant components of the three-dimensional engine shown in FIG. 3, as exemplified below.
For example, the acquisition module 4551 is configured to invoke the camera component in the three-dimensional engine to perform image acquisition processing on the three-dimensional scene, so as to obtain a scene image of the three-dimensional scene; the storage module 4552 is configured to invoke the camera component in the three-dimensional engine to store the scene image into the image buffer corresponding to the camera component, where the correspondence between the camera component and the image buffer may be established in advance; the shading module 4553 is configured to invoke the shading components in the three-dimensional engine to perform shading processing on the scene image in the image buffer, so as to update the scene image to a target scene image with the target visual feature; and the rendering module 4554 is configured to invoke the rendering component in the three-dimensional engine to render the target scene image in the image buffer, so as to present the target scene image on the human-computer interaction interface.
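The outline below sketches how these four modules could be wired together in TypeScript; the interface and type names are hypothetical and are used only to illustrate the division of responsibilities, not to describe the API of any particular three-dimensional engine:

    // Hypothetical interfaces for the engine components invoked by modules 4551-4554.
    type SceneImage = ImageData;                        // a captured frame of the scene
    interface ImageBuffer { image: SceneImage | null }  // buffer area set aside in memory

    interface CameraComponent { capture(): SceneImage }                  // image acquisition
    interface ShadingComponent { shade(image: SceneImage): SceneImage }  // adds visual features
    interface RenderingComponent { present(image: SceneImage): void }    // draws to the UI

    class ImageProcessingApparatus {
      private buffer: ImageBuffer = { image: null };

      constructor(
        private camera: CameraComponent,
        private shaders: ShadingComponent[],
        private renderer: RenderingComponent,
      ) {}

      run(): void {
        let image = this.camera.capture();        // acquisition module
        this.buffer.image = image;                // storage module: keep it in the buffer
        for (const shader of this.shaders) {      // shading module: each shading component
          image = shader.shade(image);            // updates the scene image in the buffer
          this.buffer.image = image;
        }
        this.renderer.present(image);             // rendering module: present target image
      }
    }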
Of course, the above examples do not limit the embodiments of the present application, and the calling relationship of each component included in the three-dimensional engine and each module in the image processing apparatus 455 of the three-dimensional scene to the component in the three-dimensional engine may be adjusted according to the actual application scene.
The image processing method for a three-dimensional scene provided by the embodiment of the present application will be described in conjunction with exemplary applications and implementations of the electronic device provided by the embodiment of the present application.
Referring to fig. 4A, fig. 4A is a schematic flowchart of an image processing method of a three-dimensional scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 4A.
In step 101, image acquisition processing is performed on a three-dimensional scene to obtain a scene image of the three-dimensional scene.
For example, image acquisition processing may be performed on a three-dimensional scene being presented in a human-computer interaction interface, so as to obtain a scene image in a current visual field range (which refers to a visual field range when being presented) or a specific visual field range; for example, when the three-dimensional scene is not displayed, the scene image may be obtained by performing image capturing processing on the three-dimensional scene according to a specific view field range, and the view field range used in the image capturing processing may be set in advance or may be automatically determined.
The embodiment of the present application does not limit the manner of image acquisition processing, and for example, the image acquisition processing may be implemented by a screen capture manner, or may be implemented by a camera assembly of a three-dimensional engine.
In some embodiments, the three-dimensional scene includes at least one three-dimensional object, and the image acquisition processing of the three-dimensional scene to obtain a scene image of the three-dimensional scene may be implemented as follows: determining, according to the coordinates of the at least one three-dimensional object in the three-dimensional scene, the minimum space that simultaneously includes the at least one three-dimensional object; determining the field of view of the camera component based on the minimum space; and acquiring, through the camera component, a scene image of the three-dimensional scene within the field of view.
In the embodiment of the application, the view field range for image acquisition processing can be automatically determined. For example, if the three-dimensional scene includes at least one three-dimensional object, then in order to ensure that each three-dimensional object can be captured during the image capture process, the coordinates of each three-dimensional object may be determined by the three-dimensional engine, and the minimum space (also referred to as the minimum three-dimensional space) in the three-dimensional scene that includes all three-dimensional objects at the same time may be determined according to the coordinates of each three-dimensional object.
The field of view of the camera component of the three-dimensional engine is then adjusted according to the determined minimum space, so that the adjusted field of view covers at least the minimum space, and a scene image of the three-dimensional scene within the adjusted field of view is captured by the camera component of the three-dimensional engine. In this way, the accuracy of the determined field of view can be improved, ensuring that the acquired scene image includes all the three-dimensional objects; meanwhile, the field of view does not need to be set and tested manually, which saves labor cost and avoids the waste of computing resources caused by repeated testing of the field of view.
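A minimal sketch of this automatic field-of-view determination follows, with three.js used as an illustrative stand-in for the three-dimensional engine and its camera component (an assumption; the patent does not name a specific engine), and fitCameraToScene as a hypothetical helper name:

    import * as THREE from 'three';

    // Compute the smallest axis-aligned space containing every three-dimensional object,
    // then place the perspective camera far enough back that this space fits inside its
    // field of view.
    function fitCameraToScene(scene: THREE.Scene, camera: THREE.PerspectiveCamera): void {
      const box = new THREE.Box3();
      scene.traverse((obj) => {
        if ((obj as THREE.Mesh).isMesh) box.expandByObject(obj);
      });
      const center = box.getCenter(new THREE.Vector3());
      const size = box.getSize(new THREE.Vector3());
      const radius = 0.5 * Math.max(size.x, size.y, size.z);
      // Distance at which a sphere of this radius fills the vertical field of view.
      const distance = radius / Math.tan(THREE.MathUtils.degToRad(camera.fov) / 2);
      camera.position.copy(center).add(new THREE.Vector3(0, 0, distance));
      camera.near = Math.max(distance / 100, 0.01);
      camera.far = distance * 100;
      camera.lookAt(center);
      camera.updateProjectionMatrix();
    }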
In some embodiments, the image acquisition processing on the three-dimensional scene described above may be implemented in such a way as to obtain a scene image of the three-dimensional scene: when the three-dimensional scene meets the visual change condition, carrying out image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene; wherein the visual change condition comprises at least one of: receiving a trigger operation aiming at a three-dimensional scene at a human-computer interaction interface; the presentation time length of the three-dimensional scene meets a time length condition; the audio characteristic variation amplitude of the audio corresponding to the three-dimensional scene meets the amplitude condition.
In the embodiment of the application, the visual change condition set for the three-dimensional scene can be acquired, and the image acquisition processing on the three-dimensional scene is triggered when the three-dimensional scene meets the visual change condition. The visual change condition is not limited, and may include at least one of the following three types, for example, and will be described separately.
1) Receiving a trigger operation aiming at the three-dimensional scene in a human-computer interaction interface. For example, in the process of presenting a three-dimensional scene through a human-computer interaction interface, a trigger operation for a certain region of the human-computer interaction interface is received, where the region may be any one region or a preset region.
2) The presentation duration of the three-dimensional scene satisfies a duration condition. Because a three-dimensional scene is stereoscopic, it usually needs to be presented from different angles (fields of view), similar to the process of video playback. During presentation of the three-dimensional scene, when the elapsed presentation duration satisfies the duration condition, image acquisition processing of the three-dimensional scene being presented is triggered. The duration condition may be set according to the actual application scenario; for example, the elapsed duration reaches a certain time point (e.g., the 30th second), or falls within a certain time period (e.g., between the 30th and the 40th second).
3) The audio characteristic variation amplitude of the audio corresponding to the three-dimensional scene meets the amplitude condition. In order to improve the presentation effect of the three-dimensional scene, audio corresponding to the three-dimensional scene is configured in some scenes to play a role of background music, so that the presentation effect is improved. For the situation, in the process of presenting the three-dimensional scene, when it is detected that the audio feature variation amplitude of the audio being played meets the amplitude condition, image acquisition processing is triggered to be performed on the three-dimensional scene being presented. The audio characteristic variation amplitude may refer to a variation amplitude of any audio characteristic, such as frequency or decibel; the amplitude condition may be set according to the characteristics of the audio features. For example, when the audio characteristic is decibels, the amplitude condition may be set such that the amplitude of change in decibels between the current time and the previous time is greater than the amplitude threshold.
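The three conditions above can be combined in a simple check; the sketch below is illustrative only, and the field names and the example values (a 30-40 second window, a 10 dB change threshold) are assumptions rather than values taken from the patent:

    // Decide whether the three-dimensional scene currently satisfies a visual change
    // condition and image acquisition processing should therefore be triggered.
    interface SceneState {
      triggered: boolean;        // a trigger operation was received on the interface
      presentedSeconds: number;  // how long the scene has been presented
      currentDb: number;         // decibel level of the accompanying audio now
      previousDb: number;        // decibel level at the previous sampling instant
    }

    function shouldCaptureScene(s: SceneState): boolean {
      const durationOk = s.presentedSeconds >= 30 && s.presentedSeconds <= 40; // example window
      const audioOk = Math.abs(s.currentDb - s.previousDb) > 10;               // example threshold
      return s.triggered || durationOk || audioOk;
    }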
Through these conditions, the flexibility of image processing can be improved, good interaction with the user can be achieved, and the human-computer interaction experience can be improved.
In some embodiments, before step 101, the method further includes performing either of the following processes: acquiring a created three-dimensional scene that includes a three-dimensional object; or acquiring multimedia content to be processed, and creating, in a newly built three-dimensional scene, a three-dimensional object corresponding to an image to be processed, where the image to be processed is any image in the multimedia content.
In the embodiment of the present application, a created three-dimensional scene including a three-dimensional object may be acquired, and image processing may be performed on the three-dimensional scene.
In addition, multimedia content to be processed may also be obtained, where multimedia content refers to content that includes at least one media form, such as text, sound, or images. In the embodiments of the present application, the multimedia content includes at least an image; for example, it may be an image or a video. For such multimedia content, a three-dimensional scene can be newly created, and a three-dimensional object corresponding to the image to be processed is created in that scene. The image to be processed is an image in the multimedia content that needs to be processed, for example, each frame of a video; the image to be processed can be added to the surface of the three-dimensional object in the form of a texture (texture map), so that it is captured in the subsequent image acquisition processing. In this way, the applicable scope of the embodiments of the present application is broadened: the method is suitable not only for three-dimensional scenes but also for multimedia content that is not a three-dimensional scene.
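As an illustration of this second branch, the sketch below (again using three.js as an assumed engine) creates a new scene containing a plane whose surface texture is the image to be processed, here a frame of a video element, so that the subsequent image acquisition step can capture it:

    import * as THREE from 'three';

    // Build a new three-dimensional scene holding one textured plane; the video texture
    // updates automatically, so each frame of the video becomes the image to be processed.
    function createSceneForVideo(video: HTMLVideoElement): THREE.Scene {
      const scene = new THREE.Scene();
      const texture = new THREE.VideoTexture(video);
      const aspect = video.videoWidth / video.videoHeight;
      const plane = new THREE.Mesh(
        new THREE.PlaneGeometry(aspect, 1),
        new THREE.MeshBasicMaterial({ map: texture }),
      );
      scene.add(plane);
      return scene;
    }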
In step 102, the scene image is stored in an image buffer of the memory.
Here, the image buffer is a buffer area partitioned in the memory, and is used for storing (e.g., temporarily storing) the scene image obtained by the image capturing process. In the embodiment of the application, the memory space occupied by the image buffer area can be preset, and the image buffer area is divided in the memory in advance according to the set memory space; or, the required memory space may be determined in real time according to the condition of the acquired scene image, and an image buffer area is partitioned in the memory in real time according to the required memory space, which will be described later.
In step 103, shading processing is performed on the scene image in the image buffer through at least one shading component, so as to update the scene image to a target scene image with the target visual feature.
For example, the three-dimensional engine includes at least one shading component for adding the target visual feature to the scene image. The type of the target visual feature is not limited in the embodiments of the present application; it may be, for example, specific depth of field, glow, film grain, various types of anti-aliasing, a screen glitch effect, and the like, and may be determined according to the actual business requirement. According to the preset target visual feature, the at least one shading component can be edited correspondingly, so that the edited at least one shading component implements the function of adding the target visual feature.
Here, the scene image in the image buffer is subjected to shading processing by the at least one shading component (i.e., the edited at least one shading component) to update the scene image to a target scene image having the target visual feature. It should be noted that, since the image buffer is located in the memory, the shading process is invisible to the user, i.e., it can be carried out implicitly and quickly.
In step 104, the target scene image in the image buffer is rendered to present the target scene image on the human-computer interaction interface.
For example, the target scene image in the image buffer area is rendered to the human-computer interaction interface to present the target scene image with the target visual characteristics in the human-computer interaction interface, and compared with the original scene image, the presentation effect can be improved by presenting the target scene image, so that the actual business requirements are met.
It should be noted that after the rendering process, the target scene image in the image buffer may be deleted immediately, or the target scene image in the image buffer may be deleted after waiting for a set time period (e.g., 1 minute), so as to reduce the waste of memory resources.
As shown in FIG. 4A, in the embodiments of the present application, shading processing is performed through at least one shading component, so that the target visual feature can be added to the scene image quickly and accurately; if the target visual feature to be added changes, the shading components are adjusted accordingly, which adapts to complex and variable business requirements. Meanwhile, the embodiments of the present application realize automatic image processing, reduce user operations, and avoid the waste of computing resources caused by re-creating the three-dimensional scene.
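The sketch below maps steps 101 to 104 onto the post-processing classes of three.js; this tooling choice is an assumption made purely for illustration, since the patent does not tie the method to a particular engine or API. The RenderPass acquires the scene image into an off-screen image buffer, each ShaderPass applies one shading component to update that buffer, and the composer finally renders the target scene image to the page:

    import * as THREE from 'three';
    import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
    import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
    import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';

    type ShaderDefinition = {
      uniforms: { [name: string]: THREE.IUniform };
      vertexShader: string;
      fragmentShader: string;
    };

    function buildPipeline(
      renderer: THREE.WebGLRenderer,
      scene: THREE.Scene,
      camera: THREE.Camera,
      shaders: ShaderDefinition[],          // one entry per shading component in the chain
    ): EffectComposer {
      const composer = new EffectComposer(renderer);    // manages the off-screen image buffers
      composer.addPass(new RenderPass(scene, camera));  // steps 101-102: acquire and store
      for (const shader of shaders) {
        composer.addPass(new ShaderPass(shader));       // step 103: shading processing
      }
      return composer;                                  // composer.render() performs step 104
    }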
In some embodiments, referring to fig. 4B, fig. 4B is a flowchart illustrating an image processing method of a three-dimensional scene provided in an embodiment of the present application, and step 103 shown in fig. 4A may be implemented by steps 201 to 202, which will be described in conjunction with the steps.
In step 201, vertex information in the three-dimensional scene is obtained through the vertex shading components in the traversed shading components, and vertex shading processing is performed on the scene image in the image buffer according to the vertex information, so as to obtain a plurality of restored vertices.
In this embodiment of the present application, traversal processing may be performed on at least one shading component according to the sequence of the at least one shading component, and shading processing may be performed through the traversed shading component, and of course, when the number of the shading components is only one, shading processing may be performed directly through the shading component.
The shading component can comprise a vertex shading component and a pixel shading component, and correspondingly, the shading process can also comprise a vertex shading process and a pixel shading process. In the process of traversing processing, vertex information in the three-dimensional scene is obtained through a vertex coloring component in the traversed coloring components, and vertex coloring processing is carried out on a scene image in the image buffer area according to the vertex information to obtain a plurality of restored vertexes. The vertex is an element used for describing a three-dimensional object in a coordinate system of the three-dimensional scene, and the vertex information may include at least one of coordinates of the vertex and color information of the vertex, and may also include other information; the vertex shading process is used for restoring a plurality of vertexes of the three-dimensional scene in the scene image, namely mapping the vertexes.
In step 202, the plurality of vertices are rasterized by the pixel shading components in the traversed shading components, and the pixels obtained by the rasterization are subjected to pixel shading according to a pixel shading policy, so as to update the scene image in the image buffer.
After the restored plurality of vertices are obtained through the vertex shading component among the traversed shading components, since the scene image is described not by vertices but by pixels as its indivisible elements, the restored plurality of vertices are rasterized by the pixel shading component among the traversed shading components to obtain the pixels that require pixel shading processing. It is worth noting that the rasterization processing can also be implemented by the vertex shading component among the traversed shading components.
For the pixels obtained through rasterization, pixel shading processing is performed by the pixel shading component among the traversed shading components; the process of pixel shading is also the process of updating the scene image in the image buffer. The pixel shading policy used in the pixel shading processing may be deployed in each pixel shading component in advance, and the pixel shading policy corresponds to the target visual feature, that is, the pixel shading policy is used to add the target visual feature.
It should be noted that the pixel shading policies deployed in different pixel shading components may be the same or different, depending on the actual application scenario.
In some embodiments, the rasterization of the plurality of vertices by the pixel shading component among the traversed shading components described above may be implemented as follows: the pixel shading component among the traversed shading components performs the following processing: constructing a vertex region in the scene image according to the restored plurality of vertices; and determining the pixels covered by the vertex region in the scene image as the pixels obtained by the rasterization processing.
For the plurality of vertices restored in the scene image, a vertex region may be constructed in the scene image by the pixel shading component among the traversed shading components according to the restored vertices, and the pixels covered by the vertex region in the scene image are taken as the pixels obtained through rasterization.
For example, the restored vertices may be read sequentially in a unit of a set number, where the number of vertices read at a time is equal to the set number. Connecting the vertexes read each time to obtain a vertex area, and taking the pixels covered by the vertex area in the scene image as pixels obtained through rasterization processing. The set number is an integer greater than 2, and may be set according to an actual application scenario, for example, the set number may be 3, and after connecting 3 vertices read each time, a triangle vertex area may be obtained.
When determining whether a pixel in the scene image is covered by the vertex region, the determination may be performed by using an algorithm such as a Linear Expression Evaluation (LEE) algorithm or a Scan Line (Scan Line) algorithm, which is not limited herein. By the method, effective rasterization processing can be realized, namely, vertexes in the three-dimensional scene can be accurately converted into pixels in the scene image.
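The CPU-side sketch below illustrates this rasterization step: vertices are read three at a time, each triple defines a triangular vertex region, and a pixel is kept when its centre lies inside that region, which corresponds to the linear-expression (edge-evaluation) test mentioned above. On a GPU this work is normally done by the hardware rasterizer rather than by hand-written code, so the snippet is only a conceptual illustration:

    type Vec2 = { x: number; y: number };

    // Signed edge expression: positive on one side of segment a-b, negative on the other.
    function edge(a: Vec2, b: Vec2, p: Vec2): number {
      return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // Read restored vertices in groups of three (the set number) and collect every pixel
    // whose centre is covered by the resulting triangular vertex region.
    function rasterize(vertices: Vec2[], width: number, height: number): Vec2[] {
      const covered: Vec2[] = [];
      for (let i = 0; i + 2 < vertices.length; i += 3) {
        const [a, b, c] = [vertices[i], vertices[i + 1], vertices[i + 2]];
        for (let y = 0; y < height; y++) {
          for (let x = 0; x < width; x++) {
            const p = { x: x + 0.5, y: y + 0.5 };
            const e0 = edge(a, b, p), e1 = edge(b, c, p), e2 = edge(c, a, p);
            // The pixel is inside the triangle when all edge expressions share a sign.
            if ((e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0)) {
              covered.push({ x, y });
            }
          }
        }
      }
      return covered;
    }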
As shown in fig. 4B, in the embodiment of the present application, the vertex shading component performs vertex shading, and the pixel shading component performs pixel shading, so that the target visual feature can be added to the scene image quickly and accurately.
In some embodiments, referring to fig. 4C, fig. 4C is a schematic flowchart of a pixel coloring process provided in an embodiment of the present application, and the process of the pixel coloring process shown in fig. 4B may be implemented by at least one of step 301 and step 302, which will be described in conjunction with each step.
In step 301, numerical offset processing is performed on the channel value of an image channel of any given pixel.
For example, when the target visual feature to be added includes a screen glitch effect, numerical offset processing may be performed, for each pixel obtained by the rasterization processing, on the channel value of an image channel of that pixel. The image channel may include at least one of a color channel and a transparency channel.
As shown in FIG. 4C, step 301 may be implemented through steps 401 to 403, which are described below in conjunction with each step.
In step 401, cosine processing is performed on the random angle to obtain a cosine value, sine processing is performed on the random angle to obtain a sine value, and an offset coordinate is constructed according to the cosine value and the sine value.
The embodiments of the present application provide an example of the numerical offset processing. First, a random angle is generated; for example, a random value in the angle range [0, 2π] is generated as the random angle, although the angle range is not limited to this and may be set according to the actual application scenario. Cosine processing is performed on the generated random angle to obtain a cosine value, sine processing is performed on the random angle to obtain a sine value, and an offset coordinate is constructed from the cosine value and the sine value.
In some embodiments, in order to further improve randomness, the cosine values may be weighted by a weighting parameter to obtain weighted cosine values, the sine values are weighted by the weighting parameter to obtain weighted sine values, and offset coordinates are constructed according to the weighted cosine values and the weighted sine values. The weighting parameter may be set according to an actual application scenario, and may be a random value generated in a value range [0, 1], for example.
In step 402, coordinate offset processing is performed on any one pixel according to the offset coordinates, so as to obtain an offset pixel.
For convenience of explanation, a pixel to be subjected to the numerical value shift processing is named as a pixel P1, and the pixel P1 may be subjected to the coordinate shift processing according to the shift coordinates to obtain a shifted pixel P2.
For example, the offset coordinates and the coordinates of the pixel P1 may be subjected to superimposition processing (addition processing), and the pixel corresponding to the coordinates obtained by the superimposition processing may be regarded as the offset pixel P2; alternatively, the offset coordinate may be subtracted from the coordinate of the pixel P1, and the pixel corresponding to the obtained coordinate may be regarded as the offset pixel P2. Of course, the manner of the coordinate offset processing is not limited thereto.
In step 403, updating the channel value of the target color channel corresponding to any one pixel according to the channel value of the target color channel corresponding to the offset pixel; wherein the target color channel comprises at least one of a plurality of color channels.
Here, the object of the numerical shift processing may be at least one of a plurality of color channels, for example, the color channels in the RGB color space include a red channel, a green channel, and a blue channel, and the target color channel may include at least one of the red channel, the green channel, and the blue channel, and may be set according to an actual application scenario.
After obtaining the offset pixel P2, the channel value of the pixel P1 corresponding to the target color channel may be replaced with the channel value of the offset pixel P2 corresponding to the target color channel to implement the update process of the channel values. In this way, the effect of pixel movement, namely the addition of a screen failure special effect, can be visually formed.
It is to be noted that, when there are a plurality of target color channels, the update processing of the channel value may be performed individually for each target color channel. For example, in the case where the target color channel includes a red channel and a blue channel, the channel value of the pixel P1 corresponding to the red channel may be replaced with the channel value of the offset pixel P2 corresponding to the red channel, while the channel value of the pixel P1 corresponding to the blue channel may be replaced with the channel value of the offset pixel P2 corresponding to the blue channel.
In some embodiments, the target color channel includes a red channel and a blue channel. In this case, the coordinate offset processing on any one pixel according to the offset coordinates may be implemented as follows: coordinate offset processing in a first direction is performed on the pixel according to the offset coordinate to obtain a first offset pixel; coordinate offset processing in a second direction is performed on the pixel according to the offset coordinate to obtain a second offset pixel; wherein the first direction is opposite to the second direction. Accordingly, the updating of the channel value of the target color channel corresponding to the pixel according to the channel value of the target color channel corresponding to the offset pixel may be implemented as follows: the channel value of the red channel corresponding to the pixel is updated according to the channel value of the red channel corresponding to the first offset pixel; and the channel value of the blue channel corresponding to the pixel is updated according to the channel value of the blue channel corresponding to the second offset pixel.
For example, the coordinate shift processing in the first direction may refer to superimposition processing of coordinates, and the coordinate shift processing in the second direction may refer to subtraction processing of coordinates; alternatively, the coordinate shift processing in the first direction may refer to subtraction processing of coordinates, and the coordinate shift processing in the second direction may refer to superimposition processing of coordinates. The former case is exemplified for ease of understanding.
For example, the offset coordinates and the coordinates of the pixel P1 are superimposed, and the pixel corresponding to the coordinates obtained by the superimposition processing is taken as the first offset pixel P21; at the same time, the offset coordinates are subtracted from the coordinates of the pixel P1, and the pixel corresponding to the obtained coordinates is taken as the second offset pixel P22.
In the case where the target color channel includes a red channel and a blue channel, the channel value of the pixel P1 corresponding to the red channel is replaced with the channel value of the first offset pixel P21 corresponding to the red channel, while the channel value of the pixel P1 corresponding to the blue channel is replaced with the channel value of the second offset pixel P22 corresponding to the blue channel. Therefore, in the scene image obtained through the numerical value offset processing, the effects of red moving to the upper right corner and blue moving to the lower left corner can be formed, that is, the screen failure special effect is added.
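For ease of understanding, a TypeScript sketch of this numerical offset processing over an RGBA pixel buffer (such as ImageData.data) is provided below; whole-pixel offsets and clamping at the image border are simplifying assumptions, and the function name is illustrative:
// Red is read from the first offset pixel (+dx, +dy) and blue from the second
// offset pixel (-dx, -dy), which produces the color-ghost effect described above.
function rgbShift(data: Uint8ClampedArray, width: number, height: number,
                  dx: number, dy: number): Uint8ClampedArray {
  const out = new Uint8ClampedArray(data);
  const clamp = (v: number, max: number) => Math.min(Math.max(v, 0), max - 1);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      const p21 = (clamp(y + dy, height) * width + clamp(x + dx, width)) * 4; // first offset pixel
      const p22 = (clamp(y - dy, height) * width + clamp(x - dx, width)) * 4; // second offset pixel
      out[i] = data[p21];         // red channel taken from the first offset pixel
      out[i + 2] = data[p22 + 2]; // blue channel taken from the second offset pixel
    }
  }
  return out;
}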
In step 302 shown in fig. 4C, noise addition processing is performed on the channel value of the image channel corresponding to any one pixel.
For example, when the target visual feature to be added includes a snowflake noise special effect, for each pixel obtained by the rasterization process, the noise addition process may be performed on the channel value of the image channel corresponding to the pixel, where the image channel may also include at least one of a color channel and a transparency channel.
Step 302 shown in fig. 4C may be implemented by steps 404 to 405, which will be described in conjunction with each step.
In step 404, a random noise generation process is performed on any one pixel to obtain a noise channel value corresponding to the image channel.
Similarly, taking the pixel P1 as an example, random noise generation processing may be performed on the pixel P1 to obtain a noise channel value of the corresponding image channel. For example, a random value may be generated in a value range corresponding to the image channel to serve as a noise channel value; for another example, a random arithmetic process may be performed based on the coordinates of the pixel P1 to obtain a noise channel value corresponding to the image channel.
The image channel corresponding to the noise adding processing is not limited in the embodiment of the application. For example, the noise adding processing may correspond to only one image channel, such as the transparency channel; alternatively, it may correspond to a plurality of image channels, for example, the transparency channel and all color channels. For the latter case, a noise channel value is generated for each image channel corresponding to the noise addition processing.
In step 405, the channel value of the image channel corresponding to any one pixel and the noise channel value are superimposed.
For example, for each image channel corresponding to the noise addition processing, the channel value of the image channel corresponding to the pixel P1 and the noise channel value corresponding to the image channel are subjected to superposition processing (addition processing) to update the channel value of the image channel corresponding to the pixel P1. Therefore, the aim of adding the snowflake noise special effect can be fulfilled.
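For ease of understanding, a TypeScript sketch of steps 404 and 405 over an RGBA pixel buffer is provided below; the strength parameter is an assumed scaling factor for the noise channel value, not a value defined in the embodiment:
// Generate a random noise channel value per pixel and superimpose it on the
// color channels; Uint8ClampedArray saturates at 255, which stands in for
// any out-of-range handling the real shader would perform.
function addSnowNoise(data: Uint8ClampedArray, strength: number = 40): void {
  for (let i = 0; i < data.length; i += 4) {
    const noise = Math.random() * strength;  // noise channel value
    data[i] += noise;      // red channel
    data[i + 1] += noise;  // green channel
    data[i + 2] += noise;  // blue channel
    // data[i + 3] (the transparency channel) could be superimposed as well
  }
}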
It should be noted that, in the embodiment of the present application, the screen failure effect and the snowflake noise effect may also be added at the same time.
As shown in fig. 4C, the embodiment of the present application provides an example manner of a numerical offset process and a noise addition process, and can quickly and accurately add corresponding target visual features.
In some embodiments, referring to fig. 4D, fig. 4D is a flowchart illustrating an image processing method of a three-dimensional scene provided in an embodiment of the present application, and before step 102 illustrated in fig. 4A, in step 501, an information amount of a scene image may also be determined.
In the embodiment of the application, the image buffer area can be divided in real time according to the condition of the acquired scene image. First, an information amount of an acquired scene image is determined, and a type of the information amount is not limited in the embodiment of the present application, and may refer to an actual size of the scene image, that is, an actual number of bytes.
In step 502, the required memory space of the scene image is determined according to the amount of information.
Here, the required memory space of the scene image is positively correlated with the information amount of the scene image, and the specific positive correlation may be set according to the actual application scene, for example, may be a positive correlation function. After the information quantity of the scene image is determined, the required memory space corresponding to the information quantity can be determined according to the set positive correlation.
In step 503, when the required memory space is less than or equal to the available memory space in the memory, an image buffer is created in the memory according to the required memory space.
Here, when the required memory space is less than or equal to the available memory space in the memory, the image buffer area may be created in the memory directly according to the required memory space, that is, the memory space occupied by the image buffer area is equal to the required memory space.
In step 504, when the required memory space is larger than the available memory space in the memory, performing image degradation processing on the scene image according to the memory space difference to obtain a degraded scene image; wherein the memory space difference represents a difference between the required memory space and the available memory space; the image degradation processing mode comprises at least one of interception processing and definition degradation processing.
When the required memory space is larger than the available memory space in the memory, if the image buffer area is created in the memory directly according to the available memory space, the image buffer area is likely to be unable to support image processing on the scene image, that is, the situation that the image processing speed is slow or even the image processing fails easily occurs. Therefore, in the embodiment of the present application, the memory space difference is obtained by subtracting the available memory space from the required memory space, and the image degradation processing is performed on the scene image according to the memory space difference to obtain a degraded scene image, where the image degradation processing mode includes at least one of an interception processing and a sharpness degradation processing.
It is worth mentioning that the degree of image degradation processing is positively correlated with the memory space difference, so as to ensure that the degradation-required memory space of the subsequently obtained degraded scene image is less than or equal to the available memory space. Taking the interception processing mode as an example, when the memory space difference is a first difference, a region covering 2/3 of the scene image is intercepted to serve as the degraded scene image; when the memory space difference is a second difference, a region covering 1/3 of the scene image is intercepted to serve as the degraded scene image, wherein the second difference is larger than the first difference. Taking the sharpness degradation processing mode as an example, if the original sharpness of the scene image is 1080P, then when the memory space difference is the first difference, the sharpness of the scene image is reduced from 1080P to 720P to obtain the degraded scene image; and when the memory space difference is the second difference, the sharpness of the scene image is reduced from 1080P to 480P to obtain the degraded scene image, wherein the second difference is larger than the first difference.
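For ease of understanding, a TypeScript sketch of the buffer planning in steps 501 to 506 is provided below; the 4-bytes-per-pixel estimate and the concrete sharpness ladder are illustrative assumptions rather than values prescribed by the embodiment:
interface SceneImage { width: number; height: number; }
// Required memory space grows with the information amount of the scene image.
function requiredBytes(img: SceneImage): number {
  return img.width * img.height * 4; // RGBA, one byte per channel
}
// Return the (possibly degraded) image whose required memory space fits the
// available memory space; the degradation degree grows with the space difference.
function planImageBuffer(img: SceneImage, availableBytes: number): SceneImage {
  if (requiredBytes(img) <= availableBytes) return img;
  for (const scale of [720 / 1080, 480 / 1080]) { // sharpness degradation
    const candidate = {
      width: Math.floor(img.width * scale),
      height: Math.floor(img.height * scale),
    };
    if (requiredBytes(candidate) <= availableBytes) return candidate;
  }
  // Fall back to interception (cropping) if sharpness degradation is not enough.
  return { width: Math.floor(img.width / 2), height: Math.floor(img.height / 2) };
}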
In step 505, an amount of degradation information to degrade a scene image is determined.
Here, as in the principle of step 501, the amount of information of the degraded scene image is determined, and the amount of information determined here is named the amount of degraded information for the sake of convenience of distinction.
In step 506, determining a memory space required for degrading the image of the degraded scene according to the degradation information amount, and creating an image buffer area in the memory according to the memory space required for degrading; wherein the memory space required for degradation is less than or equal to the available memory space.
Here, as the principle of step 502, the required memory space of the degraded scene image is determined according to the degraded information amount, and for convenience of distinction, the determined required memory space is named as degraded required memory space, where the degraded required memory space is less than or equal to the available memory space. Then, an image buffer area is created in the memory according to the memory space required for degradation, that is, the memory space occupied by the image buffer area is equal to the memory space required for degradation.
In fig. 4D, step 102 shown in fig. 4A may be implemented by step 507 or step 508.
In step 507, the scene image is stored to an image buffer created according to the required memory space.
When an image buffer is created by step 503, the scene image may be stored to the image buffer created according to the required memory space.
In step 508, the degraded scene image is stored to an image buffer created according to the memory space required for degradation.
When the image buffer is created in step 506, the degraded scene image can be stored in the image buffer created according to the memory space required for degradation, and the image processing is performed on the degraded scene image in the subsequent steps, so that the success rate of the image processing can be ensured.
It should be noted that, for the case that the image buffer is divided in real time, after the rendering processing is performed on the target scene image, the image buffer storing the target scene image may be deleted immediately, or after waiting for a set time period (e.g., 1 minute), the image buffer storing the target scene image may be deleted, so as to reduce the waste of memory resources. Of course, in the embodiment of the present application, the image buffer may also be pre-divided and continuously present.
As shown in fig. 4D, in the embodiment of the present application, when the information amount of the scene image is too large, image degradation processing may be performed on the scene image, and an image buffer corresponding to the obtained degraded scene image is created, so that the success rate of image processing may be improved.
Next, an exemplary application of the embodiments of the present application in an actual application scenario will be described. In the embodiment of the present application, a specific visual effect (corresponding to the above target visual feature) may be added to a 3D scene, so as to meet complex and variable business requirements. The visual effect is not limited and may be, for example, a specific depth of field, glow, film grain, or various types of anti-aliasing; for convenience of understanding, the screen failure special effect is described as an example hereinafter. By way of example, embodiments of the present application provide a schematic diagram of one scene image of a 3D scene as shown in fig. 5A, the 3D scene including 3D objects of various shapes. By the image processing scheme provided by the embodiment of the application, the screen fault special effect can be added to the scene image shown in fig. 5A to obtain the target scene image shown in fig. 5B, so that the presentation effect in the human-computer interaction interface can be improved, and the human-computer interaction experience is improved.
Next, the process of adding the screen failure special effect in the 3D scene is explained from the perspective of the underlying implementation, and for ease of understanding, the explanation will be made from several parts as follows.
1) A 3D scene is prepared.
Here, the created 3D scene including the 3D object may be acquired. For the case that a screen failure special effect needs to be added to the multimedia content, a 3D scene may be created, and an image in the multimedia content may be used as a texture (i.e., a map of an outer surface) of a 3D object in the 3D scene, where the multimedia content may be a video or an image, etc.
2) And (5) post-processing.
The principle of post-processing is to store the image collected by the camera (camera assembly) in an image buffer, apply a specific visual effect (such as lighting, color change, distortion, etc.) to the image in the image buffer through one or more shaders, and finally render the image in the image buffer to the human-computer interaction interface.
In the embodiment of the present application, post-processing may be implemented by a plurality of post-processing channels (Pass). At least part of the post-processing channels include shaders, where a shader refers to a function written in the OpenGL Shading Language (GLSL) that can run on the GPU of the graphics card.
For ease of understanding, post-processing through the Web Graphics Library (WebGL) will be described below as an example. WebGL is a 3D drawing protocol through which hardware-accelerated 3D rendering can be provided for Canvas, so that a user can conveniently and smoothly display a 3D scene in a browser (the human-computer interaction interface) through the display card of the electronic equipment; Canvas is a part of Hyper Text Markup Language (HTML) 5 that allows scripting languages to dynamically render bitmap images.
The embodiment of the present application provides a schematic diagram of post-processing by WebGL as shown in fig. 6, where fig. 6 shows a plurality of shaders, each shader may include a vertex shader and a fragment shader (corresponding to the above pixel shader), where the vertex shader is used to describe vertex information (such as coordinates, colors, etc.), that is, to restore an image to an original form, and the vertex shader may be invoked once for each vertex in the image; the fragment shader is used for performing pixel shading processing on pixels obtained through rasterization processing, and the fragment shader can be called once for each pixel in the image.
During post-processing, a vertex shader may be invoked by WebGL to process each vertex in the image and a fragment shader to process each pixel in the image. When the last shader processing is completed, the image can be rendered into the human-computer interaction interface through WebGL.
3) An effect synthesizer.
An effect synthesizer (EffectComposer) is a class used for realizing post-processing effects in three.js; it manages the post-processing process chain used for generating the final visual effect. three.js is a 3D engine running in the browser: it is a 3D graphics library formed by packaging and simplifying the WebGL interface, includes various objects such as cameras, light and shadow, and materials, and can be used to create various 3D scenes on the Web.
The embodiment of the present application provides a schematic diagram of a post-processing process chain as shown in fig. 7, where the post-processing process chain includes a plurality of post-processing channels. In the post-processing of an original image, the plurality of post-processing channels are traversed according to their order in the chain, and corresponding processing is performed by each traversed post-processing channel. When the last post-processing channel completes its processing, the resulting result (i.e., the new image in fig. 7) is rendered to the human-computer interaction interface. All post-processing channels operate in the image buffer, so the image processing is performed implicitly: the user cannot see the post-processing process and only sees the image finally rendered to the human-computer interaction interface, which can further improve user experience.
4) Render pass post-processing channels.
Here, the RenderPass post-processing channel may be used as a first post-processing channel of the post-processing process chain, and the RenderPass post-processing channel is used for copying the scene images in the field of view of the camera to an image buffer area in the memory for use by other subsequent post-processing channels.
For ease of understanding, the following pseudo code is provided:
creating a WebGL renderer;
creating an effect synthesizer, and transmitting the WebGL renderer as a parameter into the effect synthesizer;
creating a RenderPass post-processing channel, and transmitting the 3D scene and the camera into the RenderPass post-processing channel as parameters;
adding a RenderPass post-processing channel into a post-processing process chain;
creating a GlitchPass post-processing channel; wherein each GlitchPass post-processing channel comprises a shader;
adding the GlitchPass post-processing channel into the post-processing process chain.
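For ease of understanding, the pseudo code above corresponds roughly to the following three.js sketch written in TypeScript; the module paths follow the three.js examples layout and may differ between versions of the library:
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { GlitchPass } from 'three/examples/jsm/postprocessing/GlitchPass.js';
// Create the WebGL renderer and pass it into the effect synthesizer.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
// The 3D scene and the camera are passed into the RenderPass post-processing channel.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera)); // copies the scene image to the image buffer
composer.addPass(new GlitchPass());              // applies the screen-fault special effect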
5) GlitchPass post-processing channels.
Here, the GlitchPass post-processing channel is used to add screen-fault effects, such as applying a screen-fault effect to the scene images stored in the image buffer by the RenderPass post-processing channel.
The fragment shader in the GlitchPass post-processing channel may include an RGB color conversion shader configured to perform, for each pixel in the image, a numerical shift processing on a channel value of a target color channel corresponding to the pixel, where the type of the target color channel is not limited, and may include, for example, a red color channel and a blue color channel. For example, for each pixel in the image, the pixel may be shifted by the RGB color conversion shader to two random opposite directions by the channel value corresponding to the red channel and the channel value corresponding to the blue channel.
It should be noted that a color channel refers to a channel for storing color information, and an image may include one or more color channels, the number of which depends on the applied color mode, for example, an RGB color mode includes three color channels, i.e., a red color channel, a green color channel, and a blue color channel. For a pixel in an image, the channel values of all color channels corresponding to the pixel are superposed and mixed, and then the real color of the pixel can be obtained.
For ease of understanding, the RGB color mode is described as an example, and in the RGB color mode, the value range of each color channel is [0, 255 ]. In the embodiment of the present application, the channel value of the pixel corresponding to the red channel and the channel value corresponding to the blue channel may be changed by the RGB color conversion shader, so as to achieve an effect of making the image have a color ghost, that is, a screen failure special effect. The embodiment of the present application provides an image processed by the RGB color conversion shader shown in fig. 8, in which red moves to the upper right corner and blue moves to the lower left corner, where "move" herein does not mean that the pixel is actually moved, but means that the color change of the pixel after the RGB color conversion shader processing looks as if the pixel is moved.
In addition, the embodiment of the present application further provides a target scene image obtained by processing with the RGB color conversion shader as shown in fig. 9, and it can be determined according to fig. 9 that the color change of the pixel can be more drastic and disordered after passing through the RGB color conversion shader.
In an embodiment of the present application, the fragment shader in the GlitchPass post-processing channel may further include a noise shader, and the noise shader is configured to add a white noise effect after the RGB color conversion shader completes processing. The noise point refers to the random change of brightness or color information in the image, is similar to snowflake, and can be used for simulating the snowflake effect of the television. By way of example, embodiments of the present application provide an image of a target scene processed by a noise shader as shown in fig. 10.
For ease of understanding, the following processing formulas for the RGB color conversion shader are provided:
{vec2 offset=amount*vec2(cos(angle),sin(angle));
vec4 cr=texture2D(tDiffuse,p+offset);
vec4 cga=texture2D(tDiffuse,p);
vec4 cb=texture2D(tDiffuse,p-offset);
gl_FragColor=vec4(cr.r,cga.g,cb.b,cga.a);}
wherein vec2 represents binary data and vec4 represents quaternary data; angle corresponds to the random angle above, and its value range may be [0, 2π]; amount is the weighting parameter, which may be set according to the actual application scenario, for example, a random value within the value range [0, 1]; offset corresponds to the offset coordinate above; tDiffuse represents the image to be processed, and p represents the coordinate of a certain pixel in the image to be processed, which is also binary data; the texture2D function samples the texture given by its first parameter at the coordinate given by its second parameter, for example texture2D(tDiffuse, p+offset) extracts the channel values of the image channels corresponding to the pixel whose coordinates are p+offset in tDiffuse; r represents the channel value corresponding to the red channel, g represents the channel value corresponding to the green channel, b represents the channel value corresponding to the blue channel, and a represents the channel value corresponding to the transparency channel; gl_FragColor is a built-in variable of the fragment shader, and represents the channel value of each image channel corresponding to the pixel after being processed by the RGB color conversion shader.
For ease of understanding, the following noisy shader processing equations are also provided:
{vec4 snow=200.*amount*vec4(rand(vec2(xs*seed,ys*seed*50.))*0.2);
gl_FragColor=gl_FragColor+snow;}
wherein xs represents the abscissa of the pixel and ys represents the ordinate of the pixel; seed is a random offset value, which may be, for example, a random value within a value range [0, 1 ]; snow represents the noise channel value corresponding to each image channel.
It should be noted that the rand function is a self-defined random function, and the structure of the rand function can be as follows:
float rand(vec2 co)
{
return fract(sin(dot(co.xy,vec2(s1,s2)))*s3);
}
wherein the fract function is a shader built-in function that returns the fractional part of its argument; dot represents the dot product function; xy represents the abscissa and ordinate of the pixel; s1, s2, and s3 are predetermined parameters, and may be set, for example, to numbers greater than 0.
After the snow is obtained, the snow and the gl_FragColor are subjected to addition processing, and the value of the gl_FragColor is updated according to the result of the addition processing. The principle of the addition processing may be A(a1, a2, a3, a4) + B(b1, b2, b3, b4) = C(a1+b1, a2+b2, a3+b3, a4+b4), that is, the addition processing of the channel values is performed separately for each image channel.
The embodiment of the present application further provides a flowchart of an image processing method of a three-dimensional scene as shown in fig. 11, which will be described with reference to the steps shown in fig. 11.
1) And creating a WebGL renderer, and executing image processing once through the WebGL renderer aiming at each frame of scene image in the presentation process of the 3D scene.
2) And creating an effect synthesizer, and transmitting the WebGL renderer as a parameter into the effect synthesizer. The effect synthesizer is used for managing a plurality of post-processing channels.
3) Adding a RenderPass post-processing channel, wherein the RenderPass post-processing channel is used for capturing scene images positioned in the visual field range of the camera and storing the captured scene images into an image buffer area of an internal memory.
4) A GlitchPass post-processing channel is added that is used to apply an RGB color conversion shader and a noise shader to the scene image in the image buffer. The RGB color conversion shader is used for changing the channel value of a red channel corresponding to a pixel and the channel value of a blue channel corresponding to the pixel, namely adding a screen fault special effect; the noise shader is used to change the channel value of each image channel corresponding to a pixel, i.e., to add snowflake noise.
5) And after the image is processed by all post-processing channels, rendering the target scene image in the image buffer area to a man-machine interaction interface, so that a screen failure special effect and a noise special effect can be added to the 3D scene in real time.
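For ease of understanding, a minimal TypeScript sketch of this per-frame flow is provided below; composer refers to the EffectComposer assembled in the earlier sketch, and the loop is an assumption about how the per-frame processing would typically be driven in a browser:
// Each frame, the composer traverses the RenderPass and GlitchPass channels in
// the image buffer and renders the target scene image to the canvas.
function animate(): void {
  requestAnimationFrame(animate);
  composer.render();
}
animate();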
It is worth noting that in the embodiment of the application, a visual change condition can be set for the 3D scene, and when the presented 3D scene meets the visual change condition, the scene image is captured for image processing, so that the enthusiasm and the visual experience of the user for participating in interaction can be improved. The visual change condition is not limited, for example, the visual change condition may include at least one of the following: receiving a trigger operation aiming at a 3D scene at a human-computer interaction interface; the presentation duration of the 3D scene meets a duration condition; the audio characteristic variation amplitude of the audio corresponding to the 3D scene meets the amplitude condition.
The embodiment of the application can at least realize the following technical effects: 1) the automatic image processing can be realized aiming at the 3D scene, the dependence on art manufacturing and related labor input are reduced, and the learning cost is low; 2) the application range is wide, for example, the method is not only suitable for 3D scenes, but also suitable for multimedia contents (such as videos or images); 3) by means of the editability of the shader, the image processing can be carried out when the visual change condition is met, and the interactive performance with a user can be enriched; 4) the method occupies less computing resources and can be applied to various types of electronic equipment, such as smart phones.
Continuing with the exemplary structure of the image processing apparatus 455 for three-dimensional scenes provided by the embodiments of the present application implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the image processing apparatus 455 for three-dimensional scenes of the memory 450 may include: the acquisition module 4551 is configured to perform image acquisition processing on a three-dimensional scene to obtain a scene image of the three-dimensional scene; a storage module 4552, configured to store the scene image in an image buffer of a memory; a rendering module 4553, configured to perform rendering processing on the scene image in the image buffer through at least one rendering component to update the scene image to a target scene image with target visual characteristics; and the rendering module 4554 is configured to perform rendering processing on the target scene image in the image buffer to present the target scene image on the human-computer interaction interface.
In some embodiments, the shading components include a vertex shading component and a pixel shading component, the pixel shading component being deployed with a pixel shading policy corresponding to the target visual feature; a coloring module 4553, further configured to: traversing at least one shading component, and executing the following processing aiming at the traversed shading component: vertex information in a three-dimensional scene is obtained through a vertex coloring component in the traversed coloring components, and vertex coloring processing is carried out on a scene image in an image buffer area according to the vertex information to obtain a plurality of restored vertexes; and rasterizing the multiple vertexes through a pixel coloring component in the traversed coloring component, and performing pixel coloring processing on pixels obtained through rasterization according to a pixel coloring strategy so as to update the scene image in the image buffer area.
In some embodiments, the coloring module 4553 is further configured to: for any one pixel obtained by rasterization processing, at least one of the following processing modes is executed: carrying out numerical value offset processing on a channel value of an image channel corresponding to any one pixel; and carrying out noise point adding processing on the channel value of the image channel corresponding to any one pixel.
In some embodiments, the image channels comprise a plurality of color channels; a coloring module 4553, further configured to: cosine processing is carried out on the random angle to obtain a cosine value, sine processing is carried out on the random angle to obtain a sine value, and an offset coordinate is constructed according to the cosine value and the sine value; carrying out coordinate offset processing on any pixel according to the offset coordinate to obtain an offset pixel; updating the channel value of the target color channel corresponding to any one pixel according to the channel value of the target color channel corresponding to the offset pixel; wherein the target color channel comprises at least one of a plurality of color channels.
In some embodiments, the target color channel includes a red channel and a blue channel; a coloring module 4553, further configured to: carrying out coordinate offset processing in a first direction on any pixel according to the offset coordinate to obtain a first offset pixel; carrying out coordinate offset processing in a second direction on any pixel according to the offset coordinate to obtain a second offset pixel; wherein the first direction is opposite to the second direction; updating the channel value of the red channel corresponding to any one pixel according to the channel value of the red channel corresponding to the first offset pixel; and updating the channel value of the blue channel corresponding to any one pixel according to the channel value of the blue channel corresponding to the second offset pixel.
In some embodiments, the coloring module 4553 is further configured to: random noise point generation processing is carried out on any pixel to obtain a noise point channel value of a corresponding image channel; and overlapping the channel value of the image channel corresponding to any pixel with the noise channel value.
In some embodiments, the coloring module 4553 is further configured to: executing the following processing by the pixel coloring component in the traversed coloring component: constructing a vertex area in the scene image according to the restored multiple vertexes; pixels covered by the vertex area in the scene image are determined as pixels obtained by the rasterization processing.
In some embodiments, the image processing device 455 further comprises a creation module for: determining the information content of a scene image; determining a required memory space of the scene image according to the information amount, and creating an image buffer area in a memory according to the required memory space; wherein the required memory space is positively correlated with the information content.
In some embodiments, the creation module is further to: when the required memory space is larger than the available memory space in the memory, performing image degradation processing on the scene image according to the memory space difference to obtain a degraded scene image; wherein the memory space difference represents a difference between the required memory space and the available memory space; the image degradation processing mode comprises at least one of interception processing and definition degradation processing; determining an amount of degradation information for degrading a scene image; determining a memory space required for degrading the scene image according to the degrading information quantity, and creating an image buffer area in the memory according to the memory space required for degrading; wherein the memory space required for degradation is less than or equal to the available memory space; the storage module 4552 is further configured to: and storing the degraded scene image into an image buffer area created according to the memory space required by degradation.
In some embodiments, the three-dimensional scene includes at least one three-dimensional object; the acquisition module 4551 is further configured to: determining a minimum space simultaneously including at least one three-dimensional object in the three-dimensional scene according to the coordinates of the at least one three-dimensional object in the three-dimensional scene; determining a field of view of the camera assembly based on the minimum space; an image of a scene in a three-dimensional scene within a field of view is acquired by a camera assembly.
In some embodiments, the acquisition module 4551 is further configured to: when the three-dimensional scene meets the visual change condition, carrying out image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene; wherein the visual change condition comprises at least one of: receiving a trigger operation aiming at a three-dimensional scene at a human-computer interaction interface; the presentation time length of the three-dimensional scene meets a time length condition; the audio characteristic variation amplitude of the audio corresponding to the three-dimensional scene meets the amplitude condition.
In some embodiments, the acquisition module 4551 is further configured to: any one of the following processes is performed: acquiring a created three-dimensional scene comprising a three-dimensional object; acquiring multimedia content to be processed, and creating a three-dimensional object corresponding to an image to be processed in a newly-built three-dimensional scene; wherein, the image to be processed is any one image in the multimedia content.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions (i.e., executable instructions) stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the electronic device executes the image processing method of the three-dimensional scene described in the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, will cause the processor to perform the methods provided by embodiments of the present application, for example, the image processing method of a three-dimensional scene as shown in fig. 4A, 4B and 4D.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method of image processing of a three-dimensional scene, the method comprising:
carrying out image acquisition processing on a three-dimensional scene to obtain a scene image of the three-dimensional scene;
storing the scene image to an image buffer area of a memory;
performing, by at least one rendering component, a rendering process on the scene image in the image buffer to update the scene image to a target scene image having target visual characteristics;
rendering the target scene image in the image buffer area so as to present the target scene image on a human-computer interaction interface.
2. The method of claim 1, wherein the shading components comprise a vertex shading component and a pixel shading component, and wherein the pixel shading component is deployed with a pixel shading policy corresponding to the target visual feature;
the rendering, by at least one rendering component, the scene image in the image buffer, comprising:
traversing the at least one shading component, and executing the following processing aiming at the traversed shading component:
vertex information in the three-dimensional scene is obtained through the vertex coloring assemblies in the traversed coloring assemblies, and vertex coloring processing is carried out on the scene images in the image buffer area according to the vertex information to obtain a plurality of restored vertexes;
and rasterizing the plurality of vertexes through a pixel coloring component in the traversed coloring component, and performing pixel coloring processing on pixels obtained through rasterization according to the pixel coloring strategy so as to update the scene image in the image buffer area.
3. The method according to claim 2, wherein the performing pixel shading processing on the pixels obtained by the rasterization processing according to the pixel shading policy comprises:
for any one pixel obtained by the rasterization processing, at least one of the following processing modes is executed:
carrying out numerical value offset processing on the channel value of the image channel corresponding to any one pixel;
and carrying out noise point adding processing on the channel value of the image channel corresponding to any one pixel.
4. The method of claim 3, wherein the image channels comprise a plurality of color channels; the performing numerical offset processing on the channel value of the image channel corresponding to any one pixel includes:
cosine processing is carried out on a random angle to obtain a cosine value, sine processing is carried out on the random angle to obtain a sine value, and an offset coordinate is constructed according to the cosine value and the sine value;
carrying out coordinate offset processing on any pixel according to the offset coordinate to obtain an offset pixel;
updating the channel value of the target color channel corresponding to the any one pixel according to the channel value of the target color channel corresponding to the offset pixel; wherein the target color channel comprises at least one of the plurality of color channels.
5. The method of claim 4, wherein the target color channel comprises a red channel and a blue channel; the performing coordinate offset processing on the arbitrary pixel according to the offset coordinate to obtain an offset pixel includes:
carrying out coordinate offset processing in a first direction on any pixel according to the offset coordinate to obtain a first offset pixel;
carrying out coordinate offset processing in a second direction on any pixel according to the offset coordinate to obtain a second offset pixel; wherein the first direction is opposite the second direction;
the updating, according to the channel value of the offset pixel corresponding to the target color channel, the channel value of the arbitrary pixel corresponding to the target color channel includes:
updating the channel value of the red channel corresponding to the any one pixel according to the channel value of the red channel corresponding to the first offset pixel;
and updating the channel value of the blue channel corresponding to the any one pixel according to the channel value of the blue channel corresponding to the second offset pixel.
6. The method according to claim 3, wherein said performing noise addition processing on the channel value of said image channel corresponding to said any one pixel comprises:
carrying out random noise point generation processing on any one pixel to obtain a noise point channel value corresponding to the image channel;
and overlapping the channel value of the image channel corresponding to any one pixel with the noise channel value.
7. The method of claim 2, wherein rasterizing the plurality of vertices by a pixel shading component of the traversed shading components comprises:
performing, by a pixel shading component of the traversed shading components:
constructing a vertex area in the scene image according to the plurality of restored vertexes;
and determining pixels covered by the vertex area in the scene image as pixels obtained through rasterization processing.
8. The method of any one of claims 1 to 7, wherein before storing the scene image in an image buffer of an internal memory, the method further comprises:
determining the information content of the scene image;
determining the required memory space of the scene image according to the information amount, and creating an image buffer area in the memory according to the required memory space;
wherein the required memory space is positively correlated with the information content.
9. The method of claim 8, wherein after determining the required memory space of the scene image based on the amount of information, the method further comprises:
when the required memory space is larger than the available memory space in the memory, performing image degradation processing on the scene image according to the memory space difference to obtain a degraded scene image;
wherein the memory space difference represents a difference between the required memory space and the available memory space; the image degradation processing mode comprises at least one of interception processing and definition degradation processing;
determining an amount of degradation information for the degraded scene image;
determining a memory space required by degradation of the degraded scene image according to the degradation information amount, and creating an image buffer area in the memory according to the memory space required by degradation; wherein the downgrade demand memory space is less than or equal to the available memory space;
the storing the scene image to an image buffer area of a memory includes:
and storing the degraded scene image to an image buffer area created according to the memory space required by degradation.
10. The method of any one of claims 1 to 7, wherein the three-dimensional scene comprises at least one three-dimensional object; the image acquisition processing of the three-dimensional scene to obtain the scene image of the three-dimensional scene includes:
determining a minimum space in the three-dimensional scene that simultaneously includes the at least one three-dimensional object according to coordinates of the at least one three-dimensional object in the three-dimensional scene;
determining a field of view of the camera assembly based on the minimum space;
capturing, by the camera assembly, an image of a scene of the three-dimensional scene that is within the field of view.
11. The method according to any one of claims 1 to 7, wherein the image acquisition processing of the three-dimensional scene to obtain the scene image of the three-dimensional scene comprises:
when the three-dimensional scene meets the visual change condition, carrying out image acquisition processing on the three-dimensional scene to obtain a scene image of the three-dimensional scene;
wherein the visual change condition comprises at least one of:
receiving a triggering operation aiming at the three-dimensional scene at the human-computer interaction interface;
the presentation time length of the three-dimensional scene meets a time length condition;
and the audio characteristic variation amplitude of the audio corresponding to the three-dimensional scene meets an amplitude condition.
12. The method of any of claims 1 to 7, wherein prior to the image acquisition processing of the three-dimensional scene, the method further comprises:
any one of the following processes is performed:
acquiring a created three-dimensional scene comprising a three-dimensional object;
acquiring multimedia content to be processed, and creating a three-dimensional object corresponding to an image to be processed in a newly-built three-dimensional scene; wherein the image to be processed is any one image in the multimedia content.
13. An apparatus for image processing of a three-dimensional scene, the apparatus comprising:
the acquisition module is used for acquiring and processing images of a three-dimensional scene to obtain a scene image of the three-dimensional scene;
the storage module is used for storing the scene image to an image buffer area of a memory;
a rendering module for rendering the scene image in the image buffer by at least one rendering component to update the scene image to a target scene image having target visual characteristics;
and the rendering module is used for rendering the target scene image in the image buffer area so as to present the target scene image on a human-computer interaction interface.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of image processing of a three-dimensional scene of any one of claims 1 to 12 when executing executable instructions stored in the memory.
15. A computer-readable storage medium storing executable instructions for implementing the method of image processing of a three-dimensional scene according to any one of claims 1 to 12 when executed by a processor.


GR01 Patent grant