CN111754607A - Picture processing method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111754607A
Authority
CN
China
Prior art keywords
special effect
video frame
texture
frame data
opengl
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910237542.3A
Other languages
Chinese (zh)
Inventor
邵翔宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910237542.3A priority Critical patent/CN111754607A/en
Publication of CN111754607A publication Critical patent/CN111754607A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The disclosure relates to a picture processing method and apparatus, an electronic device, and a computer-readable storage medium, wherein the method comprises the following steps: acquiring an original picture and at least one special effect picture for forming a target dynamic special effect; converting, by an OpenGL ES program, the original picture into an original texture and the special effect picture into a special effect texture, respectively; determining, for each piece of video frame data, a selected special effect texture and a relative position relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into corresponding video frame data; and previewing and displaying the drawn video frame data.

Description

Picture processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Pictures can often contain more information than plain text. However, the content of a picture is static, so the displayed effect may not meet the actual needs of the user. Therefore, in order to make pictures more vivid and beautiful, processing schemes for adding dynamic special effects to pictures have been proposed in the related art.
Disclosure of Invention
The present disclosure provides a picture processing method and apparatus, an electronic device, and a computer-readable storage medium, to solve the deficiencies in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided an image processing method, including:
acquiring an original picture and at least one special effect picture for forming a target dynamic special effect;
respectively converting the original picture into an original texture and converting the special effect picture into a special effect texture by an OpenGL ES program;
determining, for each piece of video frame data, a selected special effect texture and a relative position relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into corresponding video frame data;
and previewing and displaying the drawn video frame data.
Optionally, the performing preview display on the drawn video frame data includes:
respectively acquiring a system display handle, a rendering buffer area and the graphics context of the OpenGL ES program;
executing instructions of the OpenGL ES program based on the graphics context to draw a video frame; the video frame data obtained by drawing is output to the rendering buffer area;
and previewing and displaying the video frame data in the rendering buffer area through the system display handle.
Optionally, the method further includes:
and generating the drawn video frame data into a video file.
Optionally, the generating the drawn video frame data into a video file includes:
respectively acquiring an off-screen buffer area, a graphics context of the OpenGL ES program and an encoder;
executing instructions of the OpenGL ES program based on the graphics context to render video frame data; the video frame data obtained by drawing is output to the off-screen buffer area;
and generating the video frame data in the off-screen buffer into a video file through the encoder.
According to a second aspect of the embodiments of the present disclosure, there is provided a picture processing method, including:
acquiring an original picture and at least one special effect picture for forming a target dynamic special effect;
respectively converting the original picture into an original texture and converting the special effect picture into a special effect texture by an OpenGL ES program;
determining, for each piece of video frame data, a selected special effect texture and a relative position relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into corresponding video frame data;
and generating the drawn video frame data into a video file.
According to a third aspect of the embodiments of the present disclosure, there is provided a picture processing apparatus including:
an acquisition unit, configured to acquire an original picture and at least one special effect picture for forming a target dynamic special effect;
the conversion unit is used for respectively converting the original image into an original texture and converting the special effect image into a special effect texture through an OpenGL ES program;
the determining unit is used for determining the selected special effect texture in each video frame data and the relative position relationship between the selected special effect texture and the original texture, so that the OpenGL ES program can superpose and draw the original texture and the special effect texture into corresponding video frame data;
and the display unit is used for previewing and displaying the drawn video frame data.
Optionally, the display unit is specifically configured to:
respectively acquiring a system display handle, a rendering buffer area and the graphics context of the OpenGL ES program;
executing instructions of the OpenGL ES program based on the graphics context to draw a video frame; the video frame data obtained by drawing is output to the rendering buffer area;
and previewing and displaying the video frame data in the rendering buffer area through the system display handle.
Optionally, the method further includes:
and a generation unit which generates the drawn video frame data into a video file.
Optionally, the generating unit is specifically configured to:
respectively acquiring an off-screen buffer area, a graphics context of the OpenGL ES program and an encoder;
executing instructions of the OpenGL ES program based on the graphics context to render video frame data; the video frame data obtained by drawing is output to the off-screen buffer area;
and generating the video frame data in the off-screen buffer into a video file through the encoder.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a picture processing apparatus including:
an acquisition unit, configured to acquire an original picture and at least one special effect picture for forming a target dynamic special effect;
the conversion unit is used for respectively converting the original image into an original texture and converting the special effect image into a special effect texture through an OpenGL ES program;
the determining unit is used for determining the selected special effect texture in each video frame data and the relative position relationship between the selected special effect texture and the original texture, so that the OpenGL ES program can superpose and draw the original texture and the special effect texture into corresponding video frame data;
and a generation unit which generates the drawn video frame data into a video file.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any of the embodiments of the first or second aspects described above.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions, characterized in that the instructions, when executed by a processor, implement the steps of the method according to any of the embodiments of the first or second aspect described above.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a picture processing method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating another picture processing method according to an example embodiment.
FIG. 3 is a flow diagram illustrating a preview presentation of a dynamic special effect in accordance with an exemplary embodiment.
FIG. 4 is a flowchart illustrating a method for saving a dynamic special effect as a video file according to an example embodiment.
Fig. 5-6 are block diagrams illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating another picture processing apparatus according to an example embodiment.
Fig. 8 is a schematic structural diagram illustrating an apparatus for picture processing according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
Fig. 1 is a flowchart illustrating a picture processing method according to an exemplary embodiment. As shown in Fig. 1, the method, applied to an electronic device, may include the following steps:
in step 102, an original picture and at least one effect picture for forming a target dynamic effect are acquired.
In one embodiment, the original picture is used to form the background, and the special effect picture is used to be superimposed on the original picture to form the corresponding target dynamic special effect. According to actual requirements, one or more special effect pictures can be selected, which is not limited by the present disclosure.
In step 104, the original picture is converted into an original texture and the special effect picture is converted into a special effect texture by an OpenGL ES program.
In one embodiment, a texture (Texture) can be presented as a two-dimensional image. By converting a picture into a texture, OpenGL ES can map the texture onto a target object, for example, mapping the above-mentioned original texture and special effect texture into video frame data, thereby presenting an overlay display effect of the original picture and the special effect picture by superimposing the original texture and the special effect texture.
In step 106, the selected special effect texture in each video frame data, and the relative position relationship between the selected special effect texture and the original texture are determined, so that the OpenGL ES program superimposes the original texture and the special effect texture to draw corresponding video frame data.
In one embodiment, the selected special effect texture and the original texture can be superimposed based on the relative position relationship by defining the special effect texture selected by each video frame data and the relative position relationship between the special effect texture and the original texture; then, the video frame data are respectively configured and formed, so that the required target dynamic special effect is finally presented.
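The per-frame relative position described above can be made concrete with a small sketch. The class and parameter names below are assumptions for illustration, not part of the disclosure; the sketch linearly interpolates an overlay texture's offset across the frames of the effect:

```java
// Hypothetical sketch: compute an overlay texture's position for each video
// frame by linearly interpolating between a start and an end offset, so that
// successive frames present a moving overlay on the static background.
public class OverlayPath {
    // Returns {x, y} for frame index i out of n frames (i in [0, n-1]).
    public static float[] positionAt(int i, int n,
                                     float x0, float y0, float x1, float y1) {
        float t = (n <= 1) ? 0f : (float) i / (n - 1); // progress in [0, 1]
        return new float[] { x0 + t * (x1 - x0), y0 + t * (y1 - y0) };
    }
}
```

A renderer would call this once per frame and feed the result into the model-view matrix for that frame's overlay texture.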
In step 108, the video frame data obtained by drawing is displayed in a preview manner.
In an embodiment, the original picture and the special effect picture are synthesized into corresponding video frame data through OpenGL ES for display. Compared with synthesizing the original picture and the special effect picture into a superimposed picture using a Canvas, the video frame data is clearer and more coherent, the time needed to generate it is shorter, and a better visual presentation can be achieved.
In one embodiment, a system display handle (EGLDisplay), a rendering buffer (EGLSurface), and a graphics context (EGLContext) of the OpenGL ES program may be obtained separately; executing instructions of the OpenGL ES program based on the graphics context to draw a video frame; the video frame data obtained by drawing is output to the rendering buffer area; and then, performing preview display on the video frame data in the rendering buffer through the system display handle.
In an embodiment, the drawn video frame data may also be generated into a video file. For example, an off-screen buffer, the graphics context of the OpenGL ES program, and an encoder may be obtained respectively; instructions of the OpenGL ES program are executed based on the graphics context to draw video frame data; the drawn video frame data is output to the off-screen buffer; and the video frame data in the off-screen buffer is generated into a video file through the encoder. Compared with the related-art approach of synthesizing intermediate pictures into an animated GIF, this can greatly shorten the time consumed and reduce the file size for the same number of frames, or achieve a higher frame count for the same time or file size, thereby improving the clarity and fluency of the dynamic special effect. Meanwhile, the video file meets the format requirements of more social platforms, making it convenient for users to share it on those platforms.
Fig. 2 is a flowchart illustrating another picture processing method according to an exemplary embodiment, and as shown in fig. 2, the method applied to an electronic device may include the following steps:
in step 202, an original picture and at least one effect picture for forming a target dynamic effect are acquired.
In one embodiment, the original picture is used to form the background, and the special effect picture is used to be superimposed on the original picture to form the corresponding target dynamic special effect. According to actual requirements, one or more special effect pictures can be selected, which is not limited by the present disclosure.
In step 204, the original picture is converted into an original texture and the special effect picture is converted into a special effect texture by an OpenGL ES program.
In an embodiment, the texture may be presented as a two-dimensional image. By converting a picture into a texture, OpenGL ES can map the texture onto a target object, for example, mapping the original texture and the special effect texture into video frame data, so that an overlay display effect of the original picture and the special effect picture is presented by superimposing the original texture and the special effect texture.
In step 206, the selected special effect texture in each video frame data, and the relative position relationship between the selected special effect texture and the original texture are determined, so that the OpenGL ES program superimposes the original texture and the special effect texture to render corresponding video frame data.
In one embodiment, the selected special effect texture and the original texture can be superimposed based on the relative position relationship by defining the special effect texture selected by each video frame data and the relative position relationship between the special effect texture and the original texture; then, the video frame data are respectively configured and formed, so that the required target dynamic special effect is finally presented.
In step 208, the rendered video frame data is generated as a video file.
In an embodiment, an off-screen buffer, the graphics context of the OpenGL ES program, and an encoder may be obtained respectively; instructions of the OpenGL ES program are executed based on the graphics context to draw video frame data; the drawn video frame data is output to the off-screen buffer; and the video frame data in the off-screen buffer is generated into a video file through the encoder. Compared with the related-art approach of synthesizing intermediate pictures into an animated GIF, this can greatly shorten the time consumed and reduce the file size for the same number of frames, or achieve a higher frame count for the same time or file size, improving the clarity and fluency of the dynamic special effect. Meanwhile, the video file meets the format requirements of more social platforms, making it convenient for users to share it on those platforms.
In an embodiment, a preview display may also be performed on the drawn video frame data. For example, a system display handle (EGLDisplay), a rendering buffer (EGLSurface), and the graphics context (EGLContext) of the OpenGL ES program may be obtained respectively; instructions of the OpenGL ES program are executed based on the graphics context to draw a video frame; the drawn video frame data is output to the rendering buffer; and the video frame data in the rendering buffer is then previewed and displayed through the system display handle. Synthesizing the original picture and the special effect picture into corresponding video frame data through OpenGL ES for display, compared with synthesizing them into a superimposed picture using a Canvas, yields clearer and more coherent frames, takes less time to generate, and achieves a better visual presentation.
Fig. 3 is a flowchart illustrating a preview presentation of a dynamic special effect according to an exemplary embodiment, and as shown in fig. 3, the method applied to an electronic device may include the following steps:
in step 302, EGLDisplay is initialized and obtained.
In an embodiment, OpenGL may be considered an API for operating the GPU of an electronic device; it can send instructions to the GPU through a driver to control the running state of the graphics rendering pipeline state machine. Accordingly, EGL serves as the interface between OpenGL and the windowing system of the electronic device's operating system.
In one embodiment, the present disclosure may be implemented based on OpenGL ES and applied to embedded systems such as mobile phones, tablets, game consoles, and the like. OpenGL ES is a subset of OpenGL, or a trimmed-down version of OpenGL, which removes complex and unnecessary primitives such as quadrilaterals and polygons compared with the complete OpenGL. Of course, the present disclosure is not limited to employing OpenGL or OpenGL ES. In the related art, OpenGL ES includes multiple versions, such as OpenGL ES 1.0/1.1 and OpenGL ES 2.0; the disclosure is described in terms of OpenGL ES 2.0, which is distinguished from OpenGL ES 1.0/1.1 by its programmable pipeline, allowing developers to programmatically control vertex shaders (Vertex Shaders) and fragment shaders (Fragment Shaders).
In one embodiment, EGLDisplay is a generic data type provided by EGL for associating with the system's physical screen; it represents a display device handle or ID and can be understood as a display window on the system front end.
In step 304, the EGL is initialized.
In one embodiment, the retrieved EGLDisplay is used to initialize EGL.
In step 306, EGLConfig is selected.
In one embodiment, EGLConfig describes the configuration attributes of the rendering target frame buffer, i.e., configuration information for the EGLSurface described below. EGLConfig contains a number of attributes that determine the format and capabilities of the frame buffer; they can be read, for example, by eglGetConfigAttrib().
In one embodiment, EGL can return an EGLConfig close to the desired attributes; of course, an EGLConfig can also be selected manually according to the requirements.
In step 308, EGLSurface is obtained.
In one embodiment, the Surface being displayed by the SurfaceView of the system front-end display window may be obtained; the Surface is then initialized, and the window allocated by the operating system is obtained as the EGLSurface. EGLSurface is actually a frame buffer, which serves as the rendering destination for the OpenGL ES instructions in the EGLContext described below.
In step 310, EGLContext is initialized and obtained.
In one embodiment, the OpenGL ES pipeline can be understood as a state machine, including the current states of color, texture coordinates, transformation matrices, rendering mode, and so on. These states are applied to the primitives, such as vertex coordinates, submitted by the OpenGL API program, to form pixels in the EGLSurface frame buffer. In the OpenGL programming interface, EGLContext is this state machine, and the OpenGL API program provides primitives and sets states on the EGLContext. In other words, EGLContext is actually the execution environment of OpenGL ES instructions.
In step 312, EGLDisplay, EGLSurface and EGLContext are bound.
In an embodiment, by binding EGLDisplay, EGLSurface and EGLContext, EGLSurface is used as a rendering destination after an OpenGL ES instruction in EGLContext is executed, and EGLDisplay is used as a front-end display of EGLSurface, so that the present disclosure can implement preview display of video frames accordingly.
In step 314, the OpenGL ES program is obtained, binding the vertex shader and fragment shader.
In one embodiment, OpenGL ES 2.0 supports control of vertex shaders and fragment shaders. The vertex shader can implement vertex transformation, normal vector calculation, texture coordinate conversion, lighting and material application, and the like; the fragment shader can implement the texture environment and color sum, fog, the alpha test, and the like. The bound vertex shader and fragment shader are applied in the subsequent processing of video frames.
In one embodiment, the OpenGL ES program is based on the execution environment provided by EGLContext for executing instructions related to the OpenGL ES program, thereby implementing the following steps.
In step 316, the picture is converted to a texture.
In an embodiment, pictures are converted into corresponding textures; for example, the original picture is converted into the original texture and the special effect picture into the special effect texture, so that in subsequent steps a video frame containing the target special effect is generated by selecting and superimposing the original texture and the special effect texture, avoiding the generation of multiple intermediate pictures and thereby shortening the processing time.
In step 318, the viewport and projection matrix are set.
In an embodiment, the viewport (Viewport) is the destination where the final rendering result is displayed. In general, the viewport is a rectangular area measured in pixels, whose position and size are defined in the viewport coordinate system. The viewport coordinate system is a Cartesian rectangular coordinate system with its origin at the lower-left corner of the window, the horizontal axis (x) positive to the right, and the vertical axis (y) positive upward. The viewport is one of the state variables maintained inside OpenGL; it may remain unchanged while each frame is rendered, or change one or more times, depending on how it is set.
In an embodiment, by setting the projection matrix, in a subsequent processing step, geometric transformation operations such as translation, scaling, rotation and the like can be performed on the original texture or the special effect texture based on matrix change, so as to obtain a required display effect.
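As a hedged illustration of the projection matrix step, the sketch below builds the column-major orthographic matrix that android.opengl.Matrix.orthoM would produce for a near/far range of [-1, 1] (the helper class and method names are assumptions), and shows how it maps viewport coordinates into OpenGL's [-1, 1] clip space:

```java
// Sketch (not from the patent): a column-major 4x4 orthographic projection
// matrix mapping the rectangle [l, r] x [b, t] onto clip space [-1, 1]^2.
public class Ortho {
    public static float[] orthoM(float l, float r, float b, float t) {
        float[] m = new float[16];          // all other entries stay 0
        m[0]  = 2f / (r - l);               // scale x
        m[5]  = 2f / (t - b);               // scale y
        m[10] = -1f;                        // z for near = -1, far = 1
        m[12] = -(r + l) / (r - l);         // translate x
        m[13] = -(t + b) / (t - b);         // translate y
        m[15] = 1f;
        return m;
    }

    // Apply the matrix to the 2D point (x, y, 0, 1); returns {x', y'}.
    public static float[] apply(float[] m, float x, float y) {
        return new float[] { m[0] * x + m[12], m[5] * y + m[13] };
    }
}
```

With a 720 x 1280 viewport, the matrix sends the lower-left corner to (-1, -1), the center to (0, 0), and the upper-right corner to (1, 1), which is exactly the mapping a texture quad needs before rasterization.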
In step 320, the position of the desired overlay picture is updated.
In an embodiment, the original picture (actually corresponding original texture) is displayed as a background and the special effect picture (actually corresponding special effect texture) is displayed in an overlapping manner on the background, so that the original picture can be referred to as a background picture and the special effect picture can be referred to as an overlapping picture. The background picture fills the entire viewport, while the overlay picture occupies at least a portion of the area of the viewport. By adjusting and updating the position of the superposed picture, the position of the corresponding special effect texture can be determined subsequently, so that a video frame meeting the target special effect can be obtained.
In step 322, the values of the model view matrix for each overlay picture are calculated.
In an embodiment, the values of the model view matrix of each overlay picture are respectively determined according to the position updating operations performed on the overlay pictures in the above steps, such as translation, scaling, rotation, and the like performed on the overlay pictures.
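To make the model-view computation concrete, here is a minimal sketch (class and parameter names are assumed, not from the patent) that composes translation, rotation, and scale in the conventional T * R * S order and applies the result to a 2D point, which is what the model-view matrix of an overlay picture effectively does:

```java
// Sketch: apply a 2D model-view transform equivalent to the matrix product
// Translate(tx, ty) * Rotate(angleRad) * Scale(s) acting on point (px, py).
public class ModelView {
    public static double[] transform(double px, double py,
                                     double tx, double ty,
                                     double angleRad, double s) {
        double sx = px * s, sy = py * s;                         // scale first
        double rx = sx * Math.cos(angleRad) - sy * Math.sin(angleRad);
        double ry = sx * Math.sin(angleRad) + sy * Math.cos(angleRad);
        return new double[] { rx + tx, ry + ty };                // then translate
    }
}
```

Each overlay picture would get its own (tx, ty, angle, s) per frame, updated by the position step above.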
In step 324, OpenGL ES is configured.
In an embodiment, configuring OpenGL ES may include: specifying the RGBA values used when clearing the color buffer, turning on color blending, specifying the blending mode as (GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA) to enable transparency support, and the like, which is not limited by the present disclosure.
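For reference, the (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) mode corresponds to the classic "source over" formula C = Cs * As + Cd * (1 - As). Below is a minimal CPU-side sketch of that formula for a single RGBA pixel (this is illustrative arithmetic, not GPU code; the class name is an assumption):

```java
// Sketch: the GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blend equation computed
// on the CPU for one RGBA pixel, with all channel values in [0, 1].
public class Blend {
    public static float[] srcOver(float[] src, float[] dst) {
        float a = src[3];                       // source alpha (As)
        float[] out = new float[4];
        for (int i = 0; i < 4; i++) {
            out[i] = src[i] * a + dst[i] * (1f - a); // C = Cs*As + Cd*(1-As)
        }
        return out;
    }
}
```

This is why a half-transparent special effect texture drawn over the opaque background appears tinted rather than replacing the background outright.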
In step 326, the superimposed video frame is rendered.
In an embodiment, a drawing operation may be implemented by calling the OpenGL API. Through the drawing operation, the original texture and the corresponding special effect textures can be drawn in a superimposed manner, based on the overlay pictures contained in each video frame and the position information of each overlay picture as set in the preceding steps, thereby obtaining a video frame meeting the special effect requirement.
In an embodiment, color blending is turned off again after the superimposed video frame has been drawn.
In step 328, the rendered video frame is displayed.
In one embodiment, the front and back buffer addresses of the EGLSurface are exchanged by executing the eglSwapBuffers instruction, so that the video frame buffered in the EGLSurface is displayed for previewing.
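Conceptually, eglSwapBuffers exchanges the back buffer being drawn into with the front buffer being displayed. The toy sketch below models that pointer swap with two plain arrays standing in for the buffers (an assumption for illustration only, not the actual EGL implementation):

```java
// Toy model of double buffering: draw() fills the back buffer, and
// swapBuffers() exchanges the front/back references, like eglSwapBuffers.
public class DoubleBuffer {
    int[] front = new int[4];   // what the "display" currently shows
    int[] back  = new int[4];   // what the renderer draws into

    void draw(int color) { java.util.Arrays.fill(back, color); }

    void swapBuffers() { int[] t = front; front = back; back = t; }
}
```

Because only references are exchanged, the swap is cheap and the renderer can immediately begin drawing the next frame into the new back buffer.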
In an embodiment, by repeating the steps 320 to 328, each video frame can be sequentially and respectively drawn and displayed, so that a user can view a video formed by each video frame in the viewport, and the video can present a target dynamic special effect.
Fig. 4 is a flowchart illustrating a method for saving a dynamic special effect as a video file according to an exemplary embodiment, and the method applied to an electronic device, as shown in fig. 4, may include the following steps:
in step 402, EGLDisplay is initialized and obtained.
In step 404, the EGL is initialized.
In step 406, EGLConfig is selected.
In an embodiment, the steps 402 to 406 can refer to the steps 302 to 306, which are not described herein again.
In step 408, EGLContext is initialized and obtained.
In an embodiment, step 408 may refer to step 310 described above, and is not described herein again.
In step 410, an OpenGL ES program is obtained, binding a vertex shader and a fragment shader.
In step 412, the picture is converted to a texture.
In step 414, the viewport and projection matrix are set.
In step 416, the position of the desired overlay picture is updated.
In step 418, the values of the model view matrix for each overlay picture are calculated.
In an embodiment, the steps 410 to 418 can refer to the steps 314 to 322, which are not described herein again.
In step 420, MediaCodec is obtained.
In one embodiment, a usable encoder, the MediaCodec described above, is initialized and obtained according to a preset video coding standard. For example, the encoder can be created by calling MediaCodec.createEncoderByType.
In step 422, MediaFormat is obtained and set.
In one embodiment, the encoder format, i.e. the MediaFormat described above, is initialized and obtained according to a preset video coding technology standard and the frame width and height. The following attributes of the MediaFormat are then set: color format, target bit rate, target frame rate, key frame interval, and the like, which are not limited by this disclosure.
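The frame rate and key-frame interval configured here determine each encoded frame's presentation timestamp and which frames start a new group of pictures. A small sketch of that bookkeeping follows; the 30 fps rate and 1-second key-frame interval are assumed values, not taken from this disclosure:

```java
// Sketch: timing bookkeeping implied by the MediaFormat frame-rate and
// key-frame-interval attributes. Values in main are assumptions.
public class EncodeTiming {
    // Presentation timestamp, in microseconds, of frame n at the given fps.
    public static long presentationTimeUs(int frameIndex, int fps) {
        return frameIndex * 1_000_000L / fps;
    }

    // Whether frame n starts a new group of pictures, given the
    // key-frame interval in seconds.
    public static boolean isKeyFrame(int frameIndex, int fps, int iFrameIntervalSec) {
        return frameIndex % (fps * iFrameIntervalSec) == 0;
    }

    public static void main(String[] args) {
        int fps = 30, iInterval = 1; // assumed encoder settings
        System.out.println("frame 30 pts(us) = " + presentationTimeUs(30, fps));
        System.out.println("frame 30 key frame? " + isKeyFrame(30, fps, iInterval));
    }
}
```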
In step 424, MediaFormat is configured for MediaCodec.
In step 426, the input source is obtained via MediaCodec.
In one embodiment, the input source of MediaCodec is a surface, such as the surface of the SurfaceView currently being displayed in the system's front-end display window.
In step 428, MediaCodec is started.
In step 430, EGLSurface is obtained via the input source.
In one embodiment, based on the input source surface obtained in step 426, the surface may be initialized and an off-screen buffer allocated by EGL, i.e. a pbuffer-type EGLSurface, may be obtained.
In step 432, MediaMuxer is initialized and obtained.
In one embodiment, the MediaMuxer (mixer or compositor) is built on the electronic device's own media library and performs the compositing of the video, such as muxing audio data and video data. Constructing the MediaMuxer object requires a video output path and an output format; once the MediaFormat of the audio data and the video data is determined, a corresponding track can be added to the MediaMuxer by calling addTrack, thereby implementing the compositing operation described above.
In step 434, the data to be processed in MediaCodec is output to MediaMuxer.
In step 436, EGLDisplay, EGLSurface, and EGLContext are bound.
In an embodiment, the EGLSurface is configured as the rendering destination of OpenGL ES instructions executed under the EGLContext, and the rendered data becomes the data to be processed by MediaCodec.
In step 438, the viewport is set, and the scissor test is enabled and the scissor region is configured.
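The scissor test enabled in step 438 discards every fragment that falls outside an axis-aligned rectangle. Its effect is equivalent to the predicate below; the 640x480 rectangle used in `main` is an assumed value:

```java
// Sketch: the fixed-function scissor test as a predicate. A fragment at
// (px, py) survives only if it lies inside the rectangle that would be
// set by glScissor(x, y, w, h). Rectangle values are assumptions.
public class ScissorTest {
    public static boolean passes(int px, int py, int x, int y, int w, int h) {
        return px >= x && px < x + w && py >= y && py < y + h;
    }

    public static void main(String[] args) {
        // Assume a 640x480 scissor box anchored at the origin.
        System.out.println(passes(100, 100, 0, 0, 640, 480)); // inside
        System.out.println(passes(640, 100, 0, 0, 640, 480)); // just outside
    }
}
```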
In step 440, OpenGL ES is configured.
In an embodiment, step 440 may refer to step 324 described above, and is not described herein again.
In step 442, the superimposed video frame is rendered.
In an embodiment, step 442 may refer to step 326 described above, and is not described herein again.
In an embodiment, blending and the scissor test are also disabled once the superimposed video frame has been drawn.
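Superimposing the special effect texture over the original texture relies on the blending that was enabled before drawing. With the common glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) configuration (an assumption — this disclosure does not name the blend factors), each color channel is combined as sketched below:

```java
// Sketch: source-over alpha blending of one color channel, as the
// GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blend function computes it.
// Channel values are normalized to [0, 1].
public class AlphaBlend {
    // result = src * srcAlpha + dst * (1 - srcAlpha)
    public static float blendChannel(float src, float dst, float srcAlpha) {
        return src * srcAlpha + dst * (1f - srcAlpha);
    }

    public static void main(String[] args) {
        // A half-transparent white effect pixel over a black original pixel.
        float out = blendChannel(1f, 0f, 0.5f);
        System.out.println("blended channel = " + out); // prints 0.5
    }
}
```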
In step 444, the video frame is saved as a video file.
In an embodiment, the generated video file can also be previewed. For example, the eglSwapBuffers instruction may be executed to swap the buffer addresses of the EGLDisplay and the EGLSurface; the viewport is initialized according to the width and height of the EGLSurface, and the EGLDisplay, EGLSurface and EGLContext are bound, so that OpenGL ES instructions executed under the EGLContext take the EGLSurface as the rendering destination and the EGLDisplay presents the EGLSurface at the front end, thereby previewing the video frames buffered in the EGLSurface.
Corresponding to the embodiment of the image processing method, the disclosure also provides an embodiment of an image processing device.
Fig. 5 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment. Referring to fig. 5, the apparatus includes:
an acquisition unit 51 that acquires an original picture and at least one special effect picture for forming a target dynamic special effect;
a conversion unit 52, which converts the original image into an original texture and converts the special effect image into a special effect texture respectively through an OpenGL ES program;
a determining unit 53, configured to determine, for each piece of video frame data, the selected special effect texture and the relative positional relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into the corresponding video frame data;
the presentation unit 54 performs preview presentation of the drawn video frame data.
Optionally, the display unit 54 is specifically configured to:
respectively acquiring a system display handle, a rendering buffer area and the graphics context of the OpenGL ES program;
executing instructions of the OpenGL ES program based on the graphics context to draw a video frame; the video frame data obtained by drawing is output to the rendering buffer area;
and previewing and displaying the video frame data in the rendering buffer area through the system display handle.
As shown in fig. 6, fig. 6 is a block diagram of another image processing apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 5, and further includes:
the generating unit 55 generates the drawn video frame data as a video file.
Optionally, the generating unit 55 is specifically configured to:
respectively acquiring an off-screen buffer area, a graphics context of the OpenGL ES program and an encoder;
executing instructions of the OpenGL ES program based on the graphics context to render video frame data; the video frame data obtained by drawing is output to the off-screen buffer area;
and generating the video frame data in the off-screen buffer into a video file through the encoder.
Fig. 7 is a block diagram illustrating a picture processing apparatus according to an example embodiment. Referring to fig. 7, the apparatus includes:
an acquisition unit 71 that acquires an original picture and at least one special effect picture for forming a target dynamic special effect;
a conversion unit 72, which converts the original image into an original texture and converts the special effect image into a special effect texture respectively through an OpenGL ES program;
a determining unit 73, configured to determine, for each piece of video frame data, the selected special effect texture and the relative positional relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture to obtain the corresponding video frame data;
the generating unit 74 generates the drawn video frame data as a video file.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Correspondingly, the present disclosure further provides a picture processing apparatus, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to implement the picture processing method of any one of the above embodiments. For example, the method may include: acquiring an original picture and at least one special effect picture for forming a target dynamic special effect; converting, by an OpenGL ES program, the original picture into an original texture and the special effect picture into a special effect texture, respectively; determining, for each piece of video frame data, a selected special effect texture and the relative positional relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into the corresponding video frame data; and previewing and displaying the drawn video frame data.
Accordingly, the present disclosure also provides a terminal including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for implementing the picture processing method of any one of the above embodiments. For example, the method may include: acquiring an original picture and at least one special effect picture for forming a target dynamic special effect; converting, by an OpenGL ES program, the original picture into an original texture and the special effect picture into a special effect texture, respectively; determining, for each piece of video frame data, a selected special effect texture and the relative positional relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into the corresponding video frame data; and previewing and displaying the drawn video frame data.
Correspondingly, the present disclosure further provides a picture processing apparatus, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to implement the picture processing method of any one of the above embodiments. For example, the method may include: acquiring an original picture and at least one special effect picture for forming a target dynamic special effect; converting, by an OpenGL ES program, the original picture into an original texture and the special effect picture into a special effect texture, respectively; determining, for each piece of video frame data, a selected special effect texture and the relative positional relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into the corresponding video frame data; and generating a video file from the drawn video frame data.
Accordingly, the present disclosure also provides a terminal including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for implementing the picture processing method of any one of the above embodiments. For example, the method may include: acquiring an original picture and at least one special effect picture for forming a target dynamic special effect; converting, by an OpenGL ES program, the original picture into an original texture and the special effect picture into a special effect texture, respectively; determining, for each piece of video frame data, a selected special effect texture and the relative positional relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into the corresponding video frame data; and generating a video file from the drawn video frame data.
Fig. 8 is a block diagram illustrating an apparatus 800 for picture processing according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G LTE, 5G NR (New Radio), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. An image processing method, comprising:
acquiring an original picture and at least one special effect picture for forming a target dynamic special effect;
respectively converting the original picture into an original texture and converting the special effect picture into a special effect texture by an OpenGL ES program;
determining, for each piece of video frame data, a selected special effect texture and a relative positional relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into corresponding video frame data;
and previewing and displaying the drawn video frame data.
2. The method of claim 1, wherein the performing the preview presentation of the rendered video frame data comprises:
respectively acquiring a system display handle, a rendering buffer area and the graphics context of the OpenGL ES program;
executing instructions of the OpenGL ES program based on the graphics context to draw a video frame; the video frame data obtained by drawing is output to the rendering buffer area;
and previewing and displaying the video frame data in the rendering buffer area through the system display handle.
3. The method of claim 1, further comprising:
and generating the drawn video frame data into a video file.
4. The method of claim 3, wherein generating the rendered video frame data into a video file comprises:
respectively acquiring an off-screen buffer area, a graphics context of the OpenGL ES program and an encoder;
executing instructions of the OpenGL ES program based on the graphics context to render video frame data; the video frame data obtained by drawing is output to the off-screen buffer area;
and generating the video frame data in the off-screen buffer into a video file through the encoder.
5. An image processing method, comprising:
acquiring an original picture and at least one special effect picture for forming a target dynamic special effect;
respectively converting the original picture into an original texture and converting the special effect picture into a special effect texture by an OpenGL ES program;
determining, for each piece of video frame data, a selected special effect texture and a relative positional relationship between the selected special effect texture and the original texture, so that the OpenGL ES program superimposes and draws the original texture and the special effect texture into corresponding video frame data;
and generating the drawn video frame data into a video file.
6. A picture processing apparatus, comprising:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring an original picture and at least one special effect picture used for forming a target dynamic special effect;
the conversion unit is used for respectively converting the original image into an original texture and converting the special effect image into a special effect texture through an OpenGL ES program;
the determining unit is used for determining the selected special effect texture in each video frame data and the relative position relationship between the selected special effect texture and the original texture, so that the OpenGL ES program can superpose and draw the original texture and the special effect texture into corresponding video frame data;
and the display unit is used for previewing and displaying the drawn video frame data.
7. The device of claim 6, wherein the presentation unit is specifically configured to:
respectively acquiring a system display handle, a rendering buffer area and the graphics context of the OpenGL ES program;
executing instructions of the OpenGL ES program based on the graphics context to draw a video frame; the video frame data obtained by drawing is output to the rendering buffer area;
and previewing and displaying the video frame data in the rendering buffer area through the system display handle.
8. The apparatus of claim 6, further comprising:
and a generation unit which generates the drawn video frame data into a video file.
9. The apparatus according to claim 8, wherein the generating unit is specifically configured to:
respectively acquiring an off-screen buffer area, a graphics context of the OpenGL ES program and an encoder;
executing instructions of the OpenGL ES program based on the graphics context to render video frame data; the video frame data obtained by drawing is output to the off-screen buffer area;
and generating the video frame data in the off-screen buffer into a video file through the encoder.
10. A picture processing apparatus, comprising:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring an original picture and at least one special effect picture used for forming a target dynamic special effect;
the conversion unit is used for respectively converting the original image into an original texture and converting the special effect image into a special effect texture through an OpenGL ES program;
the determining unit is used for determining the selected special effect texture in each video frame data and the relative position relationship between the selected special effect texture and the original texture, so that the OpenGL ES program can superpose and draw the original texture and the special effect texture into corresponding video frame data;
and a generation unit which generates the drawn video frame data into a video file.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1-5.
12. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, perform the steps of the method according to any one of claims 1-5.
Application CN201910237542.3A, filed 2019-03-27 — Picture processing method and device, electronic equipment and computer readable storage medium (status: pending)

Publication of CN111754607A: 2020-10-09
