CN113542620A - Special effect processing method and device and electronic equipment - Google Patents

Special effect processing method and device and electronic equipment

Info

Publication number
CN113542620A
CN113542620A
Authority
CN
China
Prior art keywords
target object
scene
texture
service end
special effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110761080.2A
Other languages
Chinese (zh)
Other versions
CN113542620B (en)
Inventor
谢高喜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110761080.2A priority Critical patent/CN113542620B/en
Publication of CN113542620A publication Critical patent/CN113542620A/en
Application granted granted Critical
Publication of CN113542620B publication Critical patent/CN113542620B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/74Circuits for processing colour signals for obtaining special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a special effect processing method and apparatus and an electronic device, relating to artificial intelligence technologies such as augmented reality, computer vision, and deep learning. The specific implementation scheme is as follows: initialize an augmented reality (AR) scene in response to an initialization instruction sent by the service end; import a target object to be special-effect processed into the AR scene; determine the input texture set by the service end as the source texture of the target object; perform special effect processing on the source texture of the target object in the AR scene according to the special effect set by the service end, obtaining the output texture of the target object; and return the output texture to the service end. By converting the AR resources to be processed by the service end into the texture data format for processing, the service end can perform secondary processing on the processed AR resources, and the scheme can adapt to the video call schemes of more cloud vendors.

Description

Special effect processing method and device and electronic equipment
Technical Field
The present disclosure relates to artificial intelligence technologies such as augmented reality, computer vision, and deep learning, and in particular to a special effect processing method and apparatus and an electronic device.
Background
AR (Augmented Reality) is a technology that calculates the position and angle of the camera image in real time and adds corresponding images, videos, and 3D models; its purpose is to superimpose the virtual world on the real world on screen and enable interaction. AR resources can be given special effect processing according to the requirements of business parties, for example face effects, beautification effects, human body effects, and environment effects. AR special effects are widely used in video shooting scenarios and provide a very convenient creation tool for video shooting users.
Disclosure of Invention
The disclosure provides a special effect processing method and device and electronic equipment.
An embodiment of a first aspect of the present disclosure provides a special effect processing method, including:
initializing an Augmented Reality (AR) scene in response to an initialization instruction sent by a service end;
importing a target object to be special-effect processed into the AR scene;
determining the input texture set by a service end as the source texture of the target object;
according to the special effect set by the service end, carrying out special effect processing on the source texture of the target object in the AR scene to obtain the output texture of the target object;
and returning the output texture to the service end.
To achieve the above object, an embodiment of a second aspect of the present disclosure provides a special effect processing apparatus, including:
the initialization module is used for responding to an initialization instruction sent by the service end and initializing an Augmented Reality (AR) scene;
the import module is used for importing a target object to be special-effect processed into the AR scene;
the input module is used for determining the input texture set by the service end as the source texture of the target object;
the output module is used for carrying out special effect processing on the source texture of the target object in the AR scene according to a special effect set by a service end to obtain an output texture of the target object;
and the feedback module is used for returning the output texture to the service end.
To achieve the above object, an embodiment of a third aspect of the present disclosure provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
To achieve the above object, a fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method according to the first aspect.
To achieve the above object, an embodiment of a fifth aspect of the present disclosure provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flowchart of a special effect processing method according to an embodiment of the present disclosure;
FIG. 2 is a block diagram illustrating a technical implementation of a special effect SDK post-processing scheme according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another special effect processing method provided in the embodiment of the present disclosure;
fig. 4 is a schematic flow chart of a face special effect processing method according to an embodiment of the present disclosure;
FIG. 5 is a flow chart of the conversion and processing of the size and format of a face picture in a video memory and a memory;
fig. 6 is a schematic structural diagram of a special effect processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic block diagram of an example electronic device 700 provided by embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The following describes a special effect processing method, an apparatus, and an electronic device according to an embodiment of the present disclosure with reference to the drawings.
Fig. 1 is a schematic flow chart of a special effect processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the special effect processing method may include the steps of:
step 101, initializing an Augmented Reality (AR) scene in response to an initialization instruction sent by a service end.
Here, initializing an AR scene means setting the AR rendering module and the context in the OpenGL system to their initial states, where the context is created in a certain thread of the rendering module. OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics, composed of nearly 350 function calls for drawing everything from simple graphics primitives to complex three-dimensional scenes.
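As a minimal sketch of this initialization step, the following C++ code creates an off-screen EGL context of the kind such a rendering module might hold; the patent does not disclose the SDK's actual initialization code, so the structure and names here (e.g. ArRenderContext, InitArScene) are assumptions.

    #include <EGL/egl.h>
    #include <stdexcept>

    // Hypothetical initial state for the AR rendering module: an EGL display,
    // context, and surface bound in the rendering thread (an assumption; the
    // patent does not disclose the SDK's actual structures).
    struct ArRenderContext {
        EGLDisplay display = EGL_NO_DISPLAY;
        EGLContext context = EGL_NO_CONTEXT;
        EGLSurface surface = EGL_NO_SURFACE;
    };

    ArRenderContext InitArScene() {
        ArRenderContext ctx;
        ctx.display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        EGLint major = 0, minor = 0;
        if (eglInitialize(ctx.display, &major, &minor) != EGL_TRUE)
            throw std::runtime_error("eglInitialize failed");

        const EGLint configAttribs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8,
            EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8,
            EGL_NONE};
        EGLConfig config;
        EGLint numConfigs = 0;
        eglChooseConfig(ctx.display, configAttribs, &config, 1, &numConfigs);

        const EGLint ctxAttribs[] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
        ctx.context = eglCreateContext(ctx.display, config, EGL_NO_CONTEXT,
                                       ctxAttribs);

        // Off-screen surface: special effect processing here renders to
        // textures, not directly to a window.
        const EGLint pbufAttribs[] = {EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE};
        ctx.surface = eglCreatePbufferSurface(ctx.display, config, pbufAttribs);

        // Bind the context to the current (rendering) thread.
        eglMakeCurrent(ctx.display, ctx.surface, ctx.surface, ctx.context);
        return ctx;
    }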
Step 102, importing a target object to be special-effect processed into the AR scene.
Optionally, as a possible implementation, the identifier of the shared context sent by the service end may be received, and the shared context may be imported into the foreground of the augmented reality AR scene according to that identifier, so as to obtain the target object serving as the foreground in the augmented reality AR scene.
In this embodiment, the shared context may be obtained by calling the eglCreateContext function and passing in a context (EGLContext) already established in a thread of the OpenGL rendering component. The EGLContext carries the size, structure, texture, and so on of the source video resource, and the OpenGL system assigns the corresponding video resource a unique resource ID (resource number), namely the identifier. When processing video resources, the service end only needs to provide the resource ID to specify the shared context.
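As an illustration of this sharing mechanism, the sketch below creates a context that shares objects with the service end's established context via eglCreateContext's third parameter; the resource-ID registry is a hypothetical stand-in for the SDK's bookkeeping, not a disclosed API.

    #include <EGL/egl.h>
    #include <GLES2/gl2.h>
    #include <unordered_map>

    // Create a context that shares texture objects with the service end's
    // established EGLContext: texture IDs created in either context are then
    // visible to both. This is what eglCreateContext's third parameter
    // (share_context) provides.
    EGLContext CreateSharedContext(EGLDisplay display, EGLConfig config,
                                   EGLContext serviceEndContext) {
        const EGLint attribs[] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
        return eglCreateContext(display, config, serviceEndContext, attribs);
    }

    // Hypothetical registry mapping the service end's resource IDs to the
    // shared textures they denote (an assumption for illustration only).
    std::unordered_map<int, GLuint> g_resourceTextures;

    // The service end specifies a video resource purely by its resource ID.
    GLuint LookupSourceTexture(int resourceId) {
        return g_resourceTextures.at(resourceId);
    }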
Importing the shared context into the foreground of the augmented reality AR scene yields a virtual image corresponding to the video resource; that is, the target object of the foreground can be presented on the screen and observed with the naked eye.
It can be understood that if the quality of the video resource provided by the service end is poor, frames may be lost. In view of this, in a possible implementation of the embodiment of the present disclosure, frame interpolation may be performed on the video resource before it is processed, as follows: acquire the resource video from the service end; perform frame interpolation on the resource video when its frame rate is lower than a set threshold; and import the processed resource video into the background of the augmented reality AR scene to obtain the target object serving as the background in the AR scene. In this way, when the frame rate of the video resource is low, frame interpolation improves the quality of the video resource.
Here, the set threshold corresponds to the frame rate of a video resource without frame loss, and its concrete value should be determined according to the specific application.
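As a minimal sketch of the frame interpolation step, assuming CPU-side RGBA frames and simple linear blending between neighboring frames (the patent does not specify which interpolation algorithm the SDK uses):

    #include <cstdint>
    #include <vector>

    struct Frame {
        int width = 0, height = 0;
        std::vector<uint8_t> rgba;  // width * height * 4 bytes
    };

    // Double the frame rate by inserting, between each pair of neighboring
    // frames, a frame that linearly blends the two. Real interpolators use
    // motion estimation; averaging is the simplest stand-in.
    std::vector<Frame> InterpolateFrames(const std::vector<Frame>& in) {
        std::vector<Frame> out;
        for (size_t i = 0; i + 1 < in.size(); ++i) {
            out.push_back(in[i]);
            Frame mid = in[i];
            for (size_t p = 0; p < mid.rgba.size(); ++p)
                mid.rgba[p] = static_cast<uint8_t>(
                    (in[i].rgba[p] + in[i + 1].rgba[p]) / 2);
            out.push_back(mid);
        }
        if (!in.empty()) out.push_back(in.back());
        return out;
    }

    // Usage: interpolate only when the frame rate falls below the threshold.
    std::vector<Frame> MaybeInterpolate(const std::vector<Frame>& video,
                                        double frameRate, double threshold) {
        return frameRate < threshold ? InterpolateFrames(video) : video;
    }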
Optionally, as a possible implementation, after the resource video is acquired from the service end, at least one of the size, the playing direction, and the format of the resource video sent by the service end may be adjusted according to a set target size, target playing direction, and/or target format, so as to meet actual requirements.
Here, the size, playing direction, and format of the resource video provided by the service end are adjusted because the specification of the provided video may not meet the requirements of the special effect SDK processing module; the specification parameters of the resource video to be processed therefore need to be adjusted to a standard format to fit the special effect rendering module.
And 103, determining the input texture set by the service end as the source texture of the target object.
In this embodiment, the source texture is the texture data included in the shared context corresponding to the target object, and this texture data is used as the input texture of the rendering module. Here, the input texture may be an initial texture selected by the business party or a special effect texture generated by the previous round of special effect processing.
And 104, performing special effect processing on the source texture of the target object in the AR scene according to the special effect set by the service end to obtain the output texture of the target object.
Optionally, as a possible implementation manner, a texture rendering function corresponding to the set special effect may be called to perform texture rendering on the source texture of the target object in the augmented reality AR scene, so as to obtain an output texture of the target object.
In this embodiment, the special effects are preset in the rendering module and included in a special effect SDK (Software Development Kit); when rendering, the corresponding special effect only needs to be selected from the special effect SDK according to the requirement of the service end. The special effect SDK also includes special effect rendering functions, and special effect texture data is generated by performing special effect rendering on the source texture through these functions. It should be noted that the special effect texture data is output in the form of a texture ID as the output texture. Outputting the texture in the form of a texture ID has at least the following advantages: (1) it is convenient for the business party to perform secondary processing, such as beautification, according to the texture ID; (2) the scheme can adapt to the video call schemes of more cloud vendors, facilitating stream pushing and pulling operations on the AR special effect result.
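A minimal sketch of such a texture-in/texture-out rendering step, assuming OpenGL ES 2.0 and an off-screen framebuffer object; the effect shader itself is abstracted behind a hypothetical DrawEffectQuad helper, and the function names are assumptions rather than the SDK's actual rendering functions.

    #include <GLES2/gl2.h>

    // Hypothetical helper that draws a full-viewport quad sampling
    // `sourceTexture` through the effect shader program (assumed).
    void DrawEffectQuad(GLuint effectProgram, GLuint sourceTexture);

    // Render the source texture through an effect shader into a new texture
    // and return only that texture's ID, which is what the service end
    // ultimately receives.
    GLuint RenderEffectToTexture(GLuint sourceTexture, GLuint effectProgram,
                                 int width, int height) {
        // Allocate the output texture.
        GLuint outputTexture = 0;
        glGenTextures(1, &outputTexture);
        glBindTexture(GL_TEXTURE_2D, outputTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Attach it to a framebuffer object so the effect renders off-screen.
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, outputTexture, 0);

        glViewport(0, 0, width, height);
        DrawEffectQuad(effectProgram, sourceTexture);

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        return outputTexture;
    }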
And 105, returning the output texture to the service end.
Optionally, as a possible implementation, after the output texture is obtained, the target object in the augmented reality AR scene may be drawn according to the output texture to obtain special effect image information of the augmented reality AR scene, and the special effect image information is returned to the service end.
Optionally, as a possible implementation, when the target object is not in the locked state, a first thread in the thread pool may be scheduled to execute the step of processing the source texture of the target object in the augmented reality AR scene, and the target object is locked until the first thread finishes executing.
Optionally, as another possible implementation, when the target object is not in the locked state, a second thread in the thread pool may be scheduled to execute the step of drawing the target object in the augmented reality AR scene, and the target object is locked until the second thread finishes executing.
Here, when the texture ID is used in the OpenGL system, processing may be performed independently in different threads in order to prevent program crashes, and each thread needs to work under a lock for protection and isolation.
In this embodiment, after the output texture is obtained, it may be processed further in order to obtain the AR video after special effect rendering. Two threads are involved: the first thread loads the output texture and the second thread draws; the output of the drawing is the special effect AR video corresponding to the target object.
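A minimal sketch of this per-object locking scheme, using std::mutex and std::async as stand-ins for the SDK's thread pool (the patent does not name the synchronization primitives):

    #include <functional>
    #include <future>
    #include <mutex>

    // Each target object carries its own lock; a thread holds the lock for
    // the whole duration of its step, so the texture-processing step and the
    // drawing step never touch the same object concurrently.
    struct TargetObject {
        std::mutex lock;
        unsigned sourceTexture = 0;
        unsigned outputTexture = 0;
    };

    // Schedule a step on the object only if it is not currently locked;
    // std::async stands in for dispatching to the thread pool.
    std::future<bool> ScheduleStep(TargetObject& obj,
                                   std::function<void(TargetObject&)> step) {
        return std::async(std::launch::async, [&obj, step] {
            std::unique_lock<std::mutex> guard(obj.lock, std::try_to_lock);
            if (!guard.owns_lock()) return false;  // object busy: skip
            step(obj);               // e.g. process texture, or draw
            return true;             // lock released when guard goes out of scope
        });
    }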
As an example, fig. 2 is a block diagram of a technical implementation of a special effect SDK post-processing scheme according to an embodiment of the present disclosure. As shown in fig. 2, the special effect SDK post-processing module may include an interface layer, a Controller (Controller) of the AR development and debugging platform, a renderer (Render), and an AR core rendering component. The interface layer is used for interacting with the service side post-processing module, the controller is used for executing the method provided by the embodiment, and the renderer is used for calling the AR core rendering component to perform rendering operation under the control of the controller.
In the interaction process shown in fig. 2, after the service side post-processing module sends an instruction to initialize the environment of the special effect SDK post-processing module, the shared context and the input texture are provided to the special effect SDK post-processing module, the controller in the special effect SDK post-processing module introduces the shared context into the initialized augmented reality AR scene to obtain the target object, and controls the renderer to call the AR core rendering component to render the set special effect based on the input texture of the target object to obtain the output texture, and returns the output texture to the service side post-processing module. And a View (View) component of the service side post-processing module performs subsequent processing according to the output texture.
In summary, when rendering the target object, the embodiment of the present disclosure performs input and output in the form of texture data and converts the target object into a shared context that can be called repeatedly, making the processing of AR video resources more flexible and general. At the initial stage of processing the AR video, frame interpolation alleviates the frame loss the video may suffer. In addition, since the output texture takes the form of a texture ID, the business party can conveniently perform secondary processing on the result of the special effect SDK, the scheme can be adapted to the video call schemes of more cloud vendors, and push-pull streaming operations on the AR special effect result are facilitated.
The special effect processing method of the embodiment of the present disclosure initializes an augmented reality AR scene in response to an initialization instruction sent by the service end; imports a target object to be special-effect processed into the AR scene; determines the input texture set by the service end as the source texture of the target object; performs special effect processing on the source texture of the target object in the AR scene according to the special effect set by the service end to obtain the output texture of the target object; and returns the output texture to the service end. Converting the AR resources to be processed by the service end into the texture data format for processing enables the service end to perform secondary processing on the processed AR resources, adapts to the video call schemes of more cloud vendors, and alleviates the frame loss the AR video resources to be processed may suffer.
Based on the previous embodiment, this embodiment provides a possible implementation manner of another special effect processing method, and fig. 3 is a schematic flow diagram of another special effect processing method provided in the embodiment of the present disclosure.
As shown in fig. 3, the special effect processing method may include the steps of:
step 301, initializing an augmented reality AR scene in response to an initialization instruction sent by a service end.
The step 101 in the above embodiment can be referred to as an embodiment, and the principle is the same, which is not described herein again.
Step 302, acquiring a resource video from a service end, and performing frame insertion processing on the resource video under the condition that the frame rate of the resource video is lower than a set threshold value.
In this embodiment, considering that the video resource may suffer frame loss, the video resource provided by the business party may be frame-interpolated, using a corresponding function in the SDK, before being processed, in order to improve the output video quality. For example, when the frame rate of the video resource is lower than the set standard frame rate, the frame rate of the video resource may be restored to the standard frame rate, and the restored video resource is then processed further.
Step 303, importing the processed resource video into the background of the AR scene to obtain the target object serving as the background in the AR scene.
In this embodiment, the target object of the background does not include context information; it is only a model that can carry video resource information.
And 304, receiving the identifier of the shared context sent by the service end, and importing the shared context into the foreground of the AR scene according to the identifier of the shared context to obtain a target object serving as the foreground in the AR scene.
In this embodiment, the shared context sent by the service end is received and imported into the model of the previous step 303 according to the shared-context identifier containing the video resource information, thereby obtaining the target object serving as the foreground in the augmented reality AR scene. The target object of the foreground can be presented on the screen as a virtual image observable with the naked eye. The manner of obtaining the shared context was given in step 102 of the previous embodiment; the principle is the same and is not repeated here.
And 305, calling a texture rendering function corresponding to the set special effect, and performing texture rendering on the source texture of the target object in the AR scene to obtain the output texture of the target object.
Step 306, returning the output texture to the service end.
For the specific implementation of steps 305 and 306, refer to steps 104 and 105 in the above embodiment; the principle is the same and is not repeated here.
The special effect processing method of the embodiment of the present disclosure initializes an augmented reality AR scene in response to an initialization instruction sent by the service end; acquires the resource video from the service end and performs frame interpolation on it when its frame rate is lower than the set threshold; imports the processed resource video into the background of the AR scene to obtain the target object serving as the background in the AR scene; receives the shared-context identifier sent by the service end and imports the shared context into the foreground of the AR scene according to that identifier to obtain the target object serving as the foreground in the AR scene; calls the texture rendering function corresponding to the set special effect to perform texture rendering on the source texture of the target object in the AR scene, obtaining the output texture of the target object; and returns the output texture to the service end. Converting the AR resources to be processed by the service end into the texture data format for processing enables the service end to perform secondary processing on the processed AR resources, adapts to the video call schemes of more cloud vendors, and alleviates the frame loss the AR video resources to be processed may suffer.
As an example, a human face special effect is taken for illustration; fig. 4 is a schematic flow chart of the face special effect processing method provided in an embodiment of the present disclosure.
In this embodiment, a face image captured by the camera is first converted into camera texture data, which is stored in a texture cache module and then loaded into a processing algorithm module on the Graphics Processing Unit (GPU), where it undergoes size and format conversion. The functions executed by the components of this module may be as shown in Table 1.
TABLE 1 Algorithm components contained in the processing algorithm module
Algorithm component                        Data size   Data format
Image tracking component                   640*360     YUV
SLAM component                             1280*720    YUV
Face recognition tracking component        320*180     BGR
Human body recognition tracking component  320*180     BGR
Background segmentation component          192*192     BGR
Further, the processing result of the processing algorithm module can be read back to the Central Processing Unit (CPU) and provided to the face algorithm calculation module; the result of the face algorithm calculation module is then fed back to the background texture rendering module, the resulting rendering texture data is imported into the face model and fused with the AR special effect for output, and the business side is responsible for uploading it to the screen.
As an example, the size and format of the face picture may be converted between the video memory and the main memory as shown in fig. 5: the video memory (GPU side) stores the rendering data to be extracted, and the main memory temporarily stores the data the CPU operates on. On the video-memory side, the face picture with a resolution of 1280*720 in the RGBA color space needs to be scaled down to the 320*180 resolution that the face recognition tracking component can process, with the data format converted to BGR, and the converted result is then read back to main memory. Converting the size and format of the image in this way effectively reduces the time consumption of the algorithm and the overall power consumption.
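A minimal sketch of this downscale-and-read-back step, assuming OpenGL ES, which can only read back RGBA with glReadPixels (GL_BGR is unavailable there), so the BGR conversion happens on the CPU; the full-screen blit helper is hypothetical:

    #include <GLES2/gl2.h>
    #include <cstdint>
    #include <vector>

    // Hypothetical helper that draws `srcTexture` as a full-viewport quad,
    // standing in for the SDK's GPU scaling pass (assumed).
    void DrawFullscreenQuad(GLuint srcTexture);

    // Downscale a 1280*720 RGBA texture to 320*180 on the GPU, read it back,
    // and convert RGBA -> BGR on the CPU for the face recognition tracking
    // component.
    std::vector<uint8_t> DownscaleToBgr(GLuint srcTexture) {
        const int dstW = 320, dstH = 180;

        GLuint dstTexture = 0, fbo = 0;
        glGenTextures(1, &dstTexture);
        glBindTexture(GL_TEXTURE_2D, dstTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, dstW, dstH, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, dstTexture, 0);
        glViewport(0, 0, dstW, dstH);
        DrawFullscreenQuad(srcTexture);  // GPU-side scaling

        // Read back to main memory, then drop alpha and swap R/B.
        std::vector<uint8_t> rgba(dstW * dstH * 4);
        glReadPixels(0, 0, dstW, dstH, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());

        std::vector<uint8_t> bgr(dstW * dstH * 3);
        for (int i = 0; i < dstW * dstH; ++i) {
            bgr[i * 3 + 0] = rgba[i * 4 + 2];  // B
            bgr[i * 3 + 1] = rgba[i * 4 + 1];  // G
            bgr[i * 3 + 2] = rgba[i * 4 + 0];  // R
        }

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        glDeleteTextures(1, &dstTexture);
        return bgr;
    }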
It should be noted that, in the technical solution of the present disclosure, the acquisition of face pictures and the collection, storage, use, and analysis of users' personal information all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
In order to implement the above embodiments, the present disclosure also provides a special effect processing apparatus.
Fig. 6 is a schematic structural diagram of a special effect processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the special effect processing apparatus 600 may include: an initialization module 610, an import module 620, an input module 630, an output module 640, and a feedback module 650.
The initialization module 610 is configured to initialize an augmented reality AR scene in response to an initialization instruction sent by the service end.
And an importing module 620, configured to import a target object to be special-effect processed into the AR scene.
The input module 630 is configured to determine an input texture set by the service end as a source texture of the target object.
And the output module 640 is configured to perform special effect processing on the source texture of the target object in the AR scene according to the special effect set by the service end, so as to obtain an output texture of the target object.
And a feedback module 650, configured to return the output texture to the service end.
Further, as a possible implementation manner, the output module 640 may include:
and the rendering unit is used for calling a texture rendering function corresponding to the set special effect, performing texture rendering on the source texture of the target object in the AR scene, and obtaining the output texture of the target object.
And the drawing unit is used for drawing the target object in the AR scene according to the output texture to obtain the special effect image information of the AR scene.
And the feedback unit is used for returning the special effect image information to the service end.
The execution unit is used for scheduling, when the target object is not in the locked state, a first thread in the thread pool to execute the step of processing the source texture of the target object in the AR scene, and locking the target object until the first thread finishes executing; and for scheduling, when the target object is not in the locked state, a second thread in the thread pool to execute the step of drawing the target object in the AR scene, and locking the target object until the second thread finishes executing.
As a possible implementation manner, the importing module 620 is configured to: receiving an identifier of a shared context sent by a service end; and according to the identifier of the shared context, introducing the shared context into the foreground of the AR scene to obtain a target object serving as the foreground in the AR scene.
As a possible implementation manner, the initialization module 610 is configured to: acquiring a resource video from a service end; under the condition that the frame rate of the resource video is lower than a set threshold value, performing frame interpolation processing on the resource video; and importing the processed resource video into the background of the AR scene to obtain a target object serving as the background in the AR scene.
As a possible implementation manner, the initialization module 610 is further configured to: and according to the set target size, the target playing direction and/or the target format, at least one of size, playing direction and format adjustment is carried out on the resource video sent by the service end.
The special effect processing apparatus of this embodiment initializes an augmented reality AR scene in response to an initialization instruction sent by the service end; imports a target object to be special-effect processed into the AR scene; determines the input texture set by the service end as the source texture of the target object; performs special effect processing on the source texture of the target object in the AR scene according to the special effect set by the service end to obtain the output texture of the target object; and returns the output texture to the service end. Converting the AR resources to be processed by the service end into the texture data format for processing enables the service end to perform secondary processing on the processed AR resources, adapts to the video call schemes of more cloud vendors, and alleviates the frame loss the AR video resources to be processed may suffer.
It should be noted that the explanation of the foregoing embodiment of the special effect processing method is also applicable to the special effect processing apparatus of this embodiment, and the implementation principle thereof is similar and will not be described herein again.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 includes a computing unit 701, which can perform various appropriate actions and processes in accordance with a computer program stored in a ROM (Read-Only Memory) 702 or a computer program loaded from a storage unit 708 into a RAM (Random Access Memory) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An I/O (Input/Output) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a CPU, a GPU, various dedicated AI (Artificial Intelligence) computing chips, various computing units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable Processor, controller, microcontroller, and the like. The calculation unit 701 executes the respective methods and processes described above, such as the special effect processing method. For example, in some embodiments, the special effects processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the methods described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the aforementioned special effects processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, FPGAs (Field Programmable Gate Arrays), ASICs (Application-Specific Integrated Circuits), ASSPs (Application-Specific Standard Products), SOCs (Systems On Chip), CPLDs (Complex Programmable Logic Devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (Erasable Programmable Read-Only Memory) or flash memory, an optical fiber, a CD-ROM (Compact Disc Read-Only Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a Display device (e.g., a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: LAN (Local Area Network), WAN (Wide Area Network), internet, and blockchain Network.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in the cloud computing service system that remedies the defects of difficult management and weak service scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that artificial intelligence is the discipline of studying how to make a computer simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it covers both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A special effects processing method, comprising:
initializing an Augmented Reality (AR) scene in response to an initialization instruction sent by a service end;
importing a target object to be special-effect processed into the AR scene;
determining the input texture set by a service end as the source texture of the target object;
according to the special effect set by the service end, carrying out special effect processing on the source texture of the target object in the AR scene to obtain the output texture of the target object;
and returning the output texture to the service end.
2. The method according to claim 1, wherein the performing special effect processing on the source texture of the target object in the AR scene according to a special effect set by a service end to obtain an output texture of the target object includes:
and calling a texture rendering function corresponding to the set special effect, and performing texture rendering on the source texture of the target object in the AR scene to obtain the output texture of the target object.
3. The method according to claim 1, wherein after performing special effect processing on the source texture of the target object in the AR scene according to a special effect set by a service end to obtain an output texture of the target object, the method further comprises:
according to the output texture, drawing the target object in the AR scene to obtain special effect image information of the AR scene;
and returning the special effect image information to the service end.
4. The method of claim 3, wherein the method further comprises:
when the target object is not in a locked state, scheduling a first thread in a thread pool to execute the step of processing the source texture of the target object in the AR scene, and locking the target object until the first thread is executed;
and when the target object is not in a locked state, scheduling a second thread in the thread pool to execute the step of drawing the target object in the AR scene and locking the target object until the second thread is completely executed.
5. The method of any of claims 1-4, wherein said importing a target object to be special effects processed in the AR scene comprises:
receiving an identifier of a shared context sent by the service end;
and importing the sharing context into the foreground of the AR scene according to the identifier of the sharing context so as to obtain a target object serving as the foreground in the AR scene.
6. The method according to any one of claims 1 to 4, wherein after initializing the AR scene in response to the initialization instruction sent by the service end, the method further comprises:
acquiring a resource video from the service end;
under the condition that the frame rate of the resource video is lower than a set threshold value, performing frame interpolation processing on the resource video;
and importing the processed resource video into the background of the AR scene to obtain a target object serving as the background in the AR scene.
7. The method of claim 6, wherein after the acquiring the resource video from the service end, further comprising:
and according to the set target size, target playing direction and/or target format, at least one of size, playing direction and format adjustment is carried out on the resource video sent by the service end.
8. A special effects processing apparatus comprising:
the initialization module is used for responding to an initialization instruction sent by the service end and initializing an Augmented Reality (AR) scene;
the import module is used for importing a target object to be special-effect processed into the AR scene;
the input module is used for determining the input texture set by the service end as the source texture of the target object;
the output module is used for carrying out special effect processing on the source texture of the target object in the AR scene according to a special effect set by a service end to obtain an output texture of the target object;
and the feedback module is used for returning the output texture to the service end.
9. The apparatus of claim 8, wherein the output module comprises:
and the rendering unit is used for calling a texture rendering function corresponding to the set special effect, performing texture rendering on the source texture of the target object in the AR scene, and obtaining the output texture of the target object.
10. The apparatus of claim 8, wherein the output module further comprises:
the drawing unit is used for drawing the target object in the AR scene according to the output texture to obtain special effect image information of the AR scene;
and the feedback unit is used for returning the special effect image information to the service end.
11. The apparatus of claim 10, wherein the output module further comprises:
the execution unit is used for scheduling a first thread in a thread pool to execute the step of processing the source texture of the target object in the AR scene when the target object is not in a locked state, and locking the target object until the first thread is executed; and
and when the target object is not in a locked state, scheduling a second thread in the thread pool to execute the step of drawing the target object in the AR scene and locking the target object until the second thread is completely executed.
12. The apparatus of any one of claims 8-11, wherein the import module is to:
receiving an identifier of a shared context sent by the service end;
and importing the sharing context into the foreground of the AR scene according to the identifier of the sharing context so as to obtain a target object serving as the foreground in the AR scene.
13. The apparatus of any of claims 8-11, wherein the initialization module is to:
acquiring a resource video from the service end;
under the condition that the frame rate of the resource video is lower than a set threshold value, performing frame interpolation processing on the resource video;
and importing the processed resource video into the background of the AR scene to obtain a target object serving as the background in the AR scene.
14. The apparatus of claim 13, wherein the initialization module is further configured to:
and according to the set target size, target playing direction and/or target format, at least one of size, playing direction and format adjustment is carried out on the resource video sent by the service end.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202110761080.2A 2021-07-06 2021-07-06 Special effect processing method and device and electronic equipment Active CN113542620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110761080.2A CN113542620B (en) 2021-07-06 2021-07-06 Special effect processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110761080.2A CN113542620B (en) 2021-07-06 2021-07-06 Special effect processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113542620A (en) 2021-10-22
CN113542620B CN113542620B (en) 2022-02-25

Family

ID=78126820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110761080.2A Active CN113542620B (en) 2021-07-06 2021-07-06 Special effect processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113542620B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616243A (en) * 2015-01-20 2015-05-13 北京大学 Effective GPU three-dimensional video fusion drawing method
US20150286873A1 (en) * 2014-04-03 2015-10-08 Bruce L. Davis Smartphone-based methods and systems
CN106803921A (en) * 2017-03-20 2017-06-06 深圳市丰巨泰科电子有限公司 Instant audio/video communication means and device based on AR technologies
CN107369200A (en) * 2017-07-04 2017-11-21 福建星网视易信息系统有限公司 A kind of 3D scene special effects processing method and storage device
US20180309808A1 (en) * 2017-04-20 2018-10-25 Saysearch, Inc. Communication sessions between computing devices using dynamically customizable interaction environments
CN109600666A (en) * 2018-12-12 2019-04-09 网易(杭州)网络有限公司 Video broadcasting method, device, medium and electronic equipment in scene of game
US10497180B1 (en) * 2018-07-03 2019-12-03 Ooo “Ai-Eksp” System and method for display of augmented reality
US20190379742A1 (en) * 2018-06-08 2019-12-12 Vulcan Inc. Session-based information exchange
CN110784733A (en) * 2019-11-07 2020-02-11 广州虎牙科技有限公司 Live broadcast data processing method and device, electronic equipment and readable storage medium
CN111640197A (en) * 2020-06-09 2020-09-08 上海商汤智能科技有限公司 Augmented reality AR special effect control method, device and equipment
CN112184856A (en) * 2020-09-30 2021-01-05 广州光锥元信息科技有限公司 Multimedia processing device supporting multi-layer special effect and animation mixing
CN112348969A (en) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium
WO2021022962A1 (en) * 2019-08-08 2021-02-11 华为技术有限公司 Method and device for model inference based on graphics rendering pipelines, and storage medium

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150286873A1 (en) * 2014-04-03 2015-10-08 Bruce L. Davis Smartphone-based methods and systems
CN104616243A (en) * 2015-01-20 2015-05-13 北京大学 Effective GPU three-dimensional video fusion drawing method
CN106803921A (en) * 2017-03-20 2017-06-06 深圳市丰巨泰科电子有限公司 Instant audio/video communication means and device based on AR technologies
US20180309808A1 (en) * 2017-04-20 2018-10-25 Saysearch, Inc. Communication sessions between computing devices using dynamically customizable interaction environments
CN107369200A (en) * 2017-07-04 2017-11-21 福建星网视易信息系统有限公司 A kind of 3D scene special effects processing method and storage device
US20190379742A1 (en) * 2018-06-08 2019-12-12 Vulcan Inc. Session-based information exchange
US10497180B1 (en) * 2018-07-03 2019-12-03 Ooo “Ai-Eksp” System and method for display of augmented reality
CN109600666A (en) * 2018-12-12 2019-04-09 网易(杭州)网络有限公司 Video broadcasting method, device, medium and electronic equipment in scene of game
WO2021022962A1 (en) * 2019-08-08 2021-02-11 华为技术有限公司 Method and device for model inference based on graphics rendering pipelines, and storage medium
CN110784733A (en) * 2019-11-07 2020-02-11 广州虎牙科技有限公司 Live broadcast data processing method and device, electronic equipment and readable storage medium
CN111640197A (en) * 2020-06-09 2020-09-08 上海商汤智能科技有限公司 Augmented reality AR special effect control method, device and equipment
CN112184856A (en) * 2020-09-30 2021-01-05 广州光锥元信息科技有限公司 Multimedia processing device supporting multi-layer special effect and animation mixing
CN112348969A (en) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113542620B (en) 2022-02-25

Similar Documents

Publication Publication Date Title
CN113327278B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN114820905B (en) Virtual image generation method and device, electronic equipment and readable storage medium
CN113420719A (en) Method and device for generating motion capture data, electronic equipment and storage medium
CN113052962B (en) Model training method, information output method, device, equipment and storage medium
CN113365146B (en) Method, apparatus, device, medium and article of manufacture for processing video
CN114723888B (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN113870399A (en) Expression driving method and device, electronic equipment and storage medium
CN113453073A (en) Image rendering method and device, electronic equipment and storage medium
CN115953468A (en) Method, device and equipment for estimating depth and self-movement track and storage medium
CN113380269B (en) Video image generation method, apparatus, device, medium, and computer program product
CN114187392A (en) Virtual even image generation method and device and electronic equipment
CN116402914B (en) Method, device and product for determining stylized image generation model
CN112529161A (en) Training method for generating countermeasure network, and method and device for translating human face image
CN115393488B (en) Method and device for driving virtual character expression, electronic equipment and storage medium
CN113542620B (en) Special effect processing method and device and electronic equipment
CN115861510A (en) Object rendering method, device, electronic equipment, storage medium and program product
CN113240780B (en) Method and device for generating animation
CN114529649A (en) Image processing method and device
CN114760526A (en) Video rendering method and device, electronic equipment and storage medium
CN113836455A (en) Special effect rendering method, device, equipment, storage medium and computer program product
CN113887435A (en) Face image processing method, device, equipment, storage medium and program product
CN113434551B (en) Data processing method, device, equipment and computer storage medium
CN113099231B (en) Method and device for determining sub-pixel interpolation position, electronic equipment and storage medium
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
CN116882482A (en) Training of virtual image generation model and virtual image generation method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant