
Method and equipment for displaying special effect picture

Info

Publication number: CN118055201A
Application number: CN202410171789.0A
Authority: CN (China)
Prior art keywords: information, special effect, virtual model, marker, model
Other languages: Chinese (zh)
Inventors: 闫东葆, 沈翀, 檀彦利, 涂子豪
Assignee (current and original): Beijing Sankuai Online Technology Co Ltd
Legal status: Pending
Events: application filed by Beijing Sankuai Online Technology Co Ltd; priority to CN202410171789.0A; publication of CN118055201A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method and device for displaying special effect pictures, belonging to the technical field of artificial intelligence. The method includes: displaying a scene image captured by an image pickup device; determining a marker selected in the scene image, where the marker has associated virtual reality special effect information and the association between the marker and the virtual reality special effect information is unchanged when the shooting angle of the image pickup device changes; and displaying a special effect picture, where the special effect picture includes the virtual reality special effect information associated with the marker displayed on the scene image. Because the marker is selected from the scene image itself, marker selection is convenient; because the marker in the scene image is associated with the virtual reality special effect information and the display angle of that information is adapted to the shooting angle of the image pickup device, more lifelike special effect pictures can be presented. The method can be applied to various scenes that need special effect pictures, for example a video tool for generating special effect pictures, and improves the user's interactive experience.

Description

Method and equipment for displaying special effect picture
Technical Field
The embodiment of the application relates to the field of artificial intelligence, in particular to a method and equipment for displaying special effect pictures.
Background
Virtual reality superimposes a virtual model onto a scene image of the real world to obtain a corresponding special effect picture, that is, the real environment and the virtual model are presented in the same picture.
At present, virtual reality technology mainly superimposes a fixed virtual model onto a real scene image. The superimposed effect is monotonous, which degrades the user experience.
Disclosure of Invention
The embodiment of the application provides a method and device for displaying special effect pictures, which can solve the problems in the related art. The technical scheme is as follows:
In one aspect, an embodiment of the present application provides a method for displaying a special effect picture, where the method includes: displaying a scene image obtained by shooting through a shooting device; determining a selected marker in the scene image, wherein the marker has associated virtual reality special effect information, and the association mode of the marker and the virtual reality special effect information is unchanged under the condition that the shooting angle of the shooting device is changed; and displaying a special effect picture, wherein the special effect picture comprises virtual reality special effect information related to the marker displayed on the scene image, and the display angle of the virtual reality special effect information is matched with the shooting angle of the shooting device.
In a possible implementation manner, the special effect picture further includes a processing control, where the processing control includes at least one of a sharing control, a release control, or a modification control, the sharing control is used for performing a sharing operation on the special effect picture, the release control is used for performing a release operation on the special effect picture, and the modification control is used for performing a modification operation on the special effect picture; after the special effect picture is displayed, the method further comprises the following steps: and under the condition that the processing control is triggered, processing operation corresponding to the processing control is carried out on the special effect picture.
In one possible implementation manner, before the displaying the special effect picture, the method further includes: displaying a plurality of model resources uploaded by a user, and determining a selected model resource in the plurality of model resources as a virtual model associated with the marker; and performing special effect processing on the virtual model to obtain virtual reality special effect information for rendering the virtual model, and associating the virtual reality special effect information with the marker.
In one possible implementation manner, the virtual reality effect information includes first virtual reality effect information or second virtual reality effect information, and performing effect processing on the virtual model to obtain virtual reality effect information for rendering the virtual model may include: obtaining model parameters of the virtual model, and determining dimension information of the virtual model based on the model parameters; under the condition that the dimension information indicates that the virtual model is a two-dimensional virtual model, decoding processing and format conversion processing are carried out on the virtual model, so that the first virtual reality special effect information is obtained; and under the condition that the dimension information indicates that the virtual model is a three-dimensional virtual model, determining format information of the three-dimensional virtual model based on the model parameters, and processing the virtual model based on the format information to obtain the second virtual reality special effect information.
In one possible implementation manner, the decoding processing and format conversion processing are performed on the virtual model to obtain the first virtual reality special effect information, which may include: decoding the virtual model by using a decoder to obtain a frame image in a first image format, wherein the first image format is an image format output by the decoder; and carrying out format conversion on the frame image in the first image format to obtain the first virtual reality special effect information.
In a possible implementation manner, the processing the virtual model based on the format information to obtain the second virtual reality special effect information may include: determining a first virtual model or a second virtual model based on format information of the three-dimensional virtual model and standard format information, wherein the format information of the first virtual model is the same as the standard format information, and the format information of the second virtual model is different from the standard format information; analyzing the first virtual model under the condition that the three-dimensional virtual model is the first virtual model to obtain the second virtual reality special effect information or analysis error information corresponding to the first virtual model; performing format conversion on the first virtual model corresponding to the analysis error information to obtain the second virtual reality special effect information; and under the condition that the three-dimensional virtual model is the second virtual model, performing format conversion on the second virtual model to obtain the second virtual reality special effect information.
In one possible implementation, after the determining of the marker selected in the scene image, the method further includes: performing feature matching on a plurality of scene images and the marker; obtaining rendering parameters according to the successfully matched scene images and the marker; rendering the marker and the virtual model based on the rendering parameters to obtain a rendering picture; and superposing the rendering picture and the scene image to obtain the special effect picture.
In one possible implementation, the feature matching the plurality of scene images and the marker may include: acquiring a feature matching interval, determining a plurality of first scene images and a plurality of second scene images from the plurality of scene images based on the feature matching interval, the first scene images being determined based on the feature matching interval, the second scene images being located between the first scene images; performing feature matching on a first scene image and the marker based on the generation sequence of the scene images to obtain first matching information, wherein the first matching information comprises the matching degree of the first scene image and the marker and the position information of a reference feature; tracking and detecting the position of the image pickup device corresponding to the marker and the second scene image based on the first matching information to obtain second matching information, wherein the second matching information comprises the moving position information of the reference feature; and carrying out feature matching on the first scene images except the first scene image and the marker based on the second matching information to obtain third matching information, wherein the third matching information comprises the matching degree of the first scene image and the marker.
In one possible implementation manner, the virtual reality special effect information includes model parameters of the virtual model, and the obtaining rendering parameters according to the successfully matched scene image and the marker includes: obtaining parameters of the camera device based on the scene image successfully matched with the marker; determining a first coordinate system corresponding to the marker by using parameters of the image pickup device, and determining a second coordinate system corresponding to the virtual model by using model parameters of the virtual model; and acquiring parameter information of the first coordinate system and the second coordinate system, and processing at least one of parameters of the image pickup device or model parameters based on the parameter information to obtain rendering parameters.
In a possible implementation manner, the processing at least one of the parameters of the image capturing device or the model parameters based on the parameter information to obtain rendering parameters may include: when the directions of the coordinate axes of the first coordinate system and the second coordinate system are the same, determining a parameter matrix corresponding to the parameters of the image pickup device and the model parameters, and obtaining the rendering parameters based on the parameter matrix; when the directions of the coordinate axes of the first coordinate system and the second coordinate system are different, determining conversion parameters of the first coordinate system and the second coordinate system, adjusting at least one of parameters of the image capturing device or model parameters based on the conversion parameters, determining a parameter matrix corresponding to at least one of the adjusted parameters of the image capturing device or model parameters, and obtaining the rendering parameters based on the parameter matrix.
In one possible implementation manner, the overlaying the rendered frame with the scene image to obtain the special effect frame may include: superposing the rendering picture and the scene image to obtain a preview picture; acquiring a control instruction, wherein the control instruction is used for confirming the preview picture; and taking the preview picture as the special effect picture according to the control instruction.
In one possible implementation manner, the acquiring a control instruction may include: detecting at least one of first trigger information or second trigger information, and generating a corresponding control instruction when the at least one of the first trigger information or the second trigger information is detected, wherein the first trigger information is generated based on a control of a preview interface, the preview interface comprises a preview picture, and the second trigger information is generated based on at least one of a hand gesture or a hand motion track of a user.
In another aspect, an embodiment of the present application provides a display apparatus for a special effect picture, including: the first display module is used for displaying the scene image shot by the camera device; the determining module is used for determining a marker selected from the scene image, the marker has associated virtual reality special effect information, and the association mode of the marker and the virtual reality special effect information is unchanged under the condition that the shooting angle of the shooting device is changed; the second display module is used for displaying a special effect picture, the special effect picture comprises virtual reality special effect information related to the marker displayed on the scene image, and the display angle of the virtual reality special effect information is matched with the shooting angle of the shooting device.
In a possible implementation manner, the special effect picture further includes a processing control, where the processing control includes at least one of a sharing control, a release control, or a modification control, the sharing control is used for performing a sharing operation on the special effect picture, the release control is used for performing a release operation on the special effect picture, and the modification control is used for performing a modification operation on the special effect picture; and the second display module is further used for performing processing operation corresponding to the processing control on the special effect picture under the condition that the processing control is triggered.
In one possible implementation, the determining module is configured to display a plurality of model resources uploaded by a user, and determine a selected model resource of the plurality of model resources as a virtual model associated with the marker; and performing special effect processing on the virtual model to obtain virtual reality special effect information for rendering the virtual model, and associating the virtual reality special effect information with the marker.
In one possible implementation manner, the virtual reality special effect information includes first virtual reality special effect information and second virtual reality special effect information, and the determining module is used for obtaining model parameters of the virtual model and determining dimension information of the virtual model based on the model parameters; under the condition that the dimension information indicates that the virtual model is a two-dimensional virtual model, decoding processing and format conversion processing are carried out on the virtual model, so that the first virtual reality special effect information is obtained; and under the condition that the dimension information indicates that the virtual model is a three-dimensional virtual model, determining format information of the three-dimensional virtual model based on the model parameters, and processing the virtual model based on the format information to obtain the second virtual reality special effect information.
In a possible implementation manner, the determining module is configured to decode the virtual model with a decoder to obtain a frame image in a first image format, where the first image format is an image format output by the decoder; and carrying out format conversion on the frame image in the first image format to obtain the first virtual reality special effect information.
In a possible implementation manner, the determining module is configured to determine, based on format information of the three-dimensional virtual model and standard format information, a first virtual model or a second virtual model, where the format information of the first virtual model is the same as the standard format information, and the format information of the second virtual model is different from the standard format information; analyzing the first virtual model under the condition that the three-dimensional virtual model is the first virtual model to obtain the second virtual reality special effect information or analysis error information corresponding to the first virtual model; performing format conversion on the first virtual model corresponding to the analysis error information to obtain the second virtual reality special effect information; and under the condition that the three-dimensional virtual model is the second virtual model, performing format conversion on the second virtual model to obtain the second virtual reality special effect information.
In a possible implementation manner, the determining module is further configured to perform feature matching on a plurality of scene images and the markers; obtaining rendering parameters according to the successfully matched scene images and the markers; rendering the marker and the virtual model based on the rendering parameters to obtain a rendering picture; and superposing the rendering picture and the scene image to obtain the special effect picture.
In one possible implementation, the determining module is configured to obtain a feature matching interval, determine a plurality of first scene images and a plurality of second scene images from the plurality of scene images based on the feature matching interval, where the first scene images are determined based on the feature matching interval, and the second scene images are located between the first scene images; performing feature matching on a first scene image and the marker based on the generation sequence of the scene images to obtain first matching information, wherein the first matching information comprises the matching degree of the first scene image and the marker and the position information of a reference feature; tracking and detecting the position of the image pickup device corresponding to the marker and the second scene image based on the first matching information to obtain second matching information, wherein the second matching information comprises the moving position information of the reference feature; and carrying out feature matching on the first scene images except the first scene image and the marker based on the second matching information to obtain third matching information, wherein the third matching information comprises the matching degree of the first scene image and the marker.
In a possible implementation manner, the virtual reality special effect information includes model parameters of the virtual model, and the determining module is further configured to obtain parameters of the image capturing device based on the scene image successfully matched with the marker; determining a first coordinate system corresponding to the marker by using parameters of the image pickup device, and determining a second coordinate system corresponding to the virtual model by using model parameters of the virtual model; and acquiring parameter information of the first coordinate system and the second coordinate system, and processing at least one of parameters of the image pickup device or model parameters based on the parameter information to obtain rendering parameters.
In a possible implementation manner, the determining module is configured to determine a parameter matrix corresponding to the parameter of the image capturing device and the model parameter when the directions of coordinate axes of the first coordinate system and the second coordinate system are the same, and obtain the rendering parameter based on the parameter matrix; when the directions of the coordinate axes of the first coordinate system and the second coordinate system are different, determining conversion parameters of the first coordinate system and the second coordinate system, adjusting at least one of parameters of the image capturing device or model parameters based on the conversion parameters, determining a parameter matrix corresponding to at least one of the adjusted parameters of the image capturing device or model parameters, and obtaining the rendering parameters based on the parameter matrix.
In a possible implementation manner, the determining module is configured to superimpose the rendering picture and the scene image to obtain a preview picture; acquire a control instruction, where the control instruction is used for confirming the preview picture; and take the preview picture as the special effect picture according to the control instruction.
In one possible implementation manner, the determining module is configured to detect at least one of first trigger information or second trigger information, and generate the corresponding control instruction when the at least one of the first trigger information or the second trigger information is detected, where the first trigger information is generated based on a control of a preview interface, and the preview interface includes the preview screen, and the second trigger information is generated based on at least one of a hand gesture or a hand motion track of a user.
In another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one program code, and the at least one program code is loaded and executed by the processor, so that the computer device implements a method for displaying a special effect picture as described in any one of the foregoing.
In another aspect, there is further provided a computer readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor, so that a computer implements the method for displaying a special effect picture as described in any one of the above.
In another aspect, a computer program or a computer program product is provided, where at least one computer instruction is stored, where the at least one computer instruction is loaded and executed by a processor, so that the computer implements a method for displaying any of the special effects pictures described above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
According to the technical scheme provided by the embodiment of the application, the marker is selected from the scene image itself, which improves the convenience of marker selection; the marker in the scene image is associated with the virtual reality special effect information, and the display angle of the virtual reality special effect information is matched with the shooting angle of the image pickup device, so that more realistic special effect pictures can be presented. The scheme can be applied to various scenes that need special effect pictures, for example video tools containing special effect pictures, and improves the user's interactive experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. Apparently, the drawings in the following description are only some embodiments of the present application, and a person skilled in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an implementation environment of a method for displaying a special effect picture according to an embodiment of the present application;
Fig. 2 is a flowchart of a method for displaying a special effect picture according to an embodiment of the present application;
Fig. 3 is a flowchart of a method for feature matching of scene images according to an embodiment of the present application;
Fig. 4 is a schematic diagram of obtaining camera parameters according to an embodiment of the present application;
Fig. 5 is a flowchart of a method for obtaining a special effect picture according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a first coordinate system according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a second coordinate system according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a display process of a special effect picture according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a display device for special effect pictures according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of a method for displaying a special effect picture according to an embodiment of the present application. As shown in fig. 1, the implementation environment includes: a terminal device 101 and a server 102.
The method for displaying the special effect picture provided in the embodiment of the present application may be executed by the terminal device 101, or executed jointly by the terminal device 101 and the server 102, which is not limited in the embodiment of the present application. When the method is executed jointly by the terminal device 101 and the server 102, the server 102 undertakes the primary computing work and the terminal device 101 undertakes the secondary computing work; or the server 102 undertakes the secondary computing work and the terminal device 101 undertakes the primary computing work; or the server 102 and the terminal device 101 perform cooperative computing using a distributed computing architecture.
Alternatively, the terminal device 101 may be any electronic product that can perform man-machine interaction with a user through one or more modes of a keyboard, a touchpad, a touch screen, a remote controller, a voice interaction or a handwriting device, etc. Terminal devices 101 include, but are not limited to, cell phones, computers, intelligent voice interaction devices, vehicle terminals, and the like. In the embodiment of the present application, the terminal device 101 may have an image pickup device and a display screen, based on which a special effect picture may be further obtained by photographing a picture by the image pickup device, and the special effect picture may be displayed by the display screen. The server 102 is a server, or a server cluster formed by a plurality of servers, or any one of a cloud computing platform and a virtualization center, which is not limited in this embodiment of the present application. The server 102 is in communication connection with the terminal device 101 via a wired network or a wireless network. The server 102 has a data receiving function, a data processing function, and a data transmitting function. Of course, the server 102 may also have other functions, which embodiments of the present application do not limit.
It will be appreciated by those skilled in the art that the above-described terminal device 101 and server 102 are merely illustrative; other terminal devices or servers that exist now or may appear hereafter, where applicable to the present application, also fall within the scope of protection of the present application and are incorporated herein by reference.
The embodiment of the present application provides a method for displaying a special effect picture, which can be applied to the above implementation environment. Taking the flowchart of the method shown in fig. 2 as an example, the method may be executed by an electronic device, where the electronic device may be the terminal device 101 in fig. 1, or may refer collectively to the server 102 and the terminal device 101 in fig. 1, which is not limited in the embodiment of the present application. As shown in fig. 2, the method includes the following steps 110 to 130.
In step 110, a scene image captured by an imaging device is displayed.
In an exemplary embodiment of the present application, the scene image is a real image of a real scene, and may be obtained by direct shooting with an image pickup device, where the image pickup device may be a camera of the terminal device. For example, the scene image may be an image captured by the camera carried by the terminal device. The terminal device starts its camera to shoot, in real time, the marker contained in the current scene; during shooting, the marker may be captured from different angles to obtain scene images containing the marker. The marker can be used to bear the virtual model contained in the virtual reality special effect information, and is an object with texture features. Taking a special effect picture display system on the terminal device as an example, the image pickup device uploads the scene image containing the marker to the special effect picture display system of the terminal device, and the system displays the scene image.
Illustratively, the format of the scene image is not limited by the embodiment of the present application. For example, the format of the scene image may include one or more of JPG/JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), TIFF (Tag Image File Format), GIF (Graphics Interchange Format), BMP (Bitmap), or PCX (PC Paintbrush Exchange).
In step 120, a marker selected in the scene image is determined, the marker has associated virtual reality special effect information, and in the case that the shooting angle of the imaging device is changed, the association manner of the marker and the virtual reality special effect information is unchanged.
In an exemplary embodiment of the present application, the marker may be a portion of the scene image that has texture features. After the scene image is displayed, the user may select the marker from the scene image; the embodiment of the application does not limit the manner of selection, and the special effect picture display system determines the selected marker based on the selection operation. For example, when the scene image is displayed, a selection item may be displayed for each marker contained in the scene image; after the selection item of any marker is selected, the marker corresponding to that selection item is taken as the selected marker. To achieve special effect display, the marker has associated virtual reality special effect information. The embodiment of the application does not limit the association manner; for example, the marker and the virtual reality special effect information may be associated through the same position information, or bound based on the same identifier. In addition, the method can be applied to scenes shot by the image pickup device in real time, where the shooting angle may change; when the shooting angle of the image pickup device changes, the association between the marker and the virtual reality special effect information is unchanged, so that the virtual reality special effect information adapts to the marker as the shooting angle changes.
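By way of illustration only (this sketch is not part of the original disclosure), the identifier-based binding mentioned above can be pictured as a lookup table keyed by marker identifier; the names effect_by_marker, associate, and effect_for are hypothetical:

```python
# Minimal sketch of identifier binding: each marker identifier maps to its
# virtual reality special effect information. The binding is keyed by the
# identifier rather than by camera pose, so it is unchanged when the
# shooting angle of the image pickup device changes.
effect_by_marker = {}

def associate(marker_id, effect_info):
    effect_by_marker[marker_id] = effect_info

def effect_for(marker_id):
    return effect_by_marker.get(marker_id)
```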
In one possible implementation, the process of determining virtual reality special effects information associated with the marker may include, but is not limited to: displaying a plurality of model resources uploaded by a user, and determining a selected model resource in the plurality of model resources as a virtual model associated with the marker; and performing special effect processing on the virtual model to obtain virtual reality special effect information for rendering the virtual model, and associating the virtual reality special effect information with the marker.
The virtual model may be a virtual model uploaded to the special effects display system by a user or may be a virtual model provided by the special effects display system. The virtual model has corresponding model parameters, wherein the virtual model may include a two-dimensional (2D) virtual model and a three-dimensional (3D) virtual model. The two-dimensional virtual model may include, but is not limited to, pictures, videos, 2D text, etc., and the three-dimensional virtual model may include, but is not limited to, static three-dimensional virtual models, dynamic three-dimensional virtual models, 3D text, etc. Model parameters may include a model coordinate system established based on the virtual model and associated coordinates of the virtual model. The origin of the model coordinate system may be the center point of the virtual model, or may be other points set by the user.
In an exemplary embodiment of the present application, after the virtual model is obtained, model parameters of the virtual model may also be obtained, and dimension information of the virtual model may be determined based on the model parameters.
Illustratively, in the case that the dimension information indicates that the virtual model is a two-dimensional virtual model, decoding processing and format conversion processing are performed on the virtual model to obtain the first virtual reality special effect information. The format of the two-dimensional virtual model may include MP4 (MPEG-4 Part 14), FLV (Flash Video), AVI (Audio Video Interleaved), PG (Progressive Graphics), PNG (Portable Network Graphics), JPEG, TIFF, GIF, BMP, PCX, and the like.
In an exemplary embodiment of the present application, the process of performing decoding processing and format conversion processing on the virtual model to obtain the first virtual reality special effect information may include: decoding the virtual model by using a decoder to obtain a frame image in a first image format, wherein the first image format is an image format output by the decoder; and converting the frame image in the first image format to obtain first virtual reality special effect information.
Taking a video file with a two-dimensional virtual model as an mp4 format as an example, decoding the video file by using a decoder to obtain a plurality of frame images with a first image format, wherein the first image format is an image format output by the decoder. For example, the first image format may be a YUV format, Y representing luminance information of the frame image, and U and V representing chrominance information of the frame image. And then converting the frame image in the first image format to obtain first virtual reality special effect information, wherein the first virtual reality special effect information can be directly used for rendering by a renderer. For example, the first virtual reality special effect information may include an RGB format image composed of three primary colors, and the RGB format image may be directly applied to a subsequent rendering process.
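As an illustrative sketch of this decoding-and-conversion path (the patent names no library; OpenCV and the function name decode_to_rgb_frames are assumptions introduced here):

```python
import cv2

def decode_to_rgb_frames(video_path):
    """Decode a 2D virtual model (e.g. an MP4 video file) into RGB frames."""
    frames = []
    cap = cv2.VideoCapture(video_path)   # the decoder
    while True:
        ok, bgr = cap.read()             # OpenCV hands back decoded BGR frames
        if not ok:
            break
        # Convert to the RGB layout a renderer can consume directly; a decoder
        # that outputs raw YUV420 planes would use cv2.COLOR_YUV2RGB_I420 here.
        frames.append(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames
```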
In an exemplary case where the dimension information indicates that the virtual model is a three-dimensional virtual model, format information of the three-dimensional virtual model is determined based on the model parameters, and the virtual model is processed based on the format information to obtain second virtual reality special effect information.
Three-dimensional virtual models vary widely: in addition to the three-dimensional virtual models in common formats generated by mainstream three-dimensional modeling software, there are also three-dimensional virtual models in less common formats. For example, common formats of the three-dimensional virtual model may include FBX (Autodesk FBX exchange file format), glTF (GL Transmission Format), GLB (GL Transmission Format Binary), OBJ (Wavefront Object), DAE (Digital Asset Exchange), and the like.
For example, after the format information of the three-dimensional virtual model is obtained, the virtual model may be processed based on the format information of the three-dimensional virtual model to obtain the second virtual reality special effect information. Processing the virtual model based on the format information to obtain the second virtual reality special effect information may include steps 121 to 123.
In step 121, a first virtual model or a second virtual model is determined based on the format information of the three-dimensional virtual model and the standard format information, the format information of the first virtual model being identical to the standard format information, and the format information of the second virtual model being different from the standard format information.
For example, the standard format information may be a format commonly used for a three-dimensional virtual model, that is, a format that can be directly processed by the special effects screen display system. For example, the standard format information may include one or more of fbx, gltf, glb, obj, dae.
In step 122, if the three-dimensional virtual model is the first virtual model, the first virtual model is parsed to obtain second virtual reality special effect information or parsing error information corresponding to the first virtual model; and under the condition that the analysis error information is obtained, performing format conversion on the first virtual model corresponding to the analysis error information to obtain second virtual reality special effect information.
In an exemplary embodiment of the present application, for a first virtual model in a standard format, the first virtual model is parsed, where the first virtual model may include a three-dimensional virtual model and its corresponding model parameters. Three-dimensional virtual models differ in modeling complexity: a model may be a simple virtual model with a small number of patches, or a complex virtual model with a large number of patches, texture maps, multiple materials, and multiple animation effects. The parsing process differs considerably between simple and complex models, so virtual models of the same format may yield different parsing results. For the first virtual model, if parsing succeeds, the second virtual reality special effect information is obtained; if parsing fails, parsing error information is obtained.
For example, for the first virtual model whose parsing fails, that is, the first virtual model for which parsing error information is obtained, its format may be converted into a custom format using a format conversion tool. During format conversion, the intermediate data is retained and inspected to determine whether abnormal data exists; if abnormal data appears during the conversion, correction processing is performed on it. The converted first virtual model is then parsed to obtain the second virtual reality special effect information. The second virtual reality special effect information may include mesh data, texture maps, animation data, material data, and the like of the three-dimensional virtual model, and may be used directly for rendering by the renderer.
In step 123, format conversion is performed on the second virtual model, so as to obtain second virtual reality special effect information.
The process of performing format conversion on the second virtual model is similar to that of the first virtual model whose parsing failed, and is not described in detail here. By processing two-dimensional and three-dimensional virtual models differently according to the dimension of the virtual model to obtain the first or second virtual reality special effect information, the embodiment of the application is compatible with virtual models in more formats, provides users with more selectable virtual models, presents more realistic visual effects during rendering, and improves the user's interactive experience; improving virtual model compatibility also reduces the cost of model creation and maintenance to a certain extent.
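The dispatch of steps 121 to 123 can be sketched as follows; this is an illustration under stated assumptions, where parse_model, convert_format, and ParseError are hypothetical placeholders for the system's parser and format conversion tool:

```python
STANDARD_FORMATS = {"fbx", "gltf", "glb", "obj", "dae"}

class ParseError(Exception):
    """Placeholder for a model parsing failure."""

def to_second_effect_info(model_path, fmt, parse_model, convert_format):
    # "First" virtual model: standard format, parse directly; on a parsing
    # error, convert the format and parse again.
    if fmt.lower() in STANDARD_FORMATS:
        try:
            return parse_model(model_path)   # mesh, textures, materials, animation
        except ParseError:
            return parse_model(convert_format(model_path))
    # "Second" virtual model: non-standard format, convert first, then parse.
    return parse_model(convert_format(model_path))
```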
In an exemplary embodiment of the present application, after obtaining the scene image, parameters of the image capturing apparatus may also be determined using the scene image and the marker. Taking a marker as an example of a marker image, the process of determining parameters of the image pickup device can comprise the following steps: performing feature matching on the scene image and the marker; and determining parameters of the image pickup device according to the scene images and the markers which are successfully matched.
Illustratively, take as an example the case where the image pickup device continuously captures, at different angles, the scene containing the marker image, so that a plurality of consecutive scene images are obtained. Fig. 3 is a flowchart of feature matching of scene images according to an embodiment of the present application. As shown in fig. 3, feature matching the scene images may include steps 124 through 127.
In step 124, a feature matching interval is obtained, a plurality of first scene images and a plurality of second scene images are determined from the plurality of scene images based on the feature matching interval, the first scene images being determined based on the feature matching interval, the second scene images being located between the first scene images.
For example, the feature matching interval may be a preset time or a preset number of scene images. This application takes as an example the case where the feature matching interval is a preset number of scene images. The first scene images and second scene images are determined among the scene images according to the preset number: the first scene images are determined based on the feature matching interval, that is, the number of scene images between adjacent first scene images equals the preset number, and the second scene images are the scene images located between adjacent first scene images.
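A minimal sketch of this split, assuming the feature matching interval is interpreted as a stride k between first scene images (every k-th scene image is fully matched, and the images in between are only tracked); the names are illustrative:

```python
def split_by_interval(scene_images, k):
    first, second = [], []
    for i, image in enumerate(scene_images):
        # images at multiples of the interval are feature-matched in full;
        # the rest are handled by tracking detection
        (first if i % k == 0 else second).append(image)
    return first, second
```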
In step 125, feature matching is performed on the first scene image and the marker based on the order of generation of the scene images, so as to obtain first matching information, where the first matching information includes a degree of matching between the first scene image and the marker and location information of the reference feature.
Illustratively, feature extraction is performed on a first scene image among the plurality of scene images to obtain the key features of the first scene image and the marker features of the marker. The method for extracting features from the first scene image includes, but is not limited to, using a convolutional neural network (CNN). The key features are then used to generate corresponding feature vectors, where a feature vector contains key information of the image, such as shape and texture.
In one possible implementation, after feature extraction is completed, the extracted key features and the marker features may be feature-matched. For example, the feature vectors corresponding to the key features and the marker features in the first scene image may be determined, where a feature vector is the vector corresponding to a feature point in the first scene image. The distances between the feature vectors corresponding to the key features and those corresponding to the marker features are computed, where the distance includes, but is not limited to, the Euclidean distance, Hamming distance, or cosine distance. The smaller the distance between two feature vectors, the more similar the two feature points. First matching information of the first scene image and the marker image is determined based on these distances, where the first matching information may include the matching degree and the position information of the reference features. A reference feature may be a feature whose key-feature vector lies within a distance threshold of a marker-feature vector, and the position information of the reference feature may be the position of the pixel block corresponding to its feature point; the pixel block corresponding to the reference feature is used for tracking and detecting the position of the marker image relative to the image pickup device.
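As an illustrative sketch of this matching step (ORB descriptors with the Hamming distance are one concrete choice among the distances listed above; OpenCV and all names here are assumptions, not part of the filing):

```python
import cv2

def match_marker(first_image, marker_image, dist_threshold=40):
    orb = cv2.ORB_create()
    kp_scene, des_scene = orb.detectAndCompute(first_image, None)
    kp_marker, des_marker = orb.detectAndCompute(marker_image, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # keep only matches whose descriptor distance is below the threshold;
    # these become the reference features
    matches = [m for m in matcher.match(des_scene, des_marker)
               if m.distance < dist_threshold]
    degree = len(matches) / max(len(kp_marker), 1)   # matching degree
    ref_positions = [kp_scene[m.queryIdx].pt for m in matches]
    return degree, ref_positions   # first matching information
```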
In step 126, tracking detection is performed on the position of the image capturing device corresponding to the marker and the second scene image based on the first matching information, so as to obtain second matching information, where the second matching information includes moving position information of the reference feature.
By way of example, using the position information of the pixel blocks of the reference features contained in the first matching information, the positions of those pixel blocks are searched for in the second scene image to determine their offsets; tracking and detection of the position of the image pickup device corresponding to the second scene image is then performed using these offsets to obtain the second matching information. The second matching information includes the movement position information of the marker image relative to the image pickup device, and the movement position information may include the direction of movement and the distance of movement.
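A sketch of the tracking step, assuming pyramidal Lucas-Kanade optical flow as the concrete mechanism for searching the pixel block positions (an assumption; the patent does not name an algorithm):

```python
import numpy as np
import cv2

def track_reference_features(prev_gray, next_gray, ref_positions):
    pts = np.float32(ref_positions).reshape(-1, 1, 2)
    # find where each reference-feature pixel block moved in the second image
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                     pts, None)
    offsets = (new_pts - pts)[status.flatten() == 1]
    # per-feature (dx, dy): the movement direction and distance that make up
    # the second matching information
    return offsets.reshape(-1, 2)
```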
In step 127, the first scene image other than the first scene image is feature-matched with the marker based on the second matching information to obtain third matching information, where the third matching information includes a degree of matching between the first scene image and the marker.
For example, the movement position information in the second matching information can assist positioning in the first scene images other than the first scene image, so that the feature extraction regions of the remaining first scene images can be quickly determined and their feature matching with the marker image accelerated. For example, the movement position information may include the movement direction of a pixel block and its movement distance along that direction; the direction and distance by which the feature points of the key features and of the marker features have moved can be determined from this information, allowing the feature extraction region in a first scene image to be determined quickly.
In an exemplary embodiment of the present application, the scene images are initially screened based on the matching information, for example the matching degree in the first matching information and the third matching information, to obtain the successfully matched scene images, whose matching degree with the marker image is greater than or equal to a reference matching threshold; the reference matching threshold may be set according to the actual situation. By feature-matching the scene images and processing only the matched scene images and the marker image in subsequent steps, the amount of computation in later image processing can be reduced to a certain extent and the accuracy of computation improved.
In an exemplary embodiment of the application, after the feature matching of the scene images and the marker image is completed, the successfully matched scene image and the marker image can be processed to obtain the parameters of the image pickup device.
Illustratively, a plurality of key points are determined in the marker and in the successfully matched scene image (a first scene image), with a one-to-one correspondence between the two. The three-dimensional coordinates of the key points of the marker in the world coordinate system and the two-dimensional coordinates of the corresponding key points in the matched scene image are acquired, and the parameters of the image pickup device are determined using the two-dimensional coordinates of each key point and the corresponding three-dimensional coordinates; the parameters of the image pickup device may include the relative position information of the marker and the image pickup device. For a second scene image, the position of the marker relative to the image pickup device may be tracked and detected based on the movement position information in the second matching information, for example using the movement direction of the marker and its movement distance along that direction, to obtain the parameters of the image pickup device. For example, when the image pickup device is a camera, camera internal parameters and camera external parameters can be obtained.
For example, camera internal parameters include, but are not limited to, the focal length, principal point position, and pixel size. Camera external parameters refer to the position and posture information of the camera in the world coordinate system, including but not limited to the rotation matrix and translation matrix corresponding to the camera. The camera internal parameters can be obtained from the camera's own parameters, and the camera external parameters can be obtained using a PnP (Perspective-n-Point) algorithm. Fig. 4 is a schematic diagram of obtaining camera parameters according to an embodiment of the present application. As shown in fig. 4, a plurality of reference points Pi are selected in the world coordinate system (Ow-XwYwZw), and their coordinates pi in the corresponding pixel coordinate system (Oc-XcYcZc) are acquired. For example, n reference points P1, P2, ..., Pn are selected, their coordinates in the pixel coordinate system are p1, p2, ..., pn, and the external parameters R and t of the camera are determined by DLT (Direct Linear Transform).
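An illustrative sketch of recovering the external parameters R and t from key point correspondences (cv2.solvePnP is an assumed concrete implementation of the PnP step; object_points are the marker key points in the world coordinate system, image_points their pixel coordinates, and K the intrinsic matrix — all names are illustrative):

```python
import numpy as np
import cv2

def estimate_pose(object_points, image_points, K):
    ok, rvec, tvec = cv2.solvePnP(
        np.float32(object_points),   # n x 3 world coordinates P1..Pn
        np.float32(image_points),    # n x 2 pixel coordinates p1..pn
        K, None)                     # intrinsics; no lens distortion assumed
    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix R
    return ok, R, tvec               # external parameters R and t
```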
It should be noted that, the manner of obtaining the camera internal parameters and the camera external parameters is described as an example in the present application, and other manners of obtaining the camera internal parameters and the camera external parameters may be used, which is not limited in this aspect of the present application.
For example, after the parameters of the image pickup device are obtained, they may be further processed to remove parameters whose change in the relative position of the marker and the image pickup device is small. For example, a screening threshold for the relative position information may be set, the change in the relative position of the marker and the image pickup device compared with the screening threshold, and parameters whose change is below the threshold removed. Processing the relative position changes in this way produces a relatively stable output during subsequent model rendering.
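A minimal sketch of this stabilising screen, assuming the relative position is summarised by the translation vector (names are illustrative, not from the patent):

```python
import numpy as np

def screen_translation(prev_t, new_t, screening_threshold=1e-3):
    # discard updates whose change is below the screening threshold, so the
    # rendered model does not jitter between frames
    if np.linalg.norm(new_t - prev_t) < screening_threshold:
        return prev_t
    return new_t
```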
In the exemplary embodiment of the present application, after the parameters of the image capturing apparatus are determined, the parameters of the image capturing apparatus and the model parameters may be further processed to obtain the special effect picture. Fig. 5 is a flowchart of a method for obtaining a special effect picture according to an embodiment of the present application. As shown in fig. 5, obtaining a rendered screen may include steps 141 to 144.
In step 141, a first coordinate system corresponding to the marker is determined using parameters of the imaging device, and a second coordinate system corresponding to the virtual model is determined using model parameters of the virtual model.
For example, a first coordinate system may be established based on the marker. Taking the marker as a marker image as an example, the origin of the first coordinate system may be the center of the marker image, or any point in the marker image set by the user, which is not limited in the present application. Fig. 6 is a schematic diagram of a first coordinate system according to an embodiment of the present application. As shown in fig. 6, with the center of the marker image 510 as the origin, the X-axis and Z-axis are established in the plane of the marker image and the Y-axis perpendicular to that plane; for example, the first coordinate system is a right-handed coordinate system. After the first coordinate system is established, the coordinates of the image pickup device 520 in the first coordinate system can be determined from the parameters of the image pickup device, that is, the relative position of the image pickup device 520 and the marker image 510 can be determined. For example, the image pickup device 520 is located in the negative Y-axis direction of the first coordinate system and faces the positive Y-axis direction.
The virtual reality special effect information includes a processed virtual model and model parameters corresponding to the virtual model, and the model parameters may include a second coordinate system corresponding to the virtual model. The origin of the second coordinate system may be the center of the virtual model or a point customized in the virtual model. Fig. 7 is a schematic diagram of a second coordinate system according to an embodiment of the present application. As shown in fig. 7, taking the virtual model as an example of a three-dimensional virtual model, taking the center of the bottom of the three-dimensional virtual model as the origin of the second coordinate system, establishing an X-axis and a Z-axis on the plane where the bottom of the three-dimensional virtual model is located, and establishing a Y-axis perpendicular to the planes of the X-axis and the Z-axis and corresponding to the extending direction of the three-dimensional virtual model.
In step 142, parameter information of the first coordinate system and the second coordinate system is acquired, and at least one of parameters of the image capturing apparatus or model parameters is processed based on the parameter information, so as to obtain rendering parameters.
For example, after the first coordinate system and the second coordinate system are established, parameter information of the two coordinate systems may be obtained, where the parameter information may include the orientations of the coordinate axes and the positions of the origins. The first coordinate system is compared with the second coordinate system; if they are the same, the parameter matrices corresponding to the parameters of the image capturing device and the model parameters are determined. Here, the two coordinate systems being the same means that the directions of their corresponding coordinate axes are the same. Taking a camera as an example, the parameter matrices may include a model matrix, a view matrix, a projection matrix, and a translation matrix: the parameter matrix corresponding to the model parameters may be the model matrix; the translation matrix brings the first coordinate system into coincidence with the second coordinate system; the parameter matrix corresponding to the camera extrinsic parameters may be the view matrix, which represents the placement of the virtual model under the camera's viewing angle; and the parameter matrix corresponding to the camera intrinsic parameters may be the projection matrix, which represents the perspective relationship of the virtual model's placement under the camera's viewing angle.
Illustratively, after the model matrix, the view matrix, the projection matrix, and the translation matrix are obtained, their product may be computed and used as the rendering parameter of the virtual model. Adjusted by the rendering parameter, the virtual model satisfies the perspective relationship under the camera's viewing angle, sits at the designated position on the marker image, and faces the required direction.
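A minimal sketch of forming this product follows; the composition order shown is one common column-vector convention, since the patent only states that the product of the four matrices is used:

```python
import numpy as np

def rendering_parameter(model: np.ndarray, view: np.ndarray,
                        projection: np.ndarray,
                        translation: np.ndarray) -> np.ndarray:
    """Compose the 4x4 model, view, projection, and translation matrices
    into a single rendering parameter; a model-space vertex v is then
    transformed as (projection @ view @ translation @ model) @ v."""
    return projection @ view @ translation @ model
```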
For example, if the first coordinate system is different from the second coordinate system, conversion parameters between the first coordinate system and the second coordinate system are determined. The two coordinate systems being different means that at least one pair of corresponding coordinate axes differs in direction; the conversion parameters may include a rotation angle and translation parameters. Taking fig. 6 and fig. 7 as an example, the first coordinate system in fig. 6 is rotated clockwise by 180° so that its axis directions match those of the second coordinate system in fig. 7, and is then translated according to the translation parameters; at this point the origins of the two coordinate systems coincide and the corresponding coordinate axes point in the same directions.
The conversion parameters may be expressed as one matrix or as two matrices. When two matrices are used, they may comprise a rotation matrix and a translation matrix, where a 3×3 rotation matrix adjusts the rotation of the coordinate axes and a 1×3 translation matrix controls the translation of the coordinate system. When a single matrix is used, the 3×3 rotation matrix and the 1×3 translation matrix may be combined into a 3×4 matrix, or the 3×4 matrix may be padded with a fourth row to form a 4×4 square matrix; either the 3×4 matrix or the 4×4 square matrix applies the rotation and translation of the coordinate system simultaneously.
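The snippet below sketches how a 3×3 rotation and a translation can be combined into a single 4×4 square matrix by padding the fourth row with [0, 0, 0, 1]; the rotation axis and translation values are illustrative assumptions, since the figures do not name them:

```python
import numpy as np

def to_homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Combine a 3x3 rotation matrix and a 3-vector translation into a
    4x4 square matrix that rotates and translates in one multiplication."""
    m = np.eye(4)            # fourth row is already [0, 0, 0, 1]
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m

# Illustrative 180-degree rotation (here about the Y axis) plus a translation,
# in the spirit of the Fig. 6 / Fig. 7 alignment discussed above.
theta = np.pi
rot_180 = np.array([[np.cos(theta), 0.0,  np.sin(theta)],
                    [0.0,           1.0,  0.0],
                    [-np.sin(theta), 0.0, np.cos(theta)]])
conversion = to_homogeneous(rot_180, np.array([0.0, 0.1, 0.0]))
```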
At least one of the model matrix, the view matrix, and the projection matrix is then adjusted based on the conversion parameters, the placement position of the virtual model on the marker image, the orientation of the virtual model, and similar information; the product of the adjusted model matrix, view matrix, and projection matrix is computed and used as the rendering parameter. Adjusted by the rendering parameter, the virtual model satisfies the perspective relationship under the camera's viewing angle, sits at the designated position on the marker image, and faces the required direction.
In the exemplary embodiment of the present application, the number of virtual models may be one or more. When there are multiple virtual models, they may be located in the same coordinate system or in different coordinate systems, and their dimensions may be the same or different. When multiple virtual models are located in the same coordinate system, namely the second coordinate system, the positions and postures of the virtual models relative to the marker can be adjusted jointly by adjusting the view matrix, the projection matrix, and the model matrix, and the position and posture of each individual virtual model can also be adjusted by adjusting that model's own model matrix.
In step 143, the marker and the virtual model are rendered based on the rendering parameters, resulting in a rendered picture.
Illustratively, the rendering parameters may include, but are not limited to, texture maps, animation data, and texture data of the virtual model and the marker, and the virtual model and the marker are rendered based on these rendering parameters to obtain a rendered picture.
In step 144, the rendered frame is superimposed with the scene image to obtain a special effect frame.
In the exemplary embodiment of the present application, the format of the scene image may also be checked: if the format of the scene image satisfies the overlay format, the rendered picture and the scene image are overlaid directly; if not, the format of the scene image is converted first, and the format-converted scene image is overlaid with the rendered picture. The overlay format may be any format that the special effect picture display system can process. For example, if the formats the special effect picture display system can process are jpg, jpeg, png, tiff, gif, bmp, pcx, and the like, and the scene image is in some other format, the scene image can be converted into one of the formats the system can process.
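A sketch of this format gate is shown below, assuming the Pillow library for the actual conversion and PNG as the conversion target; both choices are assumptions, not specified by the patent:

```python
from pathlib import Path
from PIL import Image  # assumption: Pillow is available for the conversion

SUPPORTED_FORMATS = {"jpg", "jpeg", "png", "tiff", "gif", "bmp", "pcx"}

def ensure_overlay_format(image_path: str) -> str:
    """Return a path to a scene image the display system can process,
    converting to PNG only when the current format is unsupported."""
    ext = Path(image_path).suffix.lstrip(".").lower()
    if ext in SUPPORTED_FORMATS:
        return image_path
    out_path = str(Path(image_path).with_suffix(".png"))
    Image.open(image_path).convert("RGB").save(out_path)  # format conversion
    return out_path
```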
Illustratively, overlaying the rendered picture with the scene image to obtain the special effect picture may include: superposing the rendered picture and the scene image to obtain a preview picture; acquiring a control instruction, where the control instruction is used for confirming the preview picture; and taking the preview picture as the special effect picture according to the control instruction.
For example, the preview interface may include a preview picture, which may be an overlay of the rendered picture and the scene image, as well as controls.
The process of acquiring the control instruction may include: detecting at least one of first trigger information or second trigger information, and generating a corresponding control instruction when at least one of them is detected, where the first trigger information is generated based on a control of the preview interface (the preview interface includes the preview picture), and the second trigger information is generated based on at least one of a hand gesture or a hand motion trajectory of the user.
Illustratively, first trigger information corresponding to a control is detected. The controls may include, but are not limited to, a zoom-in control, a zoom-out control, a move control, a rotate control, and a confirm control. When a trigger operation on any control is detected, corresponding first trigger information is generated, and a corresponding control instruction is generated based on the first trigger information; the control instructions may include, but are not limited to, zoom-in, zoom-out, move, and rotate instructions for the preview picture, confirmation instructions, and the like.
For example, the preview interface may further detect at least one of the user's hand gesture and hand motion trajectory. When the user's hand gesture or hand motion trajectory is detected to match a specified gesture or trajectory, second trigger information may be generated based on it, and a corresponding control instruction is generated based on the second trigger information; the control instructions may include, but are not limited to, zoom-in, zoom-out, move, and rotate instructions for the preview picture, confirmation instructions, and the like.
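As a rough sketch of how both trigger types can be mapped to a common set of control instructions (all control identifiers and gesture names below are hypothetical, not taken from the patent):

```python
from enum import Enum, auto

class Command(Enum):
    ZOOM_IN = auto()
    ZOOM_OUT = auto()
    MOVE = auto()
    ROTATE = auto()
    CONFIRM = auto()

# Hypothetical control ids (first trigger information) and gesture labels
# (second trigger information) mapped to control instructions.
CONTROL_TO_COMMAND = {"zoom_in": Command.ZOOM_IN, "zoom_out": Command.ZOOM_OUT,
                      "move": Command.MOVE, "rotate": Command.ROTATE,
                      "confirm": Command.CONFIRM}
GESTURE_TO_COMMAND = {"pinch_out": Command.ZOOM_IN, "pinch_in": Command.ZOOM_OUT,
                      "drag": Command.MOVE, "twist": Command.ROTATE,
                      "thumbs_up": Command.CONFIRM}

def to_command(control_id=None, gesture=None):
    """Generate the control instruction from whichever trigger was detected."""
    if control_id is not None:
        return CONTROL_TO_COMMAND.get(control_id)
    if gesture is not None:
        return GESTURE_TO_COMMAND.get(gesture)
    return None
```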
Through the control instructions, the preview picture can be controlled to execute the corresponding operations. After the user accepts the preview picture, a confirmation instruction is obtained, and upon receiving it the preview picture is taken as the special effect picture. When the generated preview picture does not meet the user's requirements, the rendering parameters can be adjusted, a new preview picture is produced from the adjusted rendering parameters, and the preview picture is confirmed again; this repeats until the preview picture meets the user's requirements, a confirmation instruction is obtained, and the preview picture is taken as the special effect picture.
In step 130, a special effect picture is displayed, the special effect picture including the virtual reality special effect information associated with the marker displayed on the scene image, with the display angle of the virtual reality special effect information adapted to the shooting angle of the image capturing device.
In an exemplary embodiment of the present application, the generated special effect picture may be displayed at a transmission frame rate greater than or equal to a preset frame number per second, where the special effect picture includes the virtual reality special effect information associated with the marker displayed on the scene image. For example, the preset frame number may be 30; setting the number of frames transmitted per unit time ensures smooth playback of the special effect picture. In addition, since the scene is shot by the image capturing device, the display angle of the virtual reality special effect information matches the shooting angle of the image capturing device, and the user views the virtual reality special effect information from that shooting angle. In this case, the user can experience the augmented reality (Augmented Reality, AR) effect directly through the screen of the terminal device and, as long as the marker remains in the captured scene picture, can observe and preview the virtual 3D model (that is, the virtual reality special effect information associated with the marker) from any angle or position.
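A minimal pacing loop in this spirit might look as follows; the callables and the 30-frame preset are placeholders for whatever the display system actually uses:

```python
import time

PRESET_FPS = 30  # preset frame number per second, as in the example above

def display_loop(render_frame, should_stop):
    """Render special effect frames at a steady rate close to the preset.
    `render_frame` draws one frame; `should_stop` signals loop exit."""
    frame_interval = 1.0 / PRESET_FPS
    while not should_stop():
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < frame_interval:
            time.sleep(frame_interval - elapsed)  # keep pacing uniform
```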
For example, a marker in the AR scene is captured by the camera of the terminal device, and the position and direction for displaying the virtual reality special effect information are determined from the marker; various 2D/3D special effects are generated according to a preset special effect generation algorithm; finally, the special effects are fused with the AR scene to obtain the special effect picture, and the fused AR scene, that is, the special effect picture, is displayed to the user, improving the realism and fidelity of the displayed content and enhancing the user's interactive experience.
In one possible implementation, the special effect picture further includes a processing control; the embodiment of the present application does not limit the type or number of processing controls, and a processing control may be used to process the special effect picture. For example, the processing control includes at least one of a sharing control, a release control, or a modification control, where the sharing control is used for sharing the special effect picture, the release control is used for releasing the special effect picture, and the modification control is used for modifying the special effect picture. Whatever the type of processing control, after the special effect picture is displayed, the method further includes: performing, when the processing control is triggered, the processing operation corresponding to that control on the special effect picture. For example, when the special effect picture includes a release control, a release operation is performed on the special effect picture.
In addition, the method can be applied to various scenarios that require special effect pictures; the application scenario is not limited. For example, when the method is applied to a video tool that generates special effect pictures, a user can produce special effect pictures that meet their requirements. The user is supported in generating and previewing customized AR special effect scene pictures with the matching video tool, and the preview effect is strictly aligned with the final display effect, achieving a "what you see is what you get" result. Moreover, after the special effect picture is obtained, operations such as publishing and sharing can be performed, improving the user's interactive experience.
According to the exemplary embodiment of the application, the scene image is captured by the image capturing device and may contain a marker; the marker may be a specific object in the scene, a marker image uploaded by the user, or a marker image provided by the special effect picture display system, which improves the convenience of marker selection. For the virtual model associated with the marker, the virtual reality special effect information is obtained by processing the virtual model, so models in more formats can be supported, giving users more choices. As a result, more lifelike special effect pictures can be presented during rendering, and the method can be applied to various scenarios that require special effect pictures, for example in video tools that generate special effect pictures, improving the user's interactive experience.
Fig. 8 is a flowchart of displaying a special effect picture according to an embodiment of the present application. As shown in fig. 8, the user inputs a marker, a virtual model, and a scene image into the special effect picture display system. The system converts the YUV-format scene image into an RGB-format image, matches the RGB-format scene image against the marker, and calculates the parameters of the image capturing device from the matched scene image; the parameters may include camera intrinsic parameters and camera extrinsic parameters. The system then performs special effect rendering on the virtual model using the camera intrinsic parameters, the camera extrinsic parameters, and the RGB-format image to obtain the special effect picture. It should be noted that, after obtaining the marker and the virtual model uploaded by the user, the system optionally processes them, for example by format conversion, as shown by the dashed steps in fig. 8.
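The YUV-to-RGB step in fig. 8 can be sketched as below, using the BT.601 conversion as one common choice (the patent does not fix a particular colorimetry, and chroma upsampling for subsampled sources such as NV21/I420 is assumed to happen beforehand):

```python
import numpy as np

def yuv_to_rgb(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Convert full-resolution Y, U, V planes (uint8, identical shape)
    to an HxWx3 RGB image via the BT.601 equations."""
    yf = y.astype(np.float32)
    uf = u.astype(np.float32) - 128.0
    vf = v.astype(np.float32) - 128.0
    r = yf + 1.402 * vf
    g = yf - 0.344136 * uf - 0.714136 * vf
    b = yf + 1.772 * uf
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0).astype(np.uint8)
```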
It should be noted that, each step of displaying the special effect picture in fig. 8 has been specifically described in steps 110 to 130, and will not be described in detail herein.
The application also provides a display device of the special effect picture. Fig. 9 is a block diagram of a display device of a special effect picture according to an embodiment of the present application. As shown in fig. 9, the display device of the special effect screen may include:
the first display module 210 is configured to display a scene image captured by the imaging device.
The determining module 220 is configured to determine a selected marker in the scene image, where the marker has associated virtual reality special effect information, and in a case where a shooting angle of the image capturing device is changed, a manner of association between the marker and the virtual reality special effect information is unchanged.
The second display module 230 is configured to display a special effect picture, where the special effect picture includes the virtual reality special effect information associated with the marker displayed on the scene image, and the display angle of the virtual reality special effect information is adapted to the shooting angle of the image capturing device.
In one possible implementation manner, the special effect picture further comprises a processing control, the processing control comprises at least one of a sharing control, a release control or a modification control, the sharing control is used for sharing the special effect picture, the release control is used for releasing the special effect picture, and the modification control is used for modifying the special effect picture; the second display module 230 is further configured to perform a processing operation corresponding to the processing control on the special effect picture when the processing control is triggered.
In one possible implementation, the determining module 220 is configured to display a plurality of model resources uploaded by the user, and determine a selected model resource of the plurality of model resources as a virtual model associated with the marker; and performing special effect processing on the virtual model to obtain virtual reality special effect information for rendering the virtual model, and associating the virtual reality special effect information with the marker.
In one possible implementation, the virtual reality special effect information includes first virtual reality special effect information and second virtual reality special effect information, and the determining module 220 is configured to obtain model parameters of the virtual model and determine dimension information of the virtual model based on the model parameters; in a case where the dimension information indicates that the virtual model is a two-dimensional virtual model, perform decoding processing and format conversion processing on the virtual model to obtain the first virtual reality special effect information; and in a case where the dimension information indicates that the virtual model is a three-dimensional virtual model, determine format information of the three-dimensional virtual model based on the model parameters and process the virtual model based on the format information to obtain the second virtual reality special effect information.
In a possible implementation manner, the determining module 220 is configured to perform decoding processing on the virtual model by using a decoder to obtain a frame image in a first image format, where the first image format is an image format output by the decoder; and converting the frame image in the first image format to obtain first virtual reality special effect information.
In one possible implementation manner, the determining module 220 is configured to determine, based on format information of the three-dimensional virtual model and standard format information, a first virtual model or a second virtual model, where the format information of the first virtual model is the same as the standard format information, and the format information of the second virtual model is different from the standard format information; under the condition that the three-dimensional virtual model is a first virtual model, analyzing the first virtual model to obtain second virtual reality special effect information or analysis error information corresponding to the first virtual model; under the condition that analysis error information is obtained, format conversion is carried out on a first virtual model corresponding to the analysis error information, and second virtual reality special effect information is obtained; and under the condition that the three-dimensional virtual model is a second virtual model, performing format conversion on the second virtual model to obtain second virtual reality special effect information.
In one possible implementation, the determining module 220 is further configured to perform feature matching on the plurality of scene images and the markers; obtaining rendering parameters according to the successfully matched scene images and the markers; rendering the marker and the virtual model based on the rendering parameters to obtain a rendering picture; and superposing the rendering picture and the scene image to obtain the special effect picture.
In one possible implementation, the determining module 220 is configured to obtain a feature matching interval, determine a plurality of first scene images and a plurality of second scene images from the plurality of scene images based on the feature matching interval, where the first scene images are determined based on the feature matching interval, and the second scene images are located between the first scene images; based on the generation sequence of the scene images, performing feature matching on the first scene image and the marker to obtain first matching information, wherein the first matching information comprises the matching degree of the first scene image and the marker and the position information of the reference feature; tracking and detecting the position of the image pickup device corresponding to the marker and the second scene image based on the first matching information to obtain second matching information, wherein the second matching information comprises moving position information of the reference feature; and carrying out feature matching on the first scene images except the first scene image and the marker based on the second matching information to obtain third matching information, wherein the third matching information comprises the matching degree of the first scene image and the marker.
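A small sketch of the interval split described above (the interval value and the plain-list representation are assumptions):

```python
def split_by_interval(frames, interval):
    """Pick every `interval`-th frame as a first scene image, to be feature
    matched against the marker directly; the frames in between are second
    scene images, handled by tracking the reference features instead."""
    first = [f for i, f in enumerate(frames) if i % interval == 0]
    second = [f for i, f in enumerate(frames) if i % interval != 0]
    return first, second
```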
In one possible implementation, the virtual reality special effect information includes model parameters of the virtual model, and the determining module 220 is further configured to obtain parameters of the image capturing device based on the successfully matched scene image and the marker; determining a first coordinate system corresponding to the marker by using parameters of the image pickup device, and determining a second coordinate system corresponding to the virtual model by using model parameters of the virtual model; and acquiring parameter information of the first coordinate system and the second coordinate system, and processing at least one of parameters of the image pickup device or model parameters based on the parameter information to obtain rendering parameters.
In a possible implementation manner, the determining module 220 is configured to determine a parameter matrix corresponding to parameters of the image capturing device and model parameters when directions of coordinate axes of the first coordinate system and the second coordinate system are the same, and obtain rendering parameters based on the parameter matrix; when the directions of the coordinate axes of the first coordinate system and the second coordinate system are different, determining conversion parameters of the first coordinate system and the second coordinate system, adjusting at least one of parameters of the image pickup device or model parameters based on the conversion parameters, determining a parameter matrix corresponding to at least one of the adjusted parameters of the image pickup device or model parameters, and obtaining rendering parameters based on the parameter matrix.
In one possible implementation, the determining module 220 is configured to superimpose the rendered image and the scene image to obtain a preview image; acquiring a control instruction, wherein the control instruction is used for confirming a preview picture; and taking the preview picture as a special effect picture according to the control instruction.
In one possible implementation, the determining module 220 is configured to detect at least one of the first trigger information or the second trigger information and generate the corresponding control instruction when at least one of them is detected, where the first trigger information is generated based on a control of a preview interface, the preview interface includes the preview picture, and the second trigger information is generated based on at least one of a hand gesture or a hand motion trajectory of the user.
According to the embodiment of the application, the virtual model is processed to obtain the virtual reality special effect information of the virtual model, so that virtual models in more formats can be supported, giving users more choices. During rendering, the marker and the virtual model are rendered using the virtual reality special effect information and the parameters of the image capturing device to obtain the special effect picture, presenting a more realistic visual effect and improving the user's interactive experience.
It should be understood that, in implementing the functions of the apparatus provided above, only the division of the above functional modules is illustrated, and in practical application, the above functional allocation may be implemented by different functional modules, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Fig. 10 is a block diagram of a terminal device according to an embodiment of the present application. The terminal device 1100 may be a portable mobile terminal such as a smartphone, a tablet computer, a notebook computer, or a desktop computer. The terminal device 1100 may also be referred to by other names such as user device, portable terminal, laptop terminal, desktop terminal, intelligent voice interaction device, or in-vehicle terminal.
In general, the terminal apparatus 1100 includes: a processor 1101 and a memory 1102.
The processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1101 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1101 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1102 is used to store at least one instruction for execution by processor 1101 to implement the method of displaying a special effects picture provided by the method embodiment shown in fig. 2 in the present application.
In some embodiments, the terminal device 1100 may further optionally include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102, and peripheral interface 1103 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1103 by buses, signal lines or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1104, a display screen 1105, a camera assembly 1106, audio circuitry 1107, and a power supply 1109.
A peripheral interface 1103 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, the memory 1102, and the peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102, and the peripheral interface 1103 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1104 communicates with a communication network and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals back into electrical signals. Optionally, the radio frequency circuit 1104 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminal devices via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1104 may further include NFC (Near Field Communication)-related circuits, which is not limited by the present application.
The display screen 1105 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display, it also has the ability to collect touch signals at or above its surface; the touch signal may be input to the processor 1101 as a control signal for processing. At this time, the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1105, disposed on the front panel of the terminal device 1100; in other embodiments, there may be at least two display screens 1105, disposed on different surfaces of the terminal device 1100 or in a folded design; in other embodiments, the display screen 1105 may be a flexible display disposed on a curved surface or a folded surface of the terminal device 1100. The display screen 1105 may even be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The display screen 1105 may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
The camera assembly 1106 is used to capture images or video. Optionally, the camera assembly 1106 includes a front camera and a rear camera. In general, the front camera is provided on the front panel of the terminal device 1100, and the rear camera is provided on the rear surface of the terminal device 1100. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting, Virtual Reality (VR) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1106 may also include a flash, which can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1107 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 1101 for processing, or inputting the electric signals to the radio frequency circuit 1104 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be provided at different portions of the terminal device 1100, respectively. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1107 may also include a headphone jack.
The power supply 1109 is used to supply power to the respective components in the terminal device 1100. The power source 1109 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power source 1109 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal device 1100 also includes one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: acceleration sensor 1111, gyroscope sensor 1112, pressure sensor 1113, optical sensor 1115, and proximity sensor 1116.
The acceleration sensor 1111 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established in the terminal apparatus 1100. For example, the acceleration sensor 1111 may be configured to detect components of gravitational acceleration in three coordinate axes. The processor 1101 may control the display screen 1105 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1111. Acceleration sensor 1111 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1112 may detect a body direction and a rotation angle of the terminal device 1100, and the gyro sensor 1112 may collect a 3D motion of the user on the terminal device 1100 in cooperation with the acceleration sensor 1111. The processor 1101 may implement the following functions based on the data collected by the gyro sensor 1112: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1113 may be disposed at a side frame of the terminal device 1100 and/or at a lower layer of the display screen 1105. When the pressure sensor 1113 is provided at a side frame of the terminal apparatus 1100, a grip signal of the terminal apparatus 1100 by a user can be detected, and the processor 1101 performs left-right hand recognition or quick operation based on the grip signal collected by the pressure sensor 1113. When the pressure sensor 1113 is disposed at the lower layer of the display screen 1105, the processor 1101 realizes control of the operability control on the UI interface according to the pressure operation of the user on the display screen 1105. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1115 is used to collect the ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the display screen 1105 based on the intensity of ambient light collected by the optical sensor 1115. Specifically, when the intensity of the ambient light is high, the display luminance of the display screen 1105 is turned up; when the ambient light intensity is low, the display luminance of the display screen 1105 is turned down. In another embodiment, the processor 1101 may also dynamically adjust the shooting parameters of the camera assembly 1106 based on the intensity of ambient light collected by the optical sensor 1115.
A proximity sensor 1116, also referred to as a distance sensor, is typically provided on the front panel of the terminal device 1100. The proximity sensor 1116 is used to collect a distance between the user and the front surface of the terminal device 1100. In one embodiment, when the proximity sensor 1116 detects that the distance between the user and the front face of the terminal device 1100 gradually decreases, the processor 1101 controls the display 1105 to switch from the bright screen state to the off screen state; when the proximity sensor 1116 detects that the distance between the user and the front surface of the terminal apparatus 1100 gradually increases, the processor 1101 controls the display screen 1105 to switch from the off-screen state to the on-screen state.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is not limiting and that terminal device 1100 may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
Fig. 11 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1200 may vary considerably in configuration or performance, and may include one or more processors (Central Processing Unit, CPU) 1201 and one or more memories 1202, where at least one program code is stored in the one or more memories 1202 and is loaded and executed by the one or more processors 1201 to implement the method for displaying a special effect picture provided by the method embodiment shown in fig. 2. Of course, the server 1200 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for input/output, and the server 1200 may also include other components for implementing device functions, which are not described herein.
In an exemplary embodiment, there is also provided a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to cause a computer to implement the method for displaying a special effect picture provided by the method embodiment shown in fig. 2.
Alternatively, the above-mentioned computer readable storage medium may be a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a Read-Only optical disk (Compact Disc Read-Only Memory, CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program or a computer program product, in which at least one computer instruction is stored, which is loaded and executed by a processor, to cause the computer to implement the method for displaying a special effect picture provided by the method embodiment shown in fig. 2.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, the virtual reality special effect information, the scene image, the special effect picture and the like related to the application are all acquired under the condition of full authorization.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The above embodiments are merely exemplary embodiments of the present application and are not intended to limit the present application, any modifications, equivalent substitutions, improvements, etc. that fall within the principles of the present application should be included in the scope of the present application.

Claims (12)

1. A method for displaying a special effect picture, the method comprising:
Displaying a scene image obtained by shooting through a shooting device;
determining a selected marker in the scene image, wherein the marker has associated virtual reality special effect information, and the association mode of the marker and the virtual reality special effect information is unchanged under the condition that the shooting angle of the shooting device is changed;
And displaying a special effect picture, wherein the special effect picture comprises virtual reality special effect information related to the marker displayed on the scene image, and the display angle of the virtual reality special effect information is matched with the shooting angle of the shooting device.
2. The method of claim 1, wherein the special effects picture further comprises a processing control, the processing control comprising at least one of a sharing control, a publishing control, or a modification control, the sharing control being used for sharing the special effects picture, the publishing control being used for publishing the special effects picture, the modification control being used for modifying the special effects picture;
after the special effect picture is displayed, the method further comprises the following steps:
and under the condition that the processing control is triggered, processing operation corresponding to the processing control is carried out on the special effect picture.
3. The method of claim 1, wherein prior to displaying the special effects picture, further comprising:
Displaying a plurality of model resources uploaded by a user, and determining a selected model resource in the plurality of model resources as a virtual model associated with the marker;
and performing special effect processing on the virtual model to obtain virtual reality special effect information for rendering the virtual model, and associating the virtual reality special effect information with the marker.
4. The method of claim 3, wherein the virtual reality effect information includes first virtual reality effect information or second virtual reality effect information, and wherein performing effect processing on the virtual model to obtain virtual reality effect information for rendering the virtual model includes:
Obtaining model parameters of the virtual model, and determining dimension information of the virtual model based on the model parameters;
Under the condition that the dimension information indicates that the virtual model is a two-dimensional virtual model, decoding processing and format conversion processing are carried out on the virtual model, so that the first virtual reality special effect information is obtained;
And under the condition that the dimension information indicates that the virtual model is a three-dimensional virtual model, determining format information of the three-dimensional virtual model based on the model parameters, and processing the virtual model based on the format information to obtain the second virtual reality special effect information.
5. The method of claim 4, wherein the decoding and format conversion of the virtual model to obtain the first virtual reality effect information comprises:
decoding the virtual model by using a decoder to obtain a frame image in a first image format, wherein the first image format is an image format output by the decoder;
And carrying out format conversion on the frame image in the first image format to obtain the first virtual reality special effect information.
6. The method of claim 4, wherein processing the virtual model based on the format information to obtain the second virtual reality effect information comprises:
Determining a first virtual model or a second virtual model based on format information of the three-dimensional virtual model and standard format information, wherein the format information of the first virtual model is the same as the standard format information, and the format information of the second virtual model is different from the standard format information;
Analyzing the first virtual model under the condition that the three-dimensional virtual model is the first virtual model to obtain the second virtual reality special effect information or analysis error information corresponding to the first virtual model; under the condition that analysis error information is obtained, carrying out format conversion on the first virtual model corresponding to the analysis error information to obtain the second virtual reality special effect information;
And under the condition that the three-dimensional virtual model is the second virtual model, performing format conversion on the second virtual model to obtain the second virtual reality special effect information.
7. The method of claim 3, wherein after the determining the selected marker in the scene image, the method further comprises:
Performing feature matching on the plurality of scene images and the marker;
obtaining rendering parameters according to the successfully matched scene images and the markers;
rendering the marker and the virtual model based on the rendering parameters to obtain a rendering picture;
And superposing the rendering picture and the scene image to obtain the special effect picture.
8. The method of claim 7, wherein the feature matching the plurality of scene images with the marker comprises:
Acquiring a feature matching interval, determining a plurality of first scene images and a plurality of second scene images from the plurality of scene images based on the feature matching interval, the first scene images being determined based on the feature matching interval, the second scene images being located between the first scene images;
Performing feature matching on a first scene image and the marker based on the generation sequence of the scene images to obtain first matching information, wherein the first matching information comprises the matching degree of the first scene image and the marker and the position information of a reference feature;
Tracking and detecting the position of the image pickup device corresponding to the marker and the second scene image based on the first matching information to obtain second matching information, wherein the second matching information comprises the moving position information of the reference feature;
And carrying out feature matching on the first scene images except the first scene image and the marker based on the second matching information to obtain third matching information, wherein the third matching information comprises the matching degree of the first scene image and the marker.
9. The method of claim 7, wherein the virtual reality special effects information includes model parameters of the virtual model, the obtaining rendering parameters from the successfully matched scene image and the marker comprises:
obtaining parameters of the camera device based on the scene image successfully matched with the marker;
Determining a first coordinate system corresponding to the marker by using parameters of the image pickup device, and determining a second coordinate system corresponding to the virtual model by using model parameters of the virtual model;
and acquiring parameter information of the first coordinate system and the second coordinate system, and processing at least one of parameters of the image pickup device or model parameters based on the parameter information to obtain rendering parameters.
10. The method according to claim 9, wherein the processing at least one of the parameters of the image capturing apparatus or the model parameters based on the parameter information to obtain rendering parameters includes:
when the directions of the coordinate axes of the first coordinate system and the second coordinate system are the same, determining a parameter matrix corresponding to the parameters of the image pickup device and the model parameters, and obtaining the rendering parameters based on the parameter matrix;
When the directions of the coordinate axes of the first coordinate system and the second coordinate system are different, determining conversion parameters of the first coordinate system and the second coordinate system, adjusting at least one of parameters of the image capturing device or model parameters based on the conversion parameters, determining a parameter matrix corresponding to at least one of the adjusted parameters of the image capturing device or model parameters, and obtaining the rendering parameters based on the parameter matrix.
11. A computer device, characterized in that it comprises a processor and a memory, in which at least one program code is stored, which is loaded and executed by the processor, so that the computer device implements the method for displaying a special effect picture according to any one of claims 1 to 10.
12. A computer program product, characterized in that at least one computer instruction is stored in the computer program product, which is loaded and executed by a processor, to cause the computer to implement the method of displaying special effects pictures according to any of claims 1-10.
CN202410171789.0A 2024-02-06 2024-02-06 Method and equipment for displaying special effect picture Pending CN118055201A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410171789.0A CN118055201A (en) 2024-02-06 2024-02-06 Method and equipment for displaying special effect picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410171789.0A CN118055201A (en) 2024-02-06 2024-02-06 Method and equipment for displaying special effect picture

Publications (1)

Publication Number Publication Date
CN118055201A true CN118055201A (en) 2024-05-17

Family

ID=91051348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410171789.0A Pending CN118055201A (en) 2024-02-06 2024-02-06 Method and equipment for displaying special effect picture

Country Status (1)

Country Link
CN (1) CN118055201A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination