CN113658343A - Multimedia interaction method, device and readable medium - Google Patents

Multimedia interaction method, device and readable medium Download PDF

Info

Publication number
CN113658343A
CN113658343A CN202110849226.9A
Authority
CN
China
Prior art keywords
multimedia
image
multimedia data
picture
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110849226.9A
Other languages
Chinese (zh)
Inventor
费弘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Dayue Technology Co ltd
Original Assignee
Zhuhai Dayue Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Dayue Technology Co ltd filed Critical Zhuhai Dayue Technology Co ltd
Priority to CN202110849226.9A priority Critical patent/CN113658343A/en
Publication of CN113658343A publication Critical patent/CN113658343A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486 Drag-and-drop
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/97 Determining parameters from multiple pictures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/51 Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20228 Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical solution of a multimedia interaction method, device, and readable medium, comprising the steps of: acquiring multimedia data, wherein the multimedia data comprise a user-defined combination of a picture and audio; calling an interface, according to the multimedia data, to generate an augmented reality multimedia file corresponding to the multimedia data; and generating an access identifier corresponding to the multimedia file. The beneficial effects of the invention are: multimedia content can be displayed in multiple ways; the requirements that augmented reality display places on the terminal are reduced, enhancing universality; through the WeChat applet platform, even a terminal that does not support augmented reality can display augmented reality content; and the displayed content becomes richer and more interesting.

Description

Multimedia interaction method, device and readable medium
Technical Field
The invention relates to the fields of multimedia and image processing, and in particular to a multimedia interaction method, device, and readable medium.
Background
With the development of augmented reality (AR) technology, its applications have become increasingly widespread, including album display. In the prior art, some children's books use AR technology: after the reader aims a mobile phone camera at AR-enabled content, the AR content can be seen. This approach uses existing AR technology directly, places high demands on the mobile phone, and cannot display the AR content if the phone does not support AR.
Such implementations also suffer from high cost, low efficiency, and long processing times. Because existing AR technology is used directly and the demands on the phone are high, a phone that does not support AR cannot display AR content; as a result, the function is not available on every existing camera-equipped mobile phone.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art, and provides a multimedia interaction method, device, and readable medium that can display multimedia content in multiple ways, reduce the requirements augmented reality places on the terminal, and improve universality.
The technical solution of the invention comprises a multimedia interaction method, comprising the following steps: S100, acquiring multimedia data, the multimedia data comprising a user-defined combination of pictures and audio; S200, calling an interface, according to the multimedia data, to generate an augmented reality multimedia file corresponding to the multimedia data; and S300, generating an access identifier corresponding to the multimedia file.
According to the multimedia interaction method, acquiring the multimedia data comprises: acquiring the picture through an image acquisition device, wherein the image acquisition device comprises a mobile phone camera, a camera watch, and a surveillance camera; and the audio is custom-uploaded by the user.
According to the multimedia interaction method, S200 comprises: S210, converting the picture in the multimedia data into a three-dimensional model by means of a lens array and parallax images; and S220, encoding the three-dimensional model and the audio, by encoding the audio information and jointly encoding it with the picture information, to generate augmented reality multimedia data.
According to the multimedia interaction method, S210 comprises: S211, obtaining the number of all colors in the picture by parsing with a GD library, traversing all color values in sequence according to a traversal algorithm, and converting each color value into a gray value through a gray-value formula; S212, judging the type of the acquired image, generating a gif or png image according to the image type, and saving it as a grayscale image; S213, performing viewpoint synthesis with the original image and the grayscale image, taking the disparity value at each pixel, calculating the pixel's position in the new image, assigning values with forward mapping, and obtaining a basic synthesized image; S214, analyzing the basic synthesized image, determining by reverse mapping the floating-point position of each pixel in the original image, and obtaining the final color value by linear interpolation to obtain a disparity map of a virtual viewpoint; and S215, based on the disparity map of the virtual viewpoint, performing hole filling through OpenCV, obtaining and saving an effect image, and obtaining the three-dimensional model.
The multimedia interaction method further comprises: generating, through the GD library, a plurality of images with parallax offsets, and finally combining the parallax-offset images into the three-dimensional model.
According to the multimedia interaction method, S220 comprises: S221, obtaining the three-dimensional model, and computing the audio duration through the PHP class library getid3; S222, calculating the total number of video frames from the audio duration, where total frames = duration × frames per second, and the video frame rate can be set in a user-defined way; S223, creating a temporary folder, copying the three-dimensional model image once per frame into the folder, and naming the copies sequentially; and S224, synthesizing the three-dimensional model images in the temporary folder and the audio into a video through an ffmpeg command to obtain the augmented reality multimedia file.
The multimedia interaction method further comprises: processing the uploading, sorting, and layout of the pictures and audio in a user-defined way through an interactive interface, wherein the pictures and audio can be custom-edited through the interactive interface.
The multimedia interaction method further comprises: checking the pictures in the acquired multimedia data, including checking each picture's resolution and size, and feeding back the picture check result; and checking the access identifier corresponding to the generated multimedia file and feeding back the identifier check result.
The technical solution of the present invention further includes a multimedia interaction device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements any of the method steps when executing the computer program.
The technical solution of the present invention further includes a computer-readable storage medium, in which a computer program is stored, where the computer program is characterized in that when being executed by a processor, the computer program implements any of the above method steps.
The beneficial effects of the invention are: multimedia content can be displayed in multiple ways; the requirements that augmented reality display places on the terminal are reduced, enhancing universality; through the WeChat applet platform, even a terminal that does not support augmented reality can display augmented reality content; and the displayed content becomes richer and more interesting.
Drawings
The invention is further described below with reference to the accompanying drawings and examples;
FIG. 1 shows a flow diagram according to an embodiment of the invention.
FIG. 2 is a flow chart illustrating the generation of a three-dimensional model according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating augmented reality multimedia file generation according to an embodiment of the present invention.
Fig. 4 is a detailed process flow diagram according to an embodiment of the present invention.
Fig. 5 is a diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the present preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
In the description of the invention, 'several' means one or more and 'a plurality' means two or more; terms such as 'greater than' and 'less than' are understood to exclude the stated number, while 'above', 'below', and 'within' are understood to include it.
In the description of the invention, the consecutive numbering of the method steps is only for convenience of examination and understanding; in combination with the overall technical solution and the logical relationships between the steps, the order of implementation may be adjusted without affecting the technical effect achieved.
In the description of the invention, unless otherwise explicitly defined, terms such as 'setting' should be broadly construed, and those skilled in the art can reasonably determine their specific meaning in combination with the details of the technical solution.
FIG. 1 shows a flow diagram according to an embodiment of the invention. The process comprises the following steps: s100, multimedia data are obtained, wherein the multimedia data comprise a custom combination of pictures and audios; s200, calling an interface to generate an augmented reality multimedia file corresponding to the multimedia data according to the multimedia data; s300, generating an access identifier corresponding to the multimedia file.
FIG. 2 is a flow chart illustrating the generation of a three-dimensional model according to an embodiment of the present invention. As shown in fig. 2, the process includes:
First, the number of all colors in the uploaded picture is obtained by parsing with the GD library; all color values are then traversed in sequence and converted into gray values with the gray-value formula (gray = red × 0.299 + green × 0.587 + blue × 0.114).
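The per-color conversion can be sketched as follows. The patent performs this step in PHP with the GD library; the Python below is an illustrative re-implementation using the standard BT.601 luma weights (the 0.299 red weight is assumed, as the weights must sum to 1):

```python
def to_gray(r: int, g: int, b: int) -> int:
    # BT.601 luma weights; they sum to 1.0, so white maps to 255 and black to 0
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def grayscale_image(pixels):
    """pixels: 2-D list of (r, g, b) tuples -> 2-D list of gray values."""
    return [[to_gray(r, g, b) for (r, g, b) in row] for row in pixels]
```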
Next, according to the type of the original image, a gif or png image is generated and saved as the grayscale image.
Viewpoint synthesis is then performed with the original image and the grayscale image: the disparity value is taken at each pixel, the pixel's position in the new image is computed, and values are assigned by forward mapping. The basic synthesized image obtained at this point has jagged edges.
The basic synthesized image is then analyzed: reverse mapping yields the floating-point position of each pixel in the original image, and linear interpolation gives the final color value, producing a disparity map of the virtual viewpoint. This map still has unfilled holes and clearly visible cracks and needs further processing.
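Forward and reverse mapping can be illustrated on a single scanline. This is a simplified sketch rather than the patent's PHP implementation: disparities are treated as per-pixel horizontal shifts, and the None entries produced by forward warping are the holes and cracks mentioned above:

```python
import math

def forward_warp(row, disparities):
    """Forward mapping (S213): shift each source pixel by its disparity.
    Target positions no source pixel lands on stay None; these are the
    holes/cracks visible in the basic synthesized image."""
    out = [None] * len(row)
    for x, (value, d) in enumerate(zip(row, disparities)):
        tx = x + d
        if 0 <= tx < len(out):
            out[tx] = value
    return out

def backward_warp(row, disparities):
    """Reverse mapping (S214): for each target pixel, find its floating-point
    source position and blend the two neighbours by linear interpolation."""
    out, n = [], len(row)
    for x, d in enumerate(disparities):
        sx = x - d                  # floating-point source position
        x0 = math.floor(sx)
        t = sx - x0                 # interpolation weight
        x0 = max(0, min(n - 1, x0))
        x1 = min(n - 1, x0 + 1)
        out.append((1 - t) * row[x0] + t * row[x1])
    return out
```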
Finally, the holes are filled with OpenCV on the basis of the virtual-viewpoint disparity map; after hole filling the cracks are effectively removed, and the effect image is saved to obtain the three-dimensional model.
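A minimal stand-in for the hole-filling step, assuming a simple nearest-left-neighbour fill (the patent itself delegates this step to OpenCV, e.g. its inpainting functions, which produce far better results):

```python
def fill_holes(row, background=0):
    """Fill the None holes left by forward warping with the nearest valid
    pixel to the left. A deliberately simple illustrative strategy; the
    patent uses OpenCV for this step."""
    out, last = [], background
    for value in row:
        if value is None:
            out.append(last)      # reuse the last valid pixel seen
        else:
            out.append(value)
            last = value
    return out
```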
Fig. 3 is a flowchart illustrating augmented reality multimedia file generation according to an embodiment of the present invention. As shown in fig. 3, the process is as follows:
uploading the picture and obtaining a three-dimensional model image through the three-dimensional model algorithm described above;
uploading an audio file and computing the audio duration through the PHP class library getid3;
calculating the total number of video frames from the audio duration, where total frames = duration × fps (frames per second); this scheme uses 10 frames per second;
creating a temporary folder, copying the three-dimensional model image once per frame, and naming the copies sequentially, e.g. image001.jpg, image002.jpg, ...;
running the ffmpeg command to synthesize the video, e.g.: ffmpeg -threads 2 -y -r 10 -i /tmpdir/image%03d.jpg -i voice.mp3 -absf aac_adtstoasc target.mp4, i.e. 2 threads at 10 frames per second, synthesizing the pictures in the temporary folder tmpdir with voice.mp3 into target.mp4, which is the augmented reality multimedia file.
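The frame-count arithmetic and the command above can be sketched as follows; the helper names are illustrative, and the command is only assembled here, not executed:

```python
import math

def total_frames(audio_seconds: float, fps: int = 10) -> int:
    # S222: total frames = audio duration x frames per second
    return math.ceil(audio_seconds * fps)

def build_ffmpeg_cmd(frames_dir: str, audio_path: str, out_path: str,
                     fps: int = 10, threads: int = 2) -> list:
    """Reconstruction of the synthesis command quoted above. Note that
    '-absf' is a deprecated spelling; current ffmpeg builds use
    '-bsf:a aac_adtstoasc' instead."""
    return ["ffmpeg", "-threads", str(threads), "-y", "-r", str(fps),
            "-i", f"{frames_dir}/image%03d.jpg", "-i", audio_path,
            "-absf", "aac_adtstoasc", out_path]
```

The assembled list can be handed to subprocess.run on a host that has ffmpeg installed.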
Fig. 4 is a detailed process flow diagram according to an embodiment of the present invention. With reference to fig. 2 and 3, the user starts the client through the WeChat applet and can upload a photographed or local picture to the cloud server for storage, so that the picture can be processed and modeled.
The cloud server includes:
(1) an algorithm interface for converting a picture into a three-dimensional model; and
(2) encoding of the audio information and joint encoding with the picture information.
The corresponding VR multimedia information can be generated simply by uploading a picture to the cloud server.
(1) The picture-to-three-dimensional-model algorithm is implemented with a three-dimensional display technique based on two-dimensional parallax images, and the three-dimensional effect is realized with an integral imaging method. In the technical implementation, 2 to 4 pictures with parallax offsets are generated through the GD library of PHP and finally synthesized into a three-dimensional model image.
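The generation of parallax-offset views can be sketched on a single scanline; this is a hedged stand-in for the PHP/GD step, with border pixels clamped rather than wrapped:

```python
def parallax_views(row, shifts=(-2, -1, 1, 2)):
    """Generate horizontally offset copies of one scanline, one per view,
    clamping at the borders. Illustrative stand-in for the patent's PHP/GD
    step that renders 2-4 parallax-offset pictures before merging them."""
    n = len(row)
    return [[row[min(n - 1, max(0, x + s))] for x in range(n)]
            for s in shifts]
```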
(2) The encoding of audio information and its joint encoding with picture information are implemented on the basis of FFmpeg, a multimedia processing tool. FFmpeg is a leading multimedia framework able to decode, encode, transcode, mux, demux, stream, filter, and play virtually anything that humans and machines have created. The algorithm uses FFmpeg's decoding, encoding, and transcoding capabilities to synthesize the audio and picture information.
The picture acquisition device can be a mobile phone camera, a professional camera, a camera watch, a surveillance camera, or the like.
After taking a photo with an image acquisition device such as a mobile phone, the user obtains a first picture, the initial picture. Opening the WeChat applet and clicking to upload sends the first picture to the cloud, where the picture is adapted into the three-dimensional model, generating a second picture. The user can also upload audio and click submit, generating a third picture and a corresponding two-dimensional code that combines the audio-information encoding and two-dimensional-code encoding technologies; by clicking the applet's 'one-click generation' button, the user can experience the fun of VR technology on the mobile phone.
As shown in fig. 2 to 4, the cloud server obtains the resolution and file size of the uploaded picture. If the resolution is too low (e.g. 300 × 300 or lower), the server returns a prompt that the picture resolution is too low and generation fails. If the file is too large (e.g. over 1 GB), the system likewise terminates the upload, since it would occupy too many network resources, and returns a prompt to the client that the picture is too large; generation fails.
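A sketch of the server-side check, assuming the thresholds quoted above; the function name, the exact comparison rules, and the messages are illustrative:

```python
MIN_SIDE = 300           # at or below 300 x 300 is rejected, per the text
MAX_BYTES = 1 << 30      # files over 1 GB are rejected

def check_picture(width: int, height: int, size_bytes: int) -> str:
    """Return 'ok' or a rejection message, mirroring the cloud server's
    resolution and size check described above."""
    if width <= MIN_SIDE and height <= MIN_SIDE:
        return "rejected: picture resolution too low"
    if size_bytes > MAX_BYTES:
        return "rejected: picture file too large"
    return "ok"
```

The client would then generate the two-dimensional code only when this check returns success.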
The client generates the two-dimensional code according to the check result: the code is returned only when the check result is success; otherwise the client receives only the corresponding generation-failure prompt.
The technical solution of the invention also includes the following. A user can use any mobile phone with a camera and the WeChat app. Picture and audio resources are uploaded through the mobile WeChat applet platform; pictures can be dragged for sorting, layout, and typesetting, picture processing functions such as composition and beautification can be applied, and audio can be cut online. After the user clicks the one-click generation button, the system's three-dimensional processing generates multimedia information with three-dimensional stereo and audio encoding, and a presentation identifier of VR + audio + image is then produced on the mobile phone. 2 to 4 pictures with parallax offsets are generated through the GD graphics library of PHP and finally synthesized into a three-dimensional model image; the synthesis of audio and picture information is implemented with FFmpeg decoding, encoding, and transcoding; the applet button triggers the upload of the picture/audio data to the server; and the server stores the picture/audio data.
Fig. 5 shows a diagram of an apparatus according to an embodiment of the present invention. The apparatus comprises a memory 100 and a processor 200, wherein the memory 100 stores a computer program which, when executed by the processor 200, performs: acquiring multimedia data, wherein the multimedia data comprise a user-defined combination of a picture and audio; calling an interface, according to the multimedia data, to generate an augmented reality multimedia file corresponding to the multimedia data; and generating an access identifier corresponding to the multimedia file.
It should be recognized that the method steps in embodiments of the present invention may be embodied or carried out by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The method may use standard programming techniques. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described herein, transforming the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the present invention, the transformed data represent physical and tangible objects, including particular visual depictions of physical and tangible objects produced on the display.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (10)

1. A multimedia interaction method, comprising the steps of:
S100, acquiring multimedia data, the multimedia data comprising a user-defined combination of pictures and audio;
s200, according to the multimedia data, calling an interface to generate an augmented reality multimedia file corresponding to the multimedia data;
s300, generating an access identifier corresponding to the multimedia file.
2. The method of claim 1, wherein the obtaining multimedia data comprises:
acquiring the picture through an image acquisition device, wherein the image acquisition device comprises a mobile phone camera, a camera watch, and a surveillance camera;
and the audio is uploaded by a user in a self-defined way.
3. The multimedia interaction method of claim 1, wherein the S200 comprises:
S210, converting the picture in the multimedia data into a three-dimensional model by means of a lens array and parallax images;
and S220, encoding the three-dimensional model and the audio, by encoding the audio information and jointly encoding it with the picture information, to generate augmented reality multimedia data.
4. The multimedia interaction method of claim 3, wherein the S210 comprises:
S211, obtaining the number of all colors in the picture by parsing with a GD library, traversing all color values in sequence according to a traversal algorithm, and converting each color value into a gray value through a gray-value formula;
S212, judging the type of the acquired image, generating a gif or png image according to the image type, and saving it as a grayscale image;
S213, performing viewpoint synthesis with the original image and the grayscale image, taking the disparity value at each pixel, calculating the pixel's position in the new image, assigning values with forward mapping, and obtaining a basic synthesized image;
S214, analyzing the basic synthesized image, determining by reverse mapping the floating-point position of each pixel in the original image, and obtaining the final color value by linear interpolation to obtain a disparity map of a virtual viewpoint;
and S215, based on the disparity map of the virtual viewpoint, performing hole filling through OpenCV, obtaining and saving an effect image, and obtaining the three-dimensional model.
5. The method of claim 4, further comprising:
generating, through the GD library, a plurality of images with parallax offsets, and finally combining the parallax-offset images into the three-dimensional model.
6. The multimedia interaction method of claim 4, wherein the step S220 comprises:
S221, obtaining the three-dimensional model, and computing the audio duration through the PHP class library getid3;
S222, calculating the total number of video frames from the audio duration, wherein total frames = duration × frames per second, and the video frame rate can be set in a user-defined way;
S223, creating a temporary folder, copying the three-dimensional model image once per frame into the folder, and naming the copies sequentially;
S224, synthesizing the three-dimensional model images in the temporary folder and the audio into a video through an ffmpeg command to obtain an augmented reality multimedia file.
7. The method of claim 1, further comprising:
processing the uploading, sorting, and layout of the pictures and audio in a user-defined way through an interactive interface, wherein the pictures and audio can be custom-edited through the interactive interface.
8. The method of claim 1, further comprising:
checking the pictures in the acquired multimedia data, including checking each picture's aspect ratio and size, and feeding back the picture check results;
and checking the access identifier corresponding to the generated multimedia file, and feeding back the access identifier check result.
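The picture check of claim 8 could be sketched as a small validator like the one below. The concrete limits and allowed ratios are illustrative assumptions; the patent does not specify them:

```python
def check_picture(width, height, size_bytes,
                  max_bytes=5 * 1024 * 1024,
                  allowed_ratios=((4, 3), (16, 9))):
    """Validate a picture's aspect ratio and file size and return a
    feedback dict, mirroring the 'feed back check results' step."""
    errors = []
    if size_bytes > max_bytes:
        errors.append("file too large")
    # width/height == w/h  <=>  width * h == height * w (avoids float math)
    if not any(width * h == height * w for w, h in allowed_ratios):
        errors.append("unsupported aspect ratio")
    return {"ok": not errors, "errors": errors}
```

A 1920x1080 picture passes the 16:9 check, while a 1000x1000 picture of 10 MB fails on both size and ratio, and the returned dict carries that feedback to the user.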
9. A multimedia interaction device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements the method steps of any of claims 1-8 when executing said computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 8.
CN202110849226.9A 2021-07-27 2021-07-27 Multimedia interaction method, device and readable medium Pending CN113658343A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110849226.9A CN113658343A (en) 2021-07-27 2021-07-27 Multimedia interaction method, device and readable medium


Publications (1)

Publication Number Publication Date
CN113658343A true CN113658343A (en) 2021-11-16

Family

ID=78490658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110849226.9A Pending CN113658343A (en) 2021-07-27 2021-07-27 Multimedia interaction method, device and readable medium

Country Status (1)

Country Link
CN (1) CN113658343A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140210857A1 (en) * 2013-01-28 2014-07-31 Tencent Technology (Shenzhen) Company Limited Realization method and device for two-dimensional code augmented reality
CN104967900A (en) * 2015-05-04 2015-10-07 腾讯科技(深圳)有限公司 Video generating method and video generating device
CN106023692A (en) * 2016-05-13 2016-10-12 广东博士早教科技有限公司 AR interest learning system and method based on entertainment interaction
CN106601043A (en) * 2016-11-07 2017-04-26 爱可目(北京)科技股份有限公司 Multimedia interaction education device and multimedia interaction education method based on augmented reality
CN110688003A (en) * 2019-09-09 2020-01-14 华南师范大学 Augmented reality-based electronic textbook system, display method, device and medium
CN112306438A (en) * 2020-10-29 2021-02-02 珠海市大悦科技有限公司 Multimedia implementation method and device based on AR and two-dimensional code and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WANGSHUAINAN: "Synthesizing multiple images into a video with FFmpeg", pages 1 - 3, Retrieved from the Internet <URL:https://blog.csdn.net/wangshuainan/article/details/77914508> *
一度逍遥: "Synthesizing new viewpoints using a disparity map", pages 1 - 7, Retrieved from the Internet <URL:https://www.cnblogs.com/riddick/p/7355353.html> *
YANG You; YU Mei; JIANG Gangyi: "Research progress of interactive three-dimensional video systems", Journal of Computer-Aided Design & Computer Graphics, no. 05, 15 May 2009 (2009-05-15), pages 569 - 578 *

Similar Documents

Publication Publication Date Title
CN114119849B (en) Three-dimensional scene rendering method, device and storage medium
CN110557625A (en) live virtual image broadcasting method, terminal, computer equipment and storage medium
CN110290425A (en) A kind of method for processing video frequency, device and storage medium
US8281281B1 (en) Setting level of detail transition points
US10803653B2 (en) Methods and systems for generating a surface data projection that accounts for level of detail
CN111612878B (en) Method and device for making static photo into three-dimensional effect video
US11443450B2 (en) Analyzing screen coverage of a target object
CN114708391B (en) Three-dimensional modeling method, three-dimensional modeling device, computer equipment and storage medium
US20230274400A1 (en) Automatically removing moving objects from video streams
TW201911240A (en) Image processing device and method, file generating device and method, and program
CN114972574A (en) WEB-based digital image real-time editing using latent vector stream renderer and image modification neural network
US10347037B2 (en) Methods and systems for generating and providing virtual reality data that accounts for level of detail
KR20190130556A (en) Image processing method and device
CN113658343A (en) Multimedia interaction method, device and readable medium
CN115293994A (en) Image processing method, image processing device, computer equipment and storage medium
CN109729285B (en) Fuse grid special effect generation method and device, electronic equipment and storage medium
CN114302128A (en) Video generation method and device, electronic equipment and storage medium
WO2019089477A1 (en) Methods and systems for generating and providing virtual reality data that accounts for level of detail
TWI817273B (en) Real-time multiview video conversion method and system
TWI802204B (en) Methods and systems for derived immersive tracks
CN114581611B (en) Virtual scene construction method and device
EP3979651A1 (en) Encoding and decoding immersive video
US20210274091A1 (en) Reconstruction of obscured views of captured imagery using arbitrary captured inputs
CN116188682A (en) Real-time rendering method and device based on dynamic perceptron graph
WO2023129214A1 (en) Methods and system of multiview video rendering, preparing a multiview cache, and real-time multiview video conversion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination