CN110784662A - Method, system, device and storage medium for replacing video background - Google Patents

Method, system, device and storage medium for replacing video background

Info

Publication number
CN110784662A
CN110784662A (application number CN201910846235.5A)
Authority
CN
China
Prior art keywords
background
curve
video
combining
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910846235.5A
Other languages
Chinese (zh)
Inventor
呼伦夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lajin Zhongbo Technology Co ltd
Original Assignee
Tianmai Juyuan (hangzhou) Media Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmai Juyuan (hangzhou) Media Technology Co Ltd filed Critical Tianmai Juyuan (hangzhou) Media Technology Co Ltd
Priority to CN201910846235.5A priority Critical patent/CN110784662A/en
Publication of CN110784662A publication Critical patent/CN110784662A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method, a system, a device, and a storage medium for replacing a video background. The method comprises the following steps: after manuscript information is obtained, analyzing the manuscript information to obtain character features; generating a corresponding background image by combining the character features with a preset background database; and, after generating voice information from the manuscript information, dynamically rendering the background picture of the playing model by combining the voice information with the background image. By analyzing the manuscript information to obtain the corresponding background image and dynamically rendering the background picture of the playing model from that image during playback, the invention keeps the video background from remaining static, makes the video content richer and more intuitive, and improves video production quality. The invention can be widely applied in the field of video production.

Description

Method, system, device and storage medium for replacing video background
Technical Field
The present invention relates to the field of video production, and in particular, to a method, a system, an apparatus, and a storage medium for replacing a video background.
Background
With the development of internet technology and self-media, many video platforms and corresponding video applications have emerged, such as Toutiao, Xigua Video, and Douyin, along with many internet celebrities and self-media bloggers. These bloggers attract clicks and followers by producing videos, such as film reviews or current-affairs commentary, and publishing them on these platforms. To assist bloggers in producing videos, some existing software products automatically download relevant pictures or videos from the network according to a written manuscript, saving users the large amount of time otherwise spent collecting picture or video material and improving production efficiency. Other products, aiming to make the presented content richer and more interesting, provide playing-scene data with a virtual host and a playing background arranged in the scene. However, in currently produced videos, once the playing scene is selected the playing background remains fixed, which seriously degrades the viewing experience of the audience.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a method, a system, an apparatus, and a storage medium for replacing a video background in accordance with the manuscript content.
The first technical scheme adopted by the invention is as follows:
a method for replacing a video background comprises the following steps:
after the manuscript information is obtained, analyzing the manuscript information and obtaining character features;
generating a corresponding background image by combining the character features and a preset background database;
and, after generating voice information according to the manuscript information, dynamically rendering the background picture of the playing model by combining the voice information and the background image.
Further, the step of analyzing the manuscript information and obtaining the character features specifically comprises the following steps:
identifying noun words and/or numerical data in the manuscript information;
counting the occurrence frequency of each noun word, and acquiring a plurality of key words as the character features according to the occurrence frequency; and/or
after matching and associating the numerical data, generating array data as the character features.
Further, the step of generating a corresponding background image by combining the character features and a preset background database specifically comprises the following steps:
acquiring a corresponding picture from the preset background database as a background image according to the key words; and/or
after acquiring a corresponding statistical template from the preset background database according to the type of the array data, generating a statistical graph as a background image by combining the array data and the statistical template.
Further, the step of dynamically rendering the background picture of the playing model by combining the voice information and the background image specifically comprises the following steps:
sequentially recognizing the words in the voice information, and when a recognized word is detected to be a key word, acquiring the corresponding background image according to the key word and rendering the background picture of the playing model; and/or
sequentially recognizing the words in the voice information, and when the recognized words are detected to be array data, acquiring the corresponding statistical graph according to the array data and rendering the background picture of the playing model.
Further, a virtual host is provided on the playing model, and the method further comprises a step of switching and rendering the virtual host's mouth shape, which specifically comprises the following steps:
after analyzing the manuscript information, obtaining the pinyin of each Chinese character in the manuscript information;
after decomposing the pinyin of each Chinese character, obtaining a phoneme array corresponding to the pinyin;
fusing the phoneme arrays by adopting preset fusion curves, and obtaining a mixed curve;
combining the voice information and the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes.
Further, the phoneme array comprises an initial and a final, and the step of fusing the phoneme arrays by adopting preset fusion curves and obtaining a mixed curve specifically comprises the following steps:
acquiring an initial curve according to the type of the initial, and acquiring a final curve according to the type of the final;
fusing the initial curve and the final curve of the same phoneme array to obtain a phoneme curve;
fusing the phoneme curves of adjacent phoneme arrays to obtain the mixed curve.
Further, the step of combining the voice information and the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes, specifically comprises the following steps:
analyzing the mixed curve to obtain continuous driving values;
recognizing the words in the voice information, and matching the recognized words with the driving values;
sequentially combining the driving values with a preset mouth-shape model to drive changes of the mouth shape, thereby rendering different mouth shapes.
The second technical scheme adopted by the invention is as follows:
a system for replacing a video background, comprising:
the manuscript analyzing module is used for analyzing the manuscript information after the manuscript information is acquired and obtaining character features;
the image acquisition module is used for generating a corresponding background image by combining the character characteristics and a preset background database;
and the background switching module is used for dynamically rendering the background picture of the playing model by combining the voice information and the background image after generating the voice information according to the manuscript information.
The third technical scheme adopted by the invention is as follows:
an apparatus for replacing a video background, comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one program causes the at least one processor to implement the method described above.
The fourth technical scheme adopted by the invention is as follows:
a storage medium having stored therein processor-executable instructions for performing the method as described above when executed by a processor.
The invention has the beneficial effects that: by analyzing the manuscript information to obtain the corresponding background image and dynamically rendering the background picture of the playing model from that image during playback, the invention keeps the video background from remaining static, makes the video content richer and more intuitive, and improves video production quality.
Drawings
FIG. 1 is a flow chart of the steps of a method for replacing a video background according to the present invention;
FIG. 2 is a block diagram of a system for replacing a video background according to the present invention;
FIG. 3 is a diagram illustrating background rendering in an exemplary embodiment;
FIG. 4 is a diagram of another background rendering in an exemplary embodiment;
FIG. 5 is a graph of the phoneme curves of a single pinyin syllable in an exemplary embodiment;
FIG. 6 is a diagram illustrating the fusion of phoneme curves in an exemplary embodiment;
FIG. 7 is a diagram illustrating rendering with continuous switching of mouth shapes in an exemplary embodiment.
Detailed Description
As shown in fig. 1, the present embodiment provides a method for replacing a video background, including the following steps:
S1, after the manuscript information is obtained, analyzing the manuscript information and obtaining character features;
S2, generating a corresponding background image by combining the character features and a preset background database;
S3, after voice information is generated according to the manuscript information, dynamically rendering the background picture of the playing model by combining the voice information and the background image.
In the method of this embodiment, the manuscript information is a plain-text file, which may be downloaded from the network, for example from major news websites such as Xinhuanet, People's Daily Online, ifeng.com, or Toutiao, or may be written by the user. After the manuscript information is input, it is automatically analyzed to obtain its character features, which may be keywords in the manuscript, key numerical data, and the like; the character features are identified using existing text recognition technology. After the character features are obtained, they are combined with a preset background database to generate a corresponding background image, where the background database comprises preset background images, special-effect models (such as thunder and overcast-sky effects), and data statistics models (such as a percentage pie-chart model). The voice information may be generated from the manuscript information either manually by the user, for example by reading the manuscript aloud, or automatically by existing text-to-speech software. The background picture of the playing model is then switched and rendered by combining the voice information and the background image: for example, key words are recognized in the voice information, and recognizing a key word triggers acquisition of the corresponding background image for rendering; alternatively, background-image switching times are set according to the duration of the voice information, and the rendering is switched when each time point is reached.
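The duration-based alternative described above can be sketched as follows; `schedule_switch_points` and its inputs are illustrative names, not part of the patent:

```python
def schedule_switch_points(durations, backgrounds):
    """Compute the time points at which the playback loop should switch
    the rendered background, given the narration duration (in seconds)
    of each manuscript segment and its matching background image."""
    switch_points, t = [], 0.0
    for duration, image in zip(durations, backgrounds):
        switch_points.append((t, image))  # switch to this image at time t
        t += duration
    return switch_points

# Two segments: 3.0 s of narration about rain, then 5.5 s about the flood.
print(schedule_switch_points([3.0, 5.5], ["rain.png", "flood.png"]))
# → [(0.0, 'rain.png'), (3.0, 'flood.png')]
```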
With this method, the background of a user-produced video can be switched dynamically according to the manuscript content. This avoids an unchanging background picture, which would reduce the viewing experience; the switched background pictures add information to the video, making its content richer and more intuitive and improving production quality.
Wherein step S1 specifically comprises steps S11 to S12:
S11, identifying noun words and/or numerical data in the manuscript information;
S12, counting the occurrence frequency of each noun word, and acquiring a plurality of key words as the character features according to the occurrence frequency; and/or
after matching and associating the numerical data, generating array data as the character features.
The noun words are the nouns in the manuscript, and the numerical data include ordinary values, percentage values, array values, and the like. By counting how often each noun word occurs in the manuscript, the most frequent noun words are taken as the character features; for example, the five most frequent noun words may be used, and the specific number of words taken as character features can be configured. Matching and associating the numerical data means combining several related or comparable data items: for example, if data A is the first-quarter figure, data B the second-quarter figure, and data C the third-quarter figure, the three are associated into array data, which is then displayed comparatively, for example as a bar chart or a line chart.
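The frequency-based keyword selection described above can be sketched in Python; the function name and the top-N cutoff are illustrative, and the upstream noun recognizer is assumed:

```python
from collections import Counter

def extract_keywords(noun_words, top_n=5):
    """Count occurrences of each noun word and return the most frequent
    ones as the character features of the manuscript (hypothetical
    helper; the patent does not name its functions)."""
    counts = Counter(noun_words)
    return [word for word, _ in counts.most_common(top_n)]

# Nouns already identified by an upstream recognizer (assumed input).
nouns = ["flood", "rain", "rain", "thunder", "flood", "rain", "dam"]
print(extract_keywords(nouns, top_n=3))  # → ['rain', 'flood', 'thunder']
```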
Step S2 specifically comprises:
acquiring a corresponding picture from the preset background database as a background image according to the key words; and/or
after acquiring a corresponding statistical template from the preset background database according to the type of the array data, generating a statistical graph as a background image by combining the array data and the statistical template.
Background images are preset in the background database and can be matched to and fused with the playing model. For example, if the manuscript is a weather report and the detected key words are rainstorm, thunder, flood, and so on, the background images corresponding to those words are retrieved from the background database and dynamically rendered during video playback.
If the manuscript contains array data, the corresponding statistical template is acquired according to the type of the array data. For example, if the array describes the distribution of China's age structure, with the respective proportions of the elderly, the working-age population, and children, a pie chart is produced from the percentages, and the pie-chart background image is rendered when the voice reaches that passage.
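A minimal sketch of filling a percentage (pie-chart) template from associated array data; the function name and the (label, value) representation are assumptions, not from the patent:

```python
def build_pie_segments(array_data):
    """Turn associated array data, given as (label, value) pairs, into
    pie-chart segments with percentage shares — a stand-in for filling
    the preset statistical template with the array data."""
    total = sum(value for _, value in array_data)
    return [(label, round(100.0 * value / total, 1))
            for label, value in array_data]

# Example age-structure array associated from the manuscript (assumed data).
age_structure = [("elderly", 18), ("working-age", 64), ("children", 18)]
print(build_pie_segments(age_structure))
```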
Step S3 specifically comprises:
sequentially recognizing the words in the voice information, and when a recognized word is detected to be a key word, acquiring the corresponding background image according to the key word and rendering the background picture of the playing model; and/or
sequentially recognizing the words in the voice information, and when the recognized words are detected to be array data, acquiring the corresponding statistical graph according to the array data and rendering the background picture of the playing model.
The words in the voice information are recognized using existing speech recognition technology. When a key word is detected, the corresponding background image is acquired according to the key word and rendered; when array data are detected, the corresponding statistical graph is acquired and rendered. Thus, when the video plays and the virtual host speaks about a topic, the matching picture appears in the background, making the video content richer and the information easier for the audience to absorb. Referring to fig. 3, when agricultural issues are discussed, for example, a golden paddy field is displayed in the background; in this embodiment the playing model also includes a virtual host and a playing window, and the playing window can play picture or video material collected by the user. Referring to fig. 4, when a weather forecast mentions thunder and lightning, for example, a dynamic lightning effect is rendered.
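The keyword-triggered switching just described can be sketched as a small dispatcher; the asset paths, mapping, and callback interface are all illustrative assumptions:

```python
# Assumed mapping from an extracted key word to a background asset.
background_db = {
    "thunder": "assets/thunder_effect.mp4",
    "paddy": "assets/golden_paddy_field.png",
}

def on_word_recognized(word, render):
    """Called for each word produced by the speech recognizer; when the
    word is one of the extracted key words, fetch the matching
    background asset and hand it to the renderer callback."""
    asset = background_db.get(word)
    if asset is not None:
        render(asset)  # switch the playing model's background picture
    return asset

rendered = []
on_word_recognized("thunder", rendered.append)
print(rendered)  # → ['assets/thunder_effect.mp4']
```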
In a further preferred embodiment, a virtual host is provided on the playing model, and the method further includes a step of switching and rendering the virtual host's mouth shape, which specifically comprises steps A1 to A4:
A1, analyzing the manuscript information to obtain the pinyin of each Chinese character in the manuscript information;
A2, decomposing the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
A3, fusing the phoneme arrays by adopting preset fusion curves, and obtaining a mixed curve;
A4, combining the voice information and the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes.
The phoneme array comprises an initial and a final, and step A3 specifically comprises steps A31 to A33:
A31, acquiring an initial curve according to the type of the initial, and acquiring a final curve according to the type of the final;
A32, fusing the initial curve and the final curve of the same phoneme array to obtain a phoneme curve;
A33, fusing the phoneme curves of adjacent phoneme arrays to obtain the mixed curve.
Step A4 specifically comprises steps A41 to A43:
A41, analyzing the mixed curve to obtain continuous driving values;
A42, recognizing the words in the voice information, and matching the recognized words with the driving values;
A43, sequentially combining the driving values with a preset mouth-shape model to drive changes of the mouth shape, thereby rendering different mouth shapes.
Existing mouth-shape driving technology for virtual characters mainly checks the speaking time and drives the character's mouth to open and close during that period; the mouth shapes cannot match the spoken words, which reduces the viewing experience of the audience. In this embodiment, the pinyin of the Chinese characters in the manuscript is analyzed, and each character is used to acquire the mouth shape matching its pinyin. Because the mouth-shape change while speaking is also related to the pronunciation of the preceding character, this embodiment fuses adjacent characters by curve fusion so that the mouth-shape change is smoother, and then obtains the mouth shape of the pronounced character from the fused curve. The analysis of Chinese-character pinyin can be implemented with existing technology; see, specifically, the patent with application number 201410712164.7.
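A simplified sketch of step A2's decomposition of a pinyin syllable into an initial/final pair; the initial table and matching logic are illustrative, and a real decomposition (tone marks, zero-initial spelling rules) is delegated to the prior-art patent cited above:

```python
# Standard pinyin initials; two-letter initials (zh/ch/sh) must be
# tried before their one-letter prefixes, hence the length sort.
INITIALS = sorted(
    ["b", "p", "m", "f", "d", "t", "n", "l", "g", "k", "h",
     "j", "q", "x", "zh", "ch", "sh", "r", "z", "c", "s", "y", "w"],
    key=len, reverse=True)

def split_pinyin(syllable):
    """Split one toneless pinyin syllable into (initial, final)."""
    for ini in INITIALS:
        if syllable.startswith(ini):
            return ini, syllable[len(ini):]
    return "", syllable  # zero-initial syllable such as "an"

print(split_pinyin("zhong"))  # → ('zh', 'ong')
print(split_pinyin("an"))     # → ('', 'an')
```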
Referring to figs. 5 and 6, the mouth shapes of adjacent Chinese characters are fused as follows. The initial is pronounced before the final, but its pronunciation is shorter, so different time weights must be allocated to the initial and the final, and the corresponding initial curve and final curve are retrieved from the database. Referring to fig. 5, the initial curve and the final curve are fused into a phoneme curve according to the time weights of the initial and the final. Referring to fig. 6, after the phoneme curve of each Chinese character is obtained, the phoneme curves of adjacent characters are mixed, so that the mouth shape of the current character is controlled both by the previous character's mouth shape and by the current character's pronunciation. Specifically, the final curve of the previous character is mixed with the initial curve of the current character, and in this embodiment a Lerp function is used to fuse adjacent phoneme curves. Referring to fig. 7, this keeps the mouth-shape change smooth, avoiding transitions that are too abrupt or an unnatural open-close cycle for every character, and improves the viewing experience of the audience.
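The Lerp-based fusion of adjacent phoneme curves can be sketched as follows, assuming each curve is a list of mouth-opening samples over the overlap window; the linear 0→1 weight ramp is an assumption, since the patent only states that a Lerp function fuses the curves:

```python
def lerp(a, b, t):
    """Linear interpolation between two curve samples."""
    return a + (b - a) * t

def blend_curves(final_curve, initial_curve):
    """Blend the final (vowel) curve of the previous character into the
    initial curve of the current one, sample by sample, so the mouth
    shape transitions smoothly across the character boundary."""
    n = min(len(final_curve), len(initial_curve))
    if n == 1:
        return [initial_curve[0]]
    # Weight ramps 0 -> 1 across the overlap window (assumed ramp shape).
    return [lerp(final_curve[i], initial_curve[i], i / (n - 1))
            for i in range(n)]

prev_final = [1.0, 0.8, 0.6, 0.4]    # previous character's final curve
next_initial = [0.0, 0.2, 0.5, 0.9]  # current character's initial curve
print(blend_curves(prev_final, next_initial))
```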
As shown in fig. 2, the present embodiment further provides a system for replacing a video background, including:
the manuscript analyzing module is used for analyzing the manuscript information after the manuscript information is acquired and obtaining character features;
the image acquisition module is used for generating a corresponding background image by combining the character characteristics and a preset background database;
and the background switching module is used for dynamically rendering the background picture of the playing model by combining the voice information and the background image after generating the voice information according to the manuscript information.
The system for replacing a video background according to the embodiment of the present invention is capable of executing the method for replacing a video background according to the embodiment of the present invention, and capable of executing any combination of the steps of the embodiment of the method, and has corresponding functions and advantages of the method.
This embodiment also provides an apparatus for replacing a video background, comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one program causes the at least one processor to implement the method described above.
The video background replacing device of the embodiment can execute the video background replacing method provided by the method embodiment of the invention, can execute any combination of the implementation steps of the method embodiment, and has corresponding functions and beneficial effects of the method.
The present embodiments also provide a storage medium having stored therein processor-executable instructions, which when executed by a processor, are configured to perform the method as described above.
The storage medium of this embodiment can execute the method for replacing a video background provided by the method embodiment of the present invention, can execute any combination of the implementation steps of the method embodiment, and has corresponding functions and advantages of the method.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for replacing a video background, characterized by comprising the following steps:
after the manuscript information is obtained, analyzing the manuscript information and obtaining character features;
generating a corresponding background image by combining the character features and a preset background database;
and, after generating voice information according to the manuscript information, dynamically rendering the background picture of the playing model by combining the voice information and the background image.
2. The method for replacing a video background according to claim 1, wherein the step of analyzing the manuscript information and obtaining the character features specifically comprises the following steps:
identifying noun words and/or numerical data in the manuscript information;
counting the occurrence frequency of each noun word, and acquiring a plurality of key words as the character features according to the occurrence frequency; and/or
after matching and associating the numerical data, generating array data as the character features.
3. The method for replacing a video background according to claim 2, wherein the step of generating the corresponding background image by combining the character features and a preset background database specifically comprises the following steps:
acquiring a corresponding picture from the preset background database as a background image according to the key words; and/or
after acquiring a corresponding statistical template from the preset background database according to the type of the array data, generating a statistical graph as a background image by combining the array data and the statistical template.
4. The method for replacing a video background according to claim 3, wherein the step of dynamically rendering the background picture of the playing model by combining the voice information and the background image specifically comprises the following steps:
sequentially recognizing the words in the voice information, and when a recognized word is detected to be a key word, acquiring the corresponding background image according to the key word and rendering the background picture of the playing model; and/or
sequentially recognizing the words in the voice information, and when the recognized words are detected to be array data, acquiring the corresponding statistical graph according to the array data and rendering the background picture of the playing model.
5. The method for replacing a video background according to claim 1, wherein a virtual host is provided on the playing model, the method further comprising a step of switching and rendering the virtual host's mouth shape, which specifically comprises the following steps:
after analyzing the manuscript information, obtaining the pinyin of each Chinese character in the manuscript information;
after decomposing the pinyin of each Chinese character, obtaining a phoneme array corresponding to the pinyin;
fusing the phoneme arrays by adopting preset fusion curves, and obtaining a mixed curve;
combining the voice information and the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes.
6. The method for replacing a video background according to claim 5, wherein the phoneme array comprises an initial and a final, and the step of fusing the phoneme arrays by adopting preset fusion curves and obtaining a mixed curve specifically comprises the following steps:
acquiring an initial curve according to the type of the initial, and acquiring a final curve according to the type of the final;
fusing the initial curve and the final curve of the same phoneme array to obtain a phoneme curve;
fusing the phoneme curves of adjacent phoneme arrays to obtain the mixed curve.
7. The method for replacing a video background according to claim 6, wherein the step of combining the voice information and the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes, specifically comprises the following steps:
analyzing the mixed curve to obtain continuous driving values;
recognizing the words in the voice information, and matching the recognized words with the driving values;
sequentially combining the driving values with a preset mouth-shape model to drive changes of the mouth shape, thereby rendering different mouth shapes.
8. A system for replacing a video background, comprising:
the manuscript analyzing module is used for analyzing the manuscript information after the manuscript information is acquired and obtaining character features;
the image acquisition module is used for generating a corresponding background image by combining the character characteristics and a preset background database;
and the background switching module is used for dynamically rendering the background picture of the playing model by combining the voice information and the background image after generating the voice information according to the manuscript information.
9. An apparatus for replacing a video background, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of replacing a video background as claimed in any one of claims 1 to 7.
10. A storage medium having stored therein processor-executable instructions, which when executed by a processor, are configured to perform the method of any one of claims 1-7.
CN201910846235.5A 2019-09-09 2019-09-09 Method, system, device and storage medium for replacing video background Pending CN110784662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910846235.5A CN110784662A (en) 2019-09-09 2019-09-09 Method, system, device and storage medium for replacing video background

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910846235.5A CN110784662A (en) 2019-09-09 2019-09-09 Method, system, device and storage medium for replacing video background

Publications (1)

Publication Number Publication Date
CN110784662A true CN110784662A (en) 2020-02-11

Family

ID=69383398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910846235.5A Pending CN110784662A (en) 2019-09-09 2019-09-09 Method, system, device and storage medium for replacing video background

Country Status (1)

Country Link
CN (1) CN110784662A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111491123A (en) * 2020-04-17 2020-08-04 维沃移动通信有限公司 Video background processing method and device and electronic equipment
CN111935528A (en) * 2020-06-22 2020-11-13 北京百度网讯科技有限公司 Video generation method and device
CN112148900A (en) * 2020-09-14 2020-12-29 联想(北京)有限公司 Multimedia file display method and device
CN113422914A (en) * 2021-06-24 2021-09-21 脸萌有限公司 Video generation method, device, equipment and medium
CN113422914B (en) * 2021-06-24 2023-11-21 脸萌有限公司 Video generation method, device, equipment and medium
CN113613062A (en) * 2021-07-08 2021-11-05 广州云智达创科技有限公司 Video data processing method, apparatus, device, storage medium, and program product
CN113613062B (en) * 2021-07-08 2024-01-23 广州云智达创科技有限公司 Video data processing method, device, equipment and storage medium
CN116401359A (en) * 2023-06-09 2023-07-07 深圳前海环融联易信息科技服务有限公司 Document extraction method and device, medium and equipment

Similar Documents

Publication Publication Date Title
CN110784662A (en) Method, system, device and storage medium for replacing video background
CN111415399B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN109859298B (en) Image processing method and device, equipment and storage medium thereof
CN107516509B (en) Voice database construction method and system for news broadcast voice synthesis
CN113035199B (en) Audio processing method, device, equipment and readable storage medium
CN103544140A (en) Data processing method, display method and corresponding devices
CN110782511A (en) Method, system, apparatus and storage medium for dynamically changing avatar
CN110427809A (en) Lip reading recognition methods, device, electronic equipment and medium based on deep learning
CN110602516A (en) Information interaction method and device based on live video and electronic equipment
CN111050023A (en) Video detection method and device, terminal equipment and storage medium
CN112399269B (en) Video segmentation method, device, equipment and storage medium
CN110781346A (en) News production method, system, device and storage medium based on virtual image
JP2012181358A (en) Text display time determination device, text display system, method, and program
CN111711834A (en) Recorded broadcast interactive course generation method and device, storage medium and terminal
CN114464180A (en) Intelligent device and intelligent voice interaction method
CN113411674A (en) Video playing control method and device, electronic equipment and storage medium
CN109376145B (en) Method and device for establishing movie and television dialogue database and storage medium
CN107122393A (en) Electron album generation method and device
US20230326369A1 (en) Method and apparatus for generating sign language video, computer device, and storage medium
CN111160051B (en) Data processing method, device, electronic equipment and storage medium
CN110796718A (en) Mouth-type switching rendering method, system, device and storage medium
CN116962787A (en) Interaction method, device, equipment and storage medium based on video information
CN110781327A (en) Image searching method and device, terminal equipment and storage medium
CN116528015A (en) Digital human video generation method and device, electronic equipment and storage medium
CN114494951B (en) Video processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220929

Address after: Room 1602, 16th Floor, Building 18, Yard 6, Wenhuayuan West Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176

Applicant after: Beijing Lajin Zhongbo Technology Co.,Ltd.

Address before: 310000 room 650, building 3, No. 16, Zhuantang science and technology economic block, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: Tianmai Juyuan (Hangzhou) Media Technology Co.,Ltd.