CN109151339B - Method for synthesizing characters in recommendation video and related products - Google Patents

Method for synthesizing characters in recommendation video and related products

Info

Publication number
CN109151339B
CN109151339B (application CN201810983414.9A)
Authority
CN
China
Prior art keywords
images
pixel frame
video
sub
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810983414.9A
Other languages
Chinese (zh)
Other versions
CN109151339A (en)
Inventor
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Youwenxin Culture Media Co., Ltd.
Original Assignee
Wuhan Youwenxin Culture Media Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Youwenxin Culture Media Co., Ltd.
Priority to CN201810983414.9A
Publication of CN109151339A
Application granted
Publication of CN109151339B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a method for synthesizing a character in a recommendation video and a related product. The method comprises the following steps: acquiring a plurality of sub-videos to be synthesized, wherein the plurality of sub-videos comprise sub-videos of the same person shot multiple times in the same scene; extracting a plurality of character images from each of the plurality of sub-videos, and selecting from them n character images whose image definition reaches a set threshold; and filtering the n character images to obtain m character images, wherein each of the m character images has a different viewing angle, and synthesizing at least two of the m character images in viewing-angle order to obtain a synthesized character video. The technical scheme provided by the application has the advantage of low cost.

Description

Method for synthesizing characters in recommendation video and related products
Technical Field
The invention relates to the technical field of cultural media, and in particular to a method for synthesizing characters in a recommendation video and a related product.
Background
Enterprises are the main actors of commercial activity in society, and many enterprises need promotion, so recommendation videos have come into being. A recommendation video, also called an enterprise promotion video, is produced by a professional film company.
Existing recommendation videos are synthesized from a plurality of video files, and the synthesis of characters depends on manually selected synthesis modes. This approach is, first, costly; second, because manual synthesis depends heavily on the operator's experience, the synthesis effect cannot be controlled and is not uniform.
Disclosure of Invention
The embodiment of the invention provides a method for synthesizing a character in a recommendation video and a related product, which can realize automatic synthesis of character video and has the advantages of low cost and a uniform synthesis effect.
In a first aspect, an embodiment of the present invention provides a method for synthesizing a character in a recommendation video, where the method includes the following steps:
acquiring a plurality of sub-videos to be synthesized, wherein the plurality of sub-videos comprise sub-videos of the same person shot multiple times in the same scene;
extracting a plurality of character images from each of the plurality of sub-videos, and selecting from them n character images whose image definition reaches a set threshold;
filtering the n character images to obtain m character images, wherein each of the m character images has a different viewing angle, and synthesizing at least two of the m character images in viewing-angle order to obtain a synthesized character video, where n is an integer greater than or equal to 5 and m is an integer satisfying 2 ≤ m ≤ n.
Optionally, the filtering the n character images to obtain m character images specifically comprises:
acquiring the viewing angles of the n character images, extracting one image for each distinct viewing angle, and forming the m character images from the one image extracted per viewing angle.
Optionally, if a viewing angle has a plurality of images, the extracting one image for each viewing angle specifically comprises:
selecting the image with the best visual effect from the plurality of images as the extracted image.
Optionally, the synthesizing at least two of the m character images in viewing-angle order to obtain a synthesized character video specifically comprises:
receiving a character-video synthesis requirement input by a user, determining the viewing-angle order according to the requirement, and synthesizing the at least two images in that order to obtain the synthesized character video.
In a second aspect, a terminal is provided, which includes: a processor, a communication unit and a display screen,
the communication unit is configured to acquire a plurality of sub-videos to be synthesized, wherein the plurality of sub-videos comprise sub-videos of the same person shot multiple times in the same scene;
the processor is configured to extract a plurality of character images from each of the plurality of sub-videos and to select from them n character images whose image definition reaches a set threshold; to filter the n character images to obtain m character images, wherein each of the m character images has a different viewing angle; and to synthesize at least two of the m character images in viewing-angle order to obtain a synthesized character video, where n is an integer greater than or equal to 5 and m is an integer satisfying 2 ≤ m ≤ n.
Optionally, the processor is specifically configured to acquire the viewing angles of the n character images, extract one image for each distinct viewing angle, and form the m character images from the one image extracted per viewing angle.
Optionally, the processor is specifically configured to, when a viewing angle has a plurality of images, select the image with the best visual effect from the plurality of images as the extracted image.
Optionally, the processor is further configured to receive a character-video synthesis requirement input by a user, determine the viewing-angle order according to the requirement, and synthesize at least two images in that order to obtain a synthesized character video.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
It can be seen that, according to the technical scheme provided by the application, after the plurality of sub-videos to be synthesized are acquired, a plurality of character images are obtained and screened to obtain m character images; the viewing-angle order is then determined according to the video requirement selected by the user, at least two images are determined according to that order, and character video synthesis is carried out in that order. This automates character video synthesis, which lowers cost and makes the synthesis effect uniform.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a terminal.
Fig. 2 is a flow chart of a method for synthesizing a character in a recommendation video.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal. As shown in fig. 1, the terminal may be an intelligent terminal, specifically a tablet computer, such as an Android tablet computer, an iOS tablet computer, a Windows Phone tablet computer, and the like. The terminal may also be a personal computer, a server, or the like. The terminal comprises: a processor 101, a display screen 104, a communication module 102, a memory 103 and an image processor.
The processor 101 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory and calling data stored in the memory, thereby monitoring or controlling the terminal as a whole. Alternatively, processor 101 may include one or more processing units; optionally, the processor 101 may integrate an application processor, a modem processor, and an artificial intelligence chip, wherein the application processor mainly processes an operating system, a user interface, an application program, and the like.
Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The communication module can be used to receive and send information. Typically, the communication module includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the communication module can communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, such as a mobile communication protocol or a short-range communication protocol (including but not limited to Bluetooth, Wi-Fi, etc.).
The image processor may be specifically configured to perform relevant processing on an image (e.g., a video), and in practical applications, the image processor may be integrated into the processor 101.
The display screen may be used to display advertisements, and may specifically be an LCD display screen, but may also be other forms of display screens, such as a touch display screen.
Referring to fig. 2, fig. 2 provides a method for synthesizing a character in a recommendation video. The method is performed by the terminal shown in fig. 1 and, as shown in fig. 2, comprises the following steps:
step S201, acquiring a plurality of sub-videos to be synthesized, wherein the plurality of sub-videos comprise sub-videos of the same person shot multiple times in the same scene;
step S202, extracting a plurality of character images from each of the plurality of sub-videos, and selecting from them n character images whose image definition reaches a set threshold;
step S203, filtering the n character images to obtain m character images, wherein each of the m character images has a different viewing angle, and synthesizing at least two of the m character images in viewing-angle order to obtain a synthesized character video. Here, n is an integer greater than or equal to 5, and m is an integer satisfying 2 ≤ m ≤ n.
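The patent does not specify how image definition is measured. As a minimal sketch of the threshold selection in step S202, assuming the common variance-of-Laplacian focus measure with OpenCV (the function names and the threshold value 100.0 are illustrative, not from the patent):

    import cv2
    import numpy as np

    def sharpness(image: np.ndarray) -> float:
        """Focus measure: variance of the Laplacian (higher = sharper)."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        return float(cv2.Laplacian(gray, cv2.CV_64F).var())

    def select_sharp_images(character_images, threshold=100.0):
        """Keep only the character images whose definition reaches the set threshold."""
        return [img for img in character_images if sharpness(img) >= threshold]

Any focus measure could stand in here; the method only requires that definition reach a set threshold.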
The filtering the n character images to obtain m character images may specifically comprise:
acquiring the viewing angles of the n character images, extracting one image for each distinct viewing angle, and forming the m character images from the one image extracted per viewing angle.
If a viewing angle has a plurality of images, the image with the best visual effect is selected from them as the extracted image. "Best visual effect" here generally means the highest definition; if blurring has been determined as the goal, however, the best visual effect may instead correspond to the strongest blur.
Such viewing angles include, but are not limited to: a front view, a back view, a left view and a right view. Their order may differ according to the character video to be synthesized: for example, if the synthesized character video requires turning from front to back by a left turn, the order may be front view - left view - back view; if it requires turning from back to front by a right turn, the order may be back view - right view - front view.
The viewing-angle order may be determined by a character-video synthesis requirement, which may be selected by the user, and the at least two of the m character images may in turn be determined by the viewing-angle order: once the order is determined, all the images falling within that order constitute the at least two images.
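As a hypothetical sketch of this requirement-to-order mapping (the requirement keys and angle labels are illustrative, not from the patent):

    # Hypothetical mapping from a user-selected synthesis requirement to a
    # viewing-angle order, following the turning examples above.
    ANGLE_ORDERS = {
        "front_to_back_left_turn": ["front", "left", "back"],
        "back_to_front_right_turn": ["back", "right", "front"],
    }

    def images_in_angle_order(images_by_angle: dict, requirement: str) -> list:
        """Select at least two of the m character images in viewing-angle order."""
        order = ANGLE_ORDERS[requirement]
        return [images_by_angle[a] for a in order if a in images_by_angle]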
According to the technical scheme, after the plurality of sub-videos to be synthesized are acquired, a plurality of character images are obtained and screened to obtain the m character images; the viewing-angle order is then determined according to the video requirement selected by the user, at least two images are determined according to that order, and the character video is synthesized in that order.
The character video synthesis may specifically comprise, for example, combining three images: a front-view image, a left-view image and a back-view image. The three images are first superimposed; at the front-view moment the front-view image is made solid and the left-view and back-view images are made transparent; at the left-view moment the left-view image is made solid and the front-view and back-view images are made transparent; similarly, at the back-view moment the back-view image is made solid and the other two images are made transparent. The three images are thereby synthesized into one video.
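A minimal sketch of this solid/transparent composition, assuming equally sized NumPy image arrays and treating each transition as a linear cross-fade (the frame counts are illustrative):

    import numpy as np

    def compose_turn_video(images, hold_frames=30, fade_frames=10):
        """Each view image is fully solid during its own interval and is
        cross-faded (made transparent) into the next at the transitions."""
        frames = []
        for i, img in enumerate(images):
            frames += [img.copy()] * hold_frames      # hold the current view solid
            if i + 1 < len(images):                   # fade into the next view
                cur = img.astype(np.float32)
                nxt = images[i + 1].astype(np.float32)
                for t in range(1, fade_frames + 1):
                    alpha = t / fade_frames
                    frames.append(((1 - alpha) * cur + alpha * nxt).astype(np.uint8))
        return frames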
Definition refers to the clarity of each fine detail and of its boundary in the image.
The method for determining the range of the person may specifically include:
determining a first range of the human face through a face recognition algorithm, and setting, with the first range as a reference, one chest region (a rectangle), a left-hand region (a rectangle), a right-hand region (a rectangle) and a two-leg region (a rectangle). The RGB value of each pixel point in the chest region is extracted, the number of identical RGB values is counted, and the first RGB value with the largest count is determined. Adjacent pixel points having the first RGB value are connected to obtain a first pixel frame. If the first pixel frame is closed, the region within it is determined to be the trimmed chest region. If the first pixel frame is discontinuous, the gap distance of each broken segment is determined; if the gap distance is smaller than a set threshold and the RGB values on both sides of each break are the same, the broken segments are connected with straight lines to obtain a closed second pixel frame, which is taken as the trimmed chest region.
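A rough sketch of this chest-region pruning, assuming NumPy and OpenCV; morphological closing stands in for the "connect broken segments with straight lines" step, so this is a simplification of the procedure above rather than a faithful implementation:

    import cv2
    import numpy as np

    def prune_chest_region(region: np.ndarray, gap_threshold: int = 5) -> np.ndarray:
        """Mask of the most frequent RGB value in the region, with small gaps
        in its outline (the 'pixel frame') closed."""
        pixels = region.reshape(-1, 3)
        values, counts = np.unique(pixels, axis=0, return_counts=True)
        dominant = values[counts.argmax()]            # the first (most frequent) RGB value
        mask = np.all(region == dominant, axis=-1).astype(np.uint8) * 255
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                           (gap_threshold, gap_threshold))
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)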
The three limb regions are the left-hand region, the right-hand region and the two-leg region. For the two-leg region, the trimmed two-leg region can be obtained with the same pruning method as for the chest region.
the left-hand area pruning method specifically comprises the following steps:
The RGB value of each pixel point in the left-hand region is extracted, the number of identical RGB values is counted, and the first RGB value with the largest count and the second RGB value with the second-largest count are determined. Adjacent pixel points having the first RGB value are connected to obtain a first pixel frame, and adjacent pixel points having the second RGB value are connected to obtain a second pixel frame. If the first pixel frame and the second pixel frame are both closed and connected with each other, they are determined to enclose the trimmed left-hand region. The trimmed right-hand region is obtained in the same way.
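In the same spirit, a hypothetical sketch of the left-hand pruning, approximating "both pixel frames closed and connected" by requiring the union of the two color masks to form a single connected component:

    import cv2
    import numpy as np

    def prune_hand_region(region: np.ndarray) -> np.ndarray:
        """Union of the masks for the two most frequent RGB values, kept
        only if it forms one connected component."""
        pixels = region.reshape(-1, 3)
        values, counts = np.unique(pixels, axis=0, return_counts=True)
        top_two = values[np.argsort(counts)[-2:]]     # first and second RGB values
        masks = [np.all(region == v, axis=-1) for v in top_two]
        union = (masks[0] | masks[1]).astype(np.uint8) * 255
        n_labels, _ = cv2.connectedComponents(union)  # background + 1 region = 2 labels
        return union if n_labels == 2 else np.zeros_like(union)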
Determining the range of the main character only requires confirming an approximate range: when the recommendation video is cut, only the approximate range of the main character needs to be determined. A finer determination is unnecessary because the scene and the person are the same across the source footage, so the approximate range can be processed directly.
Referring to fig. 3, fig. 3 provides a terminal including: a processor 301, a communication unit 302 and a display 303,
the communication unit is configured to acquire a plurality of sub-videos to be synthesized, wherein the plurality of sub-videos comprise sub-videos of the same person shot multiple times in the same scene;
the processor is configured to extract a plurality of character images from each of the plurality of sub-videos and to select from them n character images whose image definition reaches a set threshold; to filter the n character images to obtain m character images, wherein each of the m character images has a different viewing angle; and to synthesize at least two of the m character images in viewing-angle order to obtain a synthesized character video, where n is an integer greater than or equal to 5 and m is an integer satisfying 2 ≤ m ≤ n.
Embodiments of the present invention also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the methods for synthesizing a character in a recommendation video as described in the above method embodiments.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods for synthesizing a character in a recommendation video as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method for synthesizing characters in a recommendation video, the method comprising the following steps:
acquiring a plurality of sub-videos to be synthesized, wherein the plurality of sub-videos comprise sub-videos of the same person shot multiple times in the same scene;
extracting a plurality of character images from each of the plurality of sub-videos, and selecting from them n character images whose image definition reaches a set threshold;
filtering the n character images to obtain m character images, wherein each of the m character images has a different viewing angle, and synthesizing at least two of the m character images in viewing-angle order to obtain a synthesized character video, n being an integer greater than or equal to 5, m being an integer greater than or equal to 2, and m being less than or equal to n;
wherein a method for determining the range of the person specifically comprises the following steps:
determining a first range of a human face through a face recognition algorithm; setting, with the first range as a reference, one chest region, a left-hand region, a right-hand region and a two-leg region; extracting the RGB value of each pixel point in the chest region, counting the number of identical RGB values, and determining the first RGB value with the largest count; connecting adjacent pixel points having the first RGB value to obtain a first pixel frame; if the first pixel frame is closed, determining that the region within the first pixel frame is the trimmed chest region; if the first pixel frame is discontinuous, determining the gap distance of each broken segment of the discontinuous pixel frame, and, if the gap distance is smaller than a set threshold and the RGB values on both sides of each break are the same, connecting the broken segments with straight lines to obtain a closed second pixel frame, the second pixel frame being the trimmed chest region;
dividing the three limb regions into the left-hand region, the right-hand region and the two-leg region, and obtaining the trimmed two-leg region by applying the chest-region pruning method to the two-leg region;
the left-hand region pruning method specifically comprising the following steps:
extracting the RGB value of each pixel point in the left-hand region, counting the number of identical RGB values, and determining the first RGB value with the largest count and the second RGB value with the second-largest count; connecting adjacent pixel points having the first RGB value to obtain a first pixel frame, and connecting adjacent pixel points having the second RGB value to obtain a second pixel frame; and, if the first pixel frame and the second pixel frame are both closed and connected with each other, determining that they enclose the trimmed left-hand region, the trimmed right-hand region being obtained in the same way.
2. The method according to claim 1, wherein the filtering the n character images to obtain m character images specifically comprises:
acquiring the viewing angles of the n character images, extracting one image for each distinct viewing angle, and forming the m character images from the one image extracted per viewing angle.
3. The method according to claim 2, wherein, if a viewing angle has a plurality of images, the extracting one image for each viewing angle specifically comprises:
selecting the image with the best visual effect from the plurality of images as the extracted image.
4. The method according to claim 1, wherein the synthesizing at least two of the m character images in viewing-angle order to obtain a synthesized character video specifically comprises:
receiving a character-video synthesis requirement input by a user, determining the viewing-angle order according to the requirement, and synthesizing the at least two images in that order to obtain the synthesized character video.
5. A terminal, the terminal comprising: a processor, a communication unit and a display screen, characterized in that,
the communication unit is configured to acquire a plurality of sub-videos to be synthesized, wherein the plurality of sub-videos comprise sub-videos of the same person shot multiple times in the same scene;
the processor is configured to extract a plurality of character images from each of the plurality of sub-videos and to select from them n character images whose image definition reaches a set threshold; to filter the n character images to obtain m character images, wherein each of the m character images has a different viewing angle; and to synthesize at least two of the m character images in viewing-angle order to obtain a synthesized character video, n being an integer greater than or equal to 5, m being an integer greater than or equal to 2, and m being less than or equal to n;
wherein a method for determining the range of the person specifically comprises the following steps:
determining a first range of a human face through a face recognition algorithm; setting, with the first range as a reference, one chest region, a left-hand region, a right-hand region and a two-leg region; extracting the RGB value of each pixel point in the chest region, counting the number of identical RGB values, and determining the first RGB value with the largest count; connecting adjacent pixel points having the first RGB value to obtain a first pixel frame; if the first pixel frame is closed, determining that the region within the first pixel frame is the trimmed chest region; if the first pixel frame is discontinuous, determining the gap distance of each broken segment of the discontinuous pixel frame, and, if the gap distance is smaller than a set threshold and the RGB values on both sides of each break are the same, connecting the broken segments with straight lines to obtain a closed second pixel frame, the second pixel frame being the trimmed chest region;
dividing the three limb regions into the left-hand region, the right-hand region and the two-leg region, and obtaining the trimmed two-leg region by applying the chest-region pruning method to the two-leg region;
the left-hand region pruning method specifically comprising the following steps:
extracting the RGB value of each pixel point in the left-hand region, counting the number of identical RGB values, and determining the first RGB value with the largest count and the second RGB value with the second-largest count; connecting adjacent pixel points having the first RGB value to obtain a first pixel frame, and connecting adjacent pixel points having the second RGB value to obtain a second pixel frame; and, if the first pixel frame and the second pixel frame are both closed and connected with each other, determining that they enclose the trimmed left-hand region, the trimmed right-hand region being obtained in the same way.
6. The terminal of claim 5,
the processor is specifically configured to acquire the viewing angles of the n character images, extract one image for each distinct viewing angle, and form the m character images from the one image extracted per viewing angle.
7. The terminal of claim 6,
the processor is specifically configured to, when a viewing angle has a plurality of images, select the image with the best visual effect from the plurality of images as the extracted image.
8. The terminal of claim 5,
the processor is further configured to receive a character-video synthesis requirement input by a user, determine the viewing-angle order according to the requirement, and synthesize at least two images in that order to obtain a synthesized character video.
9. The terminal according to any one of claims 5-8, wherein
the terminal is a tablet computer or a personal computer.
10. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to perform the method as provided in any one of claims 1-4.
CN201810983414.9A 2018-08-27 2018-08-27 Method for synthesizing characters in recommendation video and related products Active CN109151339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810983414.9A CN109151339B (en) 2018-08-27 2018-08-27 Method for synthesizing characters in recommendation video and related products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810983414.9A CN109151339B (en) 2018-08-27 2018-08-27 Method for synthesizing characters in recommendation video and related products

Publications (2)

Publication Number Publication Date
CN109151339A CN109151339A (en) 2019-01-04
CN109151339B true CN109151339B (en) 2021-08-13

Family

ID=64828444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810983414.9A Active CN109151339B (en) 2018-08-27 2018-08-27 Method for synthesizing characters in recommendation video and related products

Country Status (1)

Country Link
CN (1) CN109151339B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101819663A (en) * 2009-08-27 2010-09-01 珠海琳琅信息科技有限公司 System for virtually trying on clothes
US20110085777A1 (en) * 2009-10-12 2011-04-14 StyleCaster Media Group LLC Systems and Methods for Generating Compact Multiangle Video
US10778905B2 (en) * 2011-06-01 2020-09-15 ORB Reality LLC Surround video recording
KR20150072231A (en) * 2013-12-19 2015-06-29 한국전자통신연구원 Apparatus and method for providing muti angle view service
CN105791980B (en) * 2016-02-29 2018-09-14 哈尔滨超凡视觉科技有限公司 Films and television programs renovation method based on increase resolution
CN108156429A (en) * 2018-01-09 2018-06-12 罗建平 Panoramic shooting system and the method that panoramic shooting system is checked using web browser

Also Published As

Publication number Publication date
CN109151339A (en) 2019-01-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210728

Address after: 430090 South Taizi Lake innovation Valley Qidi Xiexin science and Technology Innovation Park, Wuhan Economic and Technological Development Zone, Wuhan City, Hubei Province (qdxx-q20105)

Applicant after: Wuhan Youwenxin Culture Media Co., Ltd.

Address before: 518003 4K, building B, jinshanghua, No.45, Jinlian Road, Huangbei street, Luohu District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN YIDA CULTURE MEDIA Co.,Ltd.

GR01 Patent grant