CN110971840B - Video mapping method and device, computer equipment and storage medium - Google Patents

Video mapping method and device, computer equipment and storage medium

Info

Publication number
CN110971840B
Authority
CN
China
Prior art keywords
video
map
mapping
image
source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911239406.4A
Other languages
Chinese (zh)
Other versions
CN110971840A (en)
Inventor
刘春宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201911239406.4A
Publication of CN110971840A
Application granted
Publication of CN110971840B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure discloses a video mapping method and device, computer equipment and a storage medium, and belongs to the technical field of video processing. The method comprises: obtaining at least one map image used for mapping a source video, wherein each map image is used for mapping to a source image frame of the source video; generating a map video comprising a plurality of map image frames based on the at least one map image and video parameters of the source video, wherein each of at least one of the plurality of map image frames carries information of one or more map images; and sending the map video to a terminal, so that the terminal maps the source video based on the information of the at least one map image carried by the map video. The method and the device reduce the loading load of the terminal and reduce the influence of the loading process on the performance of the terminal.

Description

Video mapping method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video mapping method and apparatus, a computer device, and a storage medium.
Background
With the development of science and technology, application programs provide more and more functions. For example, in order to improve the visual experience of the user, a video selected by the user can be subjected to mapping processing in the terminal according to a map image provided by the application service provider, so as to modify the image frames of the video with the map image. Video mapping processing means synthesizing a map image with an image frame of the video, so that the synthesized image frame displays the map image at the map position and displays the original content of the image frame at non-map positions.
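As a concrete illustration of the synthesis described above, the following is a minimal sketch that composites a map image onto a source image frame, assuming the map image arrives as an RGBA array whose transparent pixels mark the non-map positions; the function name and array layout are assumptions for the example, not taken from the patent.

```python
import numpy as np

def composite(frame: np.ndarray, map_image: np.ndarray) -> np.ndarray:
    """frame: H x W x 3 RGB source image frame.
    map_image: H x W x 4 RGBA map image aligned with the frame."""
    rgb = map_image[..., :3].astype(np.float32)
    alpha = map_image[..., 3:].astype(np.float32) / 255.0
    # At map positions (alpha > 0) display the map image; at non-map
    # positions (alpha == 0) keep the original frame content.
    out = alpha * rgb + (1.0 - alpha) * frame.astype(np.float32)
    return out.astype(frame.dtype)
```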
In the related art, when performing the mapping process on the video, the terminal needs to load the mapping image and the image frame of the video in real time so as to be able to combine the mapping image with the image frame.
However, the loading method may affect the performance of the terminal.
Disclosure of Invention
The embodiment of the disclosure provides a video mapping method and device, a computer device and a storage medium, which can solve the problem that the loading mode during mapping processing of videos affects the performance of a terminal. The technical scheme is as follows:
in a first aspect, a video mapping method is provided, and is applied to a server, where the video mapping method includes:
obtaining at least one map image used for mapping a source video, wherein each map image is used for mapping to a source image frame of the source video;
generating a map video comprising a plurality of map image frames based on the at least one map image and video parameters of the source video, wherein each of at least one of the plurality of map image frames carries information of one or more map images;
and sending the mapping video to a terminal so that the terminal can map the source video based on the information of at least one mapping image carried by the mapping video.
Optionally, the generating a map video comprising a plurality of map image frames based on the at least one map image and video parameters of the source video comprises:
determining, among a plurality of source image frames comprised by the source video, a frame order of each target source image frame to which a map image is to be mapped;
generating the map video based on the at least one map image and a frame order of each target source image frame.
Optionally, the generating the map video based on the at least one map image and the frame order of each target source image frame comprises:
determining the frame sequence of a map image frame carrying information of a map image mapped to any target source image frame in the map video based on the frame sequence of the target source image frame;
generating the map video based on the at least one map image and a frame order of each map image frame.
Optionally, the generating the map video based on the at least one map image and the frame order of each target source image frame comprises:
determining a relative playing time stamp of any target source image frame relative to the starting playing time of the source video based on the frame sequence of each target source image frame and the frame rate of the source video;
determining, based on the relative playing time stamp of any target source image frame, a relative playing time stamp, relative to the starting playing time of the map video, of the map image frame carrying information of the map image to be mapped to that target source image frame;
generating the map video based on the at least one map image and the relative play timestamps of each map image frame.
In a second aspect, a video mapping method is provided, and is applied to a terminal, where the video mapping method includes:
obtaining a map video comprising a plurality of map image frames, wherein each of at least one of the plurality of map image frames carries information of one or more map images;
acquiring a source video comprising a plurality of source image frames;
and carrying out mapping processing on a source image frame in the source video based on the information of at least one mapping image carried by the mapping video.
Optionally, the performing mapping processing on the source image frame in the source video based on the information of at least one mapping image carried by the mapping video includes:
performing mapping processing on the (i + m + n×T)th source image frame among the plurality of source image frames based on the information of the map image carried by the ith map image frame among the plurality of map image frames, wherein i and T are both positive integers less than or equal to the total number of map image frames in the map video, m and n are integers greater than or equal to 0, and m is less than or equal to the total number of source image frames in the source video;
or, performing mapping processing on the source image frame whose relative playing time stamp is T1 + T2 + n×T among the plurality of source image frames based on the information of the map image carried by the map image frame whose relative playing time stamp is T1 among the plurality of map image frames, where the relative playing time stamp of any image frame is the time difference between the playing time of that image frame and the starting playing time of the video where it is located, T1 and T are both positive numbers less than or equal to the total duration of the map video, T2 is greater than or equal to 0, and T2 is less than or equal to the total playing duration of the source video.
Optionally, the performing mapping processing on a source image frame in the source video based on information of at least one mapping image carried by the mapping video includes:
acquiring a reference position of each image pixel point used for representing the map image in any map image frame in the map video;
and updating the pixel values of the image pixel points at any reference position in the source image frame corresponding to any map image frame based on the pixel values of the image pixel points at any reference position in any map image frame.
Optionally, the resolution of any source image frame is the same as the resolution of the corresponding map image frame.
In a third aspect, a video mapping apparatus is provided, which is applied to a server, and includes:
the system comprises an acquisition module, a mapping module and a mapping module, wherein the acquisition module is used for acquiring at least one mapping image used for mapping a source video, and each mapping image is used for mapping to a source image frame of the source video;
a generating module, configured to generate a map video including multiple map image frames based on the at least one map image and video parameters of the source video, where in at least one of the multiple map image frames, each map image frame carries information of one or more map images;
and the sending module is used for sending the mapping video to a terminal so that the terminal can map the source video based on the information of at least one mapping image carried by the mapping video.
Optionally, the generating module includes:
a determination sub-module, configured to determine, among a plurality of source image frames comprised by the source video, a frame order of each target source image frame to which a map image is to be mapped;
a generation sub-module for generating the map video based on the at least one map image and a frame order of each target source image frame.
Optionally, the generating sub-module is specifically configured to:
determining the frame sequence of a map image frame carrying information of a map image mapped to any target source image frame in the map video based on the frame sequence of the target source image frame;
generating the map video based on the at least one map image and a frame order of each map image frame.
Optionally, the generating sub-module is specifically configured to:
determining a relative playing time stamp of any target source image frame relative to the starting playing time of the source video based on the frame sequence of each target source image frame and the frame rate of the source video;
determining, based on the relative playing time stamp of any target source image frame, a relative playing time stamp, relative to the starting playing time of the map video, of the map image frame carrying information of the map image to be mapped to that target source image frame;
generating the mapping video based on the at least one mapping image and the relative play time stamp of each mapping image frame.
In a fourth aspect, a video mapping apparatus is provided, which is applied to a terminal, and includes:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a map video comprising a plurality of map image frames, and each map image frame carries information of one or more map images in at least one map image frame in the plurality of map image frames;
the acquisition module is further used for acquiring a source video comprising a plurality of source image frames;
and the processing module is used for carrying out mapping processing on a source image frame in the source video based on the information of at least one mapping image carried by the mapping video.
Optionally, the processing module is specifically configured to:
performing mapping processing on the (i + m + n×T)th source image frame among the plurality of source image frames based on the information of the map image carried by the ith map image frame among the plurality of map image frames, wherein i and T are both positive integers less than or equal to the total number of map image frames in the map video, m and n are integers greater than or equal to 0, and m is less than or equal to the total number of source image frames in the source video;
or, performing mapping processing on the source image frame whose relative playing time stamp is T1 + T2 + n×T among the plurality of source image frames based on the information of the map image carried by the map image frame whose relative playing time stamp is T1 among the plurality of map image frames, where the relative playing time stamp of any image frame is the time difference between the playing time of that image frame and the starting playing time of the video where it is located, T1 and T are both positive numbers less than or equal to the total duration of the map video, T2 is greater than or equal to 0, and T2 is less than or equal to the total playing duration of the source video.
Optionally, the processing module is specifically configured to:
acquiring a reference position of each image pixel point used for representing the map image in any map image frame in the map video;
and updating the pixel values of the image pixel points at any reference position in the source image frame corresponding to any map image frame based on the pixel values of the image pixel points at any reference position in any map image frame.
Optionally, any source image frame is at the same resolution as the corresponding map image frame.
In a fifth aspect, a computer-readable storage medium is provided, having instructions stored therein which, when executed on a server, cause the server to perform the video mapping method of any of the first aspects.
In a sixth aspect, a computer-readable storage medium is provided, having instructions stored therein which, when executed on a terminal, cause the terminal to perform the video mapping method of any of the second aspects.
In a seventh aspect, a computer device is provided, which includes a memory and a processor, where the memory stores a computer program operable on the processor, and the processor executes the computer program to implement the video mapping method according to any one of the first aspect.
In an eighth aspect, there is provided a computer device, comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and the processor executes the computer program to implement the video mapping method according to any one of the second aspect.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
According to the video mapping method and device, the computer equipment and the computer-readable storage medium, a map video comprising a plurality of map image frames is generated based on at least one map image used for mapping the source video and the video parameters of the source video, and the map video is sent to the terminal, so that the terminal can map the source video based on the information of the map images in the map video. Because the terminal only needs to load the source video and the map video when mapping the source video, the loading load of the terminal is reduced, and the influence of the loading process on the performance of the terminal is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings required for the description of the embodiments are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present disclosure, and those skilled in the art may obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment related to a video mapping method provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart of a video mapping method provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart of another video mapping method provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of yet another video mapping method provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of a method for generating a map video according to an embodiment of the present disclosure;
FIG. 6 is a flow chart of another method for generating a map video according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a video mapping apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a generating module according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of another video mapping apparatus provided in an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of a server according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment related to a video mapping method provided by an embodiment of the present disclosure. The implementation environment may include: server 01 and terminal 02.
The server 01 may be a server, a server cluster composed of several servers, or a cloud computing service center. The terminal 02 has a function of processing video, and the terminal 02 is provided with a display screen. Alternatively, the terminal 02 may be installed with a video processing application, and the terminal 02 may perform mapping processing on the video by using the video processing application and play the video after the mapping processing. In an implementation manner, the terminal 02 may be a smart phone, a computer, a multimedia player with a display screen, a wearable device with a display screen, or other various terminals.
The server 01 and the terminal 02 may establish a connection through a wired network or a wireless network. The server 01 may generate a map video from a map image used for map processing of a source image frame in a source video, and provide the map video to the terminal 02. The terminal 02 can obtain the mapping video and perform mapping processing on the source video according to the mapping video. When the terminal 02 performs mapping processing on the source video, the terminal loads the mapping video and the source video, and the videos are easy to parse, so that the loading load of the terminal is reduced, the occupation of the memory of the terminal in the loading process is correspondingly reduced, and the influence of the loading process on the performance of the terminal can be reduced.
The embodiment of the disclosure provides a video mapping method, which can be applied to a server. Fig. 2 is a flowchart of a video mapping method provided by an embodiment of the present disclosure, and as shown in fig. 2, the method is applied to a server, and the method may include:
step 201, at least one mapping image used for mapping the source video is obtained.
Wherein each map image is used to map to a source image frame of the source video.
Step 202, generating a map video comprising a plurality of map image frames based on at least one map image and video parameters of the source video.
Wherein each of at least one of the plurality of map image frames carries information of one or more map images.
And 203, sending the mapping video to the terminal so that the terminal can map the source video based on the information of at least one mapping image carried by the mapping video.
To sum up, in the video mapping method provided by the embodiment of the present disclosure, a map video comprising a plurality of map image frames is generated based on at least one map image used for mapping the source video and the video parameters of the source video, and the map video is sent to the terminal, so that the terminal can map the source video based on the information of the map images in the map video. When the terminal maps the source video, only the source video and the map video need to be loaded into the terminal, which reduces the loading load of the terminal and reduces the influence of the loading process on the terminal performance.
The embodiment of the disclosure provides a video mapping method, which can be applied to a terminal. Fig. 3 is a flowchart of a video mapping method provided in an embodiment of the present disclosure, and as shown in fig. 3, the method is applied to a terminal, and the method may include:
step 301, obtaining a mapping video including a plurality of mapping image frames, where in at least one of the plurality of mapping image frames, each mapping image frame carries information of one or more mapping images.
Step 302, a source video comprising a plurality of source image frames is acquired.
And step 303, performing mapping processing on a source image frame in the source video based on information of at least one mapping image carried by the mapping video.
To sum up, according to the video mapping method provided by the embodiment of the present disclosure, the source video and the map video are obtained, and mapping processing is performed on the source video based on the information of the map images in the map video. When the terminal maps the source video, only the source video and the map video need to be loaded into the terminal, which reduces the loading load of the terminal and reduces the influence of the loading process on the terminal performance.
The following describes the video mapping method provided by the embodiment of the present disclosure by taking an application scenario involving a server and a terminal as an example. As shown in fig. 4, the method may include the following steps:
step 401, the server obtains at least one mapping image used for mapping the source video.
Each map image is used for mapping to one source image frame of the source video, and at least one map image can be mapped in each source image frame.
The source video refers to a video whose video parameters meet certain conditions. When the application service provider provides the mapping function, it can specify that the mapping function only processes videos whose video parameters meet specified conditions. That is, if the mapping function is used to map a video, the video parameters of that video need to satisfy the specified conditions. Here, the source video generally refers to any video whose video parameters satisfy the specified conditions, that is, the source video indicates the specified conditions that a video using the mapping function needs to satisfy. The video parameters may include one or more of: the duration of the video, the frame rate, the resolution of each video image frame in the video, and the like. For example, the application service provider may specify that the conditions to be satisfied by a video that can use the mapping function are: the frame rate is 60 frames per second and the resolution of each image frame is 1280 × 960; the source video then broadly refers to all videos satisfying these conditions.
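For illustration only, a minimal check of such specified conditions might look as follows, using the 60 frames per second and 1280 × 960 example above; the parameter names and structure are assumptions made for the sketch, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class VideoParams:
    frame_rate: float  # frames per second
    width: int         # resolution of each video image frame
    height: int
    duration: float    # total duration in seconds

def meets_specified_conditions(p: VideoParams) -> bool:
    # Example conditions set by the application service provider:
    # 60 fps and 1280 x 960 image frames.
    return p.frame_rate == 60 and (p.width, p.height) == (1280, 960)
```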
In addition, the application service provider may also specify parameters such as the position of the map image in the source image frame to be mapped.
For each map image obtained in step 401, the image frame in the source video to which the map image needs to be mapped has already been determined. Which map image is mapped to which image frame may be determined by the application service provider according to application requirements, which is not specifically limited by the embodiment of the present disclosure. In one implementation, the application service provider can determine the map image according to marketing strategies. For example, during Mother's Day, the map image may be an image of a carnation. During Valentine's Day, the map image may be an image of a rose. During the hot broadcast of a certain drama, the map image may be an image reflecting elements of that drama.
Step 402, the server generates a map video comprising a plurality of map image frames based on at least one map image and video parameters of a source video.
In the map video, each of at least one of the plurality of map image frames carries information of one or more map images.
In order to ensure the user experience of the mapping function, it is necessary to ensure that the only content of the source video changed by the mapping processing is the map images added to the source image frames. Therefore, to ensure the processing effect of mapping the source video according to the map video, in the embodiment of the present disclosure, the map video may be generated with the video parameters of the source video as the reference standard. The video parameters of the source video mainly comprise the parameters used to determine whether the source video meets the specified conditions, for example, the frame rate, the resolution, and the frame order of the source image frames among the plurality of source image frames comprised by the source video. In one implementation, the frame order of each target source image frame among the plurality of source image frames comprised by the source video may be determined, and the map video may be generated based on the at least one map image and the frame order of each target source image frame. The target source image frames are the image frames to which map images are to be mapped. Optionally, there are at least two implementations of generating the map video according to the frame order of the target source image frames:
in a first implementation manner, as shown in fig. 5, the implementation process may include:
step 402a1, the server determines the frame order in the map video of the map image frame carrying the information of the map image mapped to any target source image frame based on the frame order of any target source image frame.
The application service provider may specify the frame order of the source image frames in the source video that can be mapped with map images; accordingly, the server may retrieve the frame order of the target source image frames. For example, the application service provider may specify that an image of a rose is mapped to the first source image frame in the source video and an image of a carnation is mapped to the third source image frame in the source video. Thus, it can be determined that the frame order of the target source image frame to be mapped with the rose image is 1 and the frame order of the target source image frame to be mapped with the carnation image is 3.
Optionally, the (i + m + n×T)th source image frame in the source video may be mapped according to the image information carried by the ith map image frame in the map video; accordingly, the frame order of the map image frames and the frame order of the source image frames may be determined according to this relationship. Here, i and T are positive integers less than or equal to the total number of map image frames in the map video, m and n are integers greater than or equal to 0, and m is less than or equal to the total number of source image frames in the source video. When the values of i, m, n, and T are different, the implementation manners differ; the different situations are described below:
in the first case, where m is 0 and T is 0, the ith source image frame may be subjected to mapping processing according to image information carried by the ith map image frame, and thus, the frame order of the map image frames carrying information of any map image may be equal to the frame order of the source image frames mapped by the any map image.
TABLE 1
Frame order of map image frames     1   2   3   4   5   6   ……
Frame order of source image frames  1   2   3   4   5   6   ……
For example, as shown in Table 1, each column in Table 1 shows the correspondence between the frame order of a map image frame and the frame order of the source image frame mapped by it: the 1st source image frame may be mapped using the image information carried by the 1st map image frame, and the 2nd source image frame may be mapped using the image information carried by the 2nd map image frame.
In the second case, m is 0 and T is not 0. In this case, the source image frames may be mapped by cyclically using the map images carried by the map video: the (i + n×T)th source image frames, for different values of n, may be mapped based on the image information carried by the ith map image frame, that is, a plurality of source image frames may be mapped using the map image carried by the same map image frame. The frame order of the map image frames may therefore be determined according to this rule and the frame order of the source image frames mapped by each map image frame.
For example, assuming that m is 0 and T is 5, as shown in Table 2, each column in Table 2 shows the correspondence between the frame order of a map image frame and the frame order of the source image frames mapped by it. For example, the (1 + 5n)th source image frames may be mapped using the image information carried by the 1st map image frame, and the (2 + 5n)th source image frames may be mapped using the image information carried by the 2nd map image frame.
TABLE 2
Frame order of map image frames     1   2   3   4   5   1   2   3   4   5   ……
Frame order of source image frames  1   2   3   4   5   6   7   8   9   10  ……
In the third case, m ≠ 0 and T ≠ 0. In this case, the source image frames may likewise be mapped by cyclically using the map images carried by the map video: the (i + m + n×T)th source image frames, for different values of n, may be mapped according to the image information carried by the ith map image frame, that is, a plurality of source image frames may be mapped using the map image carried by the same map image frame. The frame order of the map image frames may therefore be determined according to this rule and the frame order of the source image frames mapped by each map image frame.
For example, assuming that m is 2 and T is 5, as shown in Table 3, each column in Table 3 shows the correspondence between the frame order of a map image frame and the frame order of the source image frames mapped by it. The (3 + 5n)th source image frames may be mapped using the image information carried by the 1st map image frame, and the (4 + 5n)th source image frames may be mapped using the image information carried by the 2nd map image frame.
TABLE 3
Frame order of map image frames     -   -   1   2   3   4   5   1   2   3   4   ……
Frame order of source image frames  1   2   3   4   5   6   7   8   9   10  11  ……
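A small sketch covering the three cases above: given the frame order s of a source image frame (1-based) and the parameters m and T, it returns the frame order of the map image frame whose information is used, or None when no map image frame applies. The function is an illustration of the (i + m + n×T) correspondence rule, not the patent's implementation.

```python
def map_frame_order(s: int, m: int, T: int, total_map_frames: int):
    """Frame order of the map image frame used for the s-th source image
    frame, following the (i + m + n*T) correspondence rule."""
    if s <= m:
        return None                       # no map image frame applies yet
    if T == 0:
        i = s - m                         # no cycling through the map video
        return i if i <= total_map_frames else None
    return (s - m - 1) % T + 1            # cycle through T map image frames

# Reproduces Table 3 (m = 2, T = 5): source frames 3 and 8 both use the
# information carried by the 1st map image frame.
assert map_frame_order(3, m=2, T=5, total_map_frames=5) == 1
assert map_frame_order(8, m=2, T=5, total_map_frames=5) == 1
```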
Step 402a2, the server generates a map video based on the at least one map image and the frame order of each map image frame.
After the frame order, in the map video, of the map image frames carrying the information of the map images is determined, the map video may be generated according to the frame order of each map image frame and each map image. For example, a blank video may be generated in advance, whose video parameters are all default parameters and in which the pixel values of all image pixel points in each image frame are default values. Then, for each map image frame, according to its frame order and the information of the map image it is to carry, the pixel values of the image pixel points of the image frame at that frame order in the blank video are updated; the positions of the pixels whose values are updated may be determined according to the positions to be mapped in the source video. After the pixels of the image frames in the blank video have been updated according to each map image frame, the video parameters of the blank video may also be modified according to the specified conditions that the source video needs to meet, for example, the frame rate, thereby obtaining the map video.
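The blank-video procedure just described might be sketched as follows, assuming each map image arrives with the frame order it belongs to, the pixel positions it should occupy, and its pixel values; all names and array layouts are assumptions made for the example.

```python
import numpy as np

def build_map_frames(map_images, frame_count: int, height: int, width: int):
    """map_images: iterable of (frame_order, positions, pixels), where
    frame_order is 1-based, positions is an (N, 2) array of (y, x)
    coordinates taken from the source video's map positions, and pixels
    is an (N, 3) array of RGB values of the map image."""
    # Pre-generated blank video: every pixel of every frame holds a
    # default value.
    frames = np.zeros((frame_count, height, width, 3), dtype=np.uint8)
    for frame_order, positions, pixels in map_images:
        ys, xs = positions[:, 0], positions[:, 1]
        # Update the pixel values of the image frame at this frame order
        # according to the information of the map image it carries.
        frames[frame_order - 1, ys, xs] = pixels
    return frames
```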
In a second implementation manner, as shown in fig. 6, the implementation process may include:
step 402b1, the server determines the relative playing time stamp of any target source image frame relative to the starting playing time of the source video based on the frame order of each target source image frame and the frame rate of the source video.
Wherein, the relative playing time stamp of the source image frame is the time difference of the playing time of the source image frame relative to the starting playing time of the source video.
The application service provider can specify the frame order of the source image frames in the source video that can be mapped with map images; accordingly, the server can obtain the frame order of the target source image frames and determine the relative playing time stamp of any target source image frame according to its frame order and the frame rate of the source video. In one implementation, assuming that the starting playing time of the source video is 0, the relative playing time stamp t of any target source image frame may be equal to the product of the frame order k of that target source image frame and the inverse of the frame rate f of the source video, i.e., t = k/f. For example, with a frame rate of 60 frames per second, the 30th target source image frame has a relative playing time stamp of 30/60 = 0.5 seconds.
Step 402b2, the server determines, based on the relative playing time stamp of any target source image frame, the relative playing time stamp, relative to the starting playing time of the map video, of the map image frame carrying the information of the map image mapped to that target source image frame.
Optionally, the source image frames in the source video whose relative playing time stamps are T1 + T2 + n×T may be mapped according to the image information carried by the map image frame whose relative playing time stamp is T1 in the map video; accordingly, the relative playing time stamps of the map image frames and of the source image frames may be determined according to this relationship. Here, T1 and T are both positive numbers less than or equal to the total duration of the map video, T2 is greater than or equal to 0, and T2 is less than or equal to the total playing duration of the source video. When T1, T2, n, and T have different values, the implementation manners differ; the different cases are described below:
in the first case, T2 is 0 and T is 0, at this time, the source image frame with the relative playing timestamp T1 in the source video may be subjected to mapping processing according to the image information carried by the map image frame with the relative playing timestamp T1 in the map video, and therefore, the relative playing timestamp of the map image frame carrying the information of any map image may be equal to the relative playing timestamp of the source image frame mapped by any map image.
In the second case, T2 is 0 and T ≠ 0. In this case, the source image frames may be mapped by cyclically using the map images carried by the map video: the source image frames whose relative playing time stamps are T1 + n×T in the source video may be mapped according to the image information carried by the map image frame whose relative playing time stamp is T1 in the map video, that is, a plurality of source image frames may be mapped using the map image carried by the same map image frame. The relative playing time stamps of the map image frames may therefore be determined according to this rule and the relative playing time stamps of the source image frames mapped by each map image.
In the third case, T2 ≠ 0 and T ≠ 0. In this case, the source image frames may likewise be mapped by cyclically using the map images carried by the map video: the source image frames whose relative playing time stamps are T1 + T2 + n×T in the source video may be mapped according to the image information carried by the map image frame whose relative playing time stamp is T1 in the map video, that is, those source image frames may be mapped using the map image carried by the same map image frame. The relative playing time stamps of the map image frames may therefore be determined according to this rule and the relative playing time stamps of the source image frames mapped by each map image.
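Analogously to the frame-order sketch earlier, the timestamp-based correspondence in the three cases above can be illustrated as follows, where t is the relative play timestamp of a source image frame (t = k/f) and the returned value is the relative play timestamp T1 of the map image frame to use; this is an illustrative sketch, not the patent's implementation.

```python
def map_timestamp(t: float, T2: float, T: float):
    """Relative play timestamp T1 of the map image frame used for a
    source image frame with relative play timestamp t, following the
    T1 + T2 + n*T correspondence rule."""
    if t < T2:
        return None          # plays before the first mapped source frame
    if T == 0:
        return t - T2        # no cycling through the map video
    return (t - T2) % T      # cycle the map video every T seconds
```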
Step 402b3, the server generates a map video based on the at least one map image and the relative play time stamp of each map image frame.
After the relative playing time stamps of the map image frames carrying the information of the map images are determined, the map video may be generated according to the relative playing time stamp of each map image frame and each map image. The implementation process is similar to that of generating the map video according to the frame order of the map image frames in step 402a2, and is not described herein again.
And step 403, the server sends the map video to the terminal.
After generating the map video, the server can send it to the terminal, so that the terminal can map the source video based on the information of the map images in the map video. The server may send the map video to the terminal immediately after generating it, may send it when the terminal requests an update of the map video resources, or may send it after receiving a request from the terminal to use the map video.
Step 404, the terminal obtains a source video comprising a plurality of source image frames.
The source image frames in the source video and the corresponding map image frames comprise the same number of image pixel points.
When a user needs to map a certain video, the user can perform a specified operation in the terminal to trigger an operation instructing the terminal to map the specified video. When the video parameters of the video specified by the user meet the specified conditions required for using the mapping function, the video specified by the user is the source video. When they do not, the video specified by the user may first be preprocessed so that the preprocessed video meets the specified conditions; in this case, the preprocessed video is the source video.
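As an illustration of such preprocessing, the following sketch uses OpenCV to re-encode a user-specified video so that each frame has the specified 1280 × 960 resolution from the earlier example; accurate frame-rate resampling is omitted for brevity, and the paths and parameters are assumptions.

```python
import cv2

def preprocess(in_path: str, out_path: str, size=(1280, 960), fps=60.0):
    cap = cv2.VideoCapture(in_path)
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize each source image frame to the specified resolution.
        writer.write(cv2.resize(frame, size))
    cap.release()
    writer.release()
```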
Step 405, the terminal carries out mapping processing on a source image frame in the source video based on the information of at least one mapping image carried by the mapping video.
Corresponding to two realizations of step 402, there can also be the following two realizations of step 405:
corresponding to the first implementation manner of step 402, mapping processing may be performed on the (i + m + nxt) th source image frame in the plurality of source image frames based on information of the map image carried by the ith map image frame in the plurality of map image frames.
Corresponding to the second implementation manner of step 402, the source image frame whose relative playing time stamp is T1 + T2 + n×T among the plurality of source image frames may be mapped based on the information of the map image carried by the map image frame whose relative playing time stamp is T1 among the plurality of map image frames.
Mapping a video with a map image means synthesizing the map image with an image frame of the video, so that the synthesized image frame displays the map image at the map position and displays the original content of the image frame at non-map positions. Therefore, in both realizable manners of this step 405, mapping a source image frame according to the information of the map image in a map image frame may include: acquiring the reference position of each image pixel point used for representing the map image in any map image frame, and updating the pixel values of the image pixel points at each reference position in the source image frame corresponding to that map image frame based on the pixel values of the image pixel points at that reference position in the map image frame.
When the pixel value of an image pixel point in the source image frame is updated based on the pixel value of the image pixel point at any reference position in the map image frame, there are at least the following situations:
in the first case, when there are image pixels corresponding to multiple image pixels in a map image frame, the pixel value of the image pixel at any reference position in the source image frame corresponding to any map image frame may be updated to the pixel value of the image pixel at any reference position in any map image frame.
For example, when the resolution of the map image frame is the same as that of the source image frame and the size of the map image frame is not larger than that of the source image frame, the source image frame may contain pixels corresponding one to one with the image pixel points in the map image frame; the image pixel point at any reference position in the source image frame corresponding to any map image frame may then be updated to the pixel value of the image pixel point at that reference position in the map image frame.
It should be noted that when the source image frame contains image pixel points corresponding one to one with the image pixel points in the map image frame, no additional processing of the map image frame and the source image frame is needed during mapping, because the corresponding image pixel points do not need to be searched for in the source image frame; the influence on the performance of the terminal can therefore be further reduced.
In the second case, when the plurality of image pixel points in the source image frame do not correspond one to one with the image pixel points in the map image frame, there may be no image pixel point at some reference positions in the source image frame corresponding to a map image frame; in this case, the pixel values of the image pixel points near that reference position in the source image frame may be updated according to the pixel value of the image pixel point at that reference position in the map image frame.
In addition, when performing mapping processing on the source image frame, parameters such as the position or size of the mapping image in the source image frame may also be adjusted according to application requirements, which is not specifically limited in the embodiment of the present disclosure.
In addition, updating the pixel values of the image pixel points near any reference position in the source image frame may be implemented in various manners. For example, the pixel values of a circle of image pixel points around that reference position in the source image frame may all be updated to the pixel value of the image pixel point at that reference position in the map image frame, or the pixel values of the image pixel points near that reference position may be updated according to a specified weight and the pixel value of the image pixel point at that reference position in the map image frame.
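The pixel-update step can be sketched as follows for the one-to-one case and the weighted variant, assuming the reference positions are supplied as a boolean mask over the map image frame; the mask representation and the weight parameter are assumptions made for the example.

```python
import numpy as np

def apply_map_frame(source_frame: np.ndarray, map_frame: np.ndarray,
                    mask: np.ndarray, weight: float = 1.0) -> np.ndarray:
    """source_frame, map_frame: H x W x 3 arrays of the same resolution.
    mask: H x W boolean array marking the reference positions of the
    image pixel points that represent the map image."""
    out = source_frame.astype(np.float32)
    # Update the pixel value at each reference position in the source
    # image frame from the map frame's pixel value at the same position;
    # weight = 1.0 reproduces the plain overwrite of the first case.
    out[mask] = weight * map_frame[mask].astype(np.float32) \
                + (1.0 - weight) * out[mask]
    return out.astype(source_frame.dtype)
```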
To sum up, in the video mapping method provided by the embodiment of the present disclosure, a map video comprising a plurality of map image frames is generated from at least one map image used for mapping processing and the video parameters of the source video, and the map video is sent to the terminal, so that the terminal can map the source video based on the information of the map images in the map video. When the terminal maps the source video, only the source video and the map video need to be loaded into the terminal, which reduces the loading load of the terminal and reduces the influence of the loading process on the performance of the terminal.
It should be noted that the order of the steps of the video mapping method provided by the embodiment of the present disclosure may be appropriately adjusted, and the steps may also be correspondingly increased or decreased according to the situation. Any method that can be easily conceived by those skilled in the art within the technical scope of the present disclosure is covered by the protection scope of the present disclosure, and thus, the detailed description thereof is omitted.
The embodiment of the present disclosure provides a video mapping apparatus, as shown in fig. 7, where the video mapping apparatus is applied to a server, and the video mapping apparatus 70 may include:
an obtaining module 701, configured to obtain at least one map image used for mapping a source video, where each map image is used for mapping to a source image frame of the source video.
A generating module 702, configured to generate a map video comprising a plurality of map image frames based on the at least one map image and the video parameters of the source video, wherein each of at least one of the plurality of map image frames carries information of one or more map images.
The sending module 703 is configured to send the map video to the terminal, so that the terminal performs map processing on the source video based on information of at least one map image carried in the map video.
To sum up, in the video mapping device provided by the embodiment of the present disclosure, the generating module generates a map video comprising a plurality of map image frames based on at least one map image used for mapping the source video and the video parameters of the source video, and the sending module sends the map video to the terminal, so that the terminal can map the source video based on the information of the map images in the map video. When the terminal maps the source video, only the source video and the map video need to be loaded into the terminal, which reduces the loading load of the terminal and reduces the influence of the loading process on the terminal performance.
Optionally, as shown in fig. 8, the generating module 702 includes:
a determining sub-module 7021 for determining the frame order of each target source image frame for the mapped image map among a plurality of source image frames comprised by the source video.
A generating sub-module 7022 is used for generating a map video based on the at least one map image and the frame order of each target source image frame.
Optionally, the generating sub-module 7022 is specifically configured to:
and determining the frame sequence of the map image frame carrying the information of the map image mapped to any target source image frame in the map video based on the frame sequence of any target source image frame.
A map video is generated based on the at least one map image and the frame order of each map image frame.
Optionally, the generating sub-module 7022 is specifically configured to:
based on the frame order of each target source image frame and the frame rate of the source video, a relative play time stamp of any target source image frame relative to the start play time of the source video is determined.
And determining, based on the relative playing time stamp of any target source image frame, the relative playing time stamp, relative to the starting playing time of the map video, of the map image frame carrying the information of the map image mapped to that target source image frame.
Generating a map video based on the at least one map image and the relative play timestamps of each map image frame.
To sum up, in the video mapping device provided by the embodiment of the present disclosure, the generating module generates a map video comprising a plurality of map image frames based on at least one map image used for mapping the source video and the video parameters of the source video, and the sending module sends the map video to the terminal, so that the terminal can map the source video based on the information of the map images in the map video. When the terminal maps the source video, only the source video and the map video need to be loaded into the terminal, which reduces the loading load of the terminal and reduces the influence of the loading process on the terminal performance.
In addition, after the mapping video is generated in the server, the server provides the mapping video to the terminal, so that the management of the mapping video is facilitated, and the development process of the video mapping method is simplified.
An embodiment of the present disclosure provides a video mapping apparatus, as shown in fig. 9, where the video mapping apparatus is applied to a terminal, and the video mapping apparatus 90 may include:
the obtaining module 901 is configured to obtain a map video including multiple map image frames, where in at least one of the multiple map image frames, each map image frame carries information of one or more map images.
The obtaining module 901 is further configured to obtain a source video including a plurality of source image frames.
A processing module 902, configured to perform mapping processing on a source image frame in a source video based on information of at least one mapping image carried in the mapping video.
To sum up, in the video mapping device provided by the embodiment of the present disclosure, the obtaining module obtains the source video and the map video, and the processing module maps the source video based on the information of the map images in the map video. When the terminal maps the source video, only the source video and the map video need to be loaded into the terminal, which reduces the loading load of the terminal and reduces the influence of the loading process on the terminal performance.
Optionally, the processing module 902 is specifically configured to:
mapping processing is carried out on the (i + m + n×T)th source image frame among the plurality of source image frames based on the map image information carried by the ith map image frame among the plurality of map image frames, wherein i and T are positive integers less than or equal to the total number of map image frames in the map video, m and n are integers greater than or equal to 0, and m is less than or equal to the total number of source image frames in the source video.
Or, mapping processing is carried out on the source image frames whose relative playing time stamps are T1 + T2 + n×T among the plurality of source image frames based on the information of the map image carried by the map image frame whose relative playing time stamp is T1 among the plurality of map image frames, wherein the relative playing time stamp of any image frame is the time difference between the playing time of that image frame and the starting playing time of the video where it is located, T1 and T are both positive numbers less than or equal to the total duration of the map video, T2 is greater than or equal to 0, and T2 is less than or equal to the total playing duration of the source video.
Optionally, the processing module 902 is specifically configured to:
and acquiring a reference position of each image pixel point used for representing the map image in any map image frame in the map video.
And updating the pixel value of the image pixel point at any reference position in the source image frame corresponding to any map image frame based on the pixel value of the image pixel point at any reference position in any map image frame.
Optionally, any source image frame is at the same resolution as the corresponding map image frame.
To sum up, in the video mapping device provided by the embodiment of the present disclosure, the obtaining module obtains the source video and the map video, and the processing module maps the source video based on the information of the map images in the map video. When the terminal maps the source video, only the source video and the map video need to be loaded into the terminal, which reduces the loading load of the terminal and reduces the influence of the loading process on the terminal performance.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, modules and sub-modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Embodiments of the present disclosure also provide a computer-readable storage medium, which may be a non-volatile computer-readable storage medium. The computer readable storage medium has stored therein instructions that, when executed on a server, cause the server to perform the video mapping method provided by the above-described method embodiments.
Embodiments of the present disclosure also provide a computer-readable storage medium, which may be a non-volatile computer-readable storage medium. The computer readable storage medium has stored therein instructions that, when run on a terminal, cause the terminal to perform the video mapping method provided by the above-described method embodiment.
The embodiment of the present disclosure further provides a computer device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and when the processor executes the computer program, the video mapping method provided by the foregoing method embodiment is implemented.
Alternatively, the computer device may be a terminal. Fig. 10 shows a block diagram of a terminal 1000 according to an exemplary embodiment of the present disclosure. The terminal 1000 can be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1000 can also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1000 can include: a processor 1001 and a memory 1002.
The processor 1001 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 1001 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1002 is used to store at least one instruction, which is executed by the processor 1001 to implement the video mapping method provided by the method embodiments of the present disclosure.
In some embodiments, terminal 1000 can also optionally include: a peripheral interface 1003 and at least one peripheral. The processor 1001, memory 1002 and peripheral interface 1003 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1003 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, display screen 1005, camera assembly 1006, audio circuitry 1007, positioning assembly 1008, and power supply 1009.
Peripheral interface 1003 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1001 and the memory 1002. In some embodiments, the processor 1001, the memory 1002, and the peripheral interface 1003 are integrated on the same chip or circuit board. In some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1004 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1004 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1004 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1004 may also include NFC (Near Field Communication) related circuits, which is not limited in this disclosure.
The display screen 1005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1005 is a touch display screen, the display screen 1005 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 1001 as a control signal for processing. In this case, the display screen 1005 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1005, provided on the front panel of terminal 1000. In other embodiments, there may be at least two display screens 1005, respectively disposed on different surfaces of terminal 1000 or in a folded design. In still other embodiments, the display screen 1005 may be a flexible display screen disposed on a curved surface or a folded surface of terminal 1000. The display screen 1005 may even be arranged in a non-rectangular irregular figure, that is, a shaped screen. The display screen 1005 may be an LCD (Liquid Crystal Display) screen or an OLED (Organic Light-Emitting Diode) screen.
The camera assembly 1006 is used to capture images or video. Optionally, the camera assembly 1006 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each of which is any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize the background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1006 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1001 for processing, or input them to the radio frequency circuit 1004 to implement voice communication. For stereo capture or noise reduction purposes, there may be multiple microphones, disposed at different locations of terminal 1000. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal not only into a sound wave audible to humans but also into a sound wave inaudible to humans, for example for distance measurement. In some embodiments, the audio circuit 1007 may also include a headphone jack.
The positioning component 1008 is used to locate the current geographic location of terminal 1000 to implement navigation or LBS (Location Based Service). The positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1009 is used to supply power to the various components in terminal 1000. The power supply 1009 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, terminal 1000 can also include one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: acceleration sensor 1011, gyro sensor 1012, pressure sensor 1013, fingerprint sensor 1014, optical sensor 1015, and proximity sensor 1016.
The acceleration sensor 1011 can detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with terminal 1000. For example, the acceleration sensor 1011 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1001 may control the touch display screen 1005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011. The acceleration sensor 1011 may also be used to collect motion data of a game or a user.
The gyro sensor 1012 may detect a body direction and a rotation angle of the terminal 1000, and the gyro sensor 1012 and the acceleration sensor 1011 may cooperate to acquire a 3D motion of the user on the terminal 1000. From the data collected by the gyro sensor 1012, the processor 1001 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization while shooting, game control, and inertial navigation.
The pressure sensor 1013 may be disposed on a side frame of terminal 1000 and/or at a lower layer of the touch display screen 1005. When the pressure sensor 1013 is disposed on a side frame of terminal 1000, it can detect the user's grip signal on terminal 1000, and the processor 1001 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1013. When the pressure sensor 1013 is disposed at a lower layer of the touch display screen 1005, the processor 1001 controls the operability controls on the UI according to the user's pressure operation on the touch display screen 1005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1014 is used to collect the user's fingerprint, and the processor 1001 identifies the user according to the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 itself identifies the user according to the collected fingerprint. Upon identifying the user's identity as a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, such as unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 1014 may be disposed on the front, back, or side of terminal 1000. When a physical button or a vendor logo is provided on terminal 1000, the fingerprint sensor 1014 may be integrated with the physical button or the vendor logo.
The optical sensor 1015 is used to collect the ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the touch display screen 1005 according to the ambient light intensity collected by the optical sensor 1015: when the ambient light intensity is high, the display brightness of the touch display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1005 is decreased. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the ambient light intensity collected by the optical sensor 1015.
The proximity sensor 1016, also called a distance sensor, is typically disposed on the front panel of terminal 1000. The proximity sensor 1016 is used to collect the distance between the user and the front of terminal 1000. In one embodiment, when the proximity sensor 1016 detects that the distance between the user and the front of terminal 1000 gradually decreases, the processor 1001 controls the touch display screen 1005 to switch from the screen-on state to the screen-off state; when the proximity sensor 1016 detects that the distance between the user and the front of terminal 1000 gradually increases, the processor 1001 controls the touch display screen 1005 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in FIG. 10 is not intended to be limiting and that terminal 1000 can include more or fewer components than shown, or some components can be combined, or a different arrangement of components can be employed.
The embodiment of the present disclosure further provides a computer device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and when the processor executes the computer program, the video mapping method provided by the foregoing method embodiment is implemented.
Alternatively, the computer device may be a server. Fig. 11 is a schematic structural diagram of a server according to an exemplary embodiment. The server 1100 includes a central processing unit (CPU) 1101, a system memory 1104 including a random access memory (RAM) 1102 and a read-only memory (ROM) 1103, and a system bus 1105 connecting the system memory 1104 and the central processing unit 1101. The server 1100 also includes a basic input/output system (I/O system) 1106 that facilitates the transfer of information between devices within the computer, and a mass storage device 1107 for storing an operating system 1113, application programs 1114, and other program modules 1115.
The basic input/output system 1106 includes a display 1108 for displaying information and an input device 1109, such as a mouse or a keyboard, for the user to input information. The display 1108 and the input device 1109 are both connected to the central processing unit 1101 through an input/output controller 1110 connected to the system bus 1105. The basic input/output system 1106 may also include the input/output controller 1110 for receiving and processing input from a number of other devices, such as a keyboard, a mouse, or an electronic stylus. Similarly, the input/output controller 1110 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 1107 is connected to the central processing unit 1101 through a mass storage controller (not shown) connected to the system bus 1105. The mass storage device 1107 and its associated computer-readable media provide non-volatile storage for the server 1100. That is, the mass storage device 1107 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
Computer readable media may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 1104 and mass storage device 1107 described above may be collectively referred to as memory.
According to various embodiments of the present disclosure, the server 1100 may also operate through a remote computer connected to a network, such as the Internet. That is, the server 1100 may be connected to the network 1112 through the network interface unit 1111 coupled to the system bus 1105, or may be connected to another type of network or a remote computer system (not shown) using the network interface unit 1111.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 1101 implements the video mapping method provided by the above method embodiment by executing the one or more programs.
Embodiments of the present disclosure also provide a computer program product containing instructions, which when run on a server, cause the server to execute the video mapping method provided by the above method embodiments.
The embodiment of the present disclosure further provides a computer program product containing instructions, which when run on a terminal, causes the terminal to execute the video mapping method provided by the above method embodiment.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above descriptions are merely exemplary embodiments of the present disclosure and are not intended to limit the present disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (11)

1. A video mapping method, applied to a server, characterized by comprising the following steps:
obtaining at least one map image for mapping a source video, wherein each map image is used for mapping to a source image frame of the source video;
generating a map video comprising a plurality of map image frames based on the at least one map image and video parameters of the source video, wherein, in at least one of the plurality of map image frames, each map image frame carries information of one or more map images;
sending the map video to a terminal, so that the terminal loads the source video and the map video when performing mapping processing on the source video, and performs mapping processing on the source video based on the information of the at least one map image carried by the map video;
wherein the generating a map video comprising a plurality of map image frames based on the at least one map image and video parameters of the source video comprises:
determining a frame order, in the map video, of the at least one map image frame carrying the information of the map image;
generating a blank video, wherein the video parameters of the blank video and the pixel values of all image pixel points in each image frame of the blank video are default values;
updating, according to the frame order of each map image frame, the pixel values of the image pixel points of the image frame at that frame order in the blank video according to the information of the map image to be carried by the corresponding map image frame, wherein the positions of the pixel points whose pixel values are updated are determined according to the positions at which the map images are to be mapped in the source video; and
after the pixel points of the image frames in the blank video have been updated according to each map image frame, modifying the video parameters of the blank video according to the video parameters of the source video that meet specified conditions, to obtain the map video.
2. The method according to claim 1, wherein the determining a frame order, in the map video, of a map image frame carrying information of a map image comprises:
determining a frame order, among the plurality of source image frames comprised in the source video, of each target source image frame to which a map image is to be mapped; and
determining, based on the frame order of any target source image frame, the frame order, in the map video, of the map image frame carrying the information of the map image mapped to that target source image frame.
3. The method according to claim 1, wherein the determining a frame order, in the map video, of a map image frame carrying information of a map image comprises:
determining a frame order, among the plurality of source image frames comprised in the source video, of each target source image frame to which a map image is to be mapped;
determining, based on the frame order of each target source image frame and the frame rate of the source video, a relative playing time stamp of any target source image frame relative to the initial playing time of the source video; and
determining, based on the relative playing time stamp of any target source image frame, the relative playing time stamp, relative to the initial playing time of the map video, of the map image frame carrying the information of the map image mapped to that target source image frame.
4. A video mapping method, applied to a terminal, characterized by comprising the following steps:
obtaining a map video comprising a plurality of map image frames, wherein, in at least one of the plurality of map image frames, each map image frame carries information of one or more map images;
obtaining a source video comprising a plurality of source image frames;
when performing mapping processing on the source video, loading the source video and the map video, and performing mapping processing on a source image frame in the source video based on the information of the at least one map image carried by the map video;
wherein the map video comprising the plurality of map image frames is generated by:
determining a frame order, in the map video, of the at least one map image frame carrying the information of the map image;
generating a blank video, wherein the video parameters of the blank video and the pixel values of all image pixel points in each image frame of the blank video are default values;
updating, according to the frame order of each map image frame, the pixel values of the image pixel points of the image frame at that frame order in the blank video according to the information of the map image to be carried by the corresponding map image frame, wherein the positions of the pixel points whose pixel values are updated are determined according to the positions at which the map images are to be mapped in the source video; and
after the pixel points of the image frames in the blank video have been updated according to each map image frame, modifying the video parameters of the blank video according to the video parameters of the source video that meet specified conditions, to obtain the map video.
5. The method according to claim 4, wherein the performing mapping processing on a source image frame in the source video based on the information of the at least one map image carried by the map video comprises:
performing mapping processing on the (i + m + n×T)-th source image frame among the plurality of source image frames based on the information of the map image carried by the i-th map image frame among the plurality of map image frames, wherein i and T are positive integers less than or equal to the total number of map image frames in the map video, m and n are integers greater than or equal to 0, and m is less than or equal to the total number of source image frames in the source video;
or, performing mapping processing on the source image frames whose relative playing time stamps are T1 + T2 + n×T among the plurality of source image frames based on the information of the map image carried by the map image frame whose relative playing time stamp is T1 among the plurality of map image frames, wherein the relative playing time stamp of any image frame is the time difference between the playing time of that image frame and the initial playing time of the video in which it is located, both T1 and T are positive numbers less than or equal to the total duration of the map video, T2 is greater than or equal to 0, and T2 is less than or equal to the total playing duration of the source video.
6. The method according to claim 4, wherein the performing mapping processing on a source image frame in the source video based on the information of the at least one map image carried by the map video comprises:
acquiring, for any map image frame in the map video, the reference positions of the image pixel points that represent the map image in that map image frame; and
updating, based on the pixel value of the image pixel point at any reference position in that map image frame, the pixel value of the image pixel point at the same reference position in the source image frame corresponding to that map image frame.
7. The method according to any one of claims 4 to 6, wherein each source image frame has the same resolution as its corresponding map image frame.
8. A video mapping apparatus, applied to a server, characterized by comprising:
an obtaining module, configured to obtain at least one map image for mapping a source video, wherein each map image is used for mapping to a source image frame of the source video;
a generating module, configured to generate a map video comprising a plurality of map image frames based on the at least one map image and video parameters of the source video, wherein, in at least one of the plurality of map image frames, each map image frame carries information of one or more map images; and
a sending module, configured to send the map video to a terminal, so that the terminal loads the source video and the map video when performing mapping processing on the source video, and performs mapping processing on the source video based on the information of the at least one map image carried by the map video;
wherein the generating module is configured to:
determine a frame order, in the map video, of the at least one map image frame carrying the information of the map image;
generate a blank video, wherein the video parameters of the blank video and the pixel values of all image pixel points in each image frame of the blank video are default values;
update, according to the frame order of each map image frame, the pixel values of the image pixel points of the image frame at that frame order in the blank video according to the information of the map image to be carried by the corresponding map image frame, wherein the positions of the pixel points whose pixel values are updated are determined according to the positions at which the map images are to be mapped in the source video; and
after the pixel points of the image frames in the blank video have been updated according to each map image frame, modify the video parameters of the blank video according to the video parameters of the source video that meet specified conditions, to obtain the map video.
9. A video mapping apparatus, applied to a terminal, characterized by comprising:
an obtaining module, configured to obtain a map video comprising a plurality of map image frames, wherein, in at least one of the plurality of map image frames, each map image frame carries information of one or more map images;
the obtaining module being further configured to obtain a source video comprising a plurality of source image frames; and
a processing module, configured to load the source video and the map video when performing mapping processing on the source video, and to perform mapping processing on a source image frame in the source video based on the information of the at least one map image carried by the map video;
wherein the map video comprising the plurality of map image frames is generated by:
determining a frame order, in the map video, of the at least one map image frame carrying the information of the map image;
generating a blank video, wherein the video parameters of the blank video and the pixel values of all image pixel points in each image frame of the blank video are default values;
updating, according to the frame order of each map image frame, the pixel values of the image pixel points of the image frame at that frame order in the blank video according to the information of the map image to be carried by the corresponding map image frame, wherein the positions of the pixel points whose pixel values are updated are determined according to the positions at which the map images are to be mapped in the source video; and
after the pixel points of the image frames in the blank video have been updated according to each map image frame, modifying the video parameters of the blank video according to the video parameters of the source video that meet specified conditions, to obtain the map video.
10. A computer-readable storage medium having instructions stored therein, wherein the instructions, when run on a computer, cause the computer to perform the video mapping method according to any one of claims 1 to 3 or the video mapping method according to any one of claims 4 to 7.
11. A computer device comprising a memory and a processor, the memory storing a computer program runnable on the processor, wherein the processor, when executing the computer program, implements the video mapping method according to any one of claims 1 to 3 or the video mapping method according to any one of claims 4 to 7.
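For illustration, the map-video generation steps recited in claims 1, 4, 8, and 9 (determining frame orders, generating a blank video whose pixels hold default values, and writing each map image into the image frame at its frame order and map position) can be sketched as follows. This is a non-authoritative model under stated assumptions: a video is represented as a list of numpy frames, zero is used as the default pixel value, and the final container-level adjustment of video parameters is elided.

import numpy as np

def generate_map_video(map_images, frame_orders, num_frames, height, width):
    """Build a map video as a list of frames.

    map_images:   list of (image, (row, col)) pairs, each map image plus the
                  top-left position at which it is to be mapped in the source video.
    frame_orders: for each map image, the frame order of the map image frame
                  that is to carry its information.
    """
    # Steps 1-2: a blank video whose pixel values are all the default value (0).
    frames = [np.zeros((height, width, 3), dtype=np.uint8) for _ in range(num_frames)]
    # Step 3: update the pixel values of the image frame at each frame order
    # according to the map image it is to carry, at the map position.
    for (image, (r, c)), order in zip(map_images, frame_orders):
        h, w = image.shape[:2]
        frames[order][r:r + h, c:c + w] = image
    # Step 4 (elided here): modify the blank video's parameters (frame rate,
    # duration, and so on) according to the source video's parameters to
    # obtain the map video.
    return frames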
CN201911239406.4A 2019-12-06 2019-12-06 Video mapping method and device, computer equipment and storage medium Active CN110971840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911239406.4A CN110971840B (en) 2019-12-06 2019-12-06 Video mapping method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110971840A CN110971840A (en) 2020-04-07
CN110971840B true CN110971840B (en) 2022-07-26

Family

ID=70033221

Country Status (1)

Country Link
CN (1) CN110971840B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112822544B (en) * 2020-12-31 2023-10-20 广州酷狗计算机科技有限公司 Video material file generation method, video synthesis method, device and medium
CN112929683A (en) * 2021-01-21 2021-06-08 广州虎牙科技有限公司 Video processing method and device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006049480A1 (en) * 2004-11-03 2006-05-11 Rimantas Pleikys Method and system of data storage
CN101035279A (en) * 2007-05-08 2007-09-12 孟智平 Method for using the information set in the video resource
JP2008306520A (en) * 2007-06-08 2008-12-18 Softbank Bb Corp Information distribution system, portable telephone terminal, and method of controlling playback of video content in portable telephone terminal
EP2131363A1 (en) * 2008-06-06 2009-12-09 NTT DoCoMo, Inc. Video editing system, video editing server and communication terminal
WO2012087735A1 (en) * 2010-12-22 2012-06-28 Thomson Licensing Method and system for sending video edit information
WO2012119554A1 (en) * 2011-03-10 2012-09-13 中兴通讯股份有限公司 Method and system for implementing multimedia messages based on video server
KR20170002831A (en) * 2015-06-30 2017-01-09 주식회사 벽우 Video editing systems and a driving method using video project templates
CN107888962A (en) * 2016-09-30 2018-04-06 乐趣株式会社 Video editing system and method
KR20180041879A (en) * 2016-10-17 2018-04-25 (주)와토시스 Method for editing and apparatus thereof
CN108521578A (en) * 2018-05-15 2018-09-11 北京奇虎科技有限公司 It can textures region, the method for realizing textures in video in a kind of detection video
CN109474844A (en) * 2017-09-08 2019-03-15 腾讯科技(深圳)有限公司 Video information processing method and device, computer equipment
CN109640146A (en) * 2018-12-28 2019-04-16 鸿视线科技(北京)有限公司 The preview of distributed multi-user audio video synchronization, broadcasting, editing system and method

Also Published As

Publication number Publication date
CN110971840A (en) 2020-04-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant