CN116347155A - Video generation method, device and equipment - Google Patents

Video generation method, device and equipment

Info

Publication number
CN116347155A
Authority
CN
China
Prior art keywords
image
initial image
determining
matching degree
mirror
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111582800.5A
Other languages
Chinese (zh)
Inventor
靳潇杰
许丁
沈晓辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lemon Inc Cayman Island
Original Assignee
Lemon Inc Cayman Island
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lemon Inc Cayman Island filed Critical Lemon Inc Cayman Island
Priority to CN202111582800.5A
Publication of CN116347155A

Classifications

    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD], including:
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/8153 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics, comprising still images, e.g. texture, background image
    • H04N21/816 Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure provides a video generation method, device and equipment. The method comprises the following steps: acquiring a video generation request, wherein the video generation request comprises at least one initial image; acquiring a salient image and a falling image of the initial image; acquiring first image information of the salient image and second image information of the falling image; and determining a target mirror mode of the initial image according to the first image information and the second image information, and generating the video according to the target mirror mode of the initial image. The display effect of the video is thereby improved.

Description

Video generation method, device and equipment
Technical Field
The disclosure relates to the technical field of image processing, and in particular relates to a video generation method, device and equipment.
Background
When a terminal device generates a video from images, the images can be displayed with camera movement (referred to herein as a mirror mode), so that the display effect of the images in the video is improved.
Currently, when a terminal device generates a video in which images are displayed with camera movement, the terminal device processes the images in the video according to a preset mirror mode. For example, if the preset mirror mode is from left to right, every image in the generated video is displayed from left to right. However, multiple images are usually added to a video, and the images can only be displayed according to the fixed mirror mode, so the matching degree between the mirror mode of an image in the video and the image itself is low, and the display effect of the video is consequently poor.
Disclosure of Invention
The disclosure provides a video generation method, device and equipment, which are used for solving the technical problem of poor video display effect in the prior art.
In a first aspect, the present disclosure provides a video generation method, the method comprising:
acquiring a video generation request, wherein the video generation request comprises at least one initial image;
acquiring a salient image and a falling image of the initial image;
acquiring first image information of the salient image and second image information of the falling image;
and determining a target mirror mode of the initial image according to the first image information and the second image information, and generating the video according to the target mirror mode of the initial image.
In a second aspect, the present disclosure provides a video generating apparatus, including a first acquisition module, a second acquisition module, a third acquisition module, and a determination module, where:
the first acquisition module is used for acquiring a video generation request, wherein the video generation request comprises at least one initial image;
the second acquisition module is used for acquiring a significant image and a falling image of the initial image;
the third acquisition module is used for acquiring the first image information of the significant image and the second image information of the falling image;
The determining module is used for determining a target mirror mode of the initial image according to the first image information and the second image information, and generating the video according to the target mirror mode of the initial image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory, causing the at least one processor to perform the video generation method as described above in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the video generation method according to the first aspect and the various possible designs of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the video generation method according to the first aspect and the various possible designs of the first aspect.
The disclosure provides a video generation method, device and equipment, in which a video generation request is acquired, the video generation request comprising at least one initial image; a salient image and a falling image of the initial image are acquired; first image information of the salient image and second image information of the falling image are acquired; a target mirror mode of the initial image is determined according to the first image information and the second image information; and a video is generated according to the target mirror mode of the initial image. In this method, the terminal device can determine the mirror mode of the initial image according to the first image information of the salient image and the second image information of the falling image of the initial image, and because the first image information and the second image information can accurately reflect the characteristics of the initial image, the terminal device can accurately determine the mirror mode of the initial image, thereby improving the display effect of the video.
Drawings
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a video generating method according to an embodiment of the disclosure;
FIG. 3 is a schematic illustration of a salient image provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a boundary distance provided by an embodiment of the present disclosure;
Fig. 5 is a flowchart of another video generating method according to an embodiment of the present disclosure;
fig. 6 is a schematic view of a frame-dropping image expansion provided in an embodiment of the disclosure;
fig. 7 is another expanded view of a falling image provided in an embodiment of the disclosure;
fig. 8 is a process schematic diagram of a video generating method according to an embodiment of the disclosure;
fig. 9 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the related art, when a video containing images is generated, a terminal device can display the images with camera movement, so that the display effect of the images in the video is improved. For example, images in a video may be presented from left to right, from top to bottom, and so on. Currently, when a terminal device generates such a video, the terminal device processes the images in the video according to a preset mirror mode. For example, if the mirror mode preset by the terminal device is from top to bottom, all images in the generated video are displayed from top to bottom. However, multiple images are usually added to a video and the image characteristics of each image differ; when the terminal device generates the video, the images are processed only according to a fixed mirror mode, so the matching degree between the mirror mode corresponding to each image and the image itself is low, and the display effect of the video is poor.
In order to solve the technical problem of poor video display effect in the related art, the embodiment of the disclosure provides a video generation method, which comprises the steps of obtaining a video generation request, wherein the video generation request comprises at least one initial image, obtaining a salient image and a falling image of the initial image, obtaining first image information of the salient image and second image information of the falling image, determining the matching degree between the initial image and a plurality of preset mirror modes according to the first image information and the second image information, determining a target mirror mode in the plurality of preset mirror modes according to the matching degree between the initial image and the plurality of preset mirror modes, and generating a video according to the target mirror mode of the initial image. Therefore, the terminal equipment can determine the mirror mode of the initial image according to the first image information of the salient image and the second image information of the falling image of the initial image, and the first image information and the second image information can accurately reflect the characteristics of the initial image, so that the terminal equipment can accurately determine the mirror mode of the initial image, and further the display effect of the video is improved.
Next, an application scenario of the embodiment of the present disclosure will be described with reference to fig. 1.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present disclosure. Referring to fig. 1, the scenario includes a video generation request and a terminal device, where the video generation request includes an image A. When the terminal device receives the video generation request, the terminal device can determine the salient-image information and the falling-image information of the image A according to the image A in the video generation request, determine that the mirror mode of the image A is from left to right according to the salient-image information and the falling-image information, and generate a target video, where the target video includes the image A and the display mode of the image A is from left to right. Because the salient-image information and the falling-image information can accurately reflect the characteristics of the image A, the terminal device can accurately determine the mirror mode of the image A, thereby improving the display effect of the target video.
The following describes the technical solutions of the present disclosure and how the technical solutions of the present disclosure solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of a video generating method according to an embodiment of the present disclosure. Referring to fig. 2, the method may include:
s201, acquiring a video generation request.
The execution body of the embodiment of the disclosure may be a terminal device, or may be a video generating device disposed in the terminal device, where the video generating device may be implemented by software, or may be implemented by a combination of software and hardware. Optionally, the terminal device is any device having a video processing function and/or a display function. For example, the terminal device may be a mobile phone, a notebook computer, a desktop computer, or the like.
The video generation request includes at least one initial image. Optionally, the initial image may be any type of image. For example, the initial image may be a landscape-type image, a person-type image, a painting-type image, or the like. The resolution of the initial image may be any resolution. Optionally, the video generation request may include one initial image or multiple initial images, and when the video generation request includes multiple initial images, the formats of any two initial images may be the same or different. For example, if the video generation request includes an initial image A and an initial image B, the aspect ratios of the initial image A and the initial image B may both be 4:3, or the aspect ratio of the initial image A may be 4:3 and the aspect ratio of the initial image B may be 16:9.
Alternatively, the terminal device may receive a video generation request input by the user. For example, the user may add a plurality of initial images to the video software and click on a video generation control in a video software page, and the terminal device may receive a video generation request according to an operation of the user. For example, if the user adds an initial image a and an initial image B to the video software and clicks the video generation control, the terminal device may receive a video generation request, where the video generation request includes the initial image a and the initial image B.
S202, acquiring a salient image and a falling image of the initial image.
The salient image of the initial image is used to indicate the image saliency of the initial image, where the image saliency reflects how strongly each region of the initial image attracts the attention of the human eye. For example, if an image includes a swan and lake water, the image of the region containing the swan is the salient image of that image. In the actual application process, an image generally includes main display features and secondary display features (for example, the swan is a main display feature, while the lake water and the sky are secondary display features), and the terminal device needs to acquire the main display features in the image, that is, the salient image corresponding to the image.
Optionally, the terminal device may acquire the salient image of the initial image through a preset algorithm. For example, the terminal device may obtain the salient image corresponding to the initial image through a saliency analysis algorithm based on low-level visual features.
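The description does not name a specific saliency algorithm beyond "a saliency analysis algorithm based on low-level visual features". The following Python sketch is one possible realization only: the use of OpenCV's spectral-residual detector (from opencv-contrib-python), the Otsu binarization step and the helper name salient_rect are assumptions made for illustration.

    # Minimal sketch: extract the salient region and its minimum enveloping
    # rectangle. The spectral-residual detector stands in for the unspecified
    # "saliency analysis algorithm based on low-level visual features".
    import cv2
    import numpy as np

    def salient_rect(initial_image: np.ndarray):
        """Return (x, y, w, h), the minimum enveloping rectangle of the salient region."""
        detector = cv2.saliency.StaticSaliencySpectralResidual_create()
        ok, saliency_map = detector.computeSaliency(initial_image)
        if not ok:
            raise RuntimeError("saliency computation failed")
        mask = (saliency_map * 255).astype(np.uint8)
        # Binarize the saliency map, then take the bounding box of all salient pixels.
        _, mask = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        points = cv2.findNonZero(mask)
        if points is None:
            raise RuntimeError("no salient pixels found")
        return cv2.boundingRect(points)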
Next, a salient image corresponding to the initial image will be described in detail with reference to fig. 3.
Fig. 3 is a schematic view of a salient image provided by an embodiment of the present disclosure. Referring to fig. 3, an image A and an image B are included. The image A is an initial image, and the image B is the salient image corresponding to the initial image. The initial image includes a long and narrow lane and houses on both sides of the lane. The primary display area of the initial image is the area in the middle of the image A that includes the lane and the houses, so the salient image corresponding to the initial image is as shown in fig. 3, where the white area of the image B is the salient image corresponding to the image A.
The falling image is the partial image of the initial image that is displayed at the end of the camera movement. For example, when the initial image is displayed with camera movement, the complete initial image is not shown in the display screen of the terminal device at once; if the mirror mode corresponding to the initial image is left-to-right, the left partial image of the initial image is displayed first in the display screen of the terminal device and the right partial image of the initial image is displayed last, and the right partial image displayed last by the terminal device is the falling image corresponding to the initial image.
Optionally, the terminal device may determine the falling image from the initial image. For example, the falling image has the same center as the initial image, and the size of the falling image is 80% of the size of the initial image. For example, in a coordinate system, if the center coordinates of the initial image are (1, 1) and the length and width of the initial image are both 1, the center coordinates of the falling image corresponding to the initial image are (1, 1) and the length and width of the falling image are both 0.8. Optionally, the size of the falling image may be any size set by the user, which is not limited by the embodiments of the present disclosure. For example, the center of the falling image may be different from the center of the initial image, and the size of the falling image may be any value such as 60% or 70% of the size of the initial image.
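As an illustration of the rule just described (same center, a configurable fraction of the initial image's size), a minimal Python sketch follows; the rectangle convention (x, y, w, h) with the image's top-left corner at the origin and the helper name falling_rect are assumptions.

    # Minimal sketch: derive the falling-image rectangle from the initial image,
    # with the same center and a configurable fraction (80% by default) of its size.
    def falling_rect(img_w: float, img_h: float, scale: float = 0.8):
        """Return (x, y, w, h) of a centered rectangle covering `scale` of the image."""
        w, h = img_w * scale, img_h * scale
        x, y = (img_w - w) / 2.0, (img_h - h) / 2.0
        return x, y, w, h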
S203, acquiring first image information of the salient image and second image information of the falling image.
Optionally, the first image information of the salient image includes a first size of the salient image and a first position of the salient image in the initial image. The first size is the length and width of the minimum envelope rectangle corresponding to the salient image. For example, the first size may be an aspect ratio of the salient image. For example, if the salient image has a width of 4 and a length of 3, the first size of the salient image is the aspect ratio of 4:3.
Alternatively, when the terminal device acquires the salient image of the initial image, the first size of the salient image may be determined. For example, in the actual application process, when the terminal device obtains the salient image through the preset algorithm, the salient image is usually an irregular graph, and the terminal device may obtain a minimum envelope rectangle corresponding to the salient image, and further determine the aspect ratio of the minimum envelope rectangle corresponding to the salient image as the first size of the salient image.
The first position may be the position of the salient image in the initial image, for example the coordinates of the salient image in the initial image. Optionally, when the terminal device determines the salient image corresponding to the initial image, the terminal device may determine the position of the salient image in the initial image. For example, the salient image may be located at the left side, the right side, the upper side, the lower side, the middle or the like of the initial image; the terminal device may establish a coordinate system in the initial image, and determine the position of the minimum enveloping rectangle corresponding to the salient image as the first position of the salient image in the initial image.
The second image information of the falling image includes a second size of the falling image and a second position of the falling image in the initial image. The second size is the length and width of the falling image. For example, the second size may be the aspect ratio of the falling image. For example, if the width of the falling image is 4 and the length is 3, the second size of the falling image is an aspect ratio of 4:3.
The second position may be the position of the falling image in the initial image, for example the coordinates of the falling image in the initial image. Optionally, when the terminal device determines the falling image corresponding to the initial image, the terminal device may determine the position of the falling image in the initial image. For example, the terminal device may establish a coordinate system in the initial image and determine the second position of the falling image in the initial image.
Optionally, the second size and the second position of the falling image may be determined from the initial image. For example, the terminal device may determine that the falling image has the same center as the initial image and that the size of the falling image is 80% of the size of the initial image, so as to obtain the second size of the falling image and the second position of the falling image in the initial image.
S204, determining a target mirror mode of the initial image according to the first image information and the second image information.
The mirror mode (camera-movement mode) is the way the shooting lens moves when a video is captured or rendered. For example, an image in a video captured by a user may be presented through a moving lens.
The target mirror mode of the initial image may be determined according to the following possible implementation: the matching degree between the initial image and multiple preset mirror modes is determined according to the first image information and the second image information, and the target mirror mode is determined among the multiple preset mirror modes according to the matching degree between the initial image and the multiple preset mirror modes. Optionally, the preset mirror modes may include at least one of the following: a left-to-right mirror mode, a right-to-left mirror mode, a top-to-bottom mirror mode, a bottom-to-top mirror mode, a far-to-near mirror mode, a near-to-far mirror mode, and the like. The left-to-right mirror mode is a mirror mode in which the initial image is displayed in the video from left to right, the right-to-left mirror mode is a mirror mode in which the initial image is displayed in the video from right to left, the top-to-bottom mirror mode is a mirror mode in which the initial image is displayed in the video from top to bottom, the bottom-to-top mirror mode is a mirror mode in which the initial image is displayed in the video from bottom to top, the far-to-near mirror mode is a mirror mode in which the initial image is displayed in the video from far to near, and the near-to-far mirror mode is a mirror mode in which the initial image is displayed in the video from near to far.
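For reference in the sketches that follow, the six preset mirror modes listed above can be written as a plain tuple of mode names; the identifiers are illustrative only.

    # The six preset mirror (camera-movement) modes enumerated above, as
    # illustrative mode names reused by the later sketches in this description.
    PRESET_MIRROR_MODES = (
        "left_to_right",   # image shown moving from left to right
        "right_to_left",   # image shown moving from right to left
        "top_to_bottom",   # image shown moving from top to bottom
        "bottom_to_top",   # image shown moving from bottom to top
        "far_to_near",     # zoom-in movement, from far to near
        "near_to_far",     # zoom-out movement, from near to far
    )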
Optionally, determining the matching degree between the initial image and the multiple preset mirror modes according to the first image information and the second image information specifically includes: determining a height ratio, a width ratio and an aspect-ratio ratio between the falling image and the salient image according to the first size and the second size. Optionally, the height ratio is the ratio of the height of the falling image to the height of the salient image. For example, if the height of the falling image is 10 and the height of the salient image is 5, the ratio of the height of the falling image to the height of the salient image is 2 to 1. Optionally, the ratio of the longest height of the falling image to the longest height of the salient image may be determined as the height ratio between the falling image and the salient image. Optionally, the width ratio is the ratio of the width of the falling image to the width of the salient image. For example, if the width of the falling image is 10 and the width of the salient image is 5, the ratio of the width of the falling image to the width of the salient image is 2 to 1. Optionally, the ratio of the longest width of the falling image to the longest width of the salient image may be determined as the width ratio between the falling image and the salient image. Optionally, the aspect-ratio ratio is the ratio of the aspect ratio of the falling image to the aspect ratio of the salient image. For example, if the aspect ratio of the falling image is 2 and the aspect ratio of the salient image is 1, the ratio of the aspect ratio of the falling image to the aspect ratio of the salient image is 2 to 1.
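A minimal Python sketch of the three ratios defined above follows, assuming both the falling image and the salient image are given as (x, y, w, h) rectangles; the helper name size_ratios is an assumption.

    # Minimal sketch: height ratio, width ratio and aspect-ratio ratio between
    # the falling image and the salient image (aspect ratio taken as width/height).
    def size_ratios(fall, salient):
        _, _, fw, fh = fall
        _, _, sw, sh = salient
        height_ratio = fh / sh                        # falling height / salient height
        width_ratio = fw / sw                         # falling width / salient width
        aspect_ratio_ratio = (fw / fh) / (sw / sh)    # falling aspect ratio / salient aspect ratio
        return height_ratio, width_ratio, aspect_ratio_ratio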
Optionally, each preset aspect ratio has a corresponding set of mirror modes. For example, the 4:3 aspect ratio corresponds to 6 mirror modes and the 16:9 aspect ratio corresponds to 6 mirror modes, where the 6 mirror modes corresponding to the 4:3 aspect ratio are the same as the 6 mirror modes corresponding to the 16:9 aspect ratio. Optionally, in the actual application process, if the aspect ratio of the salient image is closest to a preset aspect ratio, the matching degree between the initial image corresponding to the salient image and each of the 6 mirror modes corresponding to that preset aspect ratio is increased by 1. For example, if the aspect ratio of the salient image is 4:3, the matching degree between the initial image and each of the 6 mirror modes corresponding to the 4:3 aspect ratio is increased by 1.
A boundary distance between the falling image and the salient image is determined based on the first position and the second position. Optionally, the boundary distances include an upper boundary distance, a lower boundary distance, a left boundary distance, and a right boundary distance. The upper boundary distance is the distance between the upper boundary of the falling image and the upper boundary of the salient image. For example, if the position of the upper boundary of the falling image is 10 and the position of the upper boundary of the salient image is 8, the upper boundary distance is 2. The lower boundary distance is the distance between the lower boundary of the falling image and the lower boundary of the salient image. For example, if the position of the lower boundary of the falling image is 10 and the position of the lower boundary of the salient image is 8, the lower boundary distance is 2. The left boundary distance is the distance between the left boundary of the falling image and the left boundary of the salient image. For example, if the position of the left boundary of the falling image is 5 and the position of the left boundary of the salient image is 4, the left boundary distance is 1. The right boundary distance is the distance between the right boundary of the falling image and the right boundary of the salient image. For example, if the position of the right boundary of the falling image is 5 and the position of the right boundary of the salient image is 3, the right boundary distance is 2.
Next, a boundary distance between the falling image and the salient image will be described with reference to fig. 4.
Fig. 4 is a schematic diagram of a boundary distance according to an embodiment of the disclosure. Referring to fig. 4, an image A is included, where the image A includes a falling image and a salient image. The falling image includes an upper boundary A, a lower boundary A, a left boundary A, and a right boundary A. The salient image includes an upper boundary a, a lower boundary a, a left boundary a, and a right boundary a. The distance between the upper boundary A and the upper boundary a is the upper boundary distance, the distance between the lower boundary A and the lower boundary a is the lower boundary distance, the distance between the left boundary A and the left boundary a is the left boundary distance, and the distance between the right boundary A and the right boundary a is the right boundary distance.
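A minimal Python sketch of the four boundary distances follows, assuming both rectangles are expressed as (x, y, w, h) in the coordinate system of the initial image; the helper name boundary_distances is an assumption.

    # Minimal sketch: distances between corresponding boundaries of the
    # falling-image rectangle and the salient-image rectangle.
    def boundary_distances(fall, salient):
        fx, fy, fw, fh = fall
        sx, sy, sw, sh = salient
        upper = abs(sy - fy)                   # between the two upper boundaries
        lower = abs((sy + sh) - (fy + fh))     # between the two lower boundaries
        left = abs(sx - fx)                    # between the two left boundaries
        right = abs((sx + sw) - (fx + fw))     # between the two right boundaries
        return upper, lower, left, right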
The matching degree between the initial image and the multiple preset mirror modes is then determined according to the height ratio, the width ratio, the aspect-ratio ratio and the boundary distances. Optionally, the matching degree between the initial image and the multiple preset mirror modes may be determined according to the following possible implementations: a first matching degree between the initial image and each preset mirror mode is determined according to the height ratio. For example, the first matching degree between the initial image and mirror mode 1 is matching degree 1, and the first matching degree between the initial image and mirror mode 2 is matching degree 2.
A second matching degree between the initial image and each preset mirror mode is determined according to the width ratio. For example, the second matching degree between the initial image and mirror mode 1 is matching degree 1, and the second matching degree between the initial image and mirror mode 2 is matching degree 2.
A third matching degree between the initial image and each preset mirror mode is determined according to the aspect-ratio ratio. For example, the third matching degree between the initial image and mirror mode 1 is matching degree 1, and the third matching degree between the initial image and mirror mode 2 is matching degree 2.
A fourth matching degree between the initial image and each preset mirror mode is determined according to the boundary distances. For example, the fourth matching degree between the initial image and mirror mode 1 is matching degree 1, and the fourth matching degree between the initial image and mirror mode 2 is matching degree 2.
The matching degree between the initial image and the multiple preset mirror modes is determined according to at least one of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree. Optionally, the terminal device may determine the sum of two or more of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree corresponding to a preset mirror mode as the matching degree between the initial image and that preset mirror mode. For example, if the first matching degree between the initial image and mirror mode 1 is matching degree 1, the second matching degree is matching degree 2, the third matching degree is matching degree 3, and the fourth matching degree is matching degree 4, the matching degree between the initial image and mirror mode 1 is the sum of matching degree 1, matching degree 2, matching degree 3 and matching degree 4.
Optionally, determining the target mirror mode among the multiple preset mirror modes according to the matching degree between the initial image and the multiple preset mirror modes specifically includes: determining the preset mirror mode with the highest matching degree with the initial image as the target mirror mode. For example, the matching degree between the initial image and the left-to-right mirror mode is matching degree 1, the matching degree between the initial image and the right-to-left mirror mode is matching degree 2, the matching degree between the initial image and the top-to-bottom mirror mode is matching degree 3, the matching degree between the initial image and the bottom-to-top mirror mode is matching degree 4, the matching degree between the initial image and the far-to-near mirror mode is matching degree 5, and the matching degree between the initial image and the near-to-far mirror mode is matching degree 6; if matching degree 2 is the highest, the terminal device determines that the target mirror mode corresponding to the initial image is the right-to-left mirror mode.
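A minimal Python sketch of accumulating the per-criterion matching degrees and selecting the preset mirror mode with the highest total follows; the score tables are dictionaries produced by rules such as those detailed in the embodiment of fig. 5 below, and the helper name select_target_mode is an assumption.

    # Minimal sketch: sum the per-criterion matching degrees and return the
    # preset mirror mode with the highest accumulated matching degree.
    def select_target_mode(score_tables):
        """score_tables: iterable of dicts {mode_name: score} (first..fourth matching degree)."""
        total = {}
        for table in score_tables:
            for mode, score in table.items():
                total[mode] = total.get(mode, 0) + score
        return max(total, key=total.get)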
S205, generating a video according to a target mirror mode of the initial image.
Optionally, the display mode of the initial image included in the video is a target mirror mode corresponding to the initial image. For example, the video generation request includes an initial image a, an initial image B, and an initial image C, and if the target mirror mode of the initial image a is from left to right and the target mirror mode of the initial image B is from top to bottom, the target mirror mode of the initial image C is from far to near, the initial image a in the generated video is displayed from left to right, the initial image B is displayed from top to bottom, and the initial image C is displayed from far to near.
The embodiment of the disclosure provides a video generation method, which comprises the steps of obtaining a video generation request, wherein the video generation request comprises at least one initial image, obtaining a salient image and a falling image of the initial image, obtaining first image information of the salient image and second image information of the falling image, determining the matching degree between the initial image and a plurality of preset mirror modes according to the first image information and the second image information, determining the mirror mode with the highest matching degree with the initial image as a target mirror mode, and generating a video according to the target mirror mode of the initial image. Therefore, the terminal equipment can determine the mirror mode of the initial image according to the first image information of the salient image and the second image information of the falling image of the initial image, and the first image information and the second image information can accurately reflect the characteristics of the initial image, so that the terminal equipment can accurately determine the mirror mode of the initial image, and further the display effect of the video is improved.
The video generating method is described in detail below with reference to fig. 5 on the basis of the embodiment shown in fig. 2.
Fig. 5 is a flowchart of another video generating method according to an embodiment of the present disclosure. Referring to fig. 5, the method includes:
s501, acquiring a video generation request.
The video generation request includes at least one initial image.
It should be noted that, the execution process of step S501 may refer to step S201, and this will not be described in detail in the embodiments of the present disclosure.
S502, acquiring a salient image and a falling image of an initial image, and first image information of the salient image and second image information of the falling image.
It should be noted that, the execution process of step S502 may refer to step S202 and step S203, which is not described in detail in the embodiment of the disclosure.
S503, according to the first image information and the second image information, determining the matching degree between the initial image and a plurality of preset mirror modes.
Optionally, the terminal device may determine a first matching degree, a second matching degree, a third matching degree, and a fourth matching degree between the initial image and each preset mirror mode, and further determine the matching degree between the initial image and the preset mirror mode according to at least one of the first matching degree, the second matching degree, the third matching degree, and the fourth matching degree.
Next, the 4 matching degrees between the initial image and the preset mirror modes, and the method for determining each of them, are described in detail.
First matching degree: the first matching degree between the initial image and a preset mirror mode.
Optionally, the first matching degree between the initial image and each preset mirror mode is determined according to the height ratio. Optionally, the first matching degree may be determined according to the following possible implementation: if the height ratio is smaller than or equal to a first threshold, the first matching degree between the initial image and the top-to-bottom mirror mode is determined to be a first value, the first matching degree between the initial image and the bottom-to-top mirror mode is determined to be the first value, and the first matching degree between the initial image and the preset mirror modes other than the top-to-bottom and bottom-to-top mirror modes is determined to be 0, where the first value is greater than 0. For example, if the height ratio between the longest height of the falling image and the longest height of the salient image is less than or equal to 0.9, the first matching degree between the initial image and the top-to-bottom and bottom-to-top mirror modes is determined to be 10, and the first matching degree between the initial image and the other mirror modes is determined to be 0. In the actual application process, if the height ratio between the falling image and the salient image is smaller than or equal to the first threshold, the falling image cannot completely display the salient image, and the salient image corresponding to the initial image needs a vertical display mode to present more information; therefore, the first matching degree between the initial image and the top-to-bottom and bottom-to-top mirror modes is high, and the first matching degree between the initial image and the other mirror modes is 0.
Second matching degree: the second matching degree between the initial image and a preset mirror mode.
Optionally, the second matching degree between the initial image and each preset mirror mode is determined according to the width ratio. Optionally, the second matching degree may be determined according to the following possible implementation: if the width ratio is smaller than or equal to a second threshold, the second matching degree between the initial image and the left-to-right mirror mode is determined to be a second value, the second matching degree between the initial image and the right-to-left mirror mode is determined to be the second value, and the second matching degree between the initial image and the preset mirror modes other than the left-to-right and right-to-left mirror modes is determined to be 0, where the second value is greater than 0. For example, if the width ratio between the longest width of the falling image and the longest width of the salient image is less than or equal to 0.9, the second matching degree between the initial image and the left-to-right and right-to-left mirror modes is determined to be 10, and the second matching degree between the initial image and the other mirror modes is determined to be 0. In the actual application process, if the width ratio between the falling image and the salient image is smaller than or equal to the second threshold, the falling image cannot completely display the salient image, and the salient image corresponding to the initial image needs a horizontal display mode to present more information; therefore, the second matching degree between the initial image and the left-to-right and right-to-left mirror modes is high, and the second matching degree between the initial image and the other mirror modes is 0.
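A minimal Python sketch of the height-ratio and width-ratio rules above follows; the thresholds (0.9) and score value (10) are taken from the examples in the text, and the function names are assumptions.

    # Minimal sketch: first and second matching degrees over the six mode names.
    MODES = ("left_to_right", "right_to_left", "top_to_bottom",
             "bottom_to_top", "far_to_near", "near_to_far")

    def first_matching_degree(height_ratio, threshold=0.9, value=10):
        """Height-ratio rule: a short falling image favours vertical camera movement."""
        scores = dict.fromkeys(MODES, 0)
        if height_ratio <= threshold:
            scores["top_to_bottom"] = value
            scores["bottom_to_top"] = value
        return scores

    def second_matching_degree(width_ratio, threshold=0.9, value=10):
        """Width-ratio rule: a narrow falling image favours horizontal camera movement."""
        scores = dict.fromkeys(MODES, 0)
        if width_ratio <= threshold:
            scores["left_to_right"] = value
            scores["right_to_left"] = value
        return scores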
Third matching degree: the third matching degree between the initial image and a preset mirror mode.
Optionally, the third matching degree between the initial image and each preset mirror mode is determined according to the aspect-ratio ratio. Optionally, the third matching degree may be determined according to the following possible implementation: if the aspect-ratio ratio is less than or equal to a third threshold, the third matching degree between the initial image and the near-to-far mirror mode is determined to be a third value, and the third matching degree between the initial image and the mirror modes other than the near-to-far mirror mode is determined to be 0, where the third value is greater than 0. For example, if the ratio between the aspect ratio of the falling image and the aspect ratio of the salient image is less than or equal to 0.8, the third matching degree between the initial image and the near-to-far mirror mode is determined to be 2, and the third matching degree between the initial image and the other mirror modes is determined to be 0. In the actual application process, if the aspect-ratio ratio between the falling image and the salient image is smaller than or equal to the third threshold, the falling image cannot completely display the salient image, and the salient image corresponding to the initial image needs a reduced (zoomed-out) display mode to present more information; therefore, the third matching degree between the initial image and the near-to-far mirror mode (i.e., a zoom-out lens movement) is high, and the third matching degree between the initial image and the other mirror modes is 0.
If the aspect-ratio ratio is greater than the third threshold, the third matching degree between the initial image and the far-to-near mirror mode is determined to be the third value, and the third matching degree between the initial image and the mirror modes other than the far-to-near mirror mode is determined to be 0. For example, if the ratio between the aspect ratio of the falling image and the aspect ratio of the salient image is greater than 0.8, the third matching degree between the initial image and the far-to-near mirror mode is determined to be 2, and the third matching degree between the initial image and the other mirror modes is determined to be 0. In the actual application process, if the aspect-ratio ratio between the falling image and the salient image is greater than the third threshold, the falling image can substantially display the salient image, and in order to improve the image display effect, the salient image corresponding to the initial image needs an enlarged (zoomed-in) display mode to present the image information clearly; therefore, the third matching degree between the initial image and the far-to-near mirror mode (i.e., a zoom-in lens movement) is high, and the third matching degree between the initial image and the other mirror modes is 0.
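A minimal Python sketch of the aspect-ratio rule above follows: at or below the threshold favour zooming out (near-to-far), otherwise favour zooming in (far-to-near). The threshold (0.8), score value (2) and function name are illustrative, taken from or modeled on the examples in the text.

    # Minimal sketch: third matching degree from the aspect-ratio ratio.
    MODES = ("left_to_right", "right_to_left", "top_to_bottom",
             "bottom_to_top", "far_to_near", "near_to_far")

    def third_matching_degree(aspect_ratio_ratio, threshold=0.8, value=2):
        scores = dict.fromkeys(MODES, 0)
        if aspect_ratio_ratio <= threshold:
            scores["near_to_far"] = value   # zoom out to reveal more of the salient region
        else:
            scores["far_to_near"] = value   # zoom in to enlarge the salient region
        return scores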
Fourth matching degree: the fourth matching degree between the initial image and a preset mirror mode.
Optionally, the fourth matching degree between the initial image and each preset mirror mode is determined according to the boundary distances. Optionally, the fourth matching degree may be determined according to the following possible implementation: a first distance ratio of the upper boundary distance to the lower boundary distance and a second distance ratio of the left boundary distance to the right boundary distance are obtained; a first sub-matching degree between the initial image and each mirror mode is determined according to the first distance ratio; a second sub-matching degree between the initial image and each mirror mode is determined according to the second distance ratio; and the fourth matching degree is determined according to the first sub-matching degree and the second sub-matching degree.
Optionally, for any one preset mirror mode, determining a fourth matching degree between the initial image and the preset mirror mode is: and the sum of the corresponding first sub-matching degree and the corresponding second sub-matching degree. For example, if the first sub-matching degree of the initial image and the preset mirror mode 1 is the matching degree 1 and the second sub-matching degree of the initial image and the preset mirror mode 1 is the matching degree 2, the fourth matching degree of the initial image and the preset mirror mode 1 is the sum of the matching degree 1 and the matching degree 2.
Optionally, the first sub-matching degree may be determined according to the following possible implementation: if the first distance ratio is greater than or equal to a fourth threshold, the first sub-matching degree between the initial image and the top-to-bottom mirror mode is determined to be a fourth value, and the first sub-matching degree between the initial image and the mirror modes other than the top-to-bottom mirror mode is determined to be 0, where the fourth value is greater than 0. For example, if the first distance ratio of the upper boundary distance to the lower boundary distance is greater than or equal to the fourth threshold, the first sub-matching degree between the initial image and the top-to-bottom mirror mode is determined to be 1, and the first sub-matching degree between the initial image and the other mirror modes is determined to be 0. In the actual application process, if the upper boundary distance is at least 2 times the lower boundary distance, the upper boundary of the falling image is far from the upper boundary of the salient image, so the camera needs to move from top to bottom to display more image information of the initial image; therefore, the first sub-matching degree between the initial image and the top-to-bottom mirror mode is high, and the first sub-matching degree between the initial image and the other mirror modes is 0.
If the first distance ratio is smaller than the fourth threshold, the first sub-matching degree between the initial image and the bottom-to-top mirror mode is determined to be the fourth value, and the first sub-matching degree between the initial image and the mirror modes other than the bottom-to-top mirror mode is determined to be 0, where the fourth value is greater than 0. For example, if the first distance ratio of the upper boundary distance to the lower boundary distance is smaller than the fourth threshold, the first sub-matching degree between the initial image and the bottom-to-top mirror mode is determined to be 1, and the first sub-matching degree between the initial image and the other mirror modes is determined to be 0. In the actual application process, if the upper boundary distance is smaller than 2 times the lower boundary distance, the lower boundary of the falling image is far from the lower boundary of the salient image, so the camera needs to move from bottom to top to display more image information of the initial image; therefore, the first sub-matching degree between the initial image and the bottom-to-top mirror mode is high, and the first sub-matching degree between the initial image and the other mirror modes is 0.
Optionally, the second sub-matching degree may be determined according to the following possible implementation: if the second distance ratio is greater than or equal to a fifth threshold, the second sub-matching degree between the initial image and the left-to-right mirror mode is determined to be the fourth value, and the second sub-matching degree between the initial image and the mirror modes other than the left-to-right mirror mode is determined to be 0, where the fourth value is greater than 0. For example, if the second distance ratio of the left boundary distance to the right boundary distance is greater than or equal to the fifth threshold, the second sub-matching degree between the initial image and the left-to-right mirror mode is determined to be 1, and the second sub-matching degree between the initial image and the other mirror modes is determined to be 0. In the actual application process, if the left boundary distance is at least 2 times the right boundary distance, the left boundary of the falling image is far from the left boundary of the salient image, so the camera needs to move from left to right to display more image information of the initial image; therefore, the second sub-matching degree between the initial image and the left-to-right mirror mode is high, and the second sub-matching degree between the initial image and the other mirror modes is 0.
If the second distance ratio is smaller than the fifth threshold, the second sub-matching degree between the initial image and the right-to-left mirror mode is determined to be the fourth value, and the second sub-matching degree between the initial image and the mirror modes other than the right-to-left mirror mode is determined to be 0. For example, if the second distance ratio of the left boundary distance to the right boundary distance is smaller than the fifth threshold, the second sub-matching degree between the initial image and the right-to-left mirror mode is determined to be 1, and the second sub-matching degree between the initial image and the other mirror modes is determined to be 0. In the actual application process, if the left boundary distance is smaller than 2 times the right boundary distance, the right boundary of the falling image is far from the right boundary of the salient image, so the camera needs to move from right to left to display more image information of the initial image; therefore, the second sub-matching degree between the initial image and the right-to-left mirror mode is high, and the second sub-matching degree between the initial image and the other mirror modes is 0.
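A minimal Python sketch of the boundary-distance rules above follows; the thresholds (2) and the score value (1) follow the examples in the text, and the function name is an assumption.

    # Minimal sketch: fourth matching degree from the boundary distances.
    MODES = ("left_to_right", "right_to_left", "top_to_bottom",
             "bottom_to_top", "far_to_near", "near_to_far")

    def fourth_matching_degree(upper, lower, left, right,
                               fourth_threshold=2.0, fifth_threshold=2.0, value=1):
        scores = dict.fromkeys(MODES, 0)
        # First sub-matching degree: vertical direction of camera movement.
        first_ratio = upper / lower if lower > 0 else float("inf")
        if first_ratio >= fourth_threshold:
            scores["top_to_bottom"] += value
        else:
            scores["bottom_to_top"] += value
        # Second sub-matching degree: horizontal direction of camera movement.
        second_ratio = left / right if right > 0 else float("inf")
        if second_ratio >= fifth_threshold:
            scores["left_to_right"] += value
        else:
            scores["right_to_left"] += value
        return scores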
S504, determining a target mirror mode among the multiple preset mirror modes according to the matching degree between the initial image and the multiple preset mirror modes.
It should be noted that, the execution process of step S504 may refer to step S204, which is not described in detail in the embodiment of the present disclosure.
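As a minimal sketch of step S504, assuming the matching degrees have already been computed as a mapping from each preset mirror mode to its score, the target mirror mode is simply the preset mode with the highest matching degree:

```python
# Hypothetical helper for step S504; the dict-based representation of the
# matching degrees is an assumption made for illustration.
def select_target_mirror_mode(matching_degrees):
    # e.g. {"left_right": 2.0, "down_up": 1.0, "near_far": 0.0} -> "left_right"
    return max(matching_degrees, key=matching_degrees.get)
```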
S505, generating a video according to a target mirror mode of the initial image.
Optionally, the video generation request further includes a duration and a mirror-moving speed of each initial image. The duration is the display duration of the initial image in the video; for example, if the duration of the initial image is 1 second, the initial image is displayed in the video for 1 second. The mirror-moving speed is the speed at which the target mirror mode corresponding to the initial image is applied; for example, if the target mirror mode corresponding to the initial image is the left-right mirror mode and the mirror-moving speed of the initial image is 1 cm per second, the initial image is displayed in the video moving from left to right at a speed of 1 cm per second.
Alternatively, the video may be generated according to the following possible implementation: the starting image corresponding to the initial image is determined according to the target mirror mode, the falling image, the duration and the mirror-moving speed of the initial image; the video segment corresponding to the initial image is determined according to the duration, the mirror-moving speed, the target mirror mode, the starting image and the falling image of the initial image; and the video is generated according to the video segment corresponding to each initial image.
Optionally, the starting image is the image displayed by the terminal device at the beginning of the mirror-moving process on the initial image. For example, when the initial image undergoes the mirror-moving process, the complete initial image is not displayed on the display screen of the terminal device at once; if the mirror-moving mode corresponding to the initial image is from left to right, a left partial image of the initial image is displayed first on the display screen of the terminal device and a right partial image of the initial image is displayed last, and the left partial image displayed first by the terminal device is the starting image corresponding to the initial image.
Alternatively, the starting image may be determined according to the following possible implementation: the position direction of the starting image relative to the falling image is determined according to the target mirror mode; for example, if the target mirror mode is the left-right mirror mode, the starting image is located on the left side of the falling image. The starting image is then determined according to the coordinates of the falling image, the mirror-moving speed and the duration; for example, if the mirror-moving speed is 1 cm per second and the duration is 1 second, the starting image lies to the left of the falling image, has the same size as the falling image, and its position is obtained by shifting the position of the falling image 1 cm to the left.
Optionally, if the obtained starting image exceeds the range of the initial image, it is determined that the initial image cannot be displayed with a mirror-moving effect, and the initial image is displayed directly in the video without applying any mirror mode.
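The following sketch illustrates, under stated assumptions, how the starting image could be derived from the falling image, the target mirror mode, the duration and the mirror-moving speed, including the fallback when the starting image would exceed the range of the initial image. Rectangles are represented as (x, y, w, h) in pixels within the initial image, and the conversion of the mirror-moving speed to pixels per second is assumed to have been done beforehand; these details are not specified by the embodiment.

```python
# Hedged sketch: derive the starting image rectangle from the falling image
# rectangle. All names and the pixel-based units are illustrative assumptions.
def starting_rect(falling_rect, mode, duration_s, speed_px_per_s, init_w, init_h):
    x, y, w, h = falling_rect
    offset = speed_px_per_s * duration_s
    if mode == "left_right":      # pan left -> right, so the start lies to the left
        start = (x - offset, y, w, h)
    elif mode == "right_left":    # start lies to the right of the falling image
        start = (x + offset, y, w, h)
    elif mode == "up_down":       # start lies above the falling image
        start = (x, y - offset, w, h)
    elif mode == "down_up":       # start lies below the falling image
        start = (x, y + offset, w, h)
    else:
        return None               # zoom-style (near-far / far-near) modes handled elsewhere
    sx, sy, sw, sh = start
    # If the starting image would fall outside the initial image, give up on the
    # mirror-moving effect and display the initial image directly instead.
    if sx < 0 or sy < 0 or sx + sw > init_w or sy + sh > init_h:
        return None
    return start
```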
Optionally, determining the starting image corresponding to the initial image according to the target mirror mode, the falling image, the duration and the mirror-moving speed of the initial image further includes: if the target mirror mode of the initial image is the up-down mirror mode or the down-up mirror mode, determining that the image widths of the starting image and the falling image corresponding to the initial image are the same as the image width of the initial image. For example, since the size of the falling image is smaller than that of the initial image, if the mirror mode is a vertical mirror mode, the widths of the starting image and the falling image can be expanded to be the same as the width of the initial image, so that more image information of the initial image can be displayed and the video display effect is improved.
Next, the manner of image expansion will be described in detail with reference to fig. 6 to 7.
Fig. 6 is a schematic diagram of expanding a falling image according to an embodiment of the present disclosure. Referring to fig. 6, an image A and the falling image corresponding to the image A are shown. The mirror mode of the image A is the left-right mirror mode, and the height of the falling image is smaller than the height of the image A. Therefore, when the mirror mode of the image A is a horizontal mirror mode, the height of the falling image can be expanded to be the same as the height of the image A, so that the falling image can display more image information of the image A.
Fig. 7 is another schematic diagram of expanding a falling image according to an embodiment of the present disclosure. Referring to fig. 7, an image A and the falling image corresponding to the image A are shown. The mirror mode of the image A is the up-down mirror mode, and the width of the falling image is smaller than the width of the image A. Therefore, when the mirror mode of the image A is a vertical mirror mode, the width of the falling image can be expanded to be the same as the width of the image A, so that the falling image can display more image information of the image A.
If the target mirror mode of the initial image is the left-right mirror mode or the right-left mirror mode, it is determined that the image heights of the starting image and the falling image corresponding to the initial image are the same as the image height of the initial image. For example, since the size of the falling image is smaller than that of the initial image, if the mirror mode is a horizontal mirror mode, the heights of the starting image and the falling image can be expanded to be the same as the height of the initial image, so that more image information of the initial image can be displayed and the video display effect is improved.
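A minimal sketch of the expansion described above, assuming the starting image and the falling image are axis-aligned rectangles (x, y, w, h) inside the initial image: for a vertical mirror mode the rectangles are widened to the full width of the initial image, and for a horizontal mirror mode they are heightened to the full height.

```python
# Illustrative expansion of the starting and falling rectangles; the rectangle
# representation and mode names are assumptions carried over from the sketches above.
def expand_frames(start, falling, mode, init_w, init_h):
    def full_width(rect):
        _, y, _, h = rect
        return (0, y, init_w, h)      # take the whole width of the initial image
    def full_height(rect):
        x, _, w, _ = rect
        return (x, 0, w, init_h)      # take the whole height of the initial image
    if mode in ("up_down", "down_up"):
        return full_width(start), full_width(falling)
    if mode in ("left_right", "right_left"):
        return full_height(start), full_height(falling)
    return start, falling             # zoom-style modes are left unchanged
```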
Optionally, determining the video segment corresponding to the initial image according to the duration, the mirror-moving speed, the target mirror mode, the starting image and the falling image of the initial image specifically includes: if the target mirror modes of at least two adjacent initial images are the same, updating the aspect ratios of the starting images corresponding to the at least two initial images to be the same, and updating the aspect ratios of the falling images corresponding to the at least two initial images to be the same. For example, if the mirror mode of two consecutive initial images in the video is the left-right mirror mode, the aspect ratio of the initial image A is 4:3 and the aspect ratio of the initial image B is 16:9, the aspect ratios used for the initial image A and the initial image B can be unified to avoid an abrupt change at the boundary between the two segments, thereby improving the video display effect.
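A hedged sketch of the aspect-ratio unification for adjacent segments that share the same target mirror mode is given below. Choosing the first segment's falling-image aspect ratio as the common ratio, and adjusting heights while keeping widths, are assumptions made for illustration; the embodiment only requires that the aspect ratios be made the same.

```python
# Unify aspect ratios across runs of adjacent segments with the same mirror mode.
# 'segments' is assumed to be a list of dicts with keys 'mode', 'start', 'falling',
# where 'start' and 'falling' are (x, y, w, h) rectangles.
def unify_aspect_ratios(segments):
    def set_ratio(rect, target_ratio):
        x, y, w, h = rect
        return (x, y, w, w / target_ratio)   # keep the width, adjust the height
    i = 0
    while i < len(segments):
        j = i
        while j + 1 < len(segments) and segments[j + 1]["mode"] == segments[i]["mode"]:
            j += 1
        if j > i:                            # a run of at least two segments with the same mode
            _, _, w0, h0 = segments[i]["falling"]
            ratio = w0 / h0                  # common ratio taken from the first segment
            for k in range(i, j + 1):
                segments[k]["start"] = set_ratio(segments[k]["start"], ratio)
                segments[k]["falling"] = set_ratio(segments[k]["falling"], ratio)
        i = j + 1
    return segments
```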
The embodiment of the disclosure provides a video generation method, which comprises the steps of obtaining a video generation request, wherein the video generation request comprises at least one initial image, obtaining a salient image and a falling image of the initial image, obtaining first image information of the salient image and second image information of the falling image, determining the matching degree between the initial image and a plurality of preset mirror modes according to the first image information and the second image information, determining the mirror mode with the highest matching degree with the initial image as a target mirror mode, and generating a video according to the target mirror mode of the initial image. Therefore, the terminal equipment can determine the mirror mode of the initial image according to the first image information of the salient image and the second image information of the falling image of the initial image, and the first image information and the second image information can accurately reflect the characteristics of the initial image, so that the terminal equipment can accurately determine the mirror mode of the initial image, and further the display effect of the video is improved.
With reference to fig. 8, the procedure of the video generating method will be described below.
Fig. 8 is a process schematic diagram of a video generating method according to an embodiment of the disclosure. Referring to fig. 8, a video generation request and a terminal device are shown. The video generation request includes an image A, the duration of the image A (1 second), and the mirror-moving speed of the image A (1 cm per second). The terminal device determines the salient image and the falling image corresponding to the image A according to the image A, and determines, according to the falling image and the salient image, that the mirror mode corresponding to the image A is the left-right mirror mode.
Referring to fig. 8, the terminal device determines that the mirror mode of the image A is the left-right mirror mode, determines, according to the falling image, that the starting image of the image A is located 1 cm to the left of the falling image, and expands the heights of the starting image and the falling image to be the same as the height of the image A. The terminal device then displays the image A from left to right on the display screen, and the display duration of the image A is 1 second. In this way, the terminal device can determine the mirror mode of the initial image according to the first image information of the salient image and the second image information of the falling image of the initial image; since the first image information and the second image information can accurately reflect the characteristics of the initial image, the terminal device can accurately determine the mirror mode of the initial image, thereby improving the display effect of the video.
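A rough end-to-end sketch of the rendering step implied by fig. 8 is given below, assuming the initial image is a NumPy array, the starting and falling images are (x, y, w, h) rectangles as in the sketches above, and OpenCV is available for cropping and resizing; none of these choices are prescribed by the embodiment. Each frame crops a window interpolated linearly between the starting image and the falling image and resizes it to the output resolution.

```python
# Illustrative rendering of one video segment; library choice (OpenCV) and the
# linear interpolation between the starting and falling rectangles are assumptions.
def render_segment(initial, start, falling, duration_s, fps=30, out_size=(1280, 720)):
    import cv2
    frames = []
    n = max(1, int(round(duration_s * fps)))
    for i in range(n):
        t = i / max(1, n - 1)                                  # 0.0 at the start, 1.0 at the falling image
        x, y, w, h = [s + t * (f - s) for s, f in zip(start, falling)]
        crop = initial[int(y):int(y + h), int(x):int(x + w)]   # visible window at this frame
        frames.append(cv2.resize(crop, out_size))
    return frames
```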
Fig. 9 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present disclosure. Referring to fig. 9, the video generating apparatus 10 includes a first acquisition module 11, a second acquisition module 12, a third acquisition module 13, and a determination module 14, wherein:
the first obtaining module 11 is configured to obtain a video generation request, where the video generation request includes at least one initial image;
the second acquiring module 12 is configured to acquire a salient image and a falling image of the initial image;
the third acquiring module 13 is configured to acquire first image information of the salient image and second image information of the falling image;
the determining module 14 is configured to determine a target mirror mode of the initial image according to the first image information and the second image information, and generate the video according to the target mirror mode of the initial image.
In one possible implementation, the determining module 14 is specifically configured to:
determining the matching degree between the initial image and a plurality of preset mirror modes according to the first image information and the second image information;
and determining the target lens moving mode in the preset lens moving modes according to the matching degree between the initial image and the preset lens moving modes.
In one possible implementation, the determining module 14 is specifically configured to:
determining a height ratio, a width ratio and an aspect ratio between the falling image and the salient image according to the first size and the second size;
determining a boundary distance between the falling image and the salient image according to the first position and the second position, wherein the boundary distance comprises: upper boundary distance, lower boundary distance, left boundary distance, and right boundary distance;
and determining the matching degree between the initial image and a plurality of preset mirror modes according to the height ratio, the width ratio, the length-width ratio and the boundary distance.
In one possible implementation, the determining module 14 is specifically configured to:
determining a first matching degree between the initial image and each preset mirror mode according to the height ratio;
determining a second matching degree between the initial image and each preset mirror mode according to the width ratio;
determining a third matching degree between the initial image and each preset mirror mode according to the aspect ratio;
determining a fourth matching degree between the initial image and each preset mirror mode according to the boundary distance;
And determining the matching degree between the initial image and a plurality of preset mirror modes according to at least one of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree.
In one possible implementation, the determining module 14 is specifically configured to:
if the height ratio is smaller than or equal to a first threshold value, determining a first matching degree between the initial image and the up-down mirror mode as a first value, and,
determining a first matching degree of the initial image and the lower-upper mirror mode as the first value, and,
and determining that a first matching degree between the initial image and other preset moving mirrors except the up-down moving mirror mode and the down-up moving mirror mode is 0, wherein the first value is larger than 0.
In one possible implementation, the determining module 14 is specifically configured to:
if the width ratio is smaller than or equal to a second threshold value, determining a second matching degree of the initial image and the left-right mirror mode as a second value, and
determining a second matching degree of the initial image and the right-left mirror mode as the second value, and,
and determining that a second matching degree between the initial image and other preset mirrors except the left-right mirror mode and the right-left mirror mode is 0, wherein the second value is larger than 0.
In one possible implementation, the determining module 14 is specifically configured to:
if the aspect ratio is smaller than or equal to a third threshold, determining a third matching degree of the initial image and the near-far mirror mode as a third value, and determining a third matching degree of the initial image and other mirror modes except the near-far mirror mode as 0, wherein the third value is larger than 0;
and if the aspect ratio is greater than the third threshold, determining that the third matching degree of the initial image and the far and near mirror mode is the third value, and determining that the third matching degree of the initial image and other mirror modes except the far and near mirror mode is 0.
In one possible implementation, the determining module 14 is specifically configured to:
acquiring a first distance ratio of the upper boundary distance to the lower boundary distance and a second distance ratio of the left boundary distance to the right boundary distance;
determining a first sub-matching degree of the initial image and each preset lens-carrying mode according to the first distance ratio;
determining a second sub-matching degree of the initial image and each preset lens-carrying mode according to the second distance ratio;
For any one preset mirror mode, the fourth matching degree between the initial image and the preset mirror mode is determined as the sum of the corresponding first sub-matching degree and the corresponding second sub-matching degree.
In one possible implementation, the determining module 14 is specifically configured to:
if the first distance ratio is greater than or equal to a fourth threshold, determining that the first sub-matching degree of the initial image and the up-down mirror mode is a fourth value, and determining that the first sub-matching degree between the initial image and other mirror modes except the up-down mirror mode is 0, wherein the fourth value is greater than 0;
and if the first distance ratio is smaller than the fourth threshold value, determining that the first sub-matching degree of the initial image and the lower and upper mirror modes is the fourth value, and determining that the first sub-matching degree of the initial image and other mirror modes except the lower and upper mirror modes is 0.
In one possible implementation, the determining module 14 is specifically configured to:
if the second distance ratio is greater than or equal to a fifth threshold, determining that the second sub-matching degree of the initial image and the left and right mirror modes is a fourth value, and determining that the second sub-matching degree of the initial image and other mirror modes except the left and right mirror modes is 0, wherein the fourth value is greater than 0;
And if the second distance ratio is smaller than the fifth threshold, determining that the second sub-matching degree of the initial image and the right-left mirror mode is the fourth value, and determining that the second sub-matching degree of the initial image and other mirror modes except the right-left mirror mode is 0.
In one possible implementation, the determining module 14 is specifically configured to:
and determining the sum of at least two of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree corresponding to the preset mirror mode as the matching degree between the initial image and the preset mirror mode.
In one possible implementation, the determining module 14 is specifically configured to:
determining a starting image corresponding to the initial image according to a target lens moving mode of the initial image, the falling image, the duration and the lens moving speed;
determining a video segment corresponding to the initial image according to the duration of the initial image, the mirror speed, the target mirror mode, the starting image and the falling image;
and generating the video according to the video segment corresponding to each initial image.
In one possible implementation, the determining module 14 is specifically configured to:
If the target lens moving mode of the initial image is an up-down lens moving mode or a down-up lens moving mode, determining that the image widths of the starting image and the falling image corresponding to the initial image are the same as the image width of the initial image;
if the target lens moving mode of the initial image is a left-right lens moving mode or a right-left lens moving mode, determining that the image heights of the starting image and the falling image corresponding to the initial image are the same as the image height of the initial image.
In one possible implementation, the determining module 14 is specifically configured to:
if the target mirror mode of at least two adjacent initial images is the same, updating the aspect ratio of the starting images corresponding to the at least two initial images to be the same and the aspect ratio of the falling images corresponding to the at least two initial images to be the same.
The video generating apparatus provided in this embodiment may be used to execute the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring to fig. 10, a schematic structural diagram of an electronic device 900 suitable for implementing embodiments of the present disclosure is shown, where the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 10 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 10, the electronic apparatus 900 may include a processing device (e.g., a central processor, a graphics processor, or the like) 901, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage device 908 into a random access Memory (Random Access Memory, RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are also stored. The processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
In general, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 907 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. The communication means 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. While fig. 10 shows an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 909, or installed from the storage device 908, or installed from the ROM 902. When executed by the processing device 901, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., connected via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided a video generating method, including:
acquiring a video generation request, wherein the video generation request comprises at least one initial image;
acquiring a salient image and a falling image of the initial image;
acquiring first image information of the salient image and second image information of the falling image;
and determining a target mirror mode of the initial image according to the first image information and the second image information, and generating the video according to the target mirror mode of the initial image.
According to one or more embodiments of the present disclosure, determining a target mirror mode of the initial image according to the first image information and the second image information includes:
determining the matching degree between the initial image and a plurality of preset mirror modes according to the first image information and the second image information;
and determining the target lens moving mode in the preset lens moving modes according to the matching degree between the initial image and the preset lens moving modes.
According to one or more embodiments of the present disclosure, the first image information includes a first size of the salient image and a first position of the salient image in the initial image; the second image information comprises a second size of the falling image and a second position of the falling image in the initial image;
According to the first image information and the second image information, determining the matching degree between the initial image and a plurality of preset mirror modes comprises the following steps:
determining a height ratio, a width ratio and an aspect ratio between the falling image and the salient image according to the first size and the second size;
determining a boundary distance between the falling image and the salient image according to the first position and the second position, wherein the boundary distance comprises: upper boundary distance, lower boundary distance, left boundary distance, and right boundary distance;
and determining the matching degree between the initial image and a plurality of preset mirror modes according to the height ratio, the width ratio, the length-width ratio and the boundary distance.
According to one or more embodiments of the present disclosure, determining a matching degree between the initial image and a plurality of preset mirror modes according to the height ratio, the width ratio, the aspect ratio and the boundary distance includes:
determining a first matching degree between the initial image and each preset mirror mode according to the height ratio;
determining a second matching degree between the initial image and each preset mirror mode according to the width ratio;
Determining a third matching degree between the initial image and each preset mirror mode according to the aspect ratio;
determining a fourth matching degree between the initial image and each preset mirror mode according to the boundary distance;
and determining the matching degree between the initial image and a plurality of preset mirror modes according to at least one of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree.
According to one or more embodiments of the present disclosure, determining a first matching degree between an initial image and each preset mirror mode according to the height ratio includes:
if the height ratio is smaller than or equal to a first threshold value, determining a first matching degree between the initial image and the up-down mirror mode as a first value, and,
determining a first matching degree of the initial image and the lower-upper mirror mode as the first value, and,
and determining that a first matching degree between the initial image and other preset moving mirrors except the up-down moving mirror mode and the down-up moving mirror mode is 0, wherein the first value is larger than 0.
According to one or more embodiments of the present disclosure, determining a second matching degree between the initial image and each preset mirror mode according to the width ratio includes:
If the width ratio is smaller than or equal to a second threshold value, determining a second matching degree of the initial image and the left-right mirror mode as a second value, and
determining a second matching degree of the initial image and the right-left mirror mode as the second value, and,
and determining that a second matching degree between the initial image and other preset mirrors except the left-right mirror mode and the right-left mirror mode is 0, wherein the second value is larger than 0.
According to one or more embodiments of the present disclosure, determining a third matching degree between the initial image and each preset mirror mode according to the aspect ratio value includes:
if the aspect ratio is smaller than or equal to a third threshold, determining a third matching degree of the initial image and the near-far mirror mode as a third value, and determining a third matching degree of the initial image and other mirror modes except the near-far mirror mode as 0, wherein the third value is larger than 0;
and if the aspect ratio is greater than the third threshold, determining that the third matching degree of the initial image and the far and near mirror mode is the third value, and determining that the third matching degree of the initial image and other mirror modes except the far and near mirror mode is 0.
According to one or more embodiments of the present disclosure, determining a fourth matching degree between the initial image and each preset mirror mode according to the boundary distance includes:
acquiring a first distance ratio of the upper boundary distance to the lower boundary distance and a second distance ratio of the left boundary distance to the right boundary distance;
determining a first sub-matching degree of the initial image and each preset lens-carrying mode according to the first distance ratio;
determining a second sub-matching degree of the initial image and each preset lens-carrying mode according to the second distance ratio;
for any one preset mirror mode, the fourth matching degree between the initial image and the preset mirror mode is determined as the sum of the corresponding first sub-matching degree and the corresponding second sub-matching degree.
According to one or more embodiments of the present disclosure, determining a first sub-matching degree of the initial image and each preset mirror mode according to the first distance ratio includes:
if the first distance ratio is greater than or equal to a fourth threshold, determining that the first sub-matching degree of the initial image and the up-down mirror mode is a fourth value, and determining that the first sub-matching degree between the initial image and other mirror modes except the up-down mirror mode is 0, wherein the fourth value is greater than 0;
And if the first distance ratio is smaller than the fourth threshold value, determining that the first sub-matching degree of the initial image and the lower and upper mirror modes is the fourth value, and determining that the first sub-matching degree of the initial image and other mirror modes except the lower and upper mirror modes is 0.
According to one or more embodiments of the present disclosure, determining a second sub-matching degree of the initial image and each preset mirror mode according to the second distance ratio includes:
if the second distance ratio is greater than or equal to a fifth threshold, determining that the second sub-matching degree of the initial image and the left and right mirror modes is a fourth value, and determining that the second sub-matching degree of the initial image and other mirror modes except the left and right mirror modes is 0, wherein the fourth value is greater than 0;
and if the second distance ratio is smaller than the fifth threshold, determining that the second sub-matching degree of the initial image and the right-left mirror mode is the fourth value, and determining that the second sub-matching degree of the initial image and other mirror modes except the right-left mirror mode is 0.
According to one or more embodiments of the present disclosure, for any one preset mirror mode, determining the matching degree between the initial image and the preset mirror mode according to at least one of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree includes:
And determining the sum of at least two of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree corresponding to the preset mirror mode as the matching degree between the initial image and the preset mirror mode.
In accordance with one or more embodiments of the present disclosure, the video generation request further includes a duration of each initial image and a mirror speed; generating the video according to the target mirror mode of the initial image, including:
determining a starting image corresponding to the initial image according to a target lens moving mode of the initial image, the falling image, the duration and the lens moving speed;
determining a video segment corresponding to the initial image according to the duration of the initial image, the mirror speed, the target mirror mode, the starting image and the falling image;
and generating the video according to the video segment corresponding to each initial image.
According to one or more embodiments of the present disclosure, determining a starting image corresponding to the initial image according to a target lens-moving mode of the initial image, the falling image, the duration, and the lens-moving speed includes:
if the target lens moving mode of the initial image is an up-down lens moving mode or a down-up lens moving mode, determining that the image widths of the starting image and the falling image corresponding to the initial image are the same as the image width of the initial image;
If the target lens moving mode of the initial image is a left-right lens moving mode or a right-left lens moving mode, determining that the image heights of the starting image and the falling image corresponding to the initial image are the same as the image height of the initial image.
According to one or more embodiments of the present disclosure, determining a video segment corresponding to the initial image according to a duration of the initial image, a mirror speed, a target mirror mode, the starting image and the falling image includes:
if the target mirror mode of at least two adjacent initial images is the same, updating the aspect ratio of the starting images corresponding to the at least two initial images to be the same and the aspect ratio of the falling images corresponding to the at least two initial images to be the same.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided a video generating apparatus including a first acquisition module, a second acquisition module, a third acquisition module, and a determination module, wherein:
the first acquisition module is used for acquiring a video generation request, wherein the video generation request comprises at least one initial image;
the second acquisition module is used for acquiring a significant image and a falling image of the initial image;
The third acquisition module is used for acquiring the first image information of the significant image and the second image information of the falling image;
the determining module is used for determining a target mirror mode of the initial image according to the first image information and the second image information, and generating the video according to the target mirror mode of the initial image.
In one possible implementation, the determining module is specifically configured to:
determining the matching degree between the initial image and a plurality of preset mirror modes according to the first image information and the second image information;
and determining the target lens moving mode in the preset lens moving modes according to the matching degree between the initial image and the preset lens moving modes.
In one possible implementation manner, the determining module is specifically configured to:
determining a height ratio, a width ratio and an aspect ratio between the falling image and the salient image according to the first size and the second size;
determining a boundary distance between the falling image and the salient image according to the first position and the second position, wherein the boundary distance comprises: upper boundary distance, lower boundary distance, left boundary distance, and right boundary distance;
And determining the matching degree between the initial image and a plurality of preset mirror modes according to the height ratio, the width ratio, the length-width ratio and the boundary distance.
In one possible implementation manner, the determining module is specifically configured to:
determining a first matching degree between the initial image and each preset mirror mode according to the height ratio;
determining a second matching degree between the initial image and each preset mirror mode according to the width ratio;
determining a third matching degree between the initial image and each preset mirror mode according to the aspect ratio;
determining a fourth matching degree between the initial image and each preset mirror mode according to the boundary distance;
and determining the matching degree between the initial image and a plurality of preset mirror modes according to at least one of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree.
In one possible implementation manner, the determining module is specifically configured to:
if the height ratio is smaller than or equal to a first threshold value, determining a first matching degree between the initial image and the up-down mirror mode as a first value, and,
Determining a first matching degree of the initial image and the lower-upper mirror mode as the first value, and,
and determining that a first matching degree between the initial image and other preset moving mirrors except the up-down moving mirror mode and the down-up moving mirror mode is 0, wherein the first value is larger than 0.
In one possible implementation manner, the determining module is specifically configured to:
if the width ratio is smaller than or equal to a second threshold value, determining a second matching degree of the initial image and the left-right mirror mode as a second value, and
determining a second matching degree of the initial image and the right-left mirror mode as the second value, and,
and determining that a second matching degree between the initial image and other preset mirrors except the left-right mirror mode and the right-left mirror mode is 0, wherein the second value is larger than 0.
In one possible implementation manner, the determining module is specifically configured to:
if the aspect ratio is smaller than or equal to a third threshold, determining a third matching degree of the initial image and the near-far mirror mode as a third value, and determining a third matching degree of the initial image and other mirror modes except the near-far mirror mode as 0, wherein the third value is larger than 0;
And if the aspect ratio is greater than the third threshold, determining that the third matching degree of the initial image and the far and near mirror mode is the third value, and determining that the third matching degree of the initial image and other mirror modes except the far and near mirror mode is 0.
In one possible implementation manner, the determining module is specifically configured to:
acquiring a first distance ratio of the upper boundary distance to the lower boundary distance and a second distance ratio of the left boundary distance to the right boundary distance;
determining a first sub-matching degree of the initial image and each preset lens-carrying mode according to the first distance ratio;
determining a second sub-matching degree of the initial image and each preset lens-carrying mode according to the second distance ratio;
for any one preset mirror mode, the fourth matching degree between the initial image and the preset mirror mode is determined as the sum of the corresponding first sub-matching degree and the corresponding second sub-matching degree.
In one possible implementation manner, the determining module is specifically configured to:
if the first distance ratio is greater than or equal to a fourth threshold, determining that the first sub-matching degree of the initial image and the up-down mirror mode is a fourth value, and determining that the first sub-matching degree between the initial image and other mirror modes except the up-down mirror mode is 0, wherein the fourth value is greater than 0;
And if the first distance ratio is smaller than the fourth threshold value, determining that the first sub-matching degree of the initial image and the lower and upper mirror modes is the fourth value, and determining that the first sub-matching degree of the initial image and other mirror modes except the lower and upper mirror modes is 0.
In one possible implementation manner, the determining module is specifically configured to:
if the second distance ratio is greater than or equal to a fifth threshold, determining that the second sub-matching degree of the initial image and the left and right mirror modes is a fourth value, and determining that the second sub-matching degree of the initial image and other mirror modes except the left and right mirror modes is 0, wherein the fourth value is greater than 0;
and if the second distance ratio is smaller than the fifth threshold, determining that the second sub-matching degree of the initial image and the right-left mirror mode is the fourth value, and determining that the second sub-matching degree of the initial image and other mirror modes except the right-left mirror mode is 0.
In one possible implementation manner, the determining module is specifically configured to:
and determining the sum of at least two of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree corresponding to the preset mirror mode as the matching degree between the initial image and the preset mirror mode.
In one possible implementation manner, the determining module is specifically configured to:
determining a starting image corresponding to the initial image according to a target lens moving mode of the initial image, the falling image, the duration and the lens moving speed;
determining a video segment corresponding to the initial image according to the duration of the initial image, the mirror speed, the target mirror mode, the starting image and the falling image;
and generating the video according to the video segment corresponding to each initial image.
In one possible implementation manner, the determining module is specifically configured to:
if the target lens moving mode of the initial image is an up-down lens moving mode or a down-up lens moving mode, determining that the image widths of the starting image and the falling image corresponding to the initial image are the same as the image width of the initial image;
if the target lens moving mode of the initial image is a left-right lens moving mode or a right-left lens moving mode, determining that the image heights of the starting image and the falling image corresponding to the initial image are the same as the image height of the initial image.
In one possible implementation manner, the determining module is specifically configured to:
If the target mirror mode of at least two adjacent initial images is the same, updating the aspect ratio of the starting images corresponding to the at least two initial images to be the same and the aspect ratio of the falling images corresponding to the at least two initial images to be the same.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the video generation method according to the first aspect and the various possible designs of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the video generation method according to the first aspect and the various possible designs of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the video generation method according to the first aspect and the various possible designs of the first aspect.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of the features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by replacing the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (19)

1. A video generation method, comprising:
acquiring a video generation request, wherein the video generation request comprises at least one initial image;
acquiring a salient image and a falling image of the initial image;
acquiring first image information of the salient image and second image information of the falling image;
and determining a target mirror mode of the initial image according to the first image information and the second image information, and generating the video according to the target mirror mode of the initial image.
2. The method of claim 1, wherein determining a target mirror mode for the initial image based on the first image information and the second image information comprises:
determining the matching degree between the initial image and a plurality of preset mirror modes according to the first image information and the second image information;
And determining the target lens moving mode in the preset lens moving modes according to the matching degree between the initial image and the preset lens moving modes.
3. The method of claim 2, wherein the first image information includes a first size of the salient image and a first position of the salient image in the initial image; the second image information comprises a second size of the dropped image and a second position of the dropped image in the initial image.
4. A method according to claim 3, wherein determining the degree of matching between the initial image and a plurality of preset mirror modes based on the first image information and the second image information comprises:
determining a height ratio, a width ratio and an aspect ratio between the falling image and the salient image according to the first size and the second size;
determining a boundary distance between the falling image and the salient image according to the first position and the second position, wherein the boundary distance comprises: upper boundary distance, lower boundary distance, left boundary distance, and right boundary distance;
And determining the matching degree between the initial image and a plurality of preset mirror modes according to the height ratio, the width ratio, the length-width ratio and the boundary distance.
5. The method of claim 3, wherein determining a degree of matching between the initial image and a plurality of predetermined mirror modes based on the height ratio, width ratio, aspect ratio, and the boundary distance comprises:
determining a first matching degree between the initial image and each preset mirror mode according to the height ratio;
determining a second matching degree between the initial image and each preset mirror mode according to the width ratio;
determining a third matching degree between the initial image and each preset mirror mode according to the aspect ratio;
determining a fourth matching degree between the initial image and each preset mirror mode according to the boundary distance;
and determining the matching degree between the initial image and a plurality of preset mirror modes according to at least one of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree.
6. The method of claim 5, wherein determining a first degree of matching between the initial image and each of the predetermined mirror modes based on the height ratio comprises:
If the height ratio is smaller than or equal to a first threshold value, determining a first matching degree between the initial image and the up-down mirror mode as a first value, and,
determining a first matching degree of the initial image and the lower-upper mirror mode as the first value, and,
and determining that a first matching degree between the initial image and other preset moving mirrors except the up-down moving mirror mode and the down-up moving mirror mode is 0, wherein the first value is larger than 0.
7. The method of claim 5, wherein determining a second degree of matching between the initial image and each of the predetermined mirror modes based on the width ratio comprises:
if the width ratio is smaller than or equal to a second threshold value, determining a second matching degree of the initial image and the left-right mirror mode as a second value, and
determining a second matching degree of the initial image and the right-left mirror mode as the second value, and,
and determining that a second matching degree between the initial image and other preset mirrors except the left-right mirror mode and the right-left mirror mode is 0, wherein the second value is larger than 0.
8. The method of claim 5, wherein determining the third matching degree between the initial image and each preset lens moving mode according to the aspect ratio comprises:
if the aspect ratio is smaller than or equal to a third threshold, determining the third matching degree between the initial image and the near-far lens moving mode as a third value, and determining the third matching degree between the initial image and the lens moving modes other than the near-far lens moving mode as 0, wherein the third value is greater than 0; and
if the aspect ratio is greater than the third threshold, determining the third matching degree between the initial image and the far-near lens moving mode as the third value, and determining the third matching degree between the initial image and the lens moving modes other than the far-near lens moving mode as 0.
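Claims 6 to 8 each award a per-criterion matching degree to a small subset of the preset lens moving modes and zero to the rest. The sketch below is one possible reading; the mode identifiers, thresholds and score values are placeholders, not values disclosed by the patent:

```python
MODES = ["up_down", "down_up", "left_right", "right_left", "near_far", "far_near"]

def ratio_scores(height_ratio, width_ratio, aspect_ratio,
                 t1=0.8, t2=0.8, t3=1.0, v1=1.0, v2=1.0, v3=1.0):
    """First, second and third matching degrees per lens moving mode (claims 6-8, illustrative)."""
    first = {m: 0.0 for m in MODES}
    second = {m: 0.0 for m in MODES}
    third = {m: 0.0 for m in MODES}

    # Claim 6: a small height ratio favours vertical movement in both directions.
    if height_ratio <= t1:
        first["up_down"] = first["down_up"] = v1

    # Claim 7: a small width ratio favours horizontal movement in both directions.
    if width_ratio <= t2:
        second["left_right"] = second["right_left"] = v2

    # Claim 8: the aspect ratio decides between the near-far and far-near modes.
    if aspect_ratio <= t3:
        third["near_far"] = v3
    else:
        third["far_near"] = v3

    return first, second, third
```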
9. The method of claim 5, wherein determining the fourth matching degree between the initial image and each preset lens moving mode according to the boundary distance comprises:
acquiring a first distance ratio of the upper boundary distance to the lower boundary distance, and a second distance ratio of the left boundary distance to the right boundary distance;
determining a first sub-matching degree between the initial image and each preset lens moving mode according to the first distance ratio;
determining a second sub-matching degree between the initial image and each preset lens moving mode according to the second distance ratio; and
for any preset lens moving mode, determining the fourth matching degree between the initial image and the preset lens moving mode as the sum of the corresponding first sub-matching degree and the corresponding second sub-matching degree.
10. The method of claim 9, wherein determining the first sub-matching degree between the initial image and each preset lens moving mode according to the first distance ratio comprises:
if the first distance ratio is greater than or equal to a fourth threshold, determining the first sub-matching degree between the initial image and the up-down lens moving mode as a fourth value, and determining the first sub-matching degree between the initial image and the lens moving modes other than the up-down lens moving mode as 0, wherein the fourth value is greater than 0; and
if the first distance ratio is smaller than the fourth threshold, determining the first sub-matching degree between the initial image and the down-up lens moving mode as the fourth value, and determining the first sub-matching degree between the initial image and the lens moving modes other than the down-up lens moving mode as 0.
11. The method of claim 9, wherein determining the second sub-matching degree between the initial image and each preset lens moving mode according to the second distance ratio comprises:
if the second distance ratio is greater than or equal to a fifth threshold, determining the second sub-matching degree between the initial image and the left-right lens moving mode as a fourth value, and determining the second sub-matching degree between the initial image and the lens moving modes other than the left-right lens moving mode as 0, wherein the fourth value is greater than 0; and
if the second distance ratio is smaller than the fifth threshold, determining the second sub-matching degree between the initial image and the right-left lens moving mode as the fourth value, and determining the second sub-matching degree between the initial image and the lens moving modes other than the right-left lens moving mode as 0.
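Claims 9 to 11 derive the fourth matching degree from the boundary distances: the upper/lower distance ratio votes for up-down or down-up movement, the left/right distance ratio votes for left-right or right-left movement, and the two votes are summed per mode. A hedged sketch continuing the placeholder conventions of the previous snippet:

```python
def boundary_scores(boundary, t4=1.0, t5=1.0, v4=1.0):
    """Fourth matching degree per lens moving mode from the boundary distances (claims 9-11, illustrative)."""
    fourth = {m: 0.0 for m in MODES}

    # Claim 10: more room above than below the falling image suggests moving from top to bottom.
    first_ratio = boundary["top"] / max(boundary["bottom"], 1e-6)
    if first_ratio >= t4:
        fourth["up_down"] += v4
    else:
        fourth["down_up"] += v4

    # Claim 11: more room on the left than on the right suggests moving from left to right.
    second_ratio = boundary["left"] / max(boundary["right"], 1e-6)
    if second_ratio >= t5:
        fourth["left_right"] += v4
    else:
        fourth["right_left"] += v4

    # Claim 9: the fourth matching degree per mode is the sum of the two sub-matching degrees.
    return fourth
```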
12. The method of any one of claims 5 to 11, wherein determining the matching degree between the initial image and the plurality of preset lens moving modes according to at least one of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree comprises:
for any preset lens moving mode, determining the sum of at least two of the first matching degree, the second matching degree, the third matching degree and the fourth matching degree corresponding to the preset lens moving mode as the matching degree between the initial image and the preset lens moving mode.
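Claim 12 combines the per-criterion degrees by summation, and the target lens moving mode (end of claim 2) is then the preset mode with the highest total. A minimal sketch under the same placeholder assumptions:

```python
def select_target_mode(first, second, third, fourth):
    """Sum the matching degrees per mode and pick the best one (claims 2 and 12, illustrative)."""
    total = {m: first[m] + second[m] + third[m] + fourth[m] for m in MODES}
    target = max(total, key=total.get)
    return target, total
```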
13. The method of any one of claims 1 to 12, wherein the video generation request further comprises a duration and a lens moving speed of each initial image, and generating the video according to the target lens moving mode of the initial image comprises:
determining a starting image corresponding to the initial image according to the target lens moving mode of the initial image, the falling image, the duration and the lens moving speed;
determining a video segment corresponding to the initial image according to the duration, the lens moving speed, the target lens moving mode, the starting image and the falling image of the initial image; and
generating the video according to the video segment corresponding to each initial image.
14. The method of claim 13, wherein determining the starting image corresponding to the initial image according to the target lens moving mode of the initial image, the falling image, the duration and the lens moving speed comprises:
if the target lens moving mode of the initial image is the up-down lens moving mode or the down-up lens moving mode, determining that the image widths of the starting image and the falling image corresponding to the initial image are the same as the image width of the initial image; and
if the target lens moving mode of the initial image is the left-right lens moving mode or the right-left lens moving mode, determining that the image heights of the starting image and the falling image corresponding to the initial image are the same as the image height of the initial image.
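Claim 14 ties the crop dimensions to the movement direction: vertical movements keep the full image width for both the starting and the falling image, horizontal movements keep the full image height. A hedged sketch reusing the `Box` crops from the first snippet; the handling of the near-far and far-near modes is an assumption, since claim 14 does not address them:

```python
def apply_size_constraint(mode, init_w, init_h, start: Box, falling: Box):
    """Force the crop dimensions required by claim 14 (illustrative)."""
    if mode in ("up_down", "down_up"):
        start.w = falling.w = init_w      # vertical movement: full image width for both crops
    elif mode in ("left_right", "right_left"):
        start.h = falling.h = init_h      # horizontal movement: full image height for both crops
    # Zoom-like modes (near_far, far_near) are left unconstrained in this sketch.
    return start, falling
```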
15. The method of claim 13 or 14, wherein determining the video segment corresponding to the initial image according to the duration, the lens moving speed, the target lens moving mode, the starting image and the falling image of the initial image comprises:
if the target lens moving modes of at least two adjacent initial images are the same, updating the starting images corresponding to the at least two initial images to have the same aspect ratio, and updating the falling images corresponding to the at least two initial images to have the same aspect ratio.
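Claims 13 to 15 turn each initial image into a clip by moving a crop window from the starting image to the falling image over the requested duration. The sketch below interpolates the crop linearly per frame; the frame rate, the linear easing and the way the lens moving speed would modulate the path are assumptions made only for illustration:

```python
def crop_path(start: Box, falling: Box, duration_s: float, fps: float = 30.0):
    """Per-frame crop windows from the starting image to the falling image (claims 13-15, illustrative)."""
    n_frames = max(int(round(duration_s * fps)), 1)
    frames = []
    for i in range(n_frames):
        t = i / max(n_frames - 1, 1)      # 0.0 at the starting image, 1.0 at the falling image
        frames.append(Box(
            x=start.x + t * (falling.x - start.x),
            y=start.y + t * (falling.y - start.y),
            w=start.w + t * (falling.w - start.w),
            h=start.h + t * (falling.h - start.h),
        ))
    return frames
```

Each crop would then be resized to the output resolution and the per-image clips concatenated into the final video, as claim 13 recites.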
16. A video generation device, characterized by comprising a first acquisition module, a second acquisition module, a third acquisition module and a determination module, wherein:
the first acquisition module is configured to acquire a video generation request, wherein the video generation request comprises at least one initial image;
the second acquisition module is configured to acquire a salient image and a falling image of the initial image;
the third acquisition module is configured to acquire first image information of the salient image and second image information of the falling image; and
the determination module is configured to determine a target lens moving mode of the initial image according to the first image information and the second image information, and to generate the video according to the target lens moving mode of the initial image.
17. An electronic device, comprising a processor and a memory, wherein:
the memory stores computer-executable instructions; and
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the video generation method of any one of claims 1 to 15.
18. A computer-readable storage medium having computer-executable instructions stored therein which, when executed by a processor, implement the video generation method of any one of claims 1 to 15.
19. A computer program product comprising a computer program which, when executed by a processor, implements the video generation method of any one of claims 1 to 15.
CN202111582800.5A 2021-12-22 2021-12-22 Video generation method, device and equipment Pending CN116347155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111582800.5A CN116347155A (en) 2021-12-22 2021-12-22 Video generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111582800.5A CN116347155A (en) 2021-12-22 2021-12-22 Video generation method, device and equipment

Publications (1)

Publication Number Publication Date
CN116347155A true CN116347155A (en) 2023-06-27

Family

ID=86879251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111582800.5A Pending CN116347155A (en) 2021-12-22 2021-12-22 Video generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN116347155A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination