WO2021139359A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2021139359A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
sequence
motion vector
vector data
Prior art date
Application number
PCT/CN2020/125078
Other languages
English (en)
French (fr)
Inventor
Wang Kai (王恺)
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Publication of WO2021139359A1
Priority to US 17/718,318 (published as US11989814B2)

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/80 2D [Two Dimensional] animation, e.g. using sprites
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/0135 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N 7/014 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F 13/35 Details of game servers
    • A63F 13/355 Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/45 Controlling the progress of the video game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method, device, electronic device, and storage medium.
  • In view of this, this application proposes an image processing method, device, electronic device, and computer-readable storage medium to address the above-mentioned problems.
  • This application provides an image processing method.
  • The method includes: acquiring a first sequence of images and motion vector data corresponding to each frame of the first sequence of images; generating, based on the motion vector data, the first sequence of images, and a slow-down multiple, inserted images matching the slow-down multiple, where the number of inserted images corresponds to the slow-down multiple; inserting the inserted images into the playback sequence of the first sequence of images to obtain a second sequence of images; and playing the second sequence of images.
  • The present application provides an image processing device. The device includes: a data acquisition unit for acquiring a first sequence of images and motion vector data corresponding to each frame of the first sequence of images; an image generation unit for generating, based on the motion vector data, the first sequence of images, and a slow-down multiple, inserted images matching the slow-down multiple, where the number of inserted images corresponds to the slow-down multiple; an image configuration unit for inserting the inserted images into the playback sequence of the first sequence of images to obtain a second sequence of images; and an image playback unit for playing the second sequence of images.
  • the present application provides an electronic device including a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the above method.
  • The present application provides a computer-readable storage medium in which program code is stored; the above method is performed when the program code is executed by a processor.
  • The image processing method, device, electronic device, and storage medium provided by this application generate inserted images matching a slow-down multiple based on motion vector data, a first sequence of images, and the slow-down multiple, and insert them into the playback sequence of the first sequence of images. Because the inserted images are dynamically generated from the slow-down multiple and the motion vector data, production cost is reduced and the time needed to produce dynamic effects is shortened, thereby improving development efficiency.
  • Fig. 1 shows a schematic diagram of a motion vector in an embodiment of the present application
  • Fig. 2 shows a schematic diagram of a motion vector in an embodiment of the present application
  • Fig. 3 shows a schematic diagram of object movement in an embodiment of the present application
  • Fig. 4 shows a flowchart of an image processing method proposed in an embodiment of the present application
  • FIG. 5 shows a schematic diagram of inserting images into a playback sequence in the embodiment shown in FIG. 4;
  • Fig. 6 shows a schematic diagram of a second sequence of images in the embodiment shown in Fig. 4;
  • FIG. 7 shows a flowchart of an image processing method proposed by an embodiment of the present application.
  • FIG. 8 shows a flowchart of an implementation manner of S220 in the image processing method provided in FIG. 7;
  • FIG. 9 shows a schematic diagram of reference motion vector data in an embodiment of the present application.
  • FIG. 10 shows a schematic diagram of a basic image and a motion vector map in an embodiment of the present application
  • FIG. 11 shows a schematic diagram of pixel correspondence in an embodiment of the present application.
  • FIG. 12 shows a flowchart of an implementation manner of S230 in the image processing method provided in FIG. 7;
  • FIG. 13 shows a schematic diagram of generating an inserted image in two adjacent images in an image processing method proposed by an embodiment of the present application
  • FIG. 14 shows a schematic diagram of inserting reference motion vector data corresponding to an image in an image processing method proposed by an embodiment of the present application
  • FIG. 15 shows a flowchart of an image processing method proposed by an embodiment of the present application.
  • FIG. 16 shows a schematic diagram of a configuration interface in an embodiment of the present application.
  • FIG. 17 shows a flowchart of an image processing method proposed by an embodiment of the present application.
  • FIG. 18 shows a flowchart of an image processing method proposed by an embodiment of the present application.
  • FIG. 19 shows a schematic diagram of an explosion effect in a game scene in an embodiment of the present application.
  • FIG. 20 shows a schematic diagram of effect comparison before and after explosion effect processing in a game scene in an embodiment of the present application
  • FIG. 21 shows a comparison of the number of images that need to be produced by the image processing method provided by the embodiments of the present application and by the related technology
  • FIG. 22 shows a structural block diagram of an image processing device proposed in an embodiment of the present application.
  • FIG. 23 shows a structural block diagram of an image processing device proposed in an embodiment of the present application.
  • FIG. 24 shows a structural block diagram of an image processing device proposed in an embodiment of the present application.
  • FIG. 25 shows a structural block diagram of an image processing device proposed by an embodiment of the present application.
  • FIG. 26 shows a structural block diagram of an electronic device, according to an embodiment of the present application, for executing the image processing method
  • FIG. 27 shows a storage unit, according to an embodiment of the present application, for storing or carrying program code that implements the image processing method.
  • The slow-down multiple characterizes the factor by which the playing time of a dynamic effect is extended. For example, a slow-down multiple of 2 means the playing time of the dynamic effect needs to be extended to 2 times its original length: if the original dynamic effect plays for 2 seconds, a slow-down multiple of 2 extends the playback duration to 4 seconds, and a slow-down multiple of 5 extends it to 10 seconds.
  • A motion vector characterizes the displacement of a target pixel in an image.
  • the target pixel can refer to any pixel in the image, or it can refer to a pixel in a content block in the image.
  • When the target pixel is a single pixel in the image, as shown in Figure 1, the position of the pixel 10 in the previous frame image 20 is (a, b) and its position in the next frame image 30 is (c, d), so the motion vector of the pixel 10 is (c-a, d-b).
  • When the target pixel is a pixel in a content block, the motion vector characterizes the displacement between the content block and its best matching block, where the best matching block refers to the content block 31 in the next frame image 30 that matches the content block 21 of the previous frame image 20 with the highest degree of matching.
  • the content block may include multiple pixels, and in the embodiment of the present application, the displacement of the pixel at the center point of the content block may be used as the displacement of the content block.
  • the center point can be a geometric center.
  • If the position of the pixel at the center point of the content block 21 is (a, b) and the position of the pixel at the center point of the best matching block 31 is (c, d), then the motion vector between the content block and the best matching block is (c-a, d-b).
  • A content block can be understood as an area of the image with substantive meaning. For example, a person's head is such an area: the image content of that area represents the person's head, so the head area can be used as a content block. Likewise, a person's hand is an area with substantive meaning, and the hand area can be used as a content block.
  • In the embodiments of the present application, the target pixel can therefore be understood as each pixel in the image, or as a pixel in a content block as described above.
  • the motion vector data involved in the embodiments of the present application represents data carrying motion vectors, and the data may be in a text format or a picture format.
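As a minimal illustration of the definitions above, the motion vector of a target pixel (or of a content block, via its center-point pixel) is just the per-axis difference between its positions in two adjacent frames. This is a sketch, not code from the patent; the coordinates are made-up examples following the Figure 1 labels.

```python
def motion_vector(prev_pos, next_pos):
    # Displacement of a target pixel from its position (a, b) in the
    # previous frame to its position (c, d) in the next frame.
    (a, b), (c, d) = prev_pos, next_pos
    return (c - a, d - b)

# Pixel 10 at (a, b) = (3, 5) in the previous frame image 20 and at
# (c, d) = (7, 2) in the next frame image 30 (example coordinates):
print(motion_vector((3, 5), (7, 2)))  # (4, -3)
```

For a content block, the same computation is applied to the pixel at the block's geometric center, as described above.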
  • In the related technology, the dynamic effects involved in some virtual scenes have high production costs; they are achieved by making key frames or by making sequence images.
  • the key frame is equivalent to the original picture in the dynamic effect, and refers to the frame where the key action in the movement or change of the object is located.
  • In the case of sequence images, the dynamic effect to be displayed is decomposed into multiple actions, each of which can be rendered as one frame of image, and the images corresponding to the actions are then combined into a sequence image.
  • By playing the sequence image, the corresponding dynamic effect can be displayed. However, each frame of the dynamic effect needs to be produced in advance by the developer through the development tool, which causes high production costs.
  • the dynamic effect of super slow motion has a higher frame rate (the number of frames displayed per second) than the dynamic effect of the normal type.
  • For example, the frame rate of an ordinary dynamic effect is 30 fps, while the frame rate of a super-slow-motion dynamic effect can be 240 fps or even higher; 30 fps means playing 30 frames of images per second, and 240 fps means playing 240 frames of images per second.
  • As a result, the developer needs to use the development tool to create more images to insert into the image sequence of the ordinary dynamic effect, to adapt to the increased frame rate.
  • the pre-made images will be stored in the resource file.
  • When the resource file is stored on the terminal device used by the user, it occupies more of the device's storage space and reduces storage-space utilization.
  • In view of this, the applicant proposes the image processing method, device, electronic device, and computer-readable storage medium provided by the embodiments of this application. After the original sequence image has been produced with the development tool, inserted images can be produced from the motion vector data corresponding to each frame of the original sequence image and inserted into its playback sequence, yielding a sequence image that includes more frames. The inserted images are thus dynamically generated based on the slow-down multiple and the motion vector data, without requiring any additional images to be produced in advance.
  • the image processing method provided by the embodiments of this application can be implemented by the terminal/server alone; it can also be implemented by the terminal and the server in cooperation.
  • For example, the terminal collects a request to slow down the playback of the first sequence of images (the request includes the slow-down multiple) and performs the image processing method described below by itself to obtain and play the second sequence of images; alternatively, the terminal collects the request and sends it to the server, and after receiving the request the server executes the image processing method to obtain the second sequence of images and sends it to the terminal for playback.
  • The electronic device provided by the embodiments of the application for implementing the image processing method described below can be any of various types of terminal devices or servers, where the server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides cloud computing services.
  • the terminal can be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but it is not limited to this.
  • the terminal and the server can be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
  • For illustration, consider a sequence of images that characterizes the process of an object moving from one end of the screen to the other end.
  • the object 32 passes through multiple positions during the movement.
  • the object 32 starts to move from the first position, passes through the second position and the third position, and reaches the fourth position.
  • the picture of the object 32 at each position can be represented by a frame of image, and it can be understood that each frame of the image in the sequence of images represents a position of the object.
  • a sequence image includes 4 frames of images, where the first frame of image corresponds to the picture at the first position, the second frame of image corresponds to the picture at the second position, and the third frame of image corresponds to the picture at the third position.
  • the fourth frame of image corresponds to the picture at the fourth position.
  • The pixels that characterize the object in each frame of image can be used as the target pixel, and the movement of the object can be regarded as the movement of the target pixel across the frames.
  • Taking the target pixel 33 (the pixel that characterizes the object 32) in FIG. 3 as an example, the movement of the object 32 from the first position to the fourth position can also be regarded as the movement of the target pixel 33 from the first position to the fourth position.
  • a new image can be generated as an inserted image by acquiring the motion vector of the target pixel, so as to solve the aforementioned technical problem.
  • FIG. 4 is a flowchart of an image processing method according to an embodiment of the application, and the method includes:
  • S110 Acquire a first sequence of images and motion vector data corresponding to each frame of the first sequence of images.
  • the first sequence of images can be understood as a sequence of images that characterizes the dynamic effect of the target.
  • the first sequence of images may include multiple frames of images, and when the multiple frames of images are drawn to the image display interface for display, the target dynamic effect can be displayed on the image display interface.
  • the first sequence of images is the basis for subsequent generation of new images. As a way, the first sequence of images can be produced by developers through development tools.
  • The motion vector data corresponding to each frame of image in the first sequence of images represents the displacement of the target pixel in the corresponding image compared to the corresponding pixel in an adjacent image, or the displacement of a content block in the corresponding image compared to its best matching block in an adjacent image.
  • the adjacent image may be the previous frame of image adjacent to the corresponding image, or the next frame of image adjacent to the corresponding image.
  • For example, the motion vector data corresponding to the first frame of image is all zeros, because the target pixel in the first frame of image has not yet moved and no displacement has occurred.
  • the motion vector data corresponding to the second frame of image can represent the displacement of the target pixel in the second frame of image compared to the position of the corresponding pixel in the first frame of image.
  • the motion vector data corresponding to the third frame image can represent the displacement of the target pixel in the third frame image compared to the position of the corresponding pixel in the second frame image.
  • Similarly, the motion vector data corresponding to the fourth frame of image can characterize the displacement of the position of the target pixel in the fourth frame of image compared to the position of the corresponding pixel in the third frame of image.
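The per-frame motion vector data described above can be sketched for a single target pixel as follows. This is a hedged reading of S110 (the list-of-tuples layout is an assumption for illustration, not specified by the patent): frame 0 carries zero displacement, and every later frame carries the displacement relative to its predecessor.

```python
def motion_vector_data(positions):
    # positions[i] is the target pixel's (x, y) position in frame i.
    # The first frame gets (0, 0): the pixel has not yet moved.
    # Frame i stores its displacement relative to frame i - 1.
    data = [(0, 0)]
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        data.append((x1 - x0, y1 - y0))
    return data

# A pixel moving right by 2 per frame across a 4-frame first sequence:
print(motion_vector_data([(0, 0), (2, 0), (4, 0), (6, 0)]))
# [(0, 0), (2, 0), (2, 0), (2, 0)]
```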
  • S120 Based on the motion vector data, the first sequence of images, and the slow-down multiple, generate an inserted image matching the slow-down multiple; wherein the number of inserted images corresponds to the slow-down multiple.
  • the first sequence of images can represent a dynamic effect during the display process.
  • The slow-down multiple can be understood as the factor by which the dynamic effect represented by the first sequence of images is slowed down, or equivalently, the factor by which the playing time of that dynamic effect is prolonged.
  • For example, if the playback duration of the dynamic effect represented by the first sequence of images is 2 seconds, then with a slow-down multiple of 2 the playback duration becomes 4 seconds, and with a slow-down multiple of 3 it becomes 6 seconds.
  • the frame rate is a factor that affects the user's visual experience.
  • The frame rate can be understood as the number of frames of images played per second, or as the number of frames of images refreshed per second; a higher frame rate yields smoother, more realistic animation. If the playing time of the dynamic effect represented by the first sequence of images is extended without also increasing the number of frames in the sequence, the number of frames played per second decreases, which causes a sense of freezing.
  • For example, suppose the first sequence of images plays for 2 seconds at a frame rate of 30 fps (the number of frames displayed per second). With a slow-down multiple of 4, the playback duration is extended to 8 seconds, so without inserting new images the frame rate drops to 7.5 fps. Therefore, in order for the first sequence of images to still maintain its visual effect while playing more slowly, new images can be generated based on the motion vector data, the first sequence of images, and the slow-down multiple, and used as inserted images to insert into the first sequence of images.
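The frame-rate arithmetic above can be made concrete with a short sketch; the helper names are illustrative, not from the patent.

```python
def effective_fps(frame_count, duration_s, slow_down):
    # Frame rate if the playback duration is extended by the slow-down
    # multiple without inserting any new frames.
    return frame_count / (duration_s * slow_down)

def frames_to_insert(frame_count, slow_down):
    # Total frames needed to keep the original frame rate over the
    # extended duration, and how many of them must be newly generated.
    total = frame_count * slow_down
    return total, total - frame_count

# 2 seconds at 30 fps is 60 frames; slowed down 4x without insertion:
print(effective_fps(60, 2, 4))   # 7.5
print(frames_to_insert(60, 4))   # (240, 180)
```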
  • the motion vector data of the inserted image to be generated can be obtained first as the reference motion vector data.
  • The reference motion vector data characterizes the displacement of the target pixel in the corresponding inserted image relative to a certain frame of image in the first sequence of images, or relative to a previously generated inserted image. After the reference motion vector data is generated, the target pixel can be moved according to it to generate the inserted image.
  • Different slow-down multiples correspond to different extended playback durations and thus to different numbers of inserted images to be generated. The number of generated inserted images therefore needs to match the slow-down multiple, so that the original visual effect can be maintained, or even improved, under each slow-down multiple.
  • S130 Insert the inserted image into the playback sequence of the first sequence of images to obtain a second sequence of images.
  • Each playback position represents a time-sequence position in the playback order. For example, if the playback position corresponding to an inserted image is between the first frame of image and the second frame of image in the first sequence of images, then during subsequent display the first frame of image is played first, then the inserted image, and then the second frame of image.
  • Taking the first sequence of images 40a, which includes 6 frames of images, as an example, one frame of image can be inserted between every two adjacent frames, that is, a total of 5 frames of images are inserted.
  • the 5 frames of inserted images generated corresponding to the first sequence of images 40a include an inserted image 51, an inserted image 52, an inserted image 53, an inserted image 54 and an inserted image 55.
  • The position of the illustration corresponding to each inserted image is the position pointed to by the corresponding arrow, that is, the position at which it is subsequently played in the playing sequence.
  • the obtained second sequence image may be as shown in FIG. 6.
  • The inserted image 51, inserted image 52, inserted image 53, inserted image 54, and inserted image 55 shown in FIG. 5 have been inserted into the generated second sequence of images 40b.
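The insertion step above amounts to interleaving: with the 6-frame sequence 40a and the 5 generated images of FIG. 5, each inserted image goes between its pair of adjacent frames. A minimal sketch, with illustrative frame labels:

```python
def build_second_sequence(first_seq, inserted):
    # inserted[i] is played between first_seq[i] and first_seq[i + 1],
    # so a first sequence of n frames takes n - 1 inserted frames.
    assert len(inserted) == len(first_seq) - 1
    second = []
    for frame, mid in zip(first_seq, inserted):
        second.extend([frame, mid])
    second.append(first_seq[-1])
    return second

first = ["f1", "f2", "f3", "f4", "f5", "f6"]
ins = ["i51", "i52", "i53", "i54", "i55"]
print(build_second_sequence(first, ins))
# ['f1', 'i51', 'f2', 'i52', 'f3', 'i53', 'f4', 'i54', 'f5', 'i55', 'f6']
```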
  • S140 Play the second sequence of images.
  • Playing the second sequence of images can be understood as playing the multiple frames of images included in the second sequence of images in order, so as to achieve the dynamic effect represented by the second sequence of images.
  • When the second sequence of images 40b is played, the images in the second image sequence 40b are played sequentially from left to right to show the corresponding dynamic effects.
  • In summary, after acquiring the first sequence of images and the motion vector data, the image processing method provided by this embodiment generates inserted images whose number matches the slow-down multiple, based on the motion vector data, the first sequence of images, and the slow-down multiple, and inserts them into the playback sequence of the first sequence of images to obtain the second sequence of images. Thus, after the first sequence of images has been produced, slowing down its dynamic effect while maintaining the visual effect only requires producing inserted images from the per-frame motion vector data and inserting them into the playback sequence to obtain a second sequence of images containing more frames; there is no need to use development tools to produce more images for insertion into the first image sequence, which reduces production cost and shortens the time needed to produce dynamic effects.
  • FIG. 7 shows a flowchart of an image processing method according to an embodiment of the present application.
  • the method includes:
  • S210 Acquire a first sequence of images and motion vector data corresponding to each frame of the first sequence of images.
  • S220 According to the motion vector data and the slow-down multiple, generate reference motion vector data matching the slow-down multiple; wherein, the number of reference motion vector data corresponds to the slow-down multiple.
  • The reference motion vector data is the motion vector data corresponding to the inserted images; it can be generated according to the motion vector data corresponding to each frame of image in the first sequence of images and the slow-down multiple, and the inserted images are subsequently generated based on it.
  • each reference motion vector data corresponds to an inserted image, that is to say, the number of reference motion vector data corresponds to the number of inserted images generated subsequently.
  • generating reference motion vector data that matches the slow-down multiple may include:
  • S221 Acquire a target displacement, where the target displacement is the displacement represented by the motion vector data corresponding to the later-displayed image of every two adjacent images in the first sequence of images.
  • S222 Obtain the ratio of the target displacement to the slow-down multiple, obtain the number of inserted images between every two adjacent images according to the slow-down multiple, and use the ratio as the reference motion vector data corresponding to each inserted image between the two adjacent images, thereby obtaining the reference motion vector data matching the slow-down multiple.
  • In the embodiments of the present application, every two adjacent images can be treated as one interval for generating reference motion vector data.
  • a target displacement may be generated for each interval as the target displacement corresponding to the interval.
  • For example, the first frame of image and the second frame of image can be regarded as an interval, the second frame of image and the third frame of image as an interval, the third frame of image and the fourth frame of image as an interval, the fourth frame of image and the fifth frame of image as an interval, and the fifth frame of image and the sixth frame of image as an interval.
  • Taking the interval formed by the first frame of image and the second frame of image as an example, the target displacement is the displacement represented by the motion vector data corresponding to the second frame of image. It can be understood that the motion vector data corresponding to each frame of image represents the displacement of the target pixel in that frame, and the target displacement includes the displacement of each pixel in the target pixel.
  • For example, suppose the target pixel includes a first pixel whose position in the first frame of image is (a1, b1) and whose position in the second frame of image is (a2, b2). The motion vector of the first pixel is (a2-a1, b2-b1): its displacement along the X axis is a2-a1 and its displacement along the Y axis is b2-b1. The calculated target displacement therefore includes the displacement a2-a1 of the first pixel along the X axis and the displacement b2-b1 along the Y axis; the displacement of every pixel in the target pixel can be obtained in the same way, and the displacement represented by the motion vector data corresponding to the second frame of image is taken as the target displacement corresponding to the interval.
  • Suppose the target pixel also includes a second pixel whose position in the first frame of image is (c1, d1) and whose position in the second frame of image is (c2, d2); its displacement along the X axis is c2-c1 and its displacement along the Y axis is d2-d1. The calculated target displacement then includes the displacement a2-a1 of the first pixel along the X axis and b2-b1 along the Y axis, and the displacement c2-c1 of the second pixel along the X axis and d2-d1 along the Y axis.
  • the movement of each pixel in the first sequence of images has a certain integrity, that is, the pixels that make up a moving object move together as a whole. Please refer to FIG. 3 again.
  • the pixels that make up the object 32, except for the pixel 33, can also be regarded as moving from the first position to the fourth position.
  • the displacement of each pixel constituting the object 32 is the same. Therefore, the displacement of a certain pixel of the target pixel in the subsequent display image can be directly used as the displacement represented by the motion vector data corresponding to the subsequent display image, and then the target displacement corresponding to each interval can be obtained.
  • the target pixel is the target pixel of the foregoing first frame image and the second frame image.
  • the calculated target displacement includes the displacement a2-a1 of the first pixel in the X-axis, the displacement b2-b1 in the Y-axis, the displacement c2-c1 of the second pixel in the X-axis, and the displacement d2-d1 in the Y-axis.
  • In the case of movement with a certain integrity, c2-c1 is the same as a2-a1 and d2-d1 is the same as b2-b1, so the target displacements corresponding to the interval formed by the first frame image and the second frame image are a2-a1 (X-axis direction) and b2-b1 (Y-axis direction).
  • the number of images inserted in every two adjacent images is related to the current slowdown multiple.
  • the larger the slowdown multiple, the longer the playback duration of the dynamic effect represented by the first sequence of images, and the more inserted images need to be generated.
  • the reference motion vector data corresponding to the inserted image to be generated can be determined according to the ratio of the target displacement to the slowdown factor.
  • the target displacement can include the displacement of each pixel in the target pixel between each pair of adjacent images.
  • each displacement included in the target displacement needs to be divided by the slowdown multiple to calculate the reference motion vector data corresponding to each pixel in the inserted image, thereby obtaining the reference motion vector data corresponding to the inserted image.
  • the motion vector of the pixel 11 included in the target pixel in the current frame image 41 is (0.5, 0.5)
  • the motion vector represents that the displacement of the position of the pixel 11 in the current frame image 41, compared to its position in the previous frame image 42, is 0.5 in both the X-axis direction and the Y-axis direction.
  • the ratio 0.25 is used as the reference motion vector data of the pixel 11 among the target pixels included in the inserted image to be generated.
  • if the inserted image to be generated is the inserted image 56
  • and the calculated ratio is 0.25
  • then the displacement of the pixel 11 in the inserted image 56 in both the X-axis direction and the Y-axis direction is 0.25.
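The derivation of reference motion vector data as the ratio of the target displacement to the slowdown multiple can be sketched as follows (illustrative Python; the function name is assumed):

```python
def reference_motion_vector(displacement, slowdown):
    """Reference motion vector data for an inserted image: the ratio of
    the target displacement to the slowdown multiple, per axis."""
    dx, dy = displacement
    return (dx / slowdown, dy / slowdown)

# Pixel 11 moved (0.5, 0.5); a slowdown multiple of 2 gives a ratio of 0.25:
print(reference_motion_vector((0.5, 0.5), 2))  # (0.25, 0.25)
```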
  • the number of reference motion vector data that needs to be generated in each interval can be determined according to the formula that x ⁇ N is less than y.
  • the symbol " ⁇ " represents the product operation
  • x is the ratio of the target displacement to the slowdown multiple
  • y is the target displacement
  • N is the largest integer that makes x ⁇ N smaller than y.
  • the displacement of the pixel 11 in the X-axis direction is 0.5, which means that the displacement represented by the motion vector data corresponding to the subsequent display image is 0.5, and the target displacement is 0.5.
  • if the slowdown multiple is 3, the ratio is approximately 0.16; then, based on the formula x×N is less than y, where N is the largest integer that makes x×N less than y, N is 3.
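The rule above, where N is the largest integer such that x×N is strictly less than y, can be sketched as follows (illustrative Python; the guard for the exact-multiple case is an assumption about how the strict inequality is handled):

```python
import math

def inserted_count(x, y):
    """Largest integer N such that x * N < y, i.e. the number of
    reference motion vectors that fit strictly inside the interval."""
    n = math.floor(y / x)
    if n * x >= y:  # y is an exact multiple of x: step back by one
        n -= 1
    return n

print(inserted_count(0.16, 0.5))  # 3, since 0.16 * 3 = 0.48 < 0.5
```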
  • the number of newly generated reference motion vector data can be directly determined according to the original frame number of the first sequence of images. In this way, if the original frame number of the first sequence of images is m and the slowdown multiple is determined to be n, a total of m×n frames can be obtained for the new sequence of images that is subsequently generated, so the inserted images to be generated total m×n-m frames. In this case, the number of reference motion vector data that needs to be generated is m×n-m, and the number of inserted images that needs to be generated between two adjacent frames can be (m×n-m)/(m-1).
  • (m ⁇ n-m)/(m-1) cannot be an integer.
  • the quotient of (m ⁇ nm)/(m-1) can be used as the quotient between every two adjacent frames of images.
  • the number of inserted images that need to be generated, and the obtained remainder is randomly inserted between any two frames of the first sequence of images, or inserted into the last frame of the first sequence of images for display.
  • For example, for a first sequence of images with 6 frames and a slowdown multiple of 2, 6 inserted images need to be generated over 5 intervals, and the calculation obtains a quotient of 1 and a remainder of 1; it is then determined that the number of inserted images generated between every two adjacent frames is 1 frame, and the remaining 1 frame can be configured to be generated between any two frames of the original 6 frames of the first sequence of images, or generated after the original 6 frames.
  • the original 6 frames of images include the first frame image, the second frame image, the third frame image, the fourth frame image, the fifth frame image, and the sixth frame image. If it is determined that the remaining 1 frame is placed between the first frame image and the second frame image for generation, then the inserted images to be generated between the first frame image and the second frame image are 2 frames, and the inserted image generated between every other pair of adjacent frames is 1 frame.
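The frame-count arithmetic above (m×n total frames, m×n-m inserted frames spread over m-1 intervals, with the remainder placed separately) can be sketched as:

```python
def insertion_plan(m, n):
    """For an m-frame first sequence at slowdown multiple n, return
    (per_interval, remainder): the number of inserted images generated
    between every two adjacent frames, plus the leftover frames that
    are placed between any two frames or after the last frame."""
    total_inserted = m * n - m      # the new sequence totals m * n frames
    intervals = m - 1
    return total_inserted // intervals, total_inserted % intervals

# 6 original frames at slowdown multiple 2: 6 inserted frames over 5 intervals.
print(insertion_plan(6, 2))  # (1, 1)
```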
  • the motion vector data corresponding to each frame of the image in the first sequence of images can be stored in a variety of ways.
  • the corresponding motion vector data in each frame of image can be stored in the form of a data table.
  • a texture map can be made to carry the motion vector data corresponding to each frame image.
  • the texture map is an image with the same outline as the images in the sequence of images.
  • the left image 101 in FIG. 10 is a basic image including a sequence image
  • the right image 102 is a corresponding texture map
  • the contours of the object in the texture map and the object in the sequence image are the same.
  • the basic image includes a dynamic effect to be achieved and is decomposed into multiple actions
  • the multiple actions respectively correspond to image content.
  • a block 60 in FIG. 10 corresponds to an action in the dynamic effect.
  • a block 60 can correspond to a frame image in the sequence of images.
  • the basic image in Figure 10 represents the dynamic effect of a smaller star moving from the left side of the screen, and another larger star moving from the bottom to the top of the screen.
  • the first sequence of images including multiple frames of images can be obtained by cutting the content in the basic image.
  • the value of the designated color channel of each pixel in the map is used to characterize the motion vector of the corresponding pixel in the first sequence of images.
  • the pixel 70 in one frame of image in the sequence image and the pixel 80 in the texture map are corresponding pixels. Then the value of the designated color channel corresponding to the pixel 80 is used to characterize the motion vector of the pixel 70. Similarly, the pixel 71 in a frame of image in the sequence image and the pixel 81 in the texture map are corresponding pixels, then the value of the designated color channel corresponding to the pixel 81 is used to represent the motion vector of the pixel 71.
  • the designated color channel can be a red channel and a green channel, where the red channel is used to characterize the displacement of the pixel in the X-axis direction, and the green channel is used to characterize the displacement of the pixel in the Y-axis direction.
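A minimal decoding sketch for such a motion-vector texture map follows, assuming 8-bit channels with a mid-range value of 128 encoding zero displacement (the zero point and scale are illustrative assumptions, not specified in the embodiments; only the red-for-X, green-for-Y assignment comes from the description above):

```python
def decode_motion_vector(red, green, zero=128, scale=1.0 / 128):
    """Decode a pixel's motion vector from the red (X displacement) and
    green (Y displacement) channels of an 8-bit motion-vector texture.
    Values above/below `zero` encode positive/negative displacement."""
    return ((red - zero) * scale, (green - zero) * scale)

print(decode_motion_vector(192, 64))   # (0.5, -0.5)
print(decode_motion_vector(128, 128))  # (0.0, 0.0)
```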
  • the motion vector data corresponding to each frame of the image in the first sequence of images represents the displacement of the target pixel in each frame of image compared to the target pixel in the previous frame of image.
  • the inserted image can be generated based on the way of pixel movement.
  • the generation of inserted images whose number matches the slowdown multiple based on the first sequence image and the reference motion vector data includes:
  • S232 Move the target pixel in the target image corresponding to each reference motion vector data according to that reference motion vector data to obtain the inserted image corresponding to each reference motion vector data, and use the set of inserted images corresponding to all the reference motion vector data as the inserted images matching the slowdown multiple.
  • the image 43 and the image 44 are two adjacent frames of images forming an interval in the first sequence.
  • the image 57, the image 58, and the image 59 are inserted images to be generated.
  • the reference motion vector data corresponding to the image 57 is the current reference motion vector data.
  • the reference motion vector data corresponding to each of the image 57, the image 58, and the image 59 to be generated is obtained based on the displacement represented by the motion vector data corresponding to the image 44 and the slowdown multiple.
  • the displacement represented by the motion vector data of the image 44 is relative to the displacement in the image 43.
  • the target image corresponding to the reference motion vector data corresponding to the image 57 is the image 43.
  • the displacement represented by the reference motion vector data corresponding to the image 58 is relative to the displacement in the image 57.
  • if the reference motion vector data corresponding to the image 58 is the current reference motion vector data
  • then the target image corresponding to the current reference motion vector data is the image 57.
  • Similarly, if the reference motion vector data corresponding to the image 59 is the current reference motion vector data, the target image corresponding to the current reference motion vector data is the image 58.
  • the displacement of the target pixel represented by each reference motion vector data, relative to the first displayed image of every two adjacent images, can already be determined. For example, as shown in the figure,
  • the motion vector of the pixel 11 included in the target pixel in the image 44 is (0.5, 0.5); if the slowdown multiple is determined to be 3, it can be obtained that the motion vectors corresponding to the positions of the pixel 11 in the images 57, 58, and 59 are all (0.16, 0.16), which means that the displacement of the position of the pixel 11 in the image 58 compared to its position in the image 43 is 0.32, and the displacement of its position in the image 59 compared to its position in the image 43 is 0.48.
  • the target image corresponding to the reference motion vector data can be configured as the first displayed image of every two adjacent images.
  • the inserted image can be generated by moving pixels.
  • the motion vector (0, 0) is the motion vector corresponding to the pixel 11 in the image 41.
  • if the pixel 11 is moved by 0.16 in the directions indicated by the arrows along the X-axis and the Y-axis, the position of the pixel 11 in the image 57 can be obtained. In this way, the target pixel in the image 43 can be moved to its position in the image 57, and the image content corresponding to the image 57 can then be generated.
  • further, the position of the target pixel can be moved on the basis of the generated image 57, that is, moved by another 0.16 pixel units in the direction shown by the arrow; alternatively, the target pixel can be moved by 0.32 on the basis of the image 43 to generate the image content corresponding to the image 58.
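The repeated movement of the target pixel by the reference motion vector, as described above, can be sketched as follows (illustrative Python; only the pixel's position is tracked, not the full image content, and the rounding is a presentational assumption):

```python
def inserted_positions(start, ref_mv, count):
    """Positions of a target pixel in each of `count` inserted images,
    moving it repeatedly by the reference motion vector starting from
    its position in the first displayed image of the interval."""
    x, y = start
    dx, dy = ref_mv
    return [(round(x + dx * i, 6), round(y + dy * i, 6))
            for i in range(1, count + 1)]

# Pixel 11 starts in image 43 and moves 0.16 per inserted frame,
# giving its positions in images 57, 58 and 59:
print(inserted_positions((10.0, 10.0), (0.16, 0.16), 3))
# [(10.16, 10.16), (10.32, 10.32), (10.48, 10.48)]
```

Moving by 0.32 directly from the image 43, as the alternative above describes, lands on the same position as two successive 0.16 moves.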
  • the motion vector data represents the displacement of the target pixel in the image
  • the corresponding motion vector data in each frame of the image can be carried by making a texture with the same image content.
  • the value of the designated color channel of each pixel in the map is used to characterize the motion vector of the corresponding pixel in the first sequence of images.
  • the unit of the obtained displacement is the unit of the value of the color channel; therefore, when determining the actual movement distance of the target pixel in the image, it is necessary to multiply the obtained displacement by a conversion factor to obtain the movement distance in pixels.
  • the conversion factor may be determined according to the bit color of the image, and the larger the bit color of the image, the larger the corresponding conversion factor.
  • the bit color of the image represents the number of bits occupied by each color channel. For example, for a 24-bit color image, each color channel (red channel, green channel, and blue channel) occupies 8 bits.
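A sketch of applying such a conversion factor follows; the choice of 2 raised to the number of bits per channel as the factor is an illustrative assumption, since the embodiments only state that a larger bit color implies a larger factor:

```python
def channel_units_to_pixels(value, bits_per_channel=8):
    """Convert a displacement expressed in colour-channel units into a
    movement distance in pixels; the conversion factor grows with the
    bit colour of the image (2 ** bits is an assumed choice)."""
    return value * (2 ** bits_per_channel)

print(channel_units_to_pixels(0.5))                       # 128.0
print(channel_units_to_pixels(0.5, bits_per_channel=16))  # 32768.0
```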
  • S240 Insert the inserted image into the playback sequence of the first sequence of images to obtain a second sequence of images.
  • the image processing method provided by the embodiments of the present application realizes that, after the first sequence of images has been produced, when the dynamic effect represented by the first sequence of images needs to be slowed down while maintaining the visual effect, reference motion vector data whose number matches the slowdown multiple is generated according to the motion vector data and the slowdown multiple; the target pixel is then moved based on the first sequence of images and the reference motion vector data to generate inserted images whose number matches the slowdown multiple, which are inserted into the playback sequence of the first sequence of images to obtain a second sequence of images that includes more images. This realizes that there is no need to use development tools to make more images for insertion into the first sequence of images, thereby reducing production costs.
  • FIG. 15 shows a flowchart of an image processing method applied to a game client according to an embodiment of the present application.
  • the method includes:
  • the slowdown multiple can be configured in advance by the developer during the development stage of the first sequence of images, or it can be configured by the user in the game client where the first sequence of images is displayed.
  • For example, the image processing method provided by the embodiment of the present application can be executed by a game client, and the game client can be configured with a configuration interface; the game client can then display the configuration interface after detecting the user's trigger operation, so that the user can configure the slowdown multiple they need.
  • the configuration interface can obtain the slowdown multiple input by the user in multiple ways.
  • the configuration interface includes a first control and a second control slidable on the first control, and obtaining the dynamic effect parameter input in the configuration interface includes: obtaining the position of the second control after sliding in response to a touch operation, and using the value corresponding to the position as the slowdown multiple. For example, if the value corresponding to the position is 2, the slowdown multiple is obtained as 2 times.
  • FIG. 16 shows a game interface 99 of the game client.
  • a configuration interface 98 can be displayed in response to a user's operation, and a first control 97 and
  • a second control 96 slidable on the first control 97 can be displayed in the configuration interface 98.
  • the user can touch and drag the second control 96 to slide on the first control 97; different positions on the first control 97 correspond to different values, and the game client can detect, in response to the touch operation, the position of the second control after sliding, and use the value corresponding to the position as the input slowdown multiple.
  • the game client can directly display an input box and a confirmation control on the configuration interface, so that the user can directly manually input the required slowdown multiple in the input box, and then click the confirmation control.
  • when the game client detects that the confirmation control is touched, the data obtained from the input box is used as the slowdown multiple.
  • the electronic device on which the game client is installed will configure a special file storage area for the game client to store files corresponding to the game client.
  • a configuration file corresponding to the game client is correspondingly stored in the file storage area, and the configuration file can record related configuration information of the game client.
  • for example, the configured screen resolution, the configured sound effects, and the configured operation mode; a slowdown multiple can also be configured correspondingly. Then, after obtaining the slowdown multiple, the game client may also store the obtained slowdown multiple in the configuration file, so as to update the previously stored slowdown multiple.
  • the slowdown multiple stored in the configuration file can be understood as the slowdown multiple used in the process of actually generating the inserted image.
  • S330 Acquire the first sequence of images and the motion vector data corresponding to each frame of the first sequence of images.
  • S340 Based on the motion vector data, the first sequence of images, and the slow-down multiple, generate an inserted image that matches the slow-down multiple.
  • S350 Insert the inserted image into the playback sequence of the first sequence of images to obtain a second sequence of images.
  • S310 to S320, and S350 may be executed at different stages.
  • the first sequence of images represents a dynamic effect.
  • for example, the dynamic effect of an object flying from one end to the other.
  • the dynamic effect represented by the second sequence of images is the same as the content represented by the first sequence of images; the main difference is that the playing duration of the dynamic effect represented by the second sequence of images differs from that of the dynamic effect represented by the first sequence of images.
  • if the first sequence of images and the motion vector data corresponding to each frame of the first sequence of images are only acquired when the second sequence of images needs to be generated, the display of the second sequence of images may have a sense of delay.
  • each frame of the image in the first sequence of images has been produced in advance, but the newly generated inserted image needs to be rendered in real time based on the first sequence of images because of slowing down.
  • the rendering process needs to consume the processing resources (CPU computing resources or GPU computing resources) of the electronic device where the game client is located. If the processing resources at the time are relatively tight, the rendering efficiency of the inserted image will be low, which in turn causes a delay in the display of the second sequence of images.
  • when detecting that the slowdown multiple has changed, the game client can, even if it is not currently in a scene where the second sequence of images needs to be loaded, generate in advance, based on the motion vector data, the first sequence of images, and the slowdown multiple, the inserted images whose number matches the slowdown multiple, and then generate the second sequence of images in advance, so that when the second sequence of images needs to be displayed, it can be played directly, improving the real-time performance of dynamic effect playback.
  • in addition, the slowdown multiple can be input in real time on the configuration interface, thereby enabling real-time control of the playback speed of the dynamic effect to be displayed by the first sequence of images, which improves the interactivity of the dynamic effect display process and further improves the user experience.
  • FIG. 17 shows a flowchart of an image processing method applied to a game client according to an embodiment of the present application.
  • the method includes:
  • Obtaining the slowdown multiple based on the external data interface can be understood as receiving the slowdown multiple transmitted through the external data interface.
  • the electronic device where the game client is located can run a plug-in to configure the configuration information of multiple game clients in a centralized manner, so that the user does not have to configure the configuration information of multiple game clients separately in sequence, thereby reducing repeated user operations and improving the user experience.
  • the game client that executes the image processing method provided by the embodiment of the present application is client A
  • the electronic device where client A is installed includes client B and client C in addition to client A.
  • a plug-in A is configured in the electronic device, and the plug-in A can communicate with the client A, the client B, and the client C through process communication.
  • the user can configure the game interface resolution, game sound effects, and dynamic effect slowdown multiples in plug-in A.
  • after plug-in A obtains the user-configured game interface resolution, game sound effects, and dynamic effect slowdown multiple, it can use this information as configuration information and synchronize it, based on process communication, to the
  • respective external data interfaces of client A, client B, and client C, so that client A, client B, and client C can obtain the configuration information transmitted by plug-in A through their respective external data interfaces and update the configuration information in their configuration files.
  • S420 Obtain the first sequence of images and the motion vector data corresponding to each frame of the first sequence of images.
  • S430 Based on the motion vector data, the first sequence of images, and the slow-down multiple, generate an inserted image that matches the slow-down multiple.
  • S440 Insert the inserted image into the playback sequence of the first sequence of images to obtain a second sequence of images.
  • in the image processing method provided by the embodiments of the present application, the slowdown multiple can be input in real time by an external program through the
  • external data interface of the game client, making it possible to control the playback speed of the dynamic effect to be displayed in the first sequence of images in real time, which improves the flexibility of configuring the slowdown multiple during the dynamic effect display process, thereby also improving the user experience.
  • the image processing method provided by the embodiments of the present application can be implemented by the terminal/server alone; it can also be implemented by the terminal and the server in cooperation.
  • the terminal collects a request to slow down the playback of the first sequence of images (including the slowdown multiple and the first sequence of images), and sends the request to the server.
  • after receiving the request, the server obtains the first sequence of images and the motion vector data, generates, based on the motion vector data, the first sequence of images, and the slowdown multiple, inserted images whose number matches the slowdown multiple, inserts the inserted images into the playback sequence of the first sequence of images to obtain a second sequence of images, and sends the second sequence of images to the terminal to play the second sequence of images on the terminal. In this way, after the first sequence of images has been produced, when the dynamic effect represented by the first sequence of images needs to be slowed down while maintaining the visual effect, the inserted images are produced according to the motion vector data corresponding to each frame of the first sequence of images and inserted into the playback sequence of the first sequence of images to obtain a second sequence of images that includes more images, thereby realizing that there is no need to use development tools to make more images.
  • FIG. 18 shows a flowchart of an image processing method according to an embodiment of the present application. The method includes:
  • before generating the first sequence of images, a basic sequence of images can be pre-made as the original material, and then a part of the basic sequence of images can be selected as the first sequence of images.
  • the accuracy can be understood as the resolution of the image.
  • for a basic sequence image with a fixed total number of pixels, if more sequence images are to be obtained based on the basic sequence image, the resolution of each frame of the sequence images will be correspondingly lower. For example, for a basic sequence image with a total of 2048×2048 pixels, if it contains an 8×8 grid of sequence images (64 frames in total), the corresponding single-frame image has 256×256 pixels; and if it contains a 16×16 grid (256 frames), the single-frame image obtained has 128×128 pixels.
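The single-frame resolution arithmetic above can be sketched as (illustrative Python; the function name is assumed):

```python
def single_frame_pixels(sheet, grid):
    """Single-frame resolution of a basic sequence image: the sheet
    dimensions divided by the grid of contained frames."""
    (w, h), (cols, rows) = sheet, grid
    return (w // cols, h // rows)

print(single_frame_pixels((2048, 2048), (8, 8)))    # (256, 256), 64 frames
print(single_frame_pixels((2048, 2048), (16, 16)))  # (128, 128), 256 frames
```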
  • the resolution of a single frame can be defined according to actual requirements, and then the number of frames included in the first sequence of images can be obtained.
  • S530 Generate the first sequence of images in the first image generation environment.
  • S540 Generate the motion vector data corresponding to each frame of the first image sequence in the first image generation environment.
  • S540 may be executed after S530, or may be executed synchronously with S530.
  • S550 Input the first sequence of images and the motion vector data into the second image generation environment, and output material data carrying the first sequence of images, the motion vector data, and the slowdown multiple.
  • a scalar parameter can be configured in the material data (material) to store the slowdown multiple, so as to make a material template.
  • the scalar parameter makes it convenient for an external program to identify the parameter storing the slowdown multiple, and then access or modify the slowdown multiple through the external program.
  • optionally, dynamic parameters (dynamic parameter) can be configured in the material data, and the dynamic parameters can be used to call the slowdown multiple in the Cascade (particle editor) system in the second image generation environment while creating the particle effect.
  • when the scalar parameter is configured in the material data, the slowdown multiple in the material data can be updated by updating the parameter storing the slowdown multiple in the scalar parameter, thereby realizing real-time dynamic control of the playing rhythm of the dynamic effect represented by the first sequence of images.
  • S560 Read the material data to obtain the first sequence of images and the motion vector data corresponding to each frame of the first sequence of images.
  • S580 Based on the motion vector data, the first sequence of images, and the slow-down multiple, generate an inserted image that matches the slow-down multiple.
  • S590 Insert the inserted image into the playback sequence of the first sequence of images to obtain a second sequence of images.
  • the following describes, with reference to the accompanying drawings, how the image processing method provided in the embodiments of the present application processes the dynamic effects of a game scene to obtain super-slow-motion dynamic effects.
  • FIG. 19 shows a game scene of a real-time battle game, in which the explosion effect after the bomb is thrown out (the position pointed by the arrow in the figure) is played.
  • the number of frames of the normal explosion effect is usually lower than that of the super-slow-motion explosion effect, which means that the normal explosion effect will finish playing in a short time.
  • the explosion effect can have a longer playing time to make the overall explosion effect change more smoothly.
  • the images in the upper row of FIG. 20 are the first sequence of images that characterizes the normal explosion effect
  • the images in the lower row are the second sequence of images obtained by the image processing method provided by the embodiment of the application.
  • the image corresponding to time t1 and the image corresponding to time t2 in the upper row of images can be taken as an interval, and the inserted images in this interval can then be obtained based on the scheme provided in the foregoing embodiments; similarly, the inserted images in the interval formed by the image corresponding to time t2 and the image corresponding to time t3, and in the interval formed by the image corresponding to time t3 and the image corresponding to time t4, can be obtained, so as to obtain the second sequence of images characterizing the super-slow-motion explosion effect.
  • the upper row shows the renderings of the normal explosion effect at times t1, t2, t3, and t4 after the explosion
  • the lower row shows the renderings of the super-slow-motion explosion effect at times t1, t2, t3, and t4 after the explosion.
  • It can be seen from FIG. 20 that the normal explosion effect represented by the first sequence of images is almost over at time t4, while the super-slow-motion explosion effect represented by the second sequence of images at time t4 is still not much different from the explosion effect at time t1. In this case, the super-slow-motion explosion effect can undergo more frames of image transformation after time t4 before changing to the state of the upper-row normal explosion effect at time t4, which in turn makes the overall change of the explosion effect smoother.
  • more frames of images may include inserted images obtained based on the solution provided in the embodiments of the present application.
  • FIG. 21 shows a comparison between the number of images that need to be produced in the solution provided by the embodiments of the present application and the number of images that need to be produced in the related technology, when the slowdown multiple is 5.
  • the dashed box 94 indicates the number of images that need to be produced corresponding to the solution provided in this embodiment of the application.
  • in the solution provided in this embodiment of the application, there is no need to use development tools to make more images, so the 5-times slow-down visual effect can be achieved with only the 64 frames of the first sequence of images originally produced by the development tool and the texture maps (64 frames) corresponding to each frame of the first sequence of images.
  • the dashed frame 95 indicates the number of images that need to be produced in the related technology at a slowdown multiple of 5. Because in the related technology all frames of images need to be produced by development tools, the number of frames that must be produced when slowing down by 5 times is significantly more than the number of images in the dashed frame 94.
  • S510 to S590 may be executed by a computer installed with the first image generation environment and the second image generation environment, and S591 may be executed by the game client.
  • the game client when the game client needs to load the dynamic effect, it may start to read the material data to obtain the first sequence of images and the motion vector data corresponding to each frame of the first sequence of images.
  • the dynamic effect represented by the first sequence of images is the explosion effect of a bomb in a game scene. Then, when the game client needs to display the explosion effect, it can read the material data corresponding to the explosion effect to obtain the first sequence of images corresponding to the explosion effect, and the motion vector data corresponding to each frame of the first sequence of images.
  • when the game client is started, it can start to acquire the first sequence of images and the motion vector data corresponding to each frame of the first sequence of images, which can further improve the display efficiency of the second sequence of images, so that the second sequence of images can be displayed more promptly and the display delay of the dynamic effect is reduced.
  • taking a first sequence of images that characterizes the explosion effect of a bomb as an example, the explosion effect is triggered after the bomb is thrown by the game player. In order for the explosion effect to be displayed without delay, the game client can start to acquire the first sequence of images and the motion vector data corresponding to each frame of the first sequence of images during the resource loading phase or the user login phase of the startup process, and generate, based on the motion vector data, the first sequence of images, and the slowdown multiple, inserted images whose number matches the slowdown multiple, so that the generation of the second sequence of images is completed before entering the scene in which the dynamic effect needs to be played.
  • S510 to S550 may be executed by a computer installed with the first image generation environment and the second image generation environment
  • S560 to S580 may be executed by the server
  • S591 may be executed by the game client.
  • the generated material data can be pre-stored in the server.
  • when the game client needs to play the dynamic effect corresponding to the second sequence of images, the server reads the material data accordingly to obtain the motion vector data, the first sequence of images, and the slowdown multiple, generates inserted images whose number matches the slowdown multiple, inserts the inserted images into the playback sequence of the first sequence of images to obtain the second sequence of images, and then sends the second sequence of images to the game client, so that the game client can display the second sequence of images.
  • the client that generates the second sequence of images can display the second sequence of images itself, or the generating client can send it to a target client, and the target client displays the second sequence of images.
  • for example, the game client in electronic device A first triggers a certain game scene and generates the second sequence of images; electronic device A may then send the generated second sequence of images to the game client of electronic device B for storage. When the game client in electronic device B later enters the game scene and needs to load dynamic effect A, it can directly read the second sequence of images previously sent by electronic device A, without the game client of electronic device B generating it again.
  • the image processing method provided by the embodiments of the present application realizes that, after the first sequence of images has been produced, when the dynamic effect represented by the first sequence of images needs to be slowed down while maintaining the visual effect, the inserted images can be produced according to the motion vector data corresponding to each frame of the first sequence of images and inserted into the playback sequence of the first sequence of images to obtain a second sequence of images that includes more images, so that there is no need to use development tools to make more images for insertion into the first image sequence, which reduces production costs. Moreover, in the embodiment of the present application, the slowdown multiple, the first sequence of images, and the motion vector data can be configured in the generated material data, so that the first sequence of images, the motion vector data, and the slowdown multiple can later be obtained together by reading the material data, improving the efficiency of dynamic-effect data acquisition.
  • an image processing apparatus 600 provided by an embodiment of the present application, the apparatus 600 includes:
  • the data acquisition unit 610 is configured to acquire the first sequence of images and the motion vector data corresponding to each frame of the first sequence of images.
  • the image generating unit 620 is configured to generate an inserted image matching the slow-down multiple based on the motion vector data, the first sequence image, and the slow-down multiple; wherein the number of the inserted images corresponds to the slow-down multiple.
  • the image configuration unit 630 is configured to insert the inserted image into the playback sequence of the first sequence of images to obtain the second sequence of images.
  • the image playing unit 640 is configured to play the second sequence of images.
  • according to an image processing apparatus provided by an embodiment of the present application, after the first sequence of images and the motion vector data are acquired, inserted images whose number matches the slowdown multiple are generated based on the motion vector data, the first sequence of images, and the slowdown multiple, and the inserted images are inserted into the playback sequence of the first sequence of images to obtain the second sequence of images. Thus, when the dynamic effect represented by the first sequence of images needs to be slowed down while maintaining the visual effect, the inserted images can be produced according to the motion vector data corresponding to each frame of the first sequence of images and inserted into the playback sequence of the first sequence of images to obtain a second sequence of images containing more images, so that there is no need to use development tools to produce more images for insertion into the first image sequence, reducing the production cost.
  • the image generating unit 620 includes: a vector data generating subunit 621, configured to generate reference motion vector data matching the slowdown multiple based on the motion vector data and the slowdown multiple ; Wherein, the number of the reference motion vector data corresponds to the slowdown multiple.
  • the image generation execution subunit 622 is configured to generate an inserted image matching the slowdown multiple based on the first sequence image and the reference motion vector data.
  • the vector data generating subunit 621 is further configured to: obtain a target displacement, the target displacement being the displacement represented by the motion vector data corresponding to the later-displayed image, where the later-displayed image is the image with the later playback order among every two adjacent images in the first sequence of images; obtain the ratio of the target displacement to the slowdown multiple; obtain, based on the slowdown multiple, the number of inserted images between every two adjacent images; and take the ratio as the reference motion vector data corresponding to the inserted images between every two adjacent images, the reference motion vector data corresponding to the inserted images being the reference motion vector data matching the slowdown multiple.
  • the image generation execution subunit 622 is further configured to: in the process of generating the inserted image corresponding to the current reference motion vector data, obtain a target image corresponding to the current reference motion vector data, the target image being the image corresponding to the initial position of the target pixels corresponding to the current reference motion vector data; move the target pixels in the target image corresponding to each item of reference motion vector data according to the corresponding reference motion vector data to obtain the inserted image corresponding to each item of reference motion vector data; and take the set of inserted images corresponding to the current reference motion vector data as the inserted images matching the slowdown multiple.
  • the device 600 further includes: a parameter configuration unit 650, configured to display a configuration interface, obtain the slowdown multiple entered in the configuration interface, and use the entered slowdown multiple as the slowdown multiple.
  • the configuration interface includes a first control and a second control that can slide on the first control.
  • the parameter configuration unit 650 is further configured to obtain the position of the second control after it slides in response to a touch operation, and use the slowdown multiple corresponding to that position as the input slowdown multiple.
  • the parameter configuration unit 650 is further configured to obtain, through an external data interface, the slowdown multiple input by an external application program, and use the transmitted slowdown multiple as the slowdown multiple.
  • the device 600 further includes: an initial image generation unit 660, configured to generate the first sequence of images in a first image generation environment; generate, in the first image generation environment, the motion vector data corresponding to each frame of the first sequence of images; and input the first sequence of images and the motion vector data into a second image generation environment, to output material data carrying the first sequence of images, the motion vector data, and the slowdown multiple.
  • the data acquisition unit 610 is further configured to read the material data to acquire the first sequence of images and the motion vector data corresponding to each frame of the first sequence of images, and to read the material data to obtain the slowdown multiple.
  • an embodiment of the present application also provides an electronic device 200 that can execute the foregoing image processing method.
  • the electronic device 200 includes a processor 102, a memory 104, and a network module 106.
  • the memory 104 stores a program that can execute the content in the foregoing embodiment, and the processor 102 can execute the program stored in the memory 104.
  • the processor 102 may include one or more cores for processing data.
  • the processor 102 uses various interfaces and lines to connect the various parts of the entire electronic device 200, and performs various functions of the electronic device 200 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and by calling data stored in the memory 104.
  • the processor 102 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
  • DSP Digital Signal Processing
  • FPGA Field-Programmable Gate Array
  • PLA Programmable Logic Array
  • the processor 102 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • the CPU mainly processes the operating system, user interface, and application programs; the GPU is used for rendering and drawing of display content; the modem is used for processing wireless communication. It is understandable that the above-mentioned modem may not be integrated into the processor 102, but may be implemented by a communication chip alone.
  • the memory 104 may include random access memory (RAM) or read-only memory (ROM).
  • the memory 104 may be used to store instructions, programs, codes, code sets or instruction sets.
  • the memory 104 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, and an image display function), instructions for implementing the following method embodiments, and the like.
  • the data storage area can also store data (such as a phone book, audio and video data, and chat records) created by the terminal 100 during use.
  • the network module 106 is used to receive and send electromagnetic waves, and realize the mutual conversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices, such as with an audio playback device.
  • the network module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, and memory.
  • SIM subscriber identity module
  • the network module 106 can communicate with various networks, such as the Internet, an intranet, and a wireless network, or communicate with other devices through a wireless network.
  • the aforementioned wireless network may include a cellular telephone network, a wireless local area network, or a metropolitan area network.
  • the network module 106 can exchange information with the base station.
  • FIG. 27 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer-readable medium 1100 stores program code, and the program code can be invoked by a processor to execute the method described in the foregoing method embodiment.
  • the computer-readable storage medium 1100 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 1100 includes a non-transitory computer-readable storage medium.
  • the computer-readable storage medium 1100 has storage space for the program code 1110 for executing any of the method steps in the above methods. The program code can be read from or written into one or more computer program products.
  • the program code 1110 may, for example, be compressed in an appropriate form.
  • in summary, the image processing method, apparatus, electronic device, and storage medium provided by the present application acquire the first sequence of images and the motion vector data, then generate, based on the motion vector data, the first sequence of images, and the slowdown multiple, inserted images whose number matches the slowdown multiple, and insert the inserted images into the playback sequence of the first sequence of images to obtain the second sequence of images. Thus, after the first sequence of images is obtained, when the dynamic effect represented by the first sequence of images needs to be slowed down while maintaining the visual effect, the inserted images can be produced according to the motion vector data corresponding to each frame of the first sequence of images and inserted into the playback sequence of the first sequence of images to obtain a second sequence of images that includes more images, so that there is no need to use development tools to make more images for insertion into the first image sequence, reducing production cost and shortening the time consumed in producing dynamic effects.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a first sequence of images and the motion vector data corresponding to each frame of the first sequence of images (S110); generating, based on the motion vector data, the first sequence of images, and a slowdown multiple, inserted images matching the slowdown multiple (S120), where the number of inserted images corresponds to the slowdown multiple; inserting the inserted images into the playback sequence of the first sequence of images to obtain a second sequence of images (S130); and playing the second sequence of images (S140).

Description

Image processing method and apparatus, electronic device, and storage medium
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 202010028338.3 filed on January 10, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing technologies, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
As the requirements of game users become higher, electronic games are increasingly developing towards higher-definition visuals and more realistic lighting and shading.
However, in related dynamic-effect display processes, improving the expressiveness of a dynamic effect requires producing more frame content in advance with development tools, which causes high production costs; the need to develop additional frame content also increases development difficulty and reduces development efficiency.
Summary
In view of the above problems, this application proposes an image processing method and apparatus, an electronic device, and a computer-readable storage medium to improve the above situation.
This application provides an image processing method, including: acquiring a first sequence of images and the motion vector data corresponding to each frame of the first sequence of images; generating, based on the motion vector data, the first sequence of images, and a slowdown multiple, inserted images matching the slowdown multiple, where the number of inserted images corresponds to the slowdown multiple; inserting the inserted images into the playback sequence of the first sequence of images to obtain a second sequence of images; and playing the second sequence of images.
This application provides an image processing apparatus, including: a data acquisition unit configured to acquire a first sequence of images and the motion vector data corresponding to each frame of the first sequence of images; an image generation unit configured to generate, based on the motion vector data, the first sequence of images, and a slowdown multiple, inserted images matching the slowdown multiple, where the number of inserted images corresponds to the slowdown multiple; an image configuration unit configured to insert the inserted images into the playback sequence of the first sequence of images to obtain a second sequence of images; and an image playing unit configured to play the second sequence of images.
This application provides an electronic device including a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the above method.
This application provides a computer-readable storage medium storing program code, which, when run by a processor, performs the above method.
According to the image processing method and apparatus, electronic device, and storage medium provided by this application, inserted images matching a slowdown multiple are generated based on motion vector data, a first sequence of images, and the slowdown multiple, and are inserted into the playback sequence of the first sequence of images, so that inserted images are generated dynamically according to the slowdown multiple and the motion vector data, reducing production cost and the time consumed in producing dynamic effects, thereby improving development efficiency.
Brief Description of the Drawings
Figure 1 is a schematic diagram of a motion vector in an embodiment of this application;
Figure 2 is a schematic diagram of a motion vector in an embodiment of this application;
Figure 3 is a schematic diagram of object movement in an embodiment of this application;
Figure 4 is a flowchart of an image processing method proposed in an embodiment of this application;
Figure 5 is a schematic diagram of inserted images being inserted into the playback sequence in the embodiment shown in Figure 4;
Figure 6 is a schematic diagram of the second sequence of images in the embodiment shown in Figure 4;
Figure 7 is a flowchart of an image processing method proposed in an embodiment of this application;
Figure 8 is a flowchart of an implementation of S220 in the image processing method of Figure 7;
Figure 9 is a schematic diagram of reference motion vector data in an embodiment of this application;
Figure 10 is a schematic diagram of a base image and a motion vector texture in an embodiment of this application;
Figure 11 is a schematic diagram of pixel correspondence in an embodiment of this application;
Figure 12 is a flowchart of an implementation of S230 in the image processing method of Figure 7;
Figure 13 is a schematic diagram of generating inserted images between two adjacent images in an image processing method proposed in an embodiment of this application;
Figure 14 is a schematic diagram of the reference motion vector data corresponding to inserted images in an image processing method proposed in an embodiment of this application;
Figure 15 is a flowchart of an image processing method proposed in an embodiment of this application;
Figure 16 is a schematic diagram of a configuration interface in an embodiment of this application;
Figure 17 is a flowchart of an image processing method proposed in an embodiment of this application;
Figure 18 is a flowchart of an image processing method proposed in an embodiment of this application;
Figure 19 is a schematic diagram of an explosion effect in a game scene in an embodiment of this application;
Figure 20 is a schematic diagram comparing an explosion effect in a game scene before and after processing in an embodiment of this application;
Figure 21 is a diagram comparing the number of images to be produced by the image processing method provided in an embodiment of this application with the number required in the related technology;
Figure 22 is a structural block diagram of an image processing apparatus proposed in an embodiment of this application;
Figure 23 is a structural block diagram of an image processing apparatus proposed in an embodiment of this application;
Figure 24 is a structural block diagram of an image processing apparatus proposed in an embodiment of this application;
Figure 25 is a structural block diagram of an image processing apparatus proposed in an embodiment of this application;
Figure 26 is a structural block diagram of an electronic device of this application for executing the image processing method according to an embodiment of this application;
Figure 27 shows a storage unit of an embodiment of this application for saving or carrying program code implementing the image processing method according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part rather than all of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative work fall within the protection scope of this application.
Before the embodiments of this application are described in further detail, the nouns and terms involved in the embodiments of this application are explained; they apply to the following interpretations.
Slowdown multiple: the slowdown multiple represents the factor by which the playback duration of a dynamic effect is extended. For example, a slowdown multiple of 2 means that the playback duration of the dynamic effect needs to be extended by a factor of 2. For instance, if the original playback duration of the dynamic effect is 2 seconds, with a slowdown multiple of 2 the playback duration is correspondingly extended to 4 seconds, and with a slowdown multiple of 5 it is correspondingly extended to 10 seconds.
Motion vector: a motion vector represents the displacement of a target pixel in an image. The target pixel may be any pixel in the image, or a pixel in a content block of the image.
As shown in Figure 1, if the target pixel is a pixel in the image, suppose pixel 10 is the target pixel. In Figure 1, pixel 10 is located at (a, b) in the previous frame image 20 and at (c, d) in the subsequent frame image 30, so the motion vector corresponding to pixel 10 in the subsequent frame image is (dx, dy), where dx represents the displacement of pixel 10 along the X axis and dy represents its displacement along the Y axis; in the situation shown in Figure 1, dx = c - a and dy = d - b.
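The definition above can be sketched as follows (a minimal illustration; the function name and argument layout are assumptions for this sketch, not part of the application):

```python
def motion_vector(prev_pos, next_pos):
    """Motion vector of a pixel located at prev_pos = (a, b) in the previous
    frame and at next_pos = (c, d) in the subsequent frame: (c - a, d - b)."""
    (a, b), (c, d) = prev_pos, next_pos
    return (c - a, d - b)
```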
As shown in Figure 2, if the target pixel is a pixel in a content block, the motion vector represents the displacement between that content block and its best matching block, where the best matching block is the content block 31 in the subsequent frame image 30 that best matches the content block 21 of the previous frame image 20. A content block may contain multiple pixels; in the embodiments of this application, the displacement of the pixel at the center point of the content block may be taken as the displacement of the content block, where the center point may be the geometric center. In the situation shown in Figure 2, the pixel at the center point of content block 21 is at (a, b) and the pixel at the center point of best matching block 31 is at (c, d), so the motion vector between the content block and the best matching block is (c - a, d - b).
A content block can be understood as a region of an image that has an entity meaning. For example, if the image shows a person, the person's head is a region with entity meaning — the image content of that region represents a head — so the head region can serve as a content block. Likewise, the person's hand is also a region with entity meaning, so the hand region can serve as another content block.
The target pixel in the embodiments of this application can be understood as each pixel in the image, or as a pixel in a content block as described above. The motion vector data involved in the embodiments of this application is data carrying motion vectors, and may be in text format or picture format.
As users demand better visual experiences, the display of virtual scenes develops towards being clearer and more realistic. For example, in electronic game scenes, game characters and game effects appear increasingly visually striking.
The applicant found in research that the dynamic effects involved in some virtual scenes require high production costs. For example, to achieve a desired dynamic effect, related implementations produce key frames or produce sequence images.
In the key-frame approach, a key frame is equivalent to an original drawing in the dynamic effect — the frame in which a key action of an object's motion or change occurs. In the sequence-image approach, the dynamic effect to be displayed is decomposed into multiple actions, each of which serves as one frame of image; combining the images each corresponding to one action yields a sequence of images, and playing the sequence of images displays the corresponding dynamic effect.
However, in the related approach, every frame of the dynamic effect must be produced in advance by developers with development tools, causing high production costs. For example, super-slow-motion dynamic effects have a higher frame rate (frames displayed per second) than ordinary ones: an ordinary dynamic effect may run at 30 fps, while a super-slow-motion effect may run at 240 fps or even higher, where 30 fps means 30 frames are played per second and 240 fps means 240 frames per second. If the initial dynamic effect is an ordinary one and a super-slow-motion effect needs to be achieved by slowing it down, developers must use development tools to produce additional images and insert them into the image sequence of the ordinary effect to accommodate the increased frame rate.
Moreover, if an already-produced dynamic effect needs adjustment (for example, adjusting the playback duration), re-production is involved, which further increases production costs. For example, for a dynamic effect with an original playback duration of 2 seconds comprising 60 frames, playing it slowed down by a factor of 5 (which can be understood as extending the playback duration fivefold) while keeping the original visual effect requires 300 frames in total, which means 240 additional frames must be produced, causing a large production cost.
In addition, pre-produced images are stored in resource files; once the resource files are stored on the user's terminal device, they occupy more storage space on that device, reducing storage utilization.
Therefore, the applicant proposes the image processing method and apparatus, electronic device, and computer-readable storage medium provided in the embodiments of this application. After an original sequence of images has been produced with development tools, when the dynamic effect represented by the original sequence (or, equivalently, the dynamic effect achieved) needs to be slowed down while the visual effect is maintained or a better visual effect is needed, inserted images can be produced from the motion vector data corresponding to each frame of the produced original sequence and inserted into its playback sequence to obtain a sequence containing more images. Inserted images are thus generated dynamically according to the slowdown multiple and the motion vector data, without using development tools to produce more images for insertion into the original sequence, which reduces production costs and shortens the time consumed in producing dynamic effects.
The image processing method provided in the embodiments of this application can be implemented by a terminal or server alone, or cooperatively by a terminal and a server. For example, when the terminal collects a request to play the first sequence of images slowed down (including the slowdown multiple), it may perform the image processing method described below by itself to obtain the second sequence of images and play it; or the terminal collects the request to play the first sequence of images slowed down (including the slowdown multiple) and sends the request to the server, and after receiving the request the server performs the image processing method to obtain the second sequence of images and sends the second sequence of images to the terminal for playback.
The electronic device provided in the embodiments of this application for implementing the image processing method described below may be any of various types of terminal devices or servers. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing cloud computing services; the terminal may be, but is not limited to, a smartphone, tablet computer, laptop, desktop computer, smart speaker, or smart watch. The terminal and server may be connected directly or indirectly by wired or wireless communication, which is not limited in this application.
The principle of the solutions involved in the embodiments of this application is introduced below.
As shown in Figure 3, take as an example a sequence of images representing the process of an object moving from one side of the picture to the other. As can also be seen from Figure 3, object 32 passes through multiple positions during its movement: for example, object 32 starts from a first position and passes through a second position and a third position to reach a fourth position. The picture at each position of object 32 can be represented by one frame of image, so each frame in the sequence of images can be understood as representing one position of the object. For example, the sequence of images includes 4 frames, where the first frame corresponds to the picture at the first position, the second frame to the picture at the second position, the third frame to the picture at the third position, and the fourth frame to the picture at the fourth position.
In this case, the pixels representing the object in each frame can be taken as target pixels, and the movement of the object across frames can be regarded as the movement of the target pixels across frames. For example, taking target pixel 33 in Figure 3 (a pixel representing object 32) as an example, the movement of object 32 from the first position to the fourth position can also be regarded as the movement of target pixel 33 from the first position to the fourth position.
The embodiments of this application exploit exactly this characteristic: new images can be generated as inserted images by acquiring the motion vectors of the target pixels, solving the technical problems raised above.
The embodiments of this application are described in detail below with reference to the accompanying drawings.
Referring to Figure 4, Figure 4 is a flowchart of an image processing method proposed in an embodiment of this application. The method includes:
S110: Acquire a first sequence of images and the motion vector data corresponding to each frame of the first sequence of images.
In the embodiments of this application, the first sequence of images can be understood as a sequence of images representing a target dynamic effect. The first sequence of images may include multiple frames; when these frames are drawn to an image display interface for display, the target dynamic effect can be shown in that interface. In the embodiments of this application, the first sequence of images is the basis for subsequently generating new images; as one approach, the first sequence of images may be produced by developers with a development tool.
The motion vector data corresponding to each frame of the first sequence of images represents the displacement of the target pixel in the corresponding image relative to the corresponding pixel in an adjacent image, or the displacement of a content block in the corresponding image relative to the best matching block in an adjacent image. The adjacent image may be the frame immediately before the corresponding image, or the frame immediately after it.
For example, suppose the first sequence of images includes a first frame, a second frame, a third frame, and a fourth frame. If the motion vector data represents the displacement of the target pixel in each frame relative to the corresponding pixel in the previous frame, the motion vector data corresponding to the first frame is all 0, because the target pixel in the first frame has not yet moved and thus has no displacement. The motion vector data of the second frame may represent the displacement of the target pixel's position in the second frame relative to the corresponding pixel's position in the first frame. Similarly, the motion vector data of the third frame may represent the displacement of the target pixel's position in the third frame relative to that in the second frame, and the motion vector data of the fourth frame may represent the displacement of the target pixel's position in the fourth frame relative to that in the third frame.
S120: Generate, based on the motion vector data, the first sequence of images, and a slowdown multiple, inserted images matching the slowdown multiple, where the number of inserted images corresponds to the slowdown multiple.
As described above, the first sequence of images represents a dynamic effect during display. In the embodiments of this application, the slowdown multiple can be understood as the factor by which the dynamic effect represented by the first sequence of images is slowed down, or as the factor by which the playback duration of that dynamic effect is extended.
For example, if the playback duration of the dynamic effect represented by the first sequence of images is 2 seconds, then with a slowdown multiple of 2, the dynamic effect needs to be slowed down by a factor of 2, i.e., the playback duration is extended from 2 seconds to 2 × 2 = 4 seconds. As another example, if the playback duration of the dynamic effect is 4 seconds, then with a slowdown multiple of 3, the dynamic effect needs to be slowed down by a factor of 3, i.e., the playback duration is extended from 4 seconds to 4 × 3 = 12 seconds.
During the playback of a dynamic effect, the frame rate is one factor affecting the user's visual experience. The frame rate can be understood as the number of frames played per second, or the number of frames refreshed per second. A higher frame rate yields smoother, more lifelike animation. If the playback duration of the dynamic effect represented by the first sequence of images is extended without simultaneously increasing the number of frames in the first sequence of images, the number of frames played per second decreases, causing a sense of stutter.
For example, if the first sequence of images originally comprises 60 frames with a playback duration of 2 seconds, the corresponding frame rate is 30 fps (frames displayed per second). With a slowdown multiple of 4, the playback duration is extended to 8 seconds, so without inserting new images the frame rate would become 7.5 fps. Therefore, to achieve the technical effect of keeping the visual quality while the first sequence of images is played slowed down, new images can be generated as inserted images based on the motion vector data, the first sequence of images, and the slowdown multiple, and inserted into the first sequence of images.
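The arithmetic in this example can be checked with a short sketch (the function names are illustrative, not part of the application):

```python
def slowed_frame_rate(frame_count, duration_s, slowdown):
    """Frame rate obtained when the playback duration is extended by the
    slowdown multiple without inserting any new frames."""
    return frame_count / (duration_s * slowdown)

def required_frames(frame_count, slowdown):
    """Total frames needed to keep the original frame rate over the
    extended duration."""
    return frame_count * slowdown
```

With 60 frames over 2 seconds and a slowdown multiple of 4, the rate without insertion drops from 30 fps to 7.5 fps, and keeping 30 fps requires 240 frames in total.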
As one approach, in the process of generating the inserted images, the motion vector data of the inserted images to be generated may first be acquired as reference motion vector data. It can be understood that the reference motion vector data represents the displacement of the target pixel in the corresponding image relative to some frame in the first sequence of images, or relative to a previously generated inserted image; after the reference motion vector data is generated, the target pixel can be moved according to it to generate the inserted image.
For different slowdown multiples, the extended playback duration of the dynamic effect differs, and so does the number of inserted images to be generated and inserted into the first sequence of images. Therefore, the number of generated inserted images must match the slowdown multiple, so that for each slowdown multiple the original visual effect can be maintained, or even improved.
S130: Insert the inserted images into the playback sequence of the first sequence of images to obtain a second sequence of images.
After the inserted images are generated, the playback position corresponding to each inserted image can be configured. Inserting the inserted images into the playback sequence of the first sequence of images can be understood as configuring, in the original playback sequence of the first sequence of images, the playback position corresponding to each inserted image, thereby obtaining the second sequence of images. In the embodiments of this application, each playback position indicates the temporal position at which an image is played in the playback order. For example, if the playback position of an inserted image is between the first and second frames of the first sequence of images, then during subsequent display the first frame is played first, then the inserted image, then the second frame.
For example, as shown in Figure 5, taking a first sequence of images 40a that includes 6 frames, one frame can be inserted between every two adjacent frames, i.e., 5 frames in total. Suppose the 5 inserted images generated for the first sequence of images 40a are inserted image 51, inserted image 52, inserted image 53, inserted image 54, and inserted image 55, and that each inserted image's insertion position is the position indicated by the corresponding arrow, i.e., the temporal position at which it will subsequently be played. The resulting second sequence of images may then be as shown in Figure 6, in which the generated second sequence of images 40b already contains inserted images 51, 52, 53, 54, and 55 shown in Figure 5.
S140: Play the second sequence of images.
The generated second sequence of images includes multiple frames, so playing the second sequence of images can be understood as playing the frames included in the second sequence of images in order, thereby playing the dynamic effect represented by the second sequence of images. For example, in the second image sequence 40b shown in Figure 6, after inserted images 51, 52, 53, 54, and 55 have been inserted into the original first image sequence 40a, when the second sequence of images 40b is played, the images in 40b are played in order from left to right to show the corresponding dynamic effect.
According to the image processing method provided in the embodiments of this application, after a first sequence of images and motion vector data are acquired, inserted images whose number matches a slowdown multiple are generated based on the motion vector data, the first sequence of images, and the slowdown multiple, and the inserted images are inserted into the playback sequence of the first sequence of images to obtain a second sequence of images. Thus, after the first sequence of images has been produced, when the dynamic effect it represents needs to be slowed down while maintaining the visual effect, inserted images can be produced from the motion vector data corresponding to each frame of the first sequence of images and inserted into its playback sequence to obtain a second sequence of images containing more images, so that no more images need to be produced with development tools for insertion into the first image sequence, reducing production cost and the time consumed in producing dynamic effects.
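The insertion step S130 described above can be sketched as follows (a minimal, hypothetical illustration of interleaving the generated frames into the playback order; the data layout is an assumption for this sketch):

```python
def build_second_sequence(first_seq, inserts_per_gap):
    """Interleave inserted frames into the playback sequence of first_seq.

    first_seq: list of frames in playback order.
    inserts_per_gap: dict mapping gap index i (the gap between frame i and
    frame i + 1) to the list of inserted frames configured for that gap.
    """
    second_seq = []
    for i, frame in enumerate(first_seq):
        second_seq.append(frame)
        second_seq.extend(inserts_per_gap.get(i, []))
    return second_seq
```

For the 6-frame example of Figure 5, one inserted frame per gap turns a 6-frame sequence into an 11-frame second sequence.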
Referring to Figure 7, Figure 7 is a flowchart of an image processing method proposed in an embodiment of this application. The method includes:
S210: Acquire a first sequence of images and the motion vector data corresponding to each frame of the first sequence of images.
S220: Generate, according to the motion vector data and a slowdown multiple, reference motion vector data matching the slowdown multiple, where the number of reference motion vector data items corresponds to the slowdown multiple.
The reference motion vector data is the motion vector data corresponding to the inserted images; it can be generated from the motion vector data corresponding to each frame of the first sequence of images and the slowdown multiple, and the inserted images are subsequently generated from the reference motion vector data.
In the embodiments of this application, there are multiple ways to generate reference motion vector data whose quantity matches the slowdown multiple. Each item of reference motion vector data corresponds to one inserted image; that is, the number of reference motion vector data items corresponds to the number of inserted images to be generated subsequently.
As shown in Figure 8, as one approach, generating the reference motion vector data matching the slowdown multiple according to the motion vector data and the slowdown multiple may include:
S221: Acquire a target displacement, which is, for every two adjacent images in the first sequence of images, the displacement represented by the motion vector data corresponding to the later-displayed image.
S222: Obtain the ratio of the target displacement to the slowdown multiple; obtain, according to the slowdown multiple, the number of inserted images between every two adjacent images; and take the ratio as the reference motion vector data corresponding to the inserted images between every two adjacent images, thereby obtaining the reference motion vector data matching the slowdown multiple.
Suppose the first sequence of images includes a first frame, a second frame, a third frame, a fourth frame, a fifth frame, and a sixth frame; the reference motion vector data can be generated taking every two adjacent images as one interval. In the embodiments of this application, one target displacement can be generated for each interval as the target displacement corresponding to that interval.
For example, the first and second frames can form one interval, the second and third frames another, the third and fourth frames another, the fourth and fifth frames another, and the fifth and sixth frames another. In the interval formed by the first and second frames, the later-displayed image is the second frame, so the target displacement corresponding to this interval is the displacement represented by the motion vector data of the second frame. It can be understood that the motion vector data of each frame represents the displacement of the target pixels in that frame, so the target displacement includes the displacement of each pixel among the target pixels.
In this case, suppose a first pixel among the target pixels is at (a1, b1) in the first frame and at (a2, b2) in the second frame. Then in the motion vector data of the second frame, the motion vector of the first pixel is (a2 - a1, b2 - b1): its displacement along the X axis is a2 - a1 and along the Y axis is b2 - b1. The finally computed target displacement thus includes the first pixel's X-axis displacement a2 - a1 and Y-axis displacement b2 - b1. In this way the displacement of every pixel among the target pixels can be acquired, yielding the displacement represented by the second frame's motion vector data as the target displacement of the interval. For example, suppose the target pixels also include a second pixel at (c1, d1) in the first frame and (c2, d2) in the second frame; for the second pixel, the X-axis displacement is c2 - c1 and the Y-axis displacement is d2 - d1. The computed target displacement then includes the first pixel's X-axis displacement a2 - a1 and Y-axis displacement b2 - b1, and the second pixel's X-axis displacement c2 - c1 and Y-axis displacement d2 - d1.
In the first sequence of images, the movement of the pixels has a certain integrity. Referring again to Figure 3, while object 32 moves from the first position to the fourth position, the pixels composing object 32 other than pixel 33 can also be regarded as moving together from the first position to the fourth position, so during this process the displacement of every pixel composing object 32 is the same. Therefore, the displacement of some one pixel among the target pixels of the later-displayed image can directly serve as the displacement represented by that image's motion vector data, yielding the target displacement corresponding to each interval.
For example, take the target pixels including the first pixel and the second pixel above, the target pixels of the first and second frames. The computed target displacement includes the first pixel's X-axis displacement a2 - a1 and Y-axis displacement b2 - b1, and the second pixel's X-axis displacement c2 - c1 and Y-axis displacement d2 - d1. Given that the movement has this integrity, c2 - c1 equals a2 - a1 and d2 - d1 equals b2 - b1, so the target displacement corresponding to the interval formed by the first and second frames is a2 - a1 (X-axis direction) and b2 - b1 (Y-axis direction).
As described above, the number of images inserted between every two adjacent images is related to the current slowdown multiple. In one approach, the larger the slowdown multiple applied to the playback duration of the dynamic effect represented by the first sequence of images, the longer the dynamic effect lasts, and the more inserted images need to be generated.
In the embodiments of this application, after the target displacement of each interval is acquired, the reference motion vector data of the inserted images to be generated can be determined from the ratio of the target displacement to the slowdown multiple. As one approach, the target displacement may include the displacement of each target pixel between each pair of adjacent images; when dividing the target displacement by the slowdown multiple to compute the ratio, the displacement of each pixel included in each target displacement is divided by the slowdown multiple separately to compute the reference motion vector data of each pixel in the inserted image, thereby obtaining the reference motion vector data corresponding to the inserted image.
For example, as shown in Figure 9, suppose the motion vector of pixel 11 among the target pixels in current frame image 41 is (0.5, 0.5), meaning the displacement of pixel 11's position in current frame 41 relative to its position in previous frame image 42 is 0.5 in both the X-axis and Y-axis directions. Then with a slowdown multiple of 2, the resulting ratio is 0.5 / 2 = 0.25, and this ratio 0.25 serves as the reference motion vector data of pixel 11 among the target pixels of the inserted image to be generated. For example, if the inserted image to be generated is inserted image 56, then given the computed ratio 0.25, it can be determined that the displacement of pixel 11's position in inserted image 56 relative to its position in previous frame 42 is 0.25 in both the X-axis and Y-axis directions.
After the ratio is obtained, the number of reference motion vector data items to be generated in each interval can be determined by the formula x × N < y, where the symbol "×" denotes multiplication, x is the ratio of the target displacement to the slowdown multiple, y is the target displacement, and N is the largest integer for which x × N < y holds.
Take the movement of pixel 11 along the X axis above as an example: its X-axis displacement is 0.5, meaning the displacement represented by the motion vector data of the later-displayed image is 0.5, giving a target displacement of 0.5. With a slowdown multiple of 2, the ratio is 0.25, so based on the formula x × N < y, with N the largest integer satisfying it, N is 1. That means 1 item of reference motion vector data needs to be generated in this interval. In this case, if the first sequence of images includes 6 frames, a total of (6 - 1) × 1 = 5 items of reference motion vector data need to be generated.
As another example, if the target displacement is still 0.5 but the slowdown multiple is 3, the ratio is 0.16, so based on the formula x × N < y, with N the largest integer satisfying it, N is 3. That means 3 items of reference motion vector data need to be generated in this interval. In this case, if the first sequence of images includes 6 frames, a total of (6 - 1) × 3 = 15 items of reference motion vector data need to be generated, and if it includes 9 frames, a total of (9 - 1) × 3 = 24 items.
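The rule x × N < y used in the two examples above can be sketched as follows (function names are illustrative; the 0.16 value reproduces the truncated ratio used in the text):

```python
def max_insert_count(ratio, target_displacement):
    """Largest integer N with ratio * N < target_displacement (x * N < y)."""
    n = 0
    while ratio * (n + 1) < target_displacement:
        n += 1
    return n

def total_reference_vectors(frame_count, ratio, target_displacement):
    """One interval per adjacent pair, N reference vectors per interval."""
    return (frame_count - 1) * max_insert_count(ratio, target_displacement)
```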
In addition, as another way to generate reference motion vector data whose number matches the slowdown multiple, the number of reference motion vector data items to newly generate can be determined directly from the original frame count of the first sequence of images. In this way, if the original frame count of the first sequence of images is m, then with a slowdown multiple of n, the subsequently generated new sequence of images has m × n frames in total, so the inserted images to be generated total m × n - m frames. In this case the number of reference motion vector data items to generate is m × n - m, and the number of inserted images to generate between every two adjacent frames can be (m × n - m) / (m - 1).
In some cases (m × n - m) / (m - 1) is not an integer. To improve this situation, when it is detected that (m × n - m) / (m - 1) is not an integer, the quotient of (m × n - m) / (m - 1) can be taken as the number of inserted images to generate between every two adjacent frames, and the remainder can be inserted at random between any two frames of the first sequence of images, or after the last frame of the first sequence of images for display.
For example, if the first sequence of images includes 6 frames, then with a slowdown multiple of 2, a total of 6 × 2 - 6 = 6 new frames need to be generated, and it is further detected that 6 / 5 is not an integer, so the quotient computed correspondingly is 1. It is thus determined that, among the 6 frames, 1 inserted image is generated between every two adjacent frames, and the remaining 1 frame can be configured to be generated between any two of the original 6 frames, or after the original 6 frames. For example, suppose the original 6 frames are a first frame through a sixth frame, and it is determined that the remaining frame is generated between the first and second frames; then 2 inserted images are to be generated between the first and second frames, while 1 inserted image is generated between every other pair of adjacent frames.
In this approach, after the number of inserted images to generate between every two adjacent images is determined, the reference motion vector data corresponding to those inserted images can still be determined in the manner described above.
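The frame-count distribution described above can be sketched as follows (placing the remainder in the first gap is one arbitrary choice; the text allows any gap or the tail):

```python
def insert_counts_per_gap(m, n):
    """Distribute the m*n - m inserted frames over the m - 1 gaps of the
    first sequence; any remainder frames are assigned to the first gap here."""
    total = m * n - m
    gaps = m - 1
    base, rem = divmod(total, gaps)
    counts = [base] * gaps
    counts[0] += rem  # arbitrary placement choice for the remainder
    return counts
```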
Furthermore, the motion vector data corresponding to each frame of the first sequence of images can be stored in multiple ways.
As one way, the motion vector data corresponding to each frame can be stored in a data table. As another way, a texture (map) can be produced to carry the motion vector data corresponding to each frame, where the texture is an image with the same contour as the image in the sequence of images. For example, in Figure 10, the left image 101 is the base image containing the sequence of images, and the right image 102 is the corresponding texture; the contour of the object in the texture is identical to that of the object in the sequence of images. The base image contains the image content corresponding to each of the multiple actions into which the desired dynamic effect is decomposed. In Figure 10, one box 60 corresponds to one action of the dynamic effect; in this case, when the sequence images are generated based on the base image of Figure 10, one box 60 corresponds to one frame in the sequence images.
The base image in Figure 10 represents a dynamic effect in which a smaller star moves from the left side of the picture and another, larger star moves from the bottom of the picture towards the top. By cutting the content of the base image, a first sequence of images including multiple frames can be obtained. Moreover, the value of a specified color channel of each pixel in the texture represents the motion vector of the corresponding pixel in the first sequence of images.
For example, as shown in Figure 11, pixel 70 in one frame of the sequence of images corresponds to pixel 80 in the texture, so the value of the specified color channel of pixel 80 represents the motion vector of pixel 70. Similarly, pixel 71 in one frame of the sequence of images corresponds to pixel 81 in the texture, so the value of the specified color channel of pixel 81 represents the motion vector of pixel 71. The specified color channels may be the red channel and the green channel, where the red channel represents the pixel's displacement along the X axis and the green channel represents the pixel's displacement along the Y axis.
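Reading motion vectors out of such a texture might be sketched as follows (the texel layout as a grid of (r, g, b) tuples is an assumption for this sketch):

```python
def decode_motion_vectors(texture):
    """texture: 2D grid of (r, g, b) texels. The red channel carries the
    X-axis displacement of the corresponding sequence-image pixel and the
    green channel the Y-axis displacement; blue is ignored."""
    return [[(r, g) for (r, g, b) in row] for row in texture]
```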
S230: Generate, based on the first sequence of images and the reference motion vector data, inserted images matching the slowdown multiple.
It can be understood that in the embodiments of this application, the motion vector data corresponding to each frame of the first sequence of images represents the displacement of the target pixels in that frame relative to the target pixels in the previous frame. After the reference motion vector data corresponding to the inserted images to be generated is determined, the inserted images can be generated by moving pixels.
As shown in Figure 12, as one approach, generating the inserted images whose number matches the slowdown multiple based on the first sequence of images and the reference motion vector data includes:
S231: In the process of generating the inserted image corresponding to the current reference motion vector data, acquire the target image corresponding to the current reference motion vector data, the target image being the image corresponding to the initial movement position of the target pixels corresponding to the current reference motion vector data.
S232: Move the target pixels in the target image corresponding to each item of reference motion vector data according to the corresponding reference motion vector data, to obtain the inserted image corresponding to each item of reference motion vector data, and take the set of inserted images corresponding to the current reference motion vector data as the inserted images matching the slowdown multiple.
For example, in Figure 13, image 43 and image 44 are two adjacent frames forming one interval in the first sequence, and images 57, 58, and 59 are the inserted images to be generated. When generating image 57, the reference motion vector data corresponding to image 57 is the current reference motion vector data. Furthermore, based on the foregoing, the reference motion vector data corresponding to each of images 57, 58, and 59 is determined from the displacement represented by the motion vector data of image 44 and the slowdown multiple, and the displacement represented by image 44's motion vector data is relative to image 43. Thus, in the process of generating image 57, the target image corresponding to image 57's reference motion vector data is image 43. The displacement represented by image 58's reference motion vector data is relative to image 57, so when generating image 58, image 58's reference motion vector data is the current reference motion vector data and the target image corresponding to it is image 57. Correspondingly, when generating image 59, the target image corresponding to the current reference motion vector data is image 58.
Alternatively, after the reference motion vector data between every two adjacent images of the first sequence is generated, the displacement of the target pixels represented by each item of reference motion vector data relative to the earlier-displayed of the two adjacent images can already be determined. For example, as shown in Figure 14, the motion vector of pixel 11 among the target pixels in image 44 is (0.5, 0.5); with a slowdown multiple of 3, the motion vectors corresponding to pixel 11's positions in images 57, 58, and 59 are all (0.16, 0.16), which means the displacement of pixel 11's position in image 58 relative to its position in image 43 is 0.32, and the displacement of pixel 11's position in image 59 relative to its position in image 43 is 0.48. Correspondingly, the target image for the reference motion vector data within each pair of adjacent images can all be configured as the earlier-displayed image of the pair.
After the target image is determined in the foregoing manner, the inserted image can be generated by moving pixels. For example, continuing with Figure 14, motion vector (0, 0) is the motion vector corresponding to pixel 11 in image 43. When generating image 57, moving pixel 11 by 0.16 in the X-axis and Y-axis directions indicated by the arrow yields pixel 11's position in image 57; in this way the positions to which the target pixels of image 43 move in image 57 can be obtained, thereby generating the image content corresponding to image 57. When generating image 58, the positions of the target pixels can be moved further on the basis of the generated image 57, i.e., moved another 0.16 pixel units in the direction of the arrow, or the target pixels can be moved by 0.32 on the basis of image 43, to generate the image content corresponding to image 58.
As described above, motion vector data represents the displacement of target pixels in an image, and in the embodiments a texture with the same image content can be produced to carry the motion vector data corresponding to each frame, where the value of a specified color channel of each pixel in the texture represents the motion vector of the corresponding pixel in the first sequence of images. In this case, the unit of the obtained displacement is the unit of the color channel value, so when determining the actual movement distance of a target pixel in the image, the obtained displacement must be multiplied by a conversion factor to obtain the movement distance in pixel units. For example, if the conversion factor is s, then when the computed displacement of a pixel is 0.25, the actual movement distance of that pixel in the image is 0.25 × s pixel units. In the embodiments of this application, the conversion factor can be determined by the bit depth of the image: the greater the bit depth, the greater the corresponding conversion factor. The bit depth represents the number of bits occupied by each color channel; for example, for a 24-bit image, each color channel (red, green, and blue) occupies 8 bits.
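Putting the pixel movement of S232 and the conversion factor together, a hypothetical sketch (the sparse dict layouts for the frame and the reference vectors are assumptions, not part of the application):

```python
def generate_inserted_frame(target_frame, ref_vectors, s=1.0):
    """Shift each pixel of the target image by its reference motion vector,
    scaled by the conversion factor s (channel units -> pixel units).

    target_frame: dict mapping (x, y) position -> pixel value.
    ref_vectors:  dict mapping (x, y) position -> (dx, dy) in channel units.
    """
    inserted = {}
    for (x, y), value in target_frame.items():
        dx, dy = ref_vectors.get((x, y), (0.0, 0.0))
        inserted[(x + dx * s, y + dy * s)] = value
    return inserted
```

For example, with a channel-unit displacement of 0.25 and conversion factor s = 4, a pixel moves 1 pixel unit along each axis.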
S240: Insert the inserted images into the playback sequence of the first sequence of images to obtain a second sequence of images.
S250: Play the second sequence of images.
The image processing method provided in this embodiment of the application makes it possible, after the first sequence of images has been produced, when the dynamic effect it represents needs to be slowed down while maintaining the visual effect, to generate reference motion vector data whose number matches the slowdown multiple according to the motion vector data and the slowdown multiple, then move the target pixels based on the first sequence of images and the reference motion vector data to generate inserted images whose number matches the slowdown multiple, which are inserted into the playback sequence of the first sequence of images to obtain a second sequence of images containing more images, so that no more images need to be produced with development tools for insertion into the first image sequence, reducing the production cost.
Referring to Figure 15, Figure 15 is a flowchart of an image processing method applied to a game client proposed in an embodiment of this application. The method includes:
S310: Display a configuration interface.
In the embodiments of this application, the slowdown multiple may be configured in advance by developers during the development phase of the first sequence of images, or configured by the user in the game client in which the first sequence of images is displayed. For example, the image processing method provided in the embodiments of this application may be executed by a game client, and the game client may be provided with a configuration interface; the game client can then display the configuration interface after detecting the user's trigger operation, so that the user can configure the slowdown multiple they need.
S320: Acquire the slowdown multiple based on the configuration interface.
In the embodiments of this application, the configuration interface may let the user input the slowdown multiple in multiple ways.
As one way, the configuration interface includes a first control and a second control slidable on the first control, and acquiring the dynamic-effect parameter entered in the configuration interface includes: acquiring the position of the second control after it slides in response to a touch operation, and taking the value corresponding to that position as the slowdown multiple. For example, if the value corresponding to the position is 2, the slowdown multiple obtained is 2.
For example, as shown in Figure 16, Figure 16 shows a game interface 99 of the game client; in game interface 99 a configuration interface 98 can be displayed in response to the user's operation, and configuration interface 98 displays a first control 97 and a second control 96 slidable on first control 97. The user can drag second control 96 to slide it along first control 97; different positions on first control 97 correspond to different values, and the game client can detect the position of second control 96 after it slides in response to the touch operation and take the value corresponding to that position as the input slowdown multiple.
As another way, the game client can directly display an input box and a confirmation control in the configuration interface, so that the user can manually type the desired slowdown multiple into the input box and then tap the confirmation control. When the game client detects that the confirmation control has been touched, it takes the data acquired from the input box as the slowdown multiple.
The electronic device on which the game client is installed allocates a dedicated file storage area to the game client to store the files corresponding to the game client. In this case, the file storage area correspondingly stores a configuration file of the game client, which can record the client's configuration information — for example, the configured picture resolution, the configured sound effects, and the configured operation mode — and, correspondingly, the slowdown multiple can also be configured there. After acquiring the slowdown multiple, the game client can also store it in the configuration file to update the previously stored slowdown multiple. For example, if the slowdown multiple originally stored in the configuration file is 2, then when a newly entered slowdown multiple is detected and recognized to differ from 2 — say it is 3 — the game client updates the slowdown multiple in the configuration file from 2 to 3. In this case, the slowdown multiple stored in the configuration file can be understood as the slowdown multiple actually used in the process of generating inserted images.
S330: Acquire a first sequence of images and the motion vector data corresponding to each frame of the first sequence of images.
S340: Generate, based on the motion vector data, the first sequence of images, and the slowdown multiple, inserted images matching the slowdown multiple.
S350: Insert the inserted images into the playback sequence of the first sequence of images to obtain a second sequence of images.
S360: Play the second sequence of images.
In the embodiments of this application, S310 to S320 and S350 may be executed in different phases. It can be understood that the first sequence of images represents a dynamic effect — for example, the dynamic effect of an object flying from one side to the other. The content of the dynamic effect represented by the second sequence of images is the same as that represented by the first sequence of images; the main difference is the playback duration of the dynamic effect represented by the second sequence of images. If the first sequence of images and its per-frame motion vector data were acquired to generate the second sequence of images only when the dynamic effect corresponding to the second sequence of images needs to be loaded and displayed, the display of the second sequence of images could feel delayed.
It can be understood that each frame of the first sequence of images has been produced in advance, but the inserted images newly generated for slowdown must be rendered in real time based on the first sequence of images, and this rendering process consumes processing resources (CPU or GPU computing resources) of the electronic device on which the game client runs. If processing resources are tight at the time, the rendering efficiency of the inserted images is low, causing the display of the second sequence of images to feel delayed.
As a way to improve this, when the game client detects that the slowdown multiple has changed, even if it is not currently in a scene that needs to load the second sequence of images, it can generate in advance the inserted images whose number matches the slowdown multiple based on the motion vector data, the first sequence of images, and the slowdown multiple, and thereby generate the second sequence of images in advance, so that when the second sequence of images needs to be displayed it can be played directly, improving the real-time performance of dynamic-effect playback.
Besides reducing production costs and shortening the time consumed in producing dynamic effects, in this embodiment of the application the slowdown multiple can be entered in real time in the configuration interface, so that the playback speed of the dynamic effect to be shown by the first sequence of images can be controlled in real time, improving the interactivity of the dynamic-effect display process and thus also the user experience.
Referring to FIG. 17, FIG. 17 is a flowchart of an image processing method applied to a game client according to an embodiment of this application. The method includes:
S410: Acquire the slow-down multiple based on an external data interface.
Acquiring the slow-down multiple based on an external data interface can be understood as receiving a slow-down multiple transmitted through the external data interface.
In one implementation, a plug-in may run on the electronic device hosting the game client to centrally configure the configuration information of multiple game clients, so that the user does not have to configure each client's configuration information separately. This reduces repetitive user operations and improves the user experience.
For example, suppose the game client performing the image processing method provided in the embodiments of this application is client A, and the electronic device hosting client A also has client B and client C installed, together with a plug-in A that can communicate with clients A, B, and C via inter-process communication. In this case, the user can configure the game interface resolution, game sound effects, the slow-down multiple of dynamic effects, and so on in plug-in A. After obtaining these settings, plug-in A can treat them as configuration information and synchronize it via inter-process communication to the external data interfaces of clients A, B, and C, so that each client obtains the configuration information transmitted by plug-in A through its own external data interface and updates the configuration information in its configuration file accordingly.
S420: Acquire the first image sequence and the motion vector data corresponding to each frame of the first image sequence.
S430: Generate, based on the motion vector data, the first image sequence, and the slow-down multiple, insertion images matching the slow-down multiple.
S440: Insert the insertion images into the playback sequence of the first image sequence to obtain a second image sequence.
S450: Play the second image sequence.
With the image processing method provided in this embodiment of this application, in addition to reducing production cost and shortening the time required to produce dynamic effects, the slow-down multiple can be input in real time by an external program through the external data interface corresponding to the game client, so that the playback speed of the dynamic effect to be shown by the first image sequence can be controlled in real time. This improves the flexibility of configuring the slow-down multiple during dynamic-effect display and thereby the user experience.
In summary, the image processing method provided in the embodiments of this application may be implemented by a terminal or a server alone, or by a terminal and a server in cooperation. Cooperative implementation by a terminal and a server is described below:
The terminal collects a request to play the first image sequence in slow motion (including the slow-down multiple and the first image sequence) and sends the request to the server. After receiving the request, the server acquires the first image sequence and the motion vector data; generates, based on the motion vector data, the first image sequence, and the slow-down multiple, a number of insertion images matching the slow-down multiple; inserts the insertion images into the playback sequence of the first image sequence to obtain a second image sequence; and sends the second image sequence to the terminal for playback. In this way, after the first image sequence has been produced, when the dynamic effect it represents needs to be slowed down while preserving the visual effect, insertion images produced from the motion vector data corresponding to each frame of the first image sequence are inserted into its playback sequence to obtain a second image sequence containing more images. No additional images for insertion need to be produced with a development tool, which reduces production cost, shortens the time required to produce dynamic effects, lowers development difficulty, and improves development efficiency.
Referring to FIG. 18, FIG. 18 is a flowchart of an image processing method according to an embodiment of this application. The method includes:
S510: Generate raw material.
Before the first image sequence is generated, a base image sequence may be produced in advance as raw material, and some images may then be selected from the base image sequence as the first image sequence.
S520: Define the number and precision of the sequence images to be generated.
Here, precision can be understood as image resolution.
For a base sprite sheet with a fixed total number of pixels, obtaining more sequence frames from it means that the resolution of each frame will be correspondingly lower. For example, for a base sheet of 2048×2048 pixels in total, if it contains frames arranged 8×8 (64 frames in total), each frame is 256×256 pixels; if it contains frames arranged 16×16 (256 frames), each frame is 128×128 pixels.
In this case, the resolution of a single frame can be defined according to actual requirements, which in turn determines the number of frames included in the first image sequence.
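The relationship between sheet size, grid layout, and per-frame resolution described above can be checked with a short calculation (a sketch; the 2048×2048 sheet and the square 8×8 / 16×16 layouts are the figures given in the text):

```python
def frame_resolution(sheet_px: int, grid: int) -> tuple[int, int]:
    """Per-frame edge length in pixels and total frame count for a square
    sprite sheet of sheet_px x sheet_px divided into a grid x grid layout."""
    frame_px = sheet_px // grid
    return frame_px, grid * grid

# 8x8 layout of a 2048x2048 sheet  -> 256x256 frames, 64 frames
# 16x16 layout of the same sheet   -> 128x128 frames, 256 frames
```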
S530: Generate the first image sequence in a first image generation environment.
S540: Generate, in the first image generation environment, the motion vector data corresponding to each frame of the first image sequence.
It should be noted that, in the embodiments of this application, S540 may be performed after S530 or in parallel with S530.
S550: Input the first image sequence and the motion vector data into a second image generation environment, and output material data carrying the first image sequence, the motion vector data, and the slow-down multiple.
In one implementation, a scalar parameter may be configured in the material data (material) to store the slow-down multiple, so as to produce a material template. The scalar parameter makes it easy for an external program to identify the parameter storing the slow-down multiple and thus to access or modify it. Furthermore, a dynamic parameter may also be configured in the material data; the dynamic parameter allows the Cascade (particle editor) system in the second image generation environment to invoke the slow-down multiple while a particle effect is being produced.
It should be noted that, when a scalar parameter is configured in the material data, the slow-down multiple in the material data can be updated by updating the scalar parameter that holds it, thereby enabling real-time, dynamic control of the playback pacing of the dynamic effect represented by the first image sequence.
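Accessing the slow-down multiple through a named scalar parameter can be sketched as follows (a toy illustration; the parameter name `SlowDownMultiple` and the dictionary-based material representation are hypothetical stand-ins for the material template described in the text):

```python
class Material:
    """Toy stand-in for material data carrying named scalar parameters."""

    def __init__(self, scalar_params: dict):
        self.scalar_params = dict(scalar_params)

    def set_scalar(self, name: str, value: float) -> None:
        # Only known parameters may be updated, mimicking a fixed template.
        if name not in self.scalar_params:
            raise KeyError(name)
        self.scalar_params[name] = value

    def get_scalar(self, name: str) -> float:
        return self.scalar_params[name]
```

An external program that knows the parameter name can then read or rewrite the multiple without touching the rest of the material.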
S560: Read the material data to acquire the first image sequence and the motion vector data corresponding to each frame of the first image sequence.
S570: Read the material data to acquire the slow-down multiple.
S580: Generate, based on the motion vector data, the first image sequence, and the slow-down multiple, insertion images matching the slow-down multiple.
S590: Insert the insertion images into the playback sequence of the first image sequence to obtain a second image sequence.
S591: Play the second image sequence.
The following describes, with reference to the accompanying drawings, how the image processing method provided in the embodiments of this application processes a dynamic effect in a game scene to obtain a super-slow-motion dynamic effect.
Referring to FIG. 19, FIG. 19 shows a game scene of a real-time combat game, in which the explosion effect of a thrown bomb is played (at the position indicated by the arrow in the figure).
For example, an ordinary explosion effect usually has a lower frame rate than a super-slow-motion explosion effect, which means the ordinary effect finishes playing in a very short time, whereas the super-slow-motion effect can have a longer playback duration so that the overall explosion evolves more gently.
As shown in FIG. 20, the top row of images is the first image sequence representing an ordinary explosion effect, and the bottom row shows some images of the second image sequence obtained with the image processing method provided in the embodiments of this application. For example, the image at time t1 and the image at time t2 in the top row can be taken as one interval, and the insertion images within that interval can be obtained based on the scheme provided in the foregoing embodiments; similarly, insertion images can be obtained for the interval between the images at t2 and t3, and for the interval between the images at t3 and t4, thereby obtaining the second image sequence representing the super-slow-motion explosion effect.
The top row shows the ordinary explosion effect at times t1, t2, t3, and t4 after the explosion starts, and the bottom row shows the super-slow-motion explosion effect at the same times. As can be seen from FIG. 20, the ordinary explosion effect represented by the first image sequence is almost over by t4, whereas the super-slow-motion explosion effect represented by the second image sequence still differs little at t4 from its state at t1. In this case, the super-slow-motion explosion effect can pass through more frames of transition after t4 before reaching the state shown by the ordinary explosion effect at t4 in the top row, making the overall explosion evolve more smoothly. The additional frames may include insertion images obtained based on the scheme provided in the embodiments of this application.
It should be noted that, as shown in FIG. 21, if the first image sequence includes 64 frames (each small cell represents one frame), FIG. 21 compares, for a slow-down multiple of 5, the number of images that must be produced under the scheme provided in the embodiments of this application against the number required in the related art. Dashed box 94 shows the number of images to be produced under the scheme of this application: because the insertion images are computed from motion vector data, no additional images need to be produced with a development tool, so only the 64 frames of the first image sequence originally produced with the development tool and the texture map corresponding to each frame (also 64 frames) are needed to achieve the 5× slow-down visual effect. Dashed box 95 shows the number of images required in the related art for a slow-down multiple of 5: since every frame must be produced with a development tool there, the number of frames required at 5× slow-down is clearly larger than the number of images in dashed box 94.
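The saving in authored images can be illustrated with a short calculation (a sketch; it assumes, as FIG. 21 suggests, that the related art authors every frame of the slowed-down sequence, while the motion-vector scheme needs only the original frames plus one motion-vector texture per frame):

```python
def authored_assets(frames: int, multiple: int, use_motion_vectors: bool) -> int:
    """Number of hand-authored images needed for a `multiple`-times slow-down."""
    if use_motion_vectors:
        # original frames plus one motion-vector texture per frame
        return frames + frames
    # related art: every frame of the slowed-down sequence is authored
    return frames * multiple

# 64-frame sequence at 5x slow-down:
#   motion-vector scheme: 64 frames + 64 textures = 128 authored assets
#   related art:          64 * 5 = 320 authored frames
```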
It should be noted that, in one implementation of the embodiments of this application, S510 to S590 may be performed by a computer on which the first image generation environment and the second image generation environment are installed, and S591 may be performed by the game client.
For example, the game client may start reading the material data to acquire the first image sequence and the motion vector data corresponding to each of its frames when the dynamic effect needs to be loaded. Suppose the dynamic effect represented by the first image sequence is the explosion of a bomb in the game scene; when the explosion effect needs to be shown, the game client can read the material data corresponding to the explosion effect to acquire the corresponding first image sequence and the motion vector data corresponding to each of its frames.
Alternatively, the game client may start acquiring the first image sequence and the per-frame motion vector data at startup, which can further improve the display efficiency of the second image sequence so that it can be displayed more promptly, reducing the display latency of the dynamic effect. Taking the bomb explosion represented by the first image sequence as an example again: the explosion effect is triggered only after a player throws a bomb. To allow the effect to be displayed without delay, the game client can begin acquiring the first image sequence and the per-frame motion vector data as early as the resource-loading stage of startup or the user-login stage, and then generate, based on the motion vector data, the first image sequence, and the slow-down multiple, a number of insertion images matching the slow-down multiple, so that generation of the second image sequence is completed in advance, before entering a scene in which the dynamic effect must be played.
In another implementation, S510 to S550 may be performed by a computer on which the first and second image generation environments are installed, S560 to S580 by a server, and S591 by the game client. In this mode, the generated material data can be stored in the server in advance. When the game client needs to play the dynamic effect corresponding to the second image sequence, the server reads the material data to obtain the motion vector data, the first image sequence, and the slow-down multiple; generates a number of insertion images matching the slow-down multiple; inserts them into the playback sequence of the first image sequence to obtain the second image sequence; and then sends the second image sequence to the game client for display.
It should be noted that, when S591 is performed, the second image sequence may be displayed by the client that generated it, or sent by that client to a target client for display. For example, suppose electronic devices A and B are in the same local area network and both have the same game client installed. If the game client on device A triggers a certain game scene first and generates the second image sequence corresponding to dynamic effect A in that scene, device A can send the generated second image sequence to the game client on device B for storage. When the game client on device B later enters that scene and needs to load dynamic effect A, it can directly read the second image sequence previously sent by device A, without regenerating it.
The image processing method provided in this embodiment of this application makes it possible, after the first image sequence has been produced, to slow down the dynamic effect it represents while preserving the visual effect: insertion images produced from the per-frame motion vector data are inserted into the playback sequence of the first image sequence to obtain a second image sequence containing more images, so that no additional images for insertion need to be produced with a development tool, reducing production cost. Moreover, in this embodiment, the slow-down multiple, the first image sequence, and the motion vector data can be configured in the generated material data, so that the first image sequence, the motion vector data, and the slow-down multiple can subsequently be obtained together simply by reading the material data, improving the efficiency of data acquisition.
Referring to FIG. 22, an embodiment of this application provides an image processing apparatus 600, including:
a data acquisition unit 610, configured to acquire a first image sequence and motion vector data corresponding to each frame of the first image sequence;
an image generation unit 620, configured to generate, based on the motion vector data, the first image sequence, and a slow-down multiple, insertion images matching the slow-down multiple, wherein the number of insertion images corresponds to the slow-down multiple;
an image configuration unit 630, configured to insert the insertion images into the playback sequence of the first image sequence to obtain a second image sequence; and
an image playback unit 640, configured to play the second image sequence.
With the image processing apparatus provided in this embodiment of this application, after the first image sequence and the motion vector data are acquired, a number of insertion images matching the slow-down multiple are generated based on the motion vector data, the first image sequence, and the slow-down multiple, and inserted into the playback sequence of the first image sequence to obtain a second image sequence. Thus, after the first image sequence has been produced, when the dynamic effect it represents needs to be slowed down while preserving the visual effect, insertion images produced from the per-frame motion vector data can be inserted into the playback sequence of the first image sequence to obtain a second image sequence containing more images, so that no additional images for insertion need to be produced with a development tool, reducing production cost.
In one implementation, as shown in FIG. 23, the image generation unit 620 includes: a vector data generation subunit 621, configured to generate, based on the motion vector data and the slow-down multiple, reference motion vector data matching the slow-down multiple, wherein the quantity of the reference motion vector data corresponds to the slow-down multiple; and an image generation execution subunit 622, configured to generate, based on the first image sequence and the reference motion vector data, insertion images matching the slow-down multiple.
In this implementation, the vector data generation subunit 621 is further configured to: acquire a target displacement, the target displacement being the displacement represented by the motion vector data corresponding to a later-displayed image, wherein the later-displayed image is, of every two adjacent images in the first image sequence, the image played later; acquire the ratio of the target displacement to the slow-down multiple; obtain, based on the slow-down multiple, the number of insertion images between every two adjacent images; and use the ratio as the reference motion vector data corresponding to the insertion images between every two adjacent images, the reference motion vector data corresponding to the insertion images being the reference motion vector data matching the slow-down multiple.
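The computation performed by the vector data generation subunit can be sketched as follows (a simplified illustration: the per-pixel displacement is reduced to a single 2-D vector per frame for clarity, and the number of insertion images per interval is assumed to be multiple − 1 so that each original interval is split into `multiple` steps — both simplifications of the per-pixel motion vector texture described in the text):

```python
def reference_motion_vectors(displacement: tuple, multiple: int):
    """Split the displacement represented by the later-displayed frame's
    motion vector data into the per-step reference vector for insertion
    images, and report how many insertion images the interval receives."""
    dx, dy = displacement
    # ratio of the target displacement to the slow-down multiple
    step = (dx / multiple, dy / multiple)
    # assumed: multiple - 1 frames inserted between two adjacent frames
    n_inserted = multiple - 1
    return step, n_inserted
```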
The image generation execution subunit 622 is further configured to: in the process of generating the insertion image corresponding to the current reference motion vector data, acquire a target image corresponding to the current reference motion vector data, the target image being the image corresponding to the initial positions of the target pixels corresponding to the current reference motion vector data; move the target pixels in the target image corresponding to each piece of current reference motion vector data according to the corresponding reference motion vector data, to obtain the insertion image corresponding to each piece of current reference motion vector data; and use the set of insertion images corresponding to the current reference motion vector data as the insertion images matching the slow-down multiple.
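The pixel movement performed by the image generation execution subunit can be sketched as follows (a simplified illustration using a uniform integer pixel shift on a grayscale frame stored as nested lists; a real implementation would sample a per-pixel motion vector texture, typically on the GPU):

```python
def shift_pixels(frame, vector):
    """Produce an insertion image by moving every pixel of `frame`
    by the reference motion vector (dx, dy); vacated cells become 0."""
    dx, dy = vector
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                out[ny][nx] = frame[y][x]
    return out
```

Applying the per-step reference vectors one after another yields the set of insertion images for one interval between two adjacent frames.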
In one implementation, as shown in FIG. 24, the apparatus 600 further includes: a parameter configuration unit 650, configured to display a configuration interface, acquire the slow-down multiple input in the configuration interface, and use the input slow-down multiple as the slow-down multiple. For example, the configuration interface includes a first control and a second control slidable on the first control; in this case, the parameter configuration unit 650 is further configured to acquire the position of the second control after it slides in response to a touch operation, and use the slow-down multiple corresponding to that position as the input slow-down multiple.
In another implementation, the parameter configuration unit 650 is further configured to acquire a slow-down multiple input by an external application through an external data interface, and use the transmitted slow-down multiple as the slow-down multiple.
In one implementation, as shown in FIG. 25, the apparatus 600 further includes: an initial image generation unit 660, configured to generate the first image sequence in a first image generation environment; generate, in the first image generation environment, the motion vector data corresponding to each frame of the first image sequence; and input the first image sequence and the motion vector data into a second image generation environment, outputting material data carrying the first image sequence, the motion vector data, and the slow-down multiple. In this implementation, the data acquisition unit 610 is further configured to read the material data to acquire the first image sequence and the motion vector data corresponding to each of its frames, and to read the material data to acquire the slow-down multiple.
It should be noted that the apparatus embodiments in this application correspond to the foregoing method embodiments; for the specific principles of the apparatus embodiments, reference may be made to the foregoing method embodiments, and details are not repeated here.
An electronic device provided in this application is described below with reference to FIG. 26.
Referring to FIG. 26, based on the foregoing image processing method, an embodiment of this application further provides an electronic device 200 capable of performing the foregoing image processing method. The electronic device 200 includes a processor 102, a memory 104, and a network module 106. The memory 104 stores a program capable of executing the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104.
The processor 102 may include one or more processing cores. The processor 102 connects the various parts of the electronic device 200 using various interfaces and lines, and performs the various functions of the electronic device 200 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and invoking data stored in the memory 104. Optionally, the processor 102 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 102 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, applications, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may alternatively not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
The memory 104 may include random access memory (RAM) or read-only memory (ROM). The memory 104 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the following method embodiments, and the like. The data storage area may also store data created by the terminal 100 during use (such as a phone book, audio and video data, and chat records).
The network module 106 is configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices, for example an audio playback device. The network module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, and memory. The network module 106 can communicate with various networks such as the Internet, an intranet, or a wireless network, or communicate with other devices through a wireless network. The wireless network may include a cellular telephone network, a wireless local area network, or a metropolitan area network. For example, the network module 106 can exchange information with a base station.
Referring to FIG. 27, which shows a structural block diagram of a computer-readable storage medium provided by an embodiment of this application, the computer-readable medium 1100 stores program code that can be invoked by a processor to perform the methods described in the foregoing method embodiments.
The computer-readable storage medium 1100 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 1100 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 1100 has storage space for program code 1110 that performs any of the method steps described above. The program code can be read from, or written into, one or more computer program products. The program code 1110 may, for example, be compressed in an appropriate form.
In summary, with the image processing method, apparatus, electronic device, and storage medium provided in this application, after the first image sequence and the motion vector data are acquired, a number of insertion images matching the slow-down multiple are generated based on the motion vector data, the first image sequence, and the slow-down multiple, and inserted into the playback sequence of the first image sequence to obtain a second image sequence. Thus, after the first image sequence has been produced, when the dynamic effect it represents needs to be slowed down while preserving the visual effect, insertion images produced from the per-frame motion vector data can be inserted into the playback sequence of the first image sequence to obtain a second image sequence containing more images, so that no additional images for insertion need to be produced with a development tool, reducing production cost and shortening the time required to produce dynamic effects.
Moreover, because the number of images that must be produced with a development tool in the pre-production stage is reduced, the memory space required is also reduced, improving storage-space utilization.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (15)

  1. An image processing method, performed by an electronic device, the method comprising:
    acquiring a first image sequence and motion vector data corresponding to each frame of the first image sequence;
    generating, based on the motion vector data, the first image sequence, and a slow-down multiple, insertion images matching the slow-down multiple, wherein the number of insertion images corresponds to the slow-down multiple;
    inserting the insertion images into a playback sequence of the first image sequence to obtain a second image sequence; and
    playing the second image sequence.
  2. The method according to claim 1, wherein the generating, based on the motion vector data, the first image sequence, and a slow-down multiple, insertion images matching the slow-down multiple comprises:
    generating, based on the motion vector data and the slow-down multiple, reference motion vector data matching the slow-down multiple, wherein the quantity of the reference motion vector data corresponds to the slow-down multiple; and
    generating, based on the first image sequence and the reference motion vector data, insertion images matching the slow-down multiple.
  3. The method according to claim 2, wherein the generating, based on the motion vector data and the slow-down multiple, reference motion vector data matching the slow-down multiple comprises:
    acquiring a target displacement, the target displacement being a displacement represented by motion vector data corresponding to a later-displayed image, wherein the later-displayed image is, of every two adjacent images in the first image sequence, the image played later;
    acquiring a ratio of the target displacement to the slow-down multiple;
    obtaining, based on the slow-down multiple, the number of insertion images between every two adjacent images; and
    using the ratio as reference motion vector data corresponding to the insertion images between every two adjacent images, and using the reference motion vector data corresponding to the insertion images as the reference motion vector data matching the slow-down multiple.
  4. The method according to claim 2, wherein the generating, based on the first image sequence and the reference motion vector data, insertion images matching the slow-down multiple comprises:
    in a process of generating an insertion image corresponding to current reference motion vector data, acquiring a target image corresponding to the current reference motion vector data, the target image being an image corresponding to initial positions of target pixels corresponding to the current reference motion vector data;
    moving the target pixels in the target image corresponding to each piece of the current reference motion vector data according to the corresponding reference motion vector data, to obtain the insertion image corresponding to each piece of the current reference motion vector data; and
    using a set of the insertion images corresponding to the current reference motion vector data as the insertion images matching the slow-down multiple.
  5. The method according to claim 1, wherein before the generating, based on the motion vector data, the first image sequence, and a slow-down multiple, insertion images matching the slow-down multiple, the method further comprises:
    displaying a configuration interface; and
    acquiring the slow-down multiple based on the configuration interface.
  6. The method according to claim 5, wherein the configuration interface comprises a first control and a second control slidable relative to the first control, and
    the acquiring the slow-down multiple based on the configuration interface comprises:
    acquiring a position of the second control after the second control slides in response to a touch operation; and
    using a numerical value corresponding to the position as the slow-down multiple.
  7. The method according to claim 1, wherein before the generating, based on the motion vector data, the first image sequence, and a slow-down multiple, insertion images matching the slow-down multiple, the method further comprises:
    acquiring the slow-down multiple based on an external data interface.
  8. The method according to any one of claims 1 to 7, wherein the motion vector data is a motion vector carried by a texture map whose image content corresponds to the first image sequence, and a value of a specified color channel of each pixel in the texture map is used to represent the motion vector of the corresponding pixel in the first image sequence.
  9. An image processing apparatus, comprising:
    a data acquisition unit, configured to acquire a first image sequence and motion vector data corresponding to each frame of the first image sequence;
    an image generation unit, configured to generate, based on the motion vector data, the first image sequence, and a slow-down multiple, insertion images matching the slow-down multiple, wherein the number of insertion images corresponds to the slow-down multiple;
    an image configuration unit, configured to insert the insertion images into a playback sequence of the first image sequence to obtain a second image sequence; and
    an image playback unit, configured to play the second image sequence.
  10. The apparatus according to claim 9, wherein the image generation unit comprises:
    a vector data generation subunit, configured to generate, based on the motion vector data and the slow-down multiple, reference motion vector data matching the slow-down multiple, wherein the quantity of the reference motion vector data corresponds to the slow-down multiple; and
    an image generation execution subunit, configured to generate, based on the first image sequence and the reference motion vector data, insertion images matching the slow-down multiple.
  11. The apparatus according to claim 10, wherein the vector data generation subunit is further configured to: acquire a target displacement, the target displacement being a displacement represented by motion vector data corresponding to a later-displayed image, wherein the later-displayed image is, of every two adjacent images in the first image sequence, the image played later; acquire a ratio of the target displacement to the slow-down multiple; obtain, based on the slow-down multiple, the number of insertion images between every two adjacent images; use the ratio as reference motion vector data corresponding to the insertion images between every two adjacent images; and use the reference motion vector data corresponding to the insertion images as the reference motion vector data matching the slow-down multiple.
  12. The apparatus according to claim 10, wherein the image generation execution subunit is further configured to: in a process of generating an insertion image corresponding to current reference motion vector data, acquire a target image corresponding to the current reference motion vector data, the target image being an image corresponding to initial positions of target pixels corresponding to the current reference motion vector data; move the target pixels in the target image corresponding to each piece of the current reference motion vector data according to the corresponding reference motion vector data, to obtain the insertion image corresponding to each piece of the current reference motion vector data; and use a set of the insertion images corresponding to the current reference motion vector data as the insertion images matching the slow-down multiple.
  13. The apparatus according to claim 9, wherein the apparatus further comprises:
    a parameter configuration unit, configured to display a configuration interface and acquire the slow-down multiple based on the configuration interface.
  14. An electronic device, comprising a processor and a memory, wherein one or more programs are stored in the memory and configured to be executed by the processor to implement the method according to any one of claims 1 to 8.
  15. A computer-readable storage medium storing program code, wherein the program code, when run by a processor, performs the method according to any one of claims 1 to 8.
PCT/CN2020/125078 2020-01-10 2020-10-30 Image processing method and apparatus, electronic device and storage medium WO2021139359A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/718,318 US11989814B2 (en) 2020-01-10 2022-04-12 Image processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010028338.3A 2020-01-10 2020-01-10 Image processing method and apparatus, electronic device and storage medium
CN202010028338.3 2020-01-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/718,318 Continuation US11989814B2 (en) 2020-01-10 2022-04-12 Image processing method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021139359A1 true WO2021139359A1 (zh) 2021-07-15

Family

ID=70953982

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125078 WO2021139359A1 (zh) 2020-01-10 2020-10-30 图像处理方法、装置、电子设备及存储介质

Country Status (3)

Country Link
US (1) US11989814B2 (zh)
CN (1) CN111260760B (zh)
WO (1) WO2021139359A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820895A (zh) * 2022-03-11 2022-07-29 Alipay (Hangzhou) Information Technology Co., Ltd. Animation data processing method, apparatus, device and system

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN111260760B (zh) 2020-01-10 2023-06-20 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, electronic device and storage medium
CN113509731B (zh) * 2021-05-19 2024-06-04 NetEase (Hangzhou) Network Co., Ltd. Fluid model processing method and apparatus, electronic device, and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
WO2013112517A1 (en) * 2012-01-25 2013-08-01 Sony Corporation Applying motion blur to only select objects in video
CN105100692A (zh) * 2014-05-14 2015-11-25 Hangzhou Hikvision System Technology Co., Ltd. Video playing method and device
CN105120337A (zh) * 2015-08-28 2015-12-02 Xiaomi Inc. Video special effect processing method and apparatus, and terminal device
CN111260760A (zh) 2020-01-10 2020-06-09 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, electronic device and storage medium

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US6442203B1 (en) * 1999-11-05 2002-08-27 Demografx System and method for motion compensation and frame rate conversion
JP5155462B2 (ja) * 2011-08-17 2013-03-06 Square Enix Holdings Co., Ltd. Video distribution server, video playback device, control method, program, and recording medium
US8692933B1 (en) * 2011-10-20 2014-04-08 Marvell International Ltd. Method and apparatus for buffering anchor frames in motion compensation systems
JP2015056695A (ja) * 2013-09-10 2015-03-23 Toshiba Corporation Video playback device
CN107707899B (zh) * 2017-10-19 2019-05-10 ThunderSoft Co., Ltd. Multi-view image processing method and apparatus including a moving object, and electronic device
CN107808388B (zh) * 2017-10-19 2021-10-12 ThunderSoft Co., Ltd. Image processing method and apparatus including a moving object, and electronic device
CN110121114B (zh) * 2018-02-07 2021-08-27 Huawei Technologies Co., Ltd. Method for sending stream data and data sending device
CN114513671B (zh) 2018-04-02 2024-04-09 Huawei Technologies Co., Ltd. Video encoding and decoding method and apparatus
CN109803175B (zh) * 2019-03-12 2021-03-26 BOE Technology Group Co., Ltd. Video processing method and apparatus, device, and storage medium
CN110460907B (zh) * 2019-08-16 2021-04-13 Vivo Mobile Communication Co., Ltd. Video playback control method and terminal


Also Published As

Publication number Publication date
US11989814B2 (en) 2024-05-21
US20220237848A1 (en) 2022-07-28
CN111260760B (zh) 2023-06-20
CN111260760A (zh) 2020-06-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20912958

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.11.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20912958

Country of ref document: EP

Kind code of ref document: A1