CN111107427A - Image processing method and related product


Info

Publication number
CN111107427A
Authority
CN
China
Prior art keywords
frame
frame image
target object
image
iou
Prior art date
Legal status
Granted
Application number
CN201911142691.8A
Other languages
Chinese (zh)
Other versions
CN111107427B (en)
Inventor
任康
方攀
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911142691.8A
Publication of CN111107427A
Application granted
Publication of CN111107427B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281 Processing of video elementary streams involving reformatting operations by altering the temporal resolution, e.g. by frame skipping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127 Conversion of standards by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter

Abstract

The embodiment of the application discloses an image processing method and a related product. The method includes: acquiring a first frame image and a second frame image of a target object, where the first frame image and the second frame image are adjacent frame images; acquiring a first coordinate of the target object according to the first frame image, and acquiring a second coordinate of the target object according to the second frame image; calculating an intersection-over-union (IoU) of the target object according to the first coordinate and the second coordinate; and when the IoU is smaller than a preset threshold, generating an intermediate frame according to the first frame image and the second frame image. Whether frame interpolation is needed is determined by calculating the IoU of the target object between the preceding and following frames; when the IoU is smaller than the preset threshold, an intermediate frame is interpolated, which reduces the span between the two frame images and improves the continuity of the multi-frame sequence.

Description

Image processing method and related product
Technical Field
The present application relates to the field of video processing technologies, and in particular, to an image processing method and a related product.
Background
At present, in the process of tracking a dynamic object, the position of the object is generally updated in real time. The frame rate of object tracking depends on the callback frequency of the tracking algorithm as well as on the frame rate of the video stream taken from the camera device, which is generally 30 frames per second (FPS). If the performance of the handset is insufficient, the frame rate may fall below 30 FPS. When the frame rate is too low and the camera device moves rapidly, the position of the dynamic object spans a large distance between updates, and the updated position produces a visible stutter, which degrades the overall presentation of the video.
Disclosure of Invention
The embodiment of the application provides an image processing method and a related product.
In a first aspect, an embodiment of the present application provides an image processing method applied to a terminal, the method including:
acquiring a first frame image and a second frame image of a target object, where the first frame image and the second frame image are adjacent frame images;
acquiring a first coordinate of the target object according to the first frame image, and acquiring a second coordinate of the target object according to the second frame image;
calculating an intersection-over-union (IoU) of the target object according to the first coordinate and the second coordinate; and
when the IoU is smaller than a preset threshold, generating an intermediate frame according to the first frame image and the second frame image.
In a second aspect, an embodiment of the present application provides a terminal, the terminal including:
an acquisition unit, configured to acquire a first frame image and a second frame image of a target object, where the first frame image and the second frame image are adjacent frame images;
the acquisition unit being further configured to acquire a first coordinate of the target object according to the first frame image, and to acquire a second coordinate of the target object according to the second frame image;
a calculating unit, configured to calculate an intersection-over-union (IoU) of the target object according to the first coordinate and the second coordinate; and
a generating unit, configured to generate an intermediate frame according to the first frame image and the second frame image when the IoU is smaller than a preset threshold.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for executing the steps in the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a data interface, where the processor reads instructions stored on a memory through the data interface, and performs a method according to the first aspect to the third aspect and any optional implementation manner described above.
In a fifth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the first aspect of the present application.
In a sixth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
In the embodiment of the present application, after the first frame image and the second frame image of the target object, that is, adjacent frame images, are acquired, the intersection-over-union (IoU) of the target object is calculated according to the coordinates of the target object in the adjacent frames. When the IoU is smaller than the preset threshold, that is, the overlap of the two frames is too low, the distance spanned by the target object between the two frames is too large, so an intermediate frame is generated according to the first frame image and the second frame image. The frame interpolation reduces the span between the preceding and following frame images, compensates for a tracking-algorithm callback frequency that is too low, and improves the continuity of the multi-frame sequence.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 2A is a schematic flowchart of an image processing method according to an embodiment of the present application;
Fig. 2B is a schematic diagram of the intersection-over-union of a target object according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of functional units of a terminal according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
At present, in the process of tracking a dynamic object, the position of the object is generally updated in real time. The frame rate of object tracking depends on the callback frequency of the tracking algorithm as well as on the frame rate of the video stream taken from the camera device, which is generally 30 frames per second (FPS). If the performance of the handset is insufficient, the frame rate may fall below 30 FPS. When the frame rate is too low and the camera device moves rapidly, the position of the dynamic object spans a large distance between updates, and the updated position produces a visible stutter.
In order to solve the above problem, an embodiment of the present application provides an image processing method, which is applied to a terminal. The following detailed description is made with reference to the accompanying drawings.
First, please refer to the schematic structural diagram of the terminal 100 shown in fig. 1, which includes an image processing apparatus 110, an image display apparatus 120, and a communication interface 130.
The terminal may include, for example, a distributed storage server, a traditional server, a mass storage system, a desktop computer, a notebook computer, a tablet computer, a palmtop computer, a smart phone, a portable digital player, a smart watch, a smart band, or any other device with a communication function; the terminal is not limited to these examples.
The technical solutions of the embodiments of the present application may be implemented, by way of example, based on the architecture illustrated in fig. 1 or a modified architecture thereof.
Referring to fig. 2A, fig. 2A is a schematic flowchart of a method for image processing according to an embodiment of the present application, where the method may include, but is not limited to, the following steps:
201. Acquiring a first frame image and a second frame image of a target object, wherein the first frame image and the second frame image are adjacent frame images;
Specifically, for convenience of acquiring the first frame image and the second frame image of the target object, the target object may be marked in advance to obtain a mark ID corresponding to the target object, and the mark ID may be stored. The target object is a moving object, such as a running dog, a fast-moving electric toy, or a fast-moving person; there may be one target object, or two or more. When there is one target object, each frame image in the preview stream contains the target object, and the first frame image and the second frame image of the target object are acquired from the preview stream. The first frame image and the second frame image are adjacent frames; that is, the first frame is the current frame, and the second frame is the frame immediately before or after the first frame.
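As a concrete illustration of this step, the following is a minimal Python sketch of pairing adjacent frames of a marked target object pulled from a preview stream; the class and attribute names are illustrative assumptions, not taken from the patent.

```python
class AdjacentFramePairer:
    """Holds the most recent frame so each newly received frame forms an adjacent pair."""

    def __init__(self, mark_id):
        self.mark_id = mark_id  # stored mark ID of the tracked target object
        self._prev = None       # previously received frame

    def feed(self, frame):
        """Returns (first_frame, second_frame) once an adjacent pair exists, else None."""
        pair = (self._prev, frame) if self._prev is not None else None
        self._prev = frame
        return pair
```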
202. Acquiring a first coordinate of the target object according to the first frame image, and acquiring a second coordinate of the target object according to the second frame image;
Specifically, the target object in the first frame image is framed in the form of a target frame to obtain a first target frame, and a first coordinate of the target object is obtained from the first target frame; the target object in the second frame image is framed in the form of a target frame to obtain a second target frame, and a second coordinate of the target object is obtained from the second target frame. Each frame image is regarded as having its own coordinate system; for example, the first frame image and the second frame image may use the same coordinate system. The target frame of the target object in the first frame image corresponds to four vertex coordinates; the set of these four vertex coordinates may be called the first coordinate, or the coordinate of the center point of the target frame may be calculated from the four vertex coordinates and called the first coordinate. Similarly, the target frame corresponding to the target object in the second frame image also has four vertex coordinates; the set of these four vertex coordinates may be called the second coordinate, or the coordinate of the center point of that target frame may be calculated from them and called the second coordinate.
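For the center-point option described above, a small sketch, assuming a target frame given as four (x, y) vertices; the helper name is hypothetical:

```python
def center_of(vertices):
    # Average the four vertex coordinates of a target frame to get its center point.
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```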
203. Calculating an intersection-over-union (IoU) of the target object according to the first coordinate and the second coordinate;
Specifically, the Intersection-over-Union (IoU) is a concept used in target detection: it is the overlap rate between a generated candidate frame and the original labeled (ground-truth) frame, that is, the ratio of the area of their intersection to the area of their union. Here, the IoU of the candidate frame (the second target frame) and the original mark frame (the first target frame) may be determined based on the first coordinate and the second coordinate, either from the four vertex coordinates of the first target frame and the four vertex coordinates of the second target frame, or from the center point coordinates of the first target frame and the second target frame.
204. When the IoU is smaller than a preset threshold, generating an intermediate frame according to the first frame image and the second frame image.
Specifically, the preset threshold may be understood as a preset intersection-over-union value. When the calculated IoU is smaller than the preset threshold, the overlap between the first frame image and the second frame image is too low. This may occur because the target object moves too fast for the capture rate of the image capturing device to keep up with, so the first frame image and the second frame image would show broken or stuttering motion when displayed. Therefore, an intermediate frame is generated according to the first frame image and the second frame image, which improves the continuity of the video stream composed of the multi-frame images containing the target object and presents a smoother effect.
It can be seen that, in the embodiment of the present application, after the first frame image and the second frame image of the target object, that is, adjacent frame images, are acquired, the intersection-over-union (IoU) of the target object is calculated according to the coordinates of the target object in the adjacent frames. When the IoU is smaller than the preset threshold, that is, the overlap of the two frames is too low, the distance spanned by the target object between the two frames is too large, so an intermediate frame is generated according to the first frame image and the second frame image. The frame interpolation reduces the span between the preceding and following frame images and improves the overall continuity of the multi-frame sequence.
In one possible example, the calculating an intersection-over-union (IoU) of the target object according to the first coordinate and the second coordinate includes: calculating the area of the intersection region of the target object according to the first coordinate and the second coordinate; calculating the area of the union region of the target object according to the first coordinate and the second coordinate; and calculating the IoU according to the area of the intersection region and the area of the union region.
Specifically, as shown in fig. 2B, the four vertices of the first target frame are A1(A1x, A1y), B1(B1x, B1y), C1(C1x, C1y), and D1(D1x, D1y), and the four vertices of the second target frame are A2(A2x, A2y), B2(B2x, B2y), C2(C2x, C2y), and D2(D2x, D2y).
The length of the intersection region is C1x - A2x, and the width of the intersection region is A2y + C1y; the area of the intersection region equals the length of the intersection region multiplied by the width of the intersection region. Similarly, the area of the intersection region may also be calculated from the coordinates of other vertices. In addition, the area of the union region equals the area of the first target frame plus the area of the second target frame minus the area of the intersection region, and the intersection-over-union is the area of the intersection region divided by the area of the union region.
It can be seen that establishing the same coordinate system for different frames makes it convenient to calculate the intersection-over-union of the preceding and following frames, which can effectively improve the efficiency of image processing.
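As a concrete illustration of the computation just described, the following is a minimal Python sketch, assuming axis-aligned rectangular target frames given as (x_min, y_min, x_max, y_max) in a shared coordinate system; the function and parameter names are illustrative, not taken from the patent.

```python
def iou(box1, box2):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box1
    bx1, by1, bx2, by2 = box2

    # Intersection rectangle: overlap of the two coordinate ranges.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter_area = inter_w * inter_h

    # Union area = area of frame 1 + area of frame 2 - intersection area.
    area1 = (ax2 - ax1) * (ay2 - ay1)
    area2 = (bx2 - bx1) * (by2 - by1)
    union_area = area1 + area2 - inter_area

    return inter_area / union_area if union_area > 0 else 0.0
```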
In one possible example, the generating an intermediate frame from the first frame image and the second frame image includes: framing the target object in the first frame image in the form of a target frame to obtain a first target frame; framing the target object in the second frame image in the form of a target frame to obtain a second target frame; where the first coordinates are the four vertex coordinates of the first target frame and the second coordinates are the four vertex coordinates of the second target frame, calculating the four vertex coordinates of an intermediate target frame according to the four vertex coordinates of the first target frame and the four vertex coordinates of the second target frame, so as to obtain the position of the intermediate target frame; and generating the intermediate frame according to the position of the intermediate target frame and the background images of the first frame image and the second frame image.
Specifically, the first frame image and the second frame image are each regarded as a complete coordinate system, and the origins, scales, and axis directions of the two coordinate systems are exactly the same. The first target frame is a frame containing the target object in the first frame image, and may be a rectangle or any other polygon; the second target frame is a frame containing the target object in the second frame image, and may likewise be a rectangle or any other polygon. Taking a rectangle as an example, as shown in fig. 2B, the four vertex coordinates of the intermediate target frame are calculated; the intermediate target frame is a target frame located between the first target frame and the second target frame, and its four vertices are denoted A12, B12, C12, and D12, where the abscissa of A12 is (A1x + A2x)/2 and its ordinate is (A1y + A2y)/2; the abscissa of B12 is (B1x + B2x)/2 and its ordinate is (B1y + B2y)/2; the abscissa of C12 is (C1x + C2x)/2 and its ordinate is (C1y + C2y)/2; and the abscissa of D12 is (D1x + D2x)/2 and its ordinate is (D1y + D2y)/2. The four vertex coordinates of the intermediate target frame, that is, the position of the intermediate target frame, are thus determined. A background image is then taken from the first frame image and the second frame image and composited with the intermediate target frame to synthesize the intermediate frame.
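The vertex averaging above can be written compactly; the sketch below assumes each target frame is a list of four (x, y) vertices in matched order, with illustrative names:

```python
def midpoint_box(box1, box2):
    """Vertex-wise midpoint of two target frames, each a list of four (x, y) vertices."""
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for (x1, y1), (x2, y2) in zip(box1, box2)]

# The intermediate target frame lies halfway between the first and second frames.
first = [(0, 0), (4, 0), (4, 2), (0, 2)]
second = [(6, 0), (10, 0), (10, 2), (6, 2)]
print(midpoint_box(first, second))  # [(3.0, 0.0), (7.0, 0.0), (7.0, 2.0), (3.0, 2.0)]
```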
In one possible example, after the intermediate frame is generated from the first frame image and the second frame image, the method further includes: placing the first frame image, the intermediate frame image, and the second frame image in sequence into an image queue to be displayed; and sequentially acquiring image data from the image queue to be displayed, where: when the first frame image data is acquired, the first frame image is sent to the UI thread for a display operation according to a preset period, and the first frame image is displayed; when the intermediate frame image data is acquired, the intermediate frame image is sent to the UI thread for a display operation according to the preset period, and the intermediate frame image is displayed; and when the second frame image data is acquired, the second frame image is sent to the UI thread for a display operation according to the preset period, and the second frame image is displayed.
Specifically, the image queue to be displayed is a set of multi-frame images that are preprocessed and then stored in sequence according to the time order in which they were acquired. After the UI thread obtains an image from the image queue to be displayed, it performs a display refresh operation on the image corresponding to the image data. For example, while the display screen shows the first frame image, once the preset period (determined by the screen refresh rate) elapses, the frame data at the head of the queue to be displayed is taken and sent to the UI thread for refreshing; if the image at the head of the queue is the intermediate frame, the intermediate frame is taken and sent to the UI thread for refreshing, and so on.
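A minimal sketch of this queue-driven refresh follows, assuming a callback invoked once per preset period; the names and the single-consumer model are assumptions for illustration:

```python
import queue

display_queue = queue.Queue()  # image queue to be displayed

def enqueue_frames(frames):
    # e.g. frames = [first_frame, intermediate_frame, second_frame]
    for f in frames:
        display_queue.put(f)

def on_refresh_period(ui_refresh):
    """Called once per preset period; refreshes the UI with the head-of-queue frame."""
    try:
        frame = display_queue.get_nowait()
    except queue.Empty:
        return  # no image available: keep showing the current image
    ui_refresh(frame)  # hand the frame to the UI thread for the display operation
```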
In addition, before the first frame image, the intermediate frame image, and the second frame image are placed into the image queue to be displayed, a preprocessing operation is performed: the required number of intermediate frames is calculated. The number of intermediate frames to generate depends on the frame rate of the camera preview stream (or the callback frame rate of the Tracker algorithm). If the camera preview stream frame rate is A and the refresh rate supported by the screen is B, the required number of intermediate frames is (B - A)/A, where the refresh rate supported by a screen is the number of frames the screen refreshes per second. If A is 30 and B is 60, the number of intermediate frames is 1.
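The count formula can be checked with a one-line helper (the function name is hypothetical):

```python
def intermediate_frame_count(stream_fps, screen_hz):
    # (B - A) / A with A = preview-stream frame rate, B = screen refresh rate.
    return (screen_hz - stream_fps) // stream_fps

print(intermediate_frame_count(30, 60))  # 1 intermediate frame per adjacent pair
```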
In this way, the preprocessed multi-frame images (for which it has been judged whether an intermediate frame needs to be generated, with the intermediate frame generated when needed) are placed in the image queue to be displayed. At display time, the generated intermediate frame sits between the preceding and following frames in acquisition order, display refreshing proceeds in sequence, and the refresh stability of the display screen can be improved.
In one possible example, after the second frame image is displayed, the method further includes:
and when the acquired image data is empty, displaying the second frame image.
Specifically, the image data being empty means that no image data is acquired, that is, there is no image available for refreshing in the image queue to be displayed. At this time, the display screen continues to show the currently displayed image and stops refreshing; for example, if the second frame image is currently displayed, the display screen continues to display the second frame image. Alternatively, the second frame image may be kept on screen only after the acquired image data is empty two or more consecutive times; requiring two or more empty reads reduces the possibility of misjudgment.
It can be seen that an image queue to be displayed is added between image acquisition and the display operation, and when there is no image available for refreshing in the queue, the display screen continues to show the current image and stops the refresh operation, which effectively reduces the power consumption of the device.
In one possible example, after the intersection-over-union (IoU) of the target object is calculated, the method further includes: if the IoU is greater than the preset threshold and less than 1, placing the first frame image and the second frame image in sequence into the image queue to be displayed; and if the IoU is equal to 1, prohibiting the second frame image from entering the image queue to be displayed.
Specifically, an IoU greater than the preset threshold and less than 1 means that the intersection-over-union of the first frame image and the second frame image reflects that the two frame images are relatively coherent and that the camera device has effectively captured the movement of the target object, so frame interpolation is not required; the first frame image and the second frame image are placed directly into the image queue to be displayed. In addition, when the IoU is equal to 1, the positions of the target object in the first frame image and the second frame image are identical and no displacement of the target object has been captured, so the second frame image does not need to enter the image queue to be displayed; that is, the second frame image is prohibited from entering the image queue to be displayed.
It can be seen that whether the next frame image (that is, the second frame image) may enter the image queue to be displayed is determined according to the IoU of the first frame image and the second frame image, and when the IoU is equal to 1, that is, when the positions of the target object in the first frame image and the second frame image are identical, the second frame image is prohibited from entering the image queue to be displayed, which reduces resource consumption.
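Putting the enqueue rules together, a minimal sketch follows; the threshold value, the interpolate() helper, and all names are assumptions for illustration, and the first frame is assumed to be already enqueued:

```python
PRESET_THRESHOLD = 0.5  # assumed value; the patent does not fix a concrete number

def enqueue_second_frame(first, second, iou_value, enqueue, interpolate):
    if iou_value < PRESET_THRESHOLD:
        enqueue(interpolate(first, second))  # span too large: insert intermediate frame
        enqueue(second)
    elif iou_value < 1.0:
        enqueue(second)                      # motion captured coherently: no interpolation
    # iou_value == 1.0: identical position, so the second frame is not enqueued
```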
In one possible example, if the target object includes a first target object and a second target object, the method further includes: acquiring the coordinates of the first target object and the coordinates of the second target object in the first frame image; acquiring the coordinates of the first target object and the coordinates of the second target object in the second frame image; calculating the intersection-over-union of the first target object according to the coordinates of the first target object in the first frame image and the coordinates of the first target object in the second frame image, so as to obtain a first IoU; calculating the intersection-over-union of the second target object according to the coordinates of the second target object in the first frame image and the coordinates of the second target object in the second frame image, so as to obtain a second IoU; generating the intermediate frame from the first frame image and the second frame image when either or both of the first IoU and the second IoU are less than the preset threshold; and when the first IoU equals 1 and the second IoU equals 1, prohibiting the second frame image from entering the image queue to be displayed.
Specifically, if the target objects include a first target object and a second target object, that is, one frame image contains two (or three or more; two are taken as an example here) target objects, then, as described above, the first target object and the second target object may each be framed with a displayed target frame, and the IoU of each may be calculated separately. When either or both IoU values are less than the preset threshold, the intermediate frame is generated from the first frame image and the second frame image; when both are equal to 1, the second frame image is prohibited from entering the image queue to be displayed.
In this way, when there are two or more target objects, each object is evaluated separately while the frame is still handled as a whole, which can effectively improve the overall smoothness and reduce wasted resource consumption.
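A minimal sketch of the two-object rule, under the same assumed threshold (illustrative names): interpolate when any object's IoU falls below the threshold, and drop the second frame only when every object's IoU equals 1.

```python
def decide(ious, threshold=0.5):  # threshold is an assumed value
    if any(v < threshold for v in ious):
        return "interpolate"          # generate an intermediate frame
    if all(v == 1.0 for v in ious):
        return "drop_second_frame"    # no displacement captured for any object
    return "enqueue_both"             # coherent motion: enqueue both frames

print(decide([0.3, 0.9]))  # interpolate
print(decide([1.0, 1.0]))  # drop_second_frame
```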
Referring to fig. 3, fig. 3 is a schematic flowchart of another image processing method provided in an embodiment of the present application and applied to a terminal, consistent with the embodiment shown in fig. 2A. The method includes the following steps:
301. Acquiring a first frame image and a second frame image of a target object, wherein the first frame image and the second frame image are adjacent frame images;
302. Acquiring a first coordinate of the target object according to the first frame image, and acquiring a second coordinate of the target object according to the second frame image;
303. Calculating an intersection-over-union (IoU) of the target object according to the first coordinate and the second coordinate;
304. When the IoU is smaller than a preset threshold, generating an intermediate frame according to the first frame image and the second frame image;
the steps 301 and 304 are the same as the steps 201 and 204, and are not described herein again.
305. Placing the first frame image, the intermediate frame image, and the second frame image in sequence into an image queue to be displayed;
specifically, the to-be-displayed image queue is a multi-frame image set which is obtained by preprocessing acquired multi-frame images and then sequentially storing the preprocessed multi-frame images according to the acquired time sequence. And the images entering the image queue to be displayed are sorted according to a preset rule or according to an entering sequence, and are waiting for display refreshing according to the sequence.
306. Acquiring the image data in the image queue to be displayed, and when image data is acquired, displaying the image corresponding to the image data.
After the UI thread obtains the image data, it performs a display refresh operation on the image corresponding to the image data. For example, while the display screen shows the first frame image, once the preset period (determined by the screen refresh rate) elapses, the frame data at the head of the queue to be displayed is taken and sent to the UI thread for refreshing; if the image at the head of the queue is the intermediate frame, the intermediate frame is taken and sent to the UI thread for refreshing, and so on.
It can be seen that, before the images are displayed and refreshed, the preceding and following frame images are preprocessed: their continuity is judged from the IoU of the two frames, and when the IoU is smaller than the preset threshold, an intermediate frame is generated. After entering the queue to be displayed, the frames are displayed and refreshed in order, which effectively improves the fluency of the video effect presented by the multi-frame display.
Consistent with the embodiments shown in fig. 2A and fig. 3, please refer to fig. 4, where fig. 4 is a schematic structural diagram of a functional unit of a terminal 400 according to an embodiment of the present application, where the terminal 400 includes: an acquisition unit 410, a calculation unit 420, and a generation unit 430, wherein,
an acquisition unit 410, configured to acquire a first frame image and a second frame image of a target object, where the first frame image and the second frame image are adjacent frame images;
the acquisition unit 410 being further configured to acquire a first coordinate of the target object according to the first frame image, and to acquire a second coordinate of the target object according to the second frame image;
a calculating unit 420, configured to calculate an intersection-over-union (IoU) of the target object according to the first coordinate and the second coordinate; and
a generating unit 430, configured to generate an intermediate frame according to the first frame image and the second frame image when the IoU is smaller than a preset threshold.
In a possible example, in calculating the intersection-over-union (IoU) of the target object according to the first coordinate and the second coordinate, the calculating unit 420 is specifically configured to calculate the area of the intersection region of the target object according to the first coordinate and the second coordinate, calculate the area of the union region of the target object according to the first coordinate and the second coordinate, and calculate the IoU according to the area of the intersection region and the area of the union region.
In a possible example, in generating the intermediate frame according to the first frame image and the second frame image, the generating unit 430 is specifically configured to frame the target object in the first frame image in the form of a target frame to obtain a first target frame, and to frame the target object in the second frame image in the form of a target frame to obtain a second target frame; the first coordinates are the four vertex coordinates of the first target frame and the second coordinates are the four vertex coordinates of the second target frame, and the four vertex coordinates of the intermediate target frame are calculated from them to obtain the position of the intermediate target frame; the intermediate frame is then generated according to the position of the intermediate target frame and the background images of the first frame image and the second frame image.
In a possible example, the terminal 400 further includes a display unit 440. After the intermediate frame is generated according to the first frame image and the second frame image, the display unit 440 is configured to place the first frame image, the intermediate frame image, and the second frame image in sequence into an image queue to be displayed, and to sequentially acquire image data from the image queue to be displayed, where: when the first frame image data is acquired, the first frame image is sent to the UI thread for a display operation according to a preset period, and the first frame image is displayed; when the intermediate frame image data is acquired, the intermediate frame image is sent to the UI thread for a display operation according to the preset period, and the intermediate frame image is displayed; and when the second frame image data is acquired, the second frame image is sent to the UI thread for a display operation according to the preset period, and the second frame image is displayed.
In one possible example, after the second frame image is displayed, the display unit 440 is further configured to display the second frame image when the acquired image data is empty.
In a possible example, after the intersection-over-union IoU of the target object is calculated, the display unit 440 is configured to place the first frame image and the second frame image in sequence into the image queue to be displayed if the IoU is greater than the preset threshold and less than 1, and to prohibit the second frame image from entering the image queue to be displayed if the IoU is equal to 1.
In a possible example, if the target object includes a first target object and a second target object, the acquisition unit 410 is configured to obtain the coordinates of the first target object and the coordinates of the second target object in the first frame image, and is further configured to obtain the coordinates of the first target object and the coordinates of the second target object in the second frame image; the calculating unit 420 is configured to calculate the intersection-over-union of the first target object according to the coordinates of the first target object in the first frame image and the coordinates of the first target object in the second frame image, so as to obtain a first IoU, and is further configured to calculate the intersection-over-union of the second target object according to the coordinates of the second target object in the first frame image and the coordinates of the second target object in the second frame image, so as to obtain a second IoU; the generating unit 430 is configured to generate the intermediate frame according to the first frame image and the second frame image when either or both of the first IoU and the second IoU are smaller than the preset threshold; and the display unit 440 is configured to prohibit the second frame image from entering the image queue to be displayed when the first IoU equals 1 and the second IoU equals 1.
The terminal 400 may further include a storage unit 450 for storing program codes and data of the terminal. The obtaining unit 410 may be a transceiver, the calculating unit 420 and the generating unit 430 may be processors, the display unit 440 may be a display, and the storage unit 450 may be a memory.
It can be understood that, since the method embodiment and the terminal embodiment are different presentation forms of the same technical concept, the contents of the method embodiment portion in the present application should be synchronously adapted to the terminal embodiment portion, and are not described herein again.
Fig. 5 is a schematic structural diagram of a terminal 500 provided in an embodiment of the present application, and as shown in the figure, the terminal 500 includes a processor 510, a memory 520, a communication interface 530, and one or more programs 521, where the one or more programs 521 are stored in the memory 520 and configured to be executed by the processor 510, and the one or more programs 521 include instructions for performing the following steps:
acquiring a first frame image and a second frame image of a target object, where the first frame image and the second frame image are adjacent frame images; acquiring a first coordinate of the target object according to the first frame image, and acquiring a second coordinate of the target object according to the second frame image; calculating an intersection-over-union (IoU) of the target object according to the first coordinate and the second coordinate; and when the IoU is smaller than a preset threshold, generating an intermediate frame according to the first frame image and the second frame image.
In one possible example, in calculating the intersection-over-union (IoU) of the target object based on the first and second coordinates, the one or more programs 521 specifically include instructions for calculating the area of the intersection region of the target object based on the first and second coordinates, calculating the area of the union region of the target object based on the first and second coordinates, and calculating the IoU according to the area of the intersection region and the area of the union region.
In one possible example, in generating the intermediate frame from the first frame image and the second frame image, the one or more programs 521 specifically include instructions for: framing the target object in the first frame image in the form of a target frame to obtain a first target frame; framing the target object in the second frame image in the form of a target frame to obtain a second target frame; where the first coordinates are the four vertex coordinates of the first target frame and the second coordinates are the four vertex coordinates of the second target frame, calculating the four vertex coordinates of the intermediate target frame from them to obtain the position of the intermediate target frame; and generating the intermediate frame according to the position of the intermediate target frame and the background images of the first frame image and the second frame image.
In a possible example, after the intermediate frame is generated according to the first frame image and the second frame image, the one or more programs 521 specifically include instructions for: placing the first frame image, the intermediate frame image, and the second frame image in sequence into an image queue to be displayed; and sequentially acquiring image data from the image queue to be displayed, where: when the first frame image data is acquired, the first frame image is sent to the UI thread for a display operation according to a preset period, and the first frame image is displayed; when the intermediate frame image data is acquired, the intermediate frame image is sent to the UI thread for a display operation according to the preset period, and the intermediate frame image is displayed; and when the second frame image data is acquired, the second frame image is sent to the UI thread for a display operation according to the preset period, and the second frame image is displayed.
In one possible example, after the second frame image is displayed, the one or more programs 521 specifically include instructions for displaying the second frame image when the acquired image data is empty.
In a possible example, after the intersection-over-union IoU of the target object is calculated, the one or more programs 521 specifically include instructions for: if the IoU is greater than the preset threshold and less than 1, placing the first frame image and the second frame image in sequence into the image queue to be displayed; and if the IoU is equal to 1, prohibiting the second frame image from entering the image queue to be displayed.
In one possible example, if the target object includes a first target object and a second target object, the one or more programs 521 specifically include instructions for: obtaining the coordinates of the first target object and the coordinates of the second target object in the first frame image; obtaining the coordinates of the first target object and the coordinates of the second target object in the second frame image; calculating the intersection-over-union of the first target object according to the coordinates of the first target object in the first frame image and the coordinates of the first target object in the second frame image to obtain a first IoU; calculating the intersection-over-union of the second target object according to the coordinates of the second target object in the first frame image and the coordinates of the second target object in the second frame image to obtain a second IoU; generating the intermediate frame from the first frame image and the second frame image when either or both of the first IoU and the second IoU are less than the preset threshold; and when the first IoU equals 1 and the second IoU equals 1, prohibiting the second frame image from entering the image queue to be displayed.
Processor 510 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 510 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 510 may also include a main processor and a coprocessor, the main processor being a processor for processing data in the wake state, also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, processor 510 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 520 may include one or more computer-readable storage media, which may be non-transitory. Memory 520 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 520 is at least used for storing a computer program which, after being loaded and executed by the processor 510, can implement the relevant steps of the image processing method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 520 may also include an operating system, data, and the like, and the storage may be transient or permanent. The operating system may include Windows, Unix, Linux, and the like. The data may include, but is not limited to, terminal interaction data, terminal device signals, and the like.
In some embodiments, the electronic device 500 may further include an input-output interface, a communication interface, a power source, and a communication bus.
Those skilled in the art will appreciate that the disclosed configuration of the present embodiment is not intended to be limiting and may include more or fewer components.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to implement the above functions, the terminal includes corresponding hardware structures and/or software modules for performing each function. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the terminal may be divided into functional units according to the above method example; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a logical function division; there may be other division manners in actual implementation.
An embodiment of the present application provides a chip, where the chip includes a processor and a data interface, and the processor reads instructions stored on a memory through the data interface to perform a method according to the first aspect to the third aspect and any optional implementation manner described above.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods as described in the above method embodiments, and the computer includes a first electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising the electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative; for instance, the division of the above-described units is only one logical function division, and other division manners are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-mentioned method of the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
While the present disclosure has been described with reference to particular embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure.

Claims (10)

1. An image processing method, applied to a terminal, the method comprising:
acquiring a first frame image and a second frame image of the target object, wherein the first frame image and the second frame image are adjacent frame images;
acquiring a first coordinate of the target object according to the first frame image, and acquiring a second coordinate of the target object according to the second frame image;
calculating an intersection over union (IoU) of the target object according to the first coordinate and the second coordinate;
and when the IoU is smaller than a preset threshold value, generating an intermediate frame according to the first frame image and the second frame image.
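By way of illustration only (an editorial sketch, not part of the claimed subject matter), the decision flow of claim 1 can be written in a few lines of Python. The box format (x1, y1, x2, y2), the threshold value 0.5, and the two helper functions are all assumptions here; plausible helpers are sketched under claims 2 and 3 below.

def process_adjacent_frames(first_frame, second_frame, first_box, second_box,
                            compute_iou, generate_intermediate_frame,
                            iou_threshold=0.5):
    """Return a synthesized intermediate frame if the target object moved
    far enough between two adjacent frames, otherwise None."""
    iou = compute_iou(first_box, second_box)  # IoU from the two coordinates
    if iou < iou_threshold:  # the "preset threshold"; 0.5 is illustrative
        # Large displacement between frames: interpolate so motion looks smoother
        return generate_intermediate_frame(first_frame, second_frame,
                                           first_box, second_box)
    return None  # small displacement: no intermediate frame is needed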
2. The method of claim 1, wherein the calculating an intersection over union (IoU) of the target object according to the first coordinate and the second coordinate comprises:
calculating the area of the intersection region of the target object according to the first coordinate and the second coordinate;
calculating the area of the union region of the target object according to the first coordinate and the second coordinate;
and calculating the IoU according to the area of the intersection region and the area of the union region.
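A minimal Python sketch of the computation recited in claim 2, assuming axis-aligned boxes encoded as (x1, y1, x2, y2); the union area follows from the two box areas minus their intersection.

def compute_iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Area of the intersection region (zero if the boxes do not overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    intersection = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Area of the union region = sum of both box areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0

For example, compute_iou((0, 0, 10, 10), (5, 0, 15, 10)) yields an intersection area of 50, a union area of 150, and hence an IoU of 1/3.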
3. The method of claim 1, wherein the generating an intermediate frame according to the first frame image and the second frame image comprises:
frame-selecting the target object in the first frame image by means of a target frame, to obtain a first target frame;
frame-selecting the target object in the second frame image by means of a target frame, to obtain a second target frame;
wherein the first coordinates are the four vertex coordinates of the first target frame and the second coordinates are the four vertex coordinates of the second target frame, calculating four vertex coordinates of an intermediate target frame according to the four vertex coordinates of the first target frame and the four vertex coordinates of the second target frame, to obtain a position of the intermediate target frame;
and generating the intermediate frame according to the position of the intermediate target frame and the background images of the first frame image and the second frame image.
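An illustrative sketch of claim 3, under stated assumptions: frames are H x W x 3 NumPy uint8 arrays, boxes have integer (x1, y1, x2, y2) vertices within the image bounds, and the intermediate target frame is the per-coordinate midpoint of the two target frames. The midpoint rule and the averaged background are both assumptions, since the claim fixes neither the vertex calculation nor how the two backgrounds are combined.

import numpy as np

def generate_intermediate_frame(first_frame, second_frame, first_box, second_box):
    """Place the target at an interpolated box position over a blended background."""
    # Intermediate target frame: per-coordinate midpoint of the two boxes
    mid_box = [int(round((a + b) / 2)) for a, b in zip(first_box, second_box)]

    # Background: average the two frames as a stand-in for "the background
    # images of the first frame image and the second frame image"
    intermediate = ((first_frame.astype(np.float32)
                     + second_frame.astype(np.float32)) / 2).astype(np.uint8)

    # Paste the object crop from the first frame at the intermediate position,
    # clipped to the image bounds
    x1, y1 = mid_box[0], mid_box[1]
    sx1, sy1, sx2, sy2 = first_box
    crop = first_frame[sy1:sy2, sx1:sx2]
    h = min(crop.shape[0], intermediate.shape[0] - y1)
    w = min(crop.shape[1], intermediate.shape[1] - x1)
    intermediate[y1:y1 + h, x1:x1 + w] = crop[:h, :w]
    return intermediate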
4. The method of claim 1, wherein after the generating an intermediate frame according to the first frame image and the second frame image, the method further comprises:
sequentially placing the first frame image, the intermediate frame image, and the second frame image into an image queue to be displayed;
sequentially acquiring image data from the image queue to be displayed; when the first frame image data is acquired, sending the first frame image to a UI thread at a preset period for a display operation, and displaying the first frame image;
when the intermediate frame image data is acquired, sending the intermediate frame image to the UI thread at the preset period for the display operation, and displaying the intermediate frame image;
and when the second frame image data is acquired, sending the second frame image to the UI thread at the preset period for the display operation, and displaying the second frame image.
5. The method of claim 4, wherein after the displaying the second frame image, the method further comprises:
and when the acquired image data is empty, continuing to display the second frame image.
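Claims 4 and 5 together describe a periodic display loop. A hedged Python sketch follows; post_to_ui_thread and keep_running are hypothetical callbacks standing in for the platform's UI-thread dispatch and a stop condition, and the 60 Hz period is an illustrative "preset period" only.

import queue
import time

def display_loop(to_display, post_to_ui_thread, keep_running,
                 frame_period_s=1 / 60):
    """Drain the to-be-displayed queue at a fixed period, handing each
    frame to the UI thread; when the queue is empty, keep showing the
    last frame (claim 5)."""
    last_frame = None
    while keep_running():
        try:
            # Frames were enqueued in order: first, intermediate, second (claim 4)
            last_frame = to_display.get_nowait()
        except queue.Empty:
            # Claim 5: queue empty, so the last (second) frame stays on screen
            pass
        if last_frame is not None:
            post_to_ui_thread(last_frame)
        time.sleep(frame_period_s)  # one display operation per preset period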
6. The method of claim 1, wherein after the calculating the IoU of the target object, the method further comprises:
if the IoU is larger than the preset threshold and smaller than 1, sequentially placing the first frame image and the second frame image into an image queue to be displayed;
and if the IoU is equal to 1, prohibiting the second frame image from entering the image queue to be displayed.
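Claims 1, 4, and 6 jointly partition the IoU range into three cases. One possible Python rendering, with an illustrative threshold of 0.5 and a queue.Queue standing in for the image queue to be displayed:

def enqueue_by_iou(to_display, first_frame, second_frame, intermediate_frame,
                   iou, threshold=0.5):
    """Route frames into the to-be-displayed queue according to the IoU."""
    to_display.put(first_frame)
    if iou < threshold:
        # Claims 1 and 4: large motion, insert the synthesized intermediate frame
        to_display.put(intermediate_frame)
        to_display.put(second_frame)
    elif iou < 1.0:
        # Claim 6: moderate motion, display both frames without interpolation
        to_display.put(second_frame)
    # Claim 6, iou == 1: the object has not moved, so the redundant
    # second frame never enters the queue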
7. The method of claim 1, wherein if the target object comprises a first target object and a second target object, the method further comprises:
acquiring the coordinates of the first target object and the coordinates of the second target object in the first frame image;
acquiring the coordinates of the first target object and the coordinates of the second target object in the second frame image;
calculating the IoU of the first target object according to the coordinates of the first target object in the first frame image and the coordinates of the first target object in the second frame image, to obtain a first IoU;
calculating the IoU of the second target object according to the coordinates of the second target object in the first frame image and the coordinates of the second target object in the second frame image, to obtain a second IoU;
generating the intermediate frame according to the first frame image and the second frame image when either or both of the first IoU and the second IoU are smaller than the preset threshold;
and when the first IoU is equal to 1 and the second IoU is equal to 1, prohibiting the second frame image from entering the image queue to be displayed.
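Claim 7 extends the decision to two tracked objects. The following sketch generalizes to N objects for illustration; the function name and the string return values are invented, and compute_iou is the helper sketched under claim 2.

def multi_object_decision(first_boxes, second_boxes, compute_iou,
                          threshold=0.5):
    """Decide per claim 7: interpolate if any object moved a lot; drop the
    second frame if no object moved at all; otherwise display both frames.
    Returns 'interpolate', 'drop_second', or 'display_both'."""
    ious = [compute_iou(a, b) for a, b in zip(first_boxes, second_boxes)]
    if any(iou < threshold for iou in ious):
        # Either or both IoUs below the preset threshold: generate the
        # intermediate frame from the first and second frame images
        return "interpolate"
    if all(iou == 1.0 for iou in ious):
        # Every object is static: the second frame is redundant
        return "drop_second"
    return "display_both"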
8. A terminal, characterized in that the terminal comprises:
an acquisition unit, configured to acquire a first frame image and a second frame image of a target object, wherein the first frame image and the second frame image are adjacent frame images;
the acquisition unit is further configured to acquire a first coordinate of the target object according to the first frame image, and acquire a second coordinate of the target object according to the second frame image;
a calculating unit, configured to calculate an intersection over union (IoU) of the target object according to the first coordinate and the second coordinate;
and a generating unit, configured to generate an intermediate frame according to the first frame image and the second frame image when the IoU is smaller than a preset threshold value.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
CN201911142691.8A 2019-11-20 2019-11-20 Image processing method and related product Active CN111107427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911142691.8A CN111107427B (en) 2019-11-20 2019-11-20 Image processing method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911142691.8A CN111107427B (en) 2019-11-20 2019-11-20 Image processing method and related product

Publications (2)

Publication Number Publication Date
CN111107427A (en) 2020-05-05
CN111107427B CN111107427B (en) 2022-01-28

Family

ID=70421395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911142691.8A Active CN111107427B (en) 2019-11-20 2019-11-20 Image processing method and related product

Country Status (1)

Country Link
CN (1) CN111107427B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000062419A2 (en) * 1999-04-13 2000-10-19 Broadcom Corporation MOS variable gain amplifier
CN101772802A (en) * 2007-08-06 2010-07-07 THine Electronics, Inc. Image signal processing device
CN101808205A (en) * 2009-02-18 2010-08-18 Sony Ericsson Mobile Communications AB Moving image output method and moving image output apparatus
CN107277607A (en) * 2017-06-09 2017-10-20 Nubia Technology Co., Ltd. Screen recording method, terminal and computer-readable recording medium
EP3438776A1 (en) * 2017-08-04 2019-02-06 Bayerische Motoren Werke Aktiengesellschaft Method, apparatus and computer program for a vehicle
CN107608815A (en) * 2017-09-18 2018-01-19 AVIC Luoyang Electro-Optical Equipment Research Institute Multi-tile display processing and integrity cyclic-monitoring apparatus and method for an airborne display system
US20190102646A1 (en) * 2017-10-02 2019-04-04 Xnor.ai Inc. Image based object detection
CN108629284A (en) * 2017-10-28 2018-10-09 Shenzhen Aotong Technology Co., Ltd. Method and device for real-time face tracking and face pose selection based on an embedded vision system
CN109151474A (en) * 2018-08-23 2019-01-04 Fudan University Method for generating new video frames
KR101985712B1 (en) * 2018-12-13 2019-06-04 VIRNECT Co., Ltd. Machine vision based non-contact method for collecting instrument information and remote monitoring system using the same
CN110084831A (en) * 2019-04-23 2019-08-02 Jiangnan University Multi-Bernoulli video multi-object detection and tracking method based on YOLOv3

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEI Weizhuo: "Research on Real-Time Object Detection Based on YOLOv2", China Master's Theses Full-text Database *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898701A (en) * 2020-08-13 2020-11-06 NetEase (Hangzhou) Network Co., Ltd. Model training, frame image generation and frame interpolation methods, devices, equipment and media
CN111898701B (en) * 2020-08-13 2023-07-25 NetEase (Hangzhou) Network Co., Ltd. Model training, frame image generation and frame interpolation methods, devices, equipment and media
CN112596843A (en) * 2020-12-29 2021-04-02 Beijing Yuanxin Technology Co., Ltd. Image processing method and device, electronic equipment and computer-readable storage medium
CN112596843B (en) * 2020-12-29 2023-07-25 Beijing Yuanxin Technology Co., Ltd. Image processing method and device, electronic equipment and computer-readable storage medium
CN114286007A (en) * 2021-12-28 2022-04-05 Vivo Mobile Communication Co., Ltd. Image processing circuit, image processing method, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN111107427B (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN108010112B (en) Animation processing method, device and storage medium
CN111107427B (en) Image processing method and related product
US11587280B2 (en) Augmented reality-based display method and device, and storage medium
US11594000B2 (en) Augmented reality-based display method and device, and storage medium
CN109151966B (en) Terminal control method, terminal control device, terminal equipment and storage medium
CN109829964B (en) Web augmented reality rendering method and device
US20090262139A1 (en) Video image display device and video image display method
EP3917131A1 (en) Image deformation control method and device and hardware device
CN111432262B (en) Page video rendering method and device
EP4345756A1 (en) Special effect generation method and apparatus, electronic device and storage medium
CN116501210A (en) Display method, electronic equipment and storage medium
CN112884908A (en) Augmented reality-based display method, device, storage medium, and program product
CN113015007A (en) Video frame insertion method and device and electronic equipment
US20190371039A1 (en) Method and smart terminal for switching expression of smart terminal
CN112419456B (en) Special effect picture generation method and device
CN113132800A (en) Video processing method and device, video player, electronic equipment and readable medium
CN111918099A (en) Video processing method and device, electronic equipment and storage medium
US11763533B2 (en) Display method based on augmented reality, device, storage medium and program product
WO2021237736A1 (en) Image processing method, apparatus and system, and computer-readable storage medium
CN113034653A (en) Animation rendering method and device
CN114390333B (en) Interface content display method, device, equipment and storage medium
CN111475242B (en) Equipment control method and device, storage medium and electronic equipment
CN116309974B (en) Animation scene rendering method, system, electronic equipment and medium
CN108898652A Skin image setting method, device and electronic equipment
US20240135501A1 (en) Video generation method and apparatus, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant