CN116366793A - Image display method and device, storage medium and electronic device - Google Patents

Info

Publication number
CN116366793A
CN116366793A (application CN202310411864.1A)
Authority
CN
China
Prior art keywords
target
image
frame
video stream
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310411864.1A
Other languages
Chinese (zh)
Inventor
王源 (Wang Yuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202310411864.1A priority Critical patent/CN116366793A/en
Publication of CN116366793A publication Critical patent/CN116366793A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127: Conversion of standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • H04N7/0135: Conversion of standards processed at pixel level involving interpolation processes
    • H04N7/0145: Conversion of standards processed at pixel level involving interpolation processes, the interpolation being class adaptive, i.e. it uses the information of class which is determined for a pixel based upon certain characteristics of the neighbouring pixels

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An embodiment of the invention provides an image display method and device, a storage medium and an electronic device, wherein the method includes: acquiring, in real time, a video stream captured by a target device; predicting a target duration required by the target device to process, through a target algorithm, the current frame image included in the video stream; and, when the frame interval of the video stream is smaller than the target duration, performing a frame supplementing operation on the video stream and displaying the target image obtained after the frame supplementing operation in a target application, the target application being an application installed on the target device. The invention solves the problem of poor image display real-time performance in the related art and achieves the effect of improving the real-time performance of image display.

Description

Image display method and device, storage medium and electronic device
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to an image display method, an image display device, a storage medium and an electronic device.
Background
With the development of mobile platform devices and the spread of intelligent algorithms, intelligent and image-processing algorithms are increasingly applied on mobile terminals. Porting these algorithms to a mobile terminal requires a well-designed framework, because embedded mobile devices are limited in size and power consumption and have weak computing capability, while real scenes demand high real-time performance, stability and robustness. In the related art, however, when the system fluctuates and the algorithm's running time grows, the sub-thread cannot obtain the computation result in time, resulting in poor image display real-time performance.
In view of the above problems in the related art, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the invention provides an image display method, an image display device, a storage medium and an electronic device, which are used at least to solve the problem of poor image display real-time performance in the related art.
According to an embodiment of the present invention, there is provided a display method of an image, including: acquiring a video stream acquired by target equipment in real time; predicting a target time length required by the target equipment to process the current frame image included in the video stream through a target algorithm; and under the condition that the frame interval of the video stream is smaller than the target duration, carrying out frame supplementing operation on the video stream, and displaying a target image obtained after the frame supplementing operation in a target application, wherein the target application is an application installed in target equipment.
According to another embodiment of the present invention, there is provided a display device of an image including: the acquisition module is used for acquiring the video stream acquired by the target equipment in real time; the prediction module is used for predicting target duration required by the target equipment for processing the current frame image included in the video stream through a target algorithm; and the display module is used for carrying out frame supplementing operation on the video stream under the condition that the frame interval of the video stream is smaller than the target duration, and displaying a target image obtained after the frame supplementing operation in a target application, wherein the target application is an application installed in the target equipment.
According to a further embodiment of the invention, there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the method and the device, the video stream captured by the target device is acquired in real time, the target duration required by the target device to process the current frame image included in the video stream through the target algorithm is predicted, the frame supplementing operation is performed on the video stream when the frame interval of the video stream is smaller than the target duration, and the target image obtained after the frame supplementing operation is displayed in the target application, so that the problem of poor image display real-time performance in the related art is solved and the effect of improving the real-time performance of image display is achieved.
Drawings
Fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a display method of an image according to an embodiment of the present invention;
fig. 2 is a flowchart of a display method of an image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of image processing by a target algorithm according to an exemplary embodiment of the present invention;
FIG. 4 is a schematic diagram of a serial architecture according to an exemplary embodiment of the present invention;
FIG. 5 is a schematic diagram of a parallel architecture according to an exemplary embodiment of the present invention;
FIG. 6 is a sliding window schematic diagram of an exponentially weighted moving average according to an exemplary embodiment of the present invention;
FIG. 7 is a graph comparing results using a serial scheme and a moving weighted average scheme in accordance with an embodiment of the present invention;
fig. 8 is a block diagram of a display device of an image according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal according to an embodiment of the present invention. As shown in fig. 1, a mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, wherein the mobile terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a method for displaying an image in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above-described method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, there is provided a method for displaying an image, and fig. 2 is a flowchart of a method for displaying an image according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the steps of:
step S202, acquiring a video stream acquired by target equipment in real time;
step S204, predicting a target time length required by the target device to process the current frame image included in the video stream through a target algorithm;
and step S206, carrying out frame supplementing operation on the video stream under the condition that the frame interval of the video stream is smaller than the target duration, and displaying the target image obtained after the frame supplementing operation in a target application, wherein the target application is an application installed in the target equipment.
In the above embodiment, the target device may be a mobile phone, a tablet computer, an intelligent wearable device with an image acquisition function, a computer, or another intelligent terminal, in which an image capturing component is disposed so that images and videos can be acquired. A target application may further be installed on the target device; after the target application starts, it can invoke the built-in camera to acquire a video stream in real time, process the stream through a target algorithm to obtain processed images, and display the processed images in sequence in the target application. That is, the target application displays the processed version of each frame image of the video stream frame by frame, achieving the effect of displaying the processed video stream in the target application. The target algorithm may be a geometric transformation, such as an image flipping or image rotation algorithm, or a special-effect algorithm, such as detecting a person in an image and adding a head-wear feature or an action effect to that person. It should be noted that these algorithms are only exemplary and the present invention is not limited thereto: the target algorithm may be any algorithm whose input is an image and whose output is an image. A schematic view of the processing of an image by the target algorithm can be seen in fig. 3.
In the above embodiment, after the target device acquires the current frame image, the target duration required for the target algorithm to process it may be predicted, and the frame interval of the video stream acquired by the target device may be determined; for example, the sampling frequency of the target device may be determined and its reciprocal taken as the frame interval. The magnitude relationship between the target duration and the frame interval may then be determined, and the target image determined based on that relationship.
In the above embodiment, when the frame interval is smaller than the target duration, the system may be fluctuating, making the algorithm's performance unstable. When system fluctuation increases the algorithm's time consumption, the upper-layer application's frame-data callback finds no algorithm-processed data in the processed queue, and the data module must wait for the algorithm processing module, so the real-time frame rate drops. That is, when the frame interval is smaller than the target duration, the target device has already acquired the next frame image before the target algorithm finishes processing the current one. To preserve the real-time performance of image display in the target application, a frame supplementing operation may be performed and the image determined by that operation displayed in the target application. The frame supplementing operation may include determining the processed image closest to the current frame image in the processed queue as the target image, or determining the unprocessed current frame image as the target image.
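The frame supplementing decision described above can be sketched in Python; this is a minimal illustration, and the names (`choose_display_frame`, `processed_queue`) are hypothetical, not taken from the patent.

```python
def choose_display_frame(current_frame, processed_queue, frame_interval, predicted_duration):
    """Pick the image to display when a frame is due.

    processed_queue: list of already-processed images, oldest first.
    Returns an image to display immediately, or None if the caller
    should wait for the algorithm's output.
    """
    if frame_interval < predicted_duration:
        # The algorithm will not finish before the next frame arrives:
        # supplement a frame instead of stalling the display.
        if processed_queue:
            # Reuse the processed image closest to the current frame.
            return processed_queue[-1]
        # No processed data at all: fall back to the raw current frame.
        return current_frame
    # The algorithm keeps up; wait for its result as usual.
    return None
```

A caller would invoke this once per display callback, e.g. `choose_display_frame(frame, queue, 33, v_t)` with times in milliseconds.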
The main execution body of the above steps may be a background processor, but is not limited thereto.
According to the method and the device, the video stream captured by the target device is acquired in real time, the target duration required by the target device to process the current frame image included in the video stream through the target algorithm is predicted, the frame supplementing operation is performed on the video stream when the frame interval of the video stream is smaller than the target duration, and the target image obtained after the frame supplementing operation is displayed in the target application, thereby improving the real-time performance of image display.
In an exemplary embodiment, the method further comprises: determining a first image obtained by processing the current frame image by the target algorithm when the difference between the frame interval and the target duration is larger than a first threshold, and displaying the first image in the target application; and displaying, in the target application, a second image at the head of a processed queue when the difference between the frame interval and the target duration is smaller than a second threshold, wherein the processed queue is used for storing images processed by the target algorithm. In this embodiment, when the difference between the frame interval and the target duration is greater than the first threshold, that is, when the frame interval is far greater than the target duration, the target algorithm finishes processing the current frame image before the target device has acquired the next frame image, so the first image obtained by processing the current frame image may be displayed in the target application. In that case a serial architecture may be employed, i.e., when the algorithm's average time consumption (the target duration) is much smaller than the frame interval: the current data frame is acquired, sent to the algorithm, and the image output by the algorithm is returned directly. A schematic diagram of the serial architecture can be seen in fig. 4.
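The serial architecture (acquire → process → return, viable when the algorithm is far faster than the frame interval) can be sketched as a simple loop; the function names here are illustrative assumptions, not the patent's API.

```python
def serial_display_loop(capture_frame, algorithm, render):
    """Serial architecture sketch: process each frame inline.

    capture_frame: returns the next frame, or None when the stream ends.
    algorithm:     image-in / image-out target algorithm.
    render:        displays one processed image in the application.
    """
    while True:
        frame = capture_frame()      # acquire the current data frame
        if frame is None:
            break                    # stream exhausted
        render(algorithm(frame))     # process and return the result directly
```

Because each frame blocks until the algorithm finishes, this loop only sustains the frame rate when the algorithm's time is well under the frame interval, as the text states.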
In the above embodiment, when the difference between the frame interval and the target duration is smaller than the second threshold, the two are close in value; therefore, images already processed by the target algorithm may be stored in a processed queue, and the target application may display the images in the processed queue in sequence.
In one exemplary embodiment, after displaying the target image obtained after the frame supplementing operation in the target application, the method further includes: determining a target number of frame images supplemented by executing the frame supplementing operation; and performing a frame extracting operation on the video stream when the frame interval is larger than the target duration, wherein the frame extracting operation includes deleting third images, the third images are images obtained by processing fourth images through the target algorithm, the fourth images are images acquired after the current frame image, and the number of images included in the third images is equal to the target number. In this embodiment, when the system fluctuates, the algorithm execution time grows, which may leave no processed data in the processed queue, so frame supplementing is required. To preserve the real-time performance of the frames, when the predicted execution time later drops below the required frame interval, a frame extracting operation is needed. Denote the interval between two frames as T and the predicted execution time as V_t. When a frame callback is needed and there is no data in the processed queue: if V_t < T, wait for the data; otherwise supplement one frame and increment the supplemented-frame count K (K+1). Later, when V_t < T and K > 0, one frame is extracted.
After the frame supplementing operation is performed, the target number of supplemented frame images can be determined. In later detection, if the frame interval is determined to be larger than the target duration, the frame extracting operation can be performed on the video stream, wherein the operation includes deleting third images stored in the processed queue, the number of deleted third images being equal to the supplemented target number.
In the above embodiment, the frame extracting operation may instead include deleting images in the video stream that were acquired after the current frame image, the number of deleted images being equal to the target number of supplemented frame images. This variant deletes images from the video stream directly rather than images already processed by the target algorithm, which reduces the processing pressure on the target algorithm and improves the processing speed.
In the above embodiment, when the deleted third images, or the deleted images acquired after the current frame image in the video stream, comprise multiple frames, those frames may be adjacent or non-adjacent; choosing non-adjacent frames prevents visible frame skipping in the picture displayed in the target application.
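The supplement/extract bookkeeping described in the preceding paragraphs — wait when the prediction V_t is below the frame interval T, supplement and count K otherwise, and extract while K > 0 once the system is fast again — can be sketched as follows. This is an illustrative Python reading of the scheme, not the patent's implementation; class and method names are assumptions.

```python
class FrameBalancer:
    """Bookkeeping for the frame supplement/extract policy (sketch)."""

    def __init__(self, frame_interval):
        self.T = frame_interval   # interval T between two frames
        self.K = 0                # count of supplemented, uncompensated frames

    def on_frame_callback(self, predicted_time, queue_empty):
        """Decide the action when the upper layer requests a frame.

        predicted_time: V_t, predicted algorithm execution time.
        queue_empty:    True if the processed queue has no data.
        """
        if queue_empty:
            if predicted_time < self.T:
                return "wait"        # the result will arrive in time
            self.K += 1
            return "supplement"      # repeat a frame to hold the frame rate
        if predicted_time < self.T and self.K > 0:
            self.K -= 1
            return "extract"         # drop one frame to restore real time
        return "display"             # normal case: show a processed frame
```

One balancer instance per video stream suffices; the counter K ensures exactly as many frames are extracted as were supplemented, as the text requires.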
In an exemplary embodiment, displaying the target image obtained after the frame-filling operation in the target application includes: determining a previous frame image of the current frame image included in the video stream; determining a fifth image obtained by processing the previous frame image through the target algorithm; the fifth image is determined as the target image. In this embodiment, when the frame is required to be supplemented, a fifth image obtained by processing a previous frame image of the current frame image by the target algorithm may be determined as the target image. I.e., the image included in the processed queue that is closest to the current frame image may be determined, and the closest image may be determined as the target image.
In an exemplary embodiment, a highest-similarity image, that is, the image with the highest similarity to the current frame image included in the video stream, may also be determined; the highest-similarity image is processed by the target algorithm to obtain a processed image, and the processed image is displayed in the target application.
In an exemplary embodiment, the method further comprises: storing image frames of the video stream acquired in real time into a to-be-processed queue when the difference between the frame interval and the target duration is smaller than the second threshold; controlling the target algorithm to take image frames from the to-be-processed queue for processing and storing the processed images into the processed queue; and controlling the target application to display the images stored in the processed queue in sequence. In this embodiment, when the difference between the frame interval and the target duration is smaller than the second threshold, that is, when the algorithm's average time consumption is slightly smaller than the frame interval, a parallel architecture may be adopted; a schematic diagram of the parallel architecture is shown in fig. 5. As shown in fig. 5, the parallel architecture may include a data acquisition module and an algorithm processing module. The data module acquires the frame data currently to be processed and sends it to the algorithm processing module, then acquires processed data frames from the algorithm processing module and returns them to the upper-layer application. The algorithm processing module constructs a to-be-processed queue and a processed queue: each acquired data frame is placed at the tail of the to-be-processed queue, the algorithm module continuously takes frame data from the head of the to-be-processed queue for processing, and the processed data is placed into the processed queue. When the data processing module needs to call back image data, one frame of image is taken out of the processed queue.
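The parallel architecture with a to-be-processed queue and a processed queue can be sketched with standard Python threading primitives; module and function names here are assumptions for illustration only.

```python
import queue
import threading

def start_parallel_pipeline(capture_frame, algorithm, num_frames):
    """Parallel architecture sketch: a data module feeds a to-be-processed
    queue while an algorithm thread moves frames to a processed queue."""
    to_process = queue.Queue()   # frames enqueued at the tail by the data module
    processed = queue.Queue()    # results the upper layer calls back from

    def algorithm_worker():
        while True:
            frame = to_process.get()        # take frame from the head of the queue
            if frame is None:               # sentinel: no more frames
                break
            processed.put(algorithm(frame)) # store result in the processed queue

    worker = threading.Thread(target=algorithm_worker, daemon=True)
    worker.start()
    for _ in range(num_frames):
        to_process.put(capture_frame())     # data module enqueues current frame
    to_process.put(None)                    # signal end of stream
    worker.join()
    return processed
```

With a single worker thread, frame order is preserved end to end; the two queues decouple acquisition from processing so a momentarily slow algorithm does not stall the data module.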
In one exemplary embodiment, predicting the target duration required by the target device to process, through the target algorithm, the current frame image included in the video stream includes: determining a sliding window of an exponentially weighted moving average; determining the target moment corresponding to the previous frame image of the current frame image in the sliding window; determining a first exponentially weighted moving average corresponding to the previous frame image and a first weight corresponding to that average; and determining the target duration based on the target moment, the first exponentially weighted moving average, and the first weight. In this embodiment, the target duration may be estimated from an exponentially weighted moving average (EWMA), which estimates the local mean of a variable so that each update of the variable is related to its history over a period of time. The method assigns exponentially decaying weights to past observations: the weighting coefficient decreases with distance from the current moment and increases the closer an observation is to it, so past data has a gradually diminishing influence and the weight assigned to sufficiently old data converges to 0. It can be viewed approximately as predicting the latest value from a window of data whose weights are effectively non-zero; the closer a data point is to the current moment, the greater its influence on the predicted value. A schematic of the sliding window for an exponentially weighted moving average is shown in fig. 6.
In the above embodiment, the time consumed by running the algorithm, that is, the target duration, may be tracked by the exponentially weighted moving average to reflect how long the current system takes to execute the algorithm, and the current execution time is predicted from the time the system spent executing the algorithm at previous moments.
In one exemplary embodiment, determining the target duration based on the target moment, the first exponentially weighted moving average, and the first weight includes: raising the first weight to the power of the target moment to obtain a first value; determining the difference between a first constant and the first value to obtain a second value; and determining the ratio of the first exponentially weighted moving average to the second value as the target duration. In this embodiment, the first constant may be 1 and the first weight may be an attenuation weight β with value range [0, 1). β is a hyperparameter; adjusting it changes the window size over which earlier data affects the current frame's predicted time. The smaller β is, the smaller the window and the larger the fluctuation of the mean curve; the larger β is, the larger the window and the smoother the mean curve.
In the above embodiment, the target duration may be expressed as V_t = v_{t-1} / (1 - β^(t-1)), where v_{t-1} denotes the first exponentially weighted moving average and t-1 denotes the target moment.
In one exemplary embodiment, determining the first exponentially weighted moving average corresponding to the previous frame image includes: determining a first product of the first weight and a second exponentially weighted moving average corresponding to a target frame image, wherein the target frame image is the image located before and adjacent to the previous frame image; determining the difference between a second constant and the first weight as a second weight; determining the actual duration of processing the previous frame image through the target algorithm; determining a second product of the second weight and the actual duration; and determining the sum of the first product and the second product as the first exponentially weighted moving average, wherein, when the target frame image is the first frame image included in the sliding window, the second exponentially weighted moving average is a third constant. In this embodiment, the current moment may be denoted t (t = 2, 3, …, n), v_{t-1} is the exponentially weighted moving average at the previous moment, i.e., the first exponentially weighted moving average, and θ_{t-1} is the time actually consumed executing the algorithm at moment t-1, i.e., the actual duration. β is the attenuation weight with value range [0, 1). Let v_0 = 0, i.e., the third constant may be 0; the predicted time consumption of executing the algorithm at the current moment is then V_t. The recursion is v_{t-1} = β·v_{t-2} + (1 - β)·θ_{t-1}, where the second constant may be 1.
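The EWMA recursion and bias correction described above — v_t = β·v_{t-1} + (1 − β)·θ_t with v_0 = 0, and the corrected estimate V_t = v_t / (1 − β^t) — can be sketched as a small predictor class. This is an illustrative Python sketch, not the patent's code: the indexing corrects with the count of observations seen so far (shifted one step from the patent's v_{t-1} notation), and all names are assumptions.

```python
class EwmaPredictor:
    """Predict per-frame algorithm time with a bias-corrected EWMA."""

    def __init__(self, beta=0.9):
        self.beta = beta   # attenuation weight β in [0, 1)
        self.v = 0.0       # uncorrected moving average, v_0 = 0
        self.t = 0         # number of observed frames so far

    def update(self, actual_time):
        """Fold in the actual time θ consumed for the latest frame."""
        self.t += 1
        self.v = self.beta * self.v + (1 - self.beta) * actual_time

    def predict(self):
        """Bias-corrected estimate V_t of the next frame's processing time."""
        if self.t == 0:
            return 0.0     # no history yet
        return self.v / (1 - self.beta ** self.t)
```

The division by (1 − β^t) removes the startup bias caused by v_0 = 0, so early predictions are not pulled toward zero; a larger β smooths the estimate, a smaller β tracks fluctuations faster, matching the window behaviour described in the text.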
Fig. 7 is a graph comparing results of the serial scheme and the moving weighted average scheme according to an embodiment of the present invention. As shown in fig. 7, curve 1 represents the serial scheme and curve 2 represents the moving weighted average scheme; the horizontal axis is the frame number of each image and the vertical axis is the target duration for the target algorithm to process that image.
In the foregoing embodiments, an integration framework is proposed for algorithms whose input and output are both images. Different integration architectures are designed according to each algorithm's time consumption, while also accounting for the frame-rate changes and frame delay caused by system-induced fluctuation in the algorithm's running time. When the algorithm's time consumption is far smaller than the frame interval, a serial architecture is adopted directly; when it is close to the frame interval, a multithreaded parallel architecture is adopted. Using different integrated architectures for different algorithm efficiencies compensates for the limited computing power of mobile devices while guaranteeing the algorithm's frame rate and real-time performance. Considering that system fluctuation increases the algorithm's time consumption, an exponentially weighted moving average is used to compute a system time-consumption coefficient, evaluate how long the system takes to run the algorithm, and judge whether the upper-layer application will fail to obtain algorithm-processed image frames in time. The frame supplementing method then lets the upper-layer application obtain a data frame in time, guaranteeing the frame rate. The number of supplemented frames is recorded, and when the system stabilizes and the algorithm's time consumption falls below the frame interval again, frames are extracted from the frame image data to guarantee the real-time performance of the video stream.
From the description of the above embodiments, it will be clear to those skilled in the art that the method according to the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) that includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method according to the embodiments of the present invention.
This embodiment also provides an image display device, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 8 is a block diagram of an image display device according to an embodiment of the present invention. As shown in Fig. 8, the device includes:
an acquisition module 82, configured to acquire, in real time, a video stream acquired by a target device;
a prediction module 84, configured to predict a target duration required by the target device to process, through a target algorithm, a current frame image included in the video stream;
and the display module 86 is configured to perform a frame-supplementing operation on the video stream if the frame interval of the video stream is less than the target duration, and display a target image obtained after the frame-supplementing operation in a target application, where the target application is an application installed in the target device.
In an exemplary embodiment, the apparatus is further configured to: determine a first image obtained by processing the current frame image through the target algorithm when the difference between the frame interval and the target duration is greater than a first threshold, and display the first image in the target application; and display, in the target application, a second image located at the head of a processed queue when the difference between the frame interval and the target duration is less than a second threshold, wherein the processed queue is used to store images processed by the target algorithm.
In an exemplary embodiment, the apparatus may be configured to determine, after displaying the target image obtained after the frame-supplementing operation in the target application, a target number of frame images supplemented by performing the frame-supplementing operation; and to perform a frame extraction operation on the video stream when the frame interval is greater than the target duration, wherein the frame extraction operation comprises deleting third images, the third images being images obtained by processing fourth images through the target algorithm, the fourth images being images acquired after the current frame image, and the number of third images being equal to the target number.
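The supplement-then-extract bookkeeping described above can be sketched as a simple per-frame decision loop. This is an illustrative model assuming one decision per frame; the function and action names are not from the patent.

```python
def schedule_frames(frame_interval_ms: float,
                    predicted_ms_per_frame: list) -> list:
    """Decide, per frame, whether to show, supplement, or drop it.

    While the predicted algorithm duration exceeds the frame interval,
    frames are supplemented and counted; once the algorithm speeds up
    again, the same number of later frames is dropped (frame
    extraction) to restore the real-time property of the video stream.
    """
    actions = []
    supplemented = 0  # target number of padded frames not yet repaid
    for predicted in predicted_ms_per_frame:
        if predicted > frame_interval_ms:
            actions.append("supplement")
            supplemented += 1
        elif supplemented > 0:
            actions.append("drop")
            supplemented -= 1
        else:
            actions.append("show")
    return actions
```

In this sketch, two slow frames followed by three fast ones yield two supplements, two compensating drops, and one normal display, so the stream catches back up to real time.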
In one exemplary embodiment, the display module 86 may implement displaying the target image obtained after the frame-supplementing operation in the target application by: determining a previous frame image of the current frame image included in the video stream; determining a fifth image obtained by processing the previous frame image through the target algorithm; and displaying the fifth image in the target application.
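A minimal sketch of this frame-supplementing choice, under the assumption that the previously processed frame is kept around; the function and argument names are illustrative, not from the patent.

```python
def frame_to_display(predicted_ms: float, frame_interval_ms: float,
                     prev_processed, cur_processed):
    """Pick the image to hand to the target application.

    If the algorithm is predicted to miss the frame deadline (or has
    not finished), re-show the processed previous frame instead of
    waiting, so the application still receives a frame on time.
    """
    if cur_processed is None or predicted_ms > frame_interval_ms:
        return prev_processed
    return cur_processed
```

The re-shown previous frame is the "fifth image" of this embodiment: the most recent output of the target algorithm.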
In an exemplary embodiment, the apparatus may be further configured to store image frames in a video stream acquired in real time in a queue to be processed if a difference between the frame interval and the target duration is less than the second threshold; controlling the target algorithm to acquire the image frames from the queue to be processed for processing, and storing the images processed by the target algorithm into the processed queue; and controlling the target application to sequentially display the images stored in the processed queue.
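The two-queue arrangement described here can be sketched with Python's standard `queue` and `threading` modules. This is a simplified single-worker model; the embodiment does not specify its threading layout at this level of detail.

```python
import queue
import threading

def run_pipeline(frames, process, display):
    """Decouple capture, algorithm, and display via two queues.

    Raw frames go into a to-be-processed queue; a worker thread runs
    the target algorithm and pushes results into a processed queue,
    from which the target application displays them in order.
    """
    pending = queue.Queue()    # frames awaiting the target algorithm
    processed = queue.Queue()  # frames the algorithm has finished

    def worker():
        while True:
            frame = pending.get()
            if frame is None:        # sentinel: end of stream
                processed.put(None)
                return
            processed.put(process(frame))

    threading.Thread(target=worker, daemon=True).start()
    for f in frames:
        pending.put(f)
    pending.put(None)

    while (out := processed.get()) is not None:
        display(out)
```

Because both queues are FIFO and thread-safe, the display order matches the capture order even though the algorithm runs in its own thread.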
In one exemplary embodiment, the prediction module 84 may implement predicting a target duration required by the target device to process the current frame image included in the video stream through a target algorithm by: determining a sliding window of an exponentially weighted moving average; determining a corresponding target moment of a previous frame image of the current frame image in the sliding window; determining a first exponentially weighted moving average value corresponding to the previous frame image and a first weight corresponding to the first exponentially weighted moving average value; the target time period is determined based on the target time, the first exponentially weighted moving average, and the first weight.
In one exemplary embodiment, the prediction module 84 may implement determining the target duration based on the target moment, the first exponentially weighted moving average, and the first weight by: raising the first weight to the power of the target moment to obtain a first numerical value; determining a difference between a first constant and the first numerical value to obtain a second numerical value; and determining a ratio of the first exponentially weighted moving average to the second numerical value as the target duration.
In one exemplary embodiment, the prediction module 84 may implement determining the first exponentially weighted moving average corresponding to the previous frame image by: determining a first product of the first weight and a second exponentially weighted moving average corresponding to a target frame image, wherein the target frame image is an image that is located before and adjacent to the previous frame image; determining a difference between a second constant and the first weight as a second weight; determining the actual duration of processing the previous frame image through the target algorithm; determining a second product of the second weight and the actual duration; and determining a sum of the first product and the second product as the first exponentially weighted moving average; wherein, when the target frame image is the first frame image included in the sliding window, the second exponentially weighted moving average is a third constant.
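The prediction described here is a standard bias-corrected exponentially weighted moving average. A sketch follows, assuming the third constant (the initial average) is 0, which the patent does not specify:

```python
def ewma_update(prev_ewma: float, beta: float, actual_ms: float) -> float:
    """First product plus second product: v_t = beta * v_{t-1} + (1 - beta) * theta_t,
    where beta is the first weight and theta_t the actual processing duration."""
    return beta * prev_ewma + (1.0 - beta) * actual_ms

def predict_duration(ewma: float, beta: float, t: int) -> float:
    """Bias-corrected estimate: v_t / (1 - beta ** t), with t the target moment
    (the frame's position in the sliding window)."""
    return ewma / (1.0 - beta ** t)
```

The division by `1 - beta ** t` compensates for the zero initialization, so early predictions track the actual durations instead of being dragged toward zero.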
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present invention also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In an exemplary embodiment, the electronic apparatus may further include a transmission device connected to the processor, and an input/output device connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and exemplary implementations; they are not repeated here.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices; they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by the computing devices, and in some cases the steps shown or described may be performed in an order different from that described herein; alternatively, they may be separately fabricated into individual integrated-circuit modules, or multiple modules or steps among them may be fabricated into a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations to the present invention. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention shall be included in its protection scope.

Claims (11)

1. A method of displaying an image, comprising:
acquiring, in real time, a video stream acquired by a target device;
predicting a target duration required by the target device to process a current frame image included in the video stream through a target algorithm;
and performing a frame-supplementing operation on the video stream when a frame interval of the video stream is smaller than the target duration, and displaying a target image obtained after the frame-supplementing operation in a target application, wherein the target application is an application installed in the target device.
2. The method according to claim 1, wherein the method further comprises:
determining a first image obtained by processing the current frame image by the target algorithm under the condition that the difference value between the frame interval and the target duration is larger than a first threshold value, and displaying the first image in the target application;
and displaying, in the target application, a second image located at the head of a processed queue when the difference value between the frame interval and the target duration is smaller than a second threshold value, wherein the processed queue is used for storing images processed by the target algorithm.
3. The method of claim 1, wherein after displaying the resulting target image after the frame-filling operation in a target application, the method further comprises:
determining a target number of frame images supplemented by executing the frame supplementing operation;
and performing a frame extraction operation on the video stream when the frame interval is larger than the target duration, wherein the frame extraction operation comprises deleting third images, the third images being images obtained by processing fourth images through the target algorithm, the fourth images being images acquired after the current frame image, and the number of third images being equal to the target number.
4. The method of claim 1, wherein displaying the resulting target image after the frame-filling operation in a target application comprises:
determining a previous frame image of the current frame image included in the video stream;
determining a fifth image obtained by processing the previous frame image through the target algorithm;
displaying the fifth image in the target application.
5. The method according to claim 2, wherein the method further comprises:
storing image frames in a video stream acquired in real time into a queue to be processed under the condition that the difference value between the frame interval and the target duration is smaller than the second threshold value;
controlling the target algorithm to acquire the image frames from the queue to be processed for processing, and storing the images processed by the target algorithm into the processed queue;
and controlling the target application to sequentially display the images stored in the processed queue.
6. The method of claim 1, wherein predicting a target time period required for the target device to process a current frame image included in the video stream through a target algorithm comprises:
determining a sliding window of an exponentially weighted moving average;
determining a corresponding target moment of a previous frame image of the current frame image in the sliding window;
determining a first exponentially weighted moving average value corresponding to the previous frame image and a first weight corresponding to the first exponentially weighted moving average value;
the target time period is determined based on the target time, the first exponentially weighted moving average, and the first weight.
7. The method of claim 6, wherein determining the target time period based on the target time instant, the first exponentially weighted moving average, and the first weight comprises:
raising the first weight to the power of the target moment to obtain a first numerical value;
determining a difference between a first constant and the first numerical value to obtain a second numerical value;
and determining a ratio of the first exponentially weighted moving average to the second numerical value as the target duration.
8. The method of claim 6, wherein determining a first exponentially weighted moving average for the previous frame image comprises:
determining a first product of the first weight and a second exponentially weighted moving average value corresponding to a target frame image, wherein the target frame image is an image that is located before and adjacent to the previous frame image;
determining a difference between a second constant and the first weight as a second weight;
determining the actual duration of processing the previous frame of image through the target algorithm;
determining a second product of the second weight and the actual duration;
determining a sum of the first product and the second product as the first exponentially weighted moving average;
wherein, when the target frame image is the first frame image included in the sliding window, the second exponentially weighted moving average is a third constant.
9. An image display device, comprising:
the acquisition module is used for acquiring, in real time, a video stream acquired by a target device;
the prediction module is used for predicting a target duration required by the target device to process a current frame image included in the video stream through a target algorithm;
and the display module is used for performing a frame-supplementing operation on the video stream when a frame interval of the video stream is smaller than the target duration, and displaying a target image obtained after the frame-supplementing operation in a target application, wherein the target application is an application installed in the target device.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program, wherein the computer program is arranged to execute the method of any of the claims 1 to 8 when run.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of the claims 1 to 8.
CN202310411864.1A 2023-04-13 2023-04-13 Image display method and device, storage medium and electronic device Pending CN116366793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310411864.1A CN116366793A (en) 2023-04-13 2023-04-13 Image display method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310411864.1A CN116366793A (en) 2023-04-13 2023-04-13 Image display method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN116366793A true CN116366793A (en) 2023-06-30

Family

ID=86917189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310411864.1A Pending CN116366793A (en) 2023-04-13 2023-04-13 Image display method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116366793A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination