CN116708753B - Method, device and storage medium for determining the cause of preview stutter - Google Patents
Method, device and storage medium for determining the cause of preview stutter
- Publication number: CN116708753B
- Application number: CN202211629927A
- Authority: CN (China)
- Prior art keywords: preview, frame, time, preview image, link
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
Abstract
The present application provides a method, a device and a storage medium for determining the cause of preview stutter. When a preview stutter anomaly occurs in the image preview interface of a target application with a shooting function, the method obtains, during display of the image preview interface, the timing information of each processing link on the flow path of the preview image frames shown in the interface, and then determines, from the timing information of the preview image frames in each processing link, the cause of the stutter anomaly. In this way it can be accurately determined which processing link or which piece of hardware caused the preview stutter anomaly, so that subsequent optimization can be performed according to the identified cause, reducing stutter, improving fluency, and improving the user experience.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular to a method, a device and a storage medium for determining the cause of preview stutter.
Background
Currently, the shooting functions of terminal devices such as mobile phones are becoming more and more complete, and users' expectations of the shooting experience keep rising. However, while an application with a shooting function is being used, the displayed picture often stutters, especially in the preview interface and the video recording interface. Reducing stutter and improving fluency are important indicators of the product experience, so accurately determining the cause of preview stutter is particularly important.
Disclosure of Invention
In order to solve the above technical problem, the present application provides a method, a device and a storage medium for determining the cause of preview stutter, which aim to accurately determine the cause of picture stutter so that optimization can be performed according to the identified cause, thereby reducing stutter, improving fluency, and improving the user experience.
In a first aspect, the present application provides a method for determining the cause of preview stutter. The method is applied to a first terminal device and comprises: displaying an image preview interface of a target application, the target application being an application with a shooting function; during display of the image preview interface, acquiring the timing information with which the preview image frames, collected by the camera on the basis of image preview requests continuously issued by the target application and then output by the camera, flow through each processing link on their flow path; and, when a preview stutter anomaly occurs in the image preview interface, determining the cause of the stutter anomaly according to the timing information of the preview image frames in each processing link.
The target application may be, for example, the camera application of the first terminal device, or another third-party application with a shooting function.
The first terminal device may be, for example, a device on which an application with a shooting function is installed, such as a mobile phone, a tablet computer, a notebook computer, or another smart device.
In some implementations, the application with the shooting function running on the first terminal device may use a camera built into the first terminal device, or an external camera.
The image preview interface may be, for example, the interface 10b described below.
For the process in which the image preview requests are issued, the camera collects preview image frames on the basis of the image preview requests continuously issued by the target application, the collected preview image frames are transmitted to the camera application, and the frames are displayed on the image preview interface, reference may be made to the description of fig. 3 below, which is not repeated here.
In this way, by acquiring the timing information of the preview image frames in each processing link of the flow path and examining the timing information corresponding to each link, the stutter position in each link (the image frame at which the stutter occurs) and the stutter cause at that position can be quantified from the preview image frames and their timing information, so that the source of the problem can be traced back quickly and accurately and the cause of each stutter can be determined precisely.
According to the first aspect, in the order in which the image frames flow through the pipeline, the processing links comprise a frame-output link, a buffering link, a composition link and a send-to-display link. Determining the cause of the preview stutter anomaly according to the timing information of the preview image frames in each processing link comprises: determining, according to the timing information of the preview image frames in the frame-output link, whether frames have been lost near the preview image frame at which the stutter anomaly occurs; when frames have been lost near that preview image frame, determining that the cause of the stutter anomaly comes from the frame-output link; when no frames have been lost, determining, according to the timing information of the preview image frames in the send-to-display link, whether the send-to-display times of the preview image frames are uniform; when the send-to-display times are uniform, determining that the cause comes from the display screen of the first terminal device; when the send-to-display times are not uniform, determining, according to the timing information of the preview image frames in the composition link, whether the composition times of the preview image frames are uniform; when the composition times are uniform, determining that the cause comes from the send-to-display link; when the composition times are not uniform, determining, according to the timing information of the preview image frames in the buffering link, whether the enqueue times at which the preview image frames are added to the buffer queue are uniform; when the enqueue times are uniform, determining that the cause comes from the composition link; and when the enqueue times are not uniform, determining that the cause comes from the frame-output link.
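As a minimal illustrative sketch of this decision chain (assuming the per-link frame-loss and uniformity checks described in the implementations below are already available as boolean results; all type and function names are illustrative, not taken from the claims):

```kotlin
// Hedged sketch of the decision chain described above.
enum class StutterCause { FRAME_OUTPUT, BUFFERING, COMPOSITION, SEND_TO_DISPLAY, DISPLAY_SCREEN }

data class LinkChecks(
    val frameLossNearStutter: Boolean, // frame-output link: frames missing near the stuttered frame
    val sendToDisplayUniform: Boolean, // send-to-display link: per-frame display durations uniform
    val compositionUniform: Boolean,   // composition link: per-frame composition times uniform
    val enqueueUniform: Boolean        // buffering link: enqueue times into the buffer queue uniform
)

fun determineStutterCause(checks: LinkChecks): StutterCause = when {
    checks.frameLossNearStutter -> StutterCause.FRAME_OUTPUT    // frames were dropped at the driver
    checks.sendToDisplayUniform -> StutterCause.DISPLAY_SCREEN  // pipeline uniform, display screen at fault
    checks.compositionUniform   -> StutterCause.SEND_TO_DISPLAY // composition fine, hand-off uneven
    checks.enqueueUniform       -> StutterCause.COMPOSITION     // enqueue fine, composition uneven
    else                        -> StutterCause.FRAME_OUTPUT    // uneven enqueue traced back to frame output
}
```

For example, when frame loss is detected near the stuttered frame, the sketch returns the frame-output link regardless of the downstream checks, matching the first branch of the chain described above.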
Here, the frame-output link is the link in which the camera driver outputs image frames; the buffering link is the link in which image frames processed by the camera hardware abstraction layer service process (CameraProvider) are added, through the camera service, to the camera buffer queue (CameraBufferQueue); the composition link is the link in which the SurfaceFlinger (SF) thread reads image frames from the CameraBufferQueue via the BlastBufferQueue (BBQ) and composes them; and the send-to-display link is the link in which, when the hardware composer receives a VSync-HW signal, the image frames composed by the SF thread are transmitted to the display driver, which drives the display screen to show them.
For the detection of the above four processing links, reference may be made to the description of fig. 8 to 13 below, which is not repeated here.
The timing information for each processing link is, for example, the set of timestamps involved in the detection of each processing link described below with reference to fig. 8 to 13.
In this way, based on the order in which the preview image frames are processed across these four links on the way to the display, a newly observed stutter position can be traced back quickly and accurately to the link that caused it, so that the cause of each stutter can be determined precisely.
According to the first aspect, or any implementation thereof, the operation performed in the buffering link is adding the preview image frames output by the camera hardware abstraction layer service process to the buffer queue; determining, according to the timing information of the preview image frames in the buffering link, whether the enqueue times at which the preview image frames are added to the buffer queue are uniform comprises: obtaining the enqueue time at which each preview image frame is added to the buffer queue; determining the enqueue interval between every two adjacent preview image frames from the enqueue times corresponding to the preview image frames; determining a theoretical composition time according to the enqueue interval and the VSync signal period followed by the composition link; and determining, according to the theoretical composition time, whether the enqueue times of the preview image frames added to the buffer queue are uniform.
For the implementation of this part, reference may be made to the description of fig. 8, 12 and 13 below, which is not repeated here.
According to the first aspect, or any implementation thereof, determining the theoretical composition time according to the enqueue interval and the VSync signal period followed by the composition link comprises: periodically detecting whether a preview image frame is present in the buffer queue, according to the time at which a VSync-SF signal is received and the VSync signal period corresponding to the VSync-SF signal, the VSync-SF signal being used to instruct the composition link to fetch a preview image frame from the buffer queue for composition; and, for the detection of the buffer queue in each detection period, when a preview image frame is present in the buffer queue, composing that preview image frame, and taking as the theoretical composition time the difference between the enqueue time corresponding to the preview image frame being composed and the enqueue time corresponding to the preview image frame detected in the buffer queue in the first detection period after the composition of that frame has finished.
Furthermore, it can be understood that when no preview image frame is present in the buffer queue, the check is simply repeated the next time the buffer queue is periodically examined.
For the implementation of this part, reference may be made to the description of fig. 8, 12 and 13 below, which is not repeated here.
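As a minimal illustrative sketch of the buffering-link check (assuming enqueue timestamps in nanoseconds are recorded when frames are added to the buffer queue, and that the enqueue rhythm is treated as uniform when no interval exceeds one VSync-SF period; the names, units and tolerance are assumptions, not taken from the claims):

```kotlin
// Hedged sketch: enqueueTimesNs holds the enqueue timestamp of each preview frame
// added to the buffer queue; vsyncPeriodNs is the VSync-SF period followed by the
// composition link.
fun enqueueTimesUniform(enqueueTimesNs: List<Long>, vsyncPeriodNs: Long): Boolean {
    if (enqueueTimesNs.size < 2) return true
    // Enqueue interval between each pair of adjacent frames.
    val intervals = enqueueTimesNs.zipWithNext { prev, next -> next - prev }
    // If every interval (and hence every theoretical composition time derived from it)
    // stays within one VSync-SF period, the enqueue times are treated as uniform.
    return intervals.all { it <= vsyncPeriodNs }
}
```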
According to the first aspect, or any implementation thereof, the frame-output link refers to the link in which the camera driver outputs the preview image frames acquired by the camera according to the image preview requests; determining, according to the timing information of the preview image frames in the frame-output link, whether frames have been lost near the preview image frame at which the stutter anomaly occurs comprises: for every two adjacent preview image frames near the preview image frame corresponding to the stutter anomaly, obtaining the output times at which the camera driver outputs the two adjacent preview image frames; determining a first time interval for the two adjacent preview image frames from their respective output times; and determining, according to the first time interval, whether the preview image frames output by the camera driver have suffered frame loss.
The output times are the timestamps in the parts shown in fig. 8 and 9, and the first time interval is the interval determined from two such timestamps.
For the detection of the frame-output link, reference may be made to the description of fig. 8 and 9 below, which is not repeated here.
According to the first aspect, or any implementation thereof, determining, according to the first time interval, whether the preview image frames output by the camera driver have suffered frame loss comprises: determining whether the first time intervals corresponding to every two adjacent preview image frames are the same; when the first time intervals corresponding to every two adjacent preview image frames are the same, determining that the preview image frames output by the camera driver are continuous, and therefore that no frames have been lost; when the first time intervals corresponding to every two adjacent preview image frames are not the same, determining that the preview image frames output by the camera driver are discontinuous, and therefore that frames have been lost.
For the implementation of this part, reference may be made to the description of fig. 8 and 9 below, which is not repeated here.
According to the first aspect, or any implementation thereof, when the first time intervals corresponding to every two adjacent preview image frames are not the same, the method further comprises: determining whether the first time interval is greater than a first time-interval threshold, the first time-interval threshold being determined according to the sampling frequency followed by the camera when acquiring the preview image frames; when the first time interval is not greater than the first time-interval threshold, determining that the preview image frames output by the camera driver are continuous; and when the first time interval is greater than the first time-interval threshold, determining that the preview image frames output by the camera driver are discontinuous.
For the implementation of this part, reference may be made to the description of fig. 8 and 9 below, which is not repeated here.
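As a minimal illustrative sketch of the frame-output-link check (assuming the camera-driver output timestamps are in nanoseconds, and that the first time-interval threshold is the nominal frame interval at the camera sampling frequency plus an assumed margin):

```kotlin
// Hedged sketch: outputTimesNs are the camera-driver output timestamps of the preview
// frames near the stuttered frame; sampleRateFps is the camera sampling frequency.
// The 1.5x margin on the nominal frame interval is an assumption for illustration.
fun hasFrameLoss(outputTimesNs: List<Long>, sampleRateFps: Double): Boolean {
    val nominalIntervalNs = 1_000_000_000.0 / sampleRateFps
    val thresholdNs = nominalIntervalNs * 1.5
    // Frames are considered lost when any first time interval exceeds the threshold,
    // i.e. the driver output is discontinuous.
    return outputTimesNs.zipWithNext { prev, next -> next - prev }
        .any { it > thresholdNs }
}
```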
According to the first aspect, or any implementation thereof, determining, according to the timing information of the preview image frames in the send-to-display link, whether the send-to-display times of the preview image frames are uniform comprises: determining the display duration for which each preview image frame is shown on the image preview interface; when the display durations corresponding to all preview image frames are the same, determining that the send-to-display times of the preview image frames are uniform; otherwise, determining that the send-to-display times of the preview image frames are not uniform.
In some implementations, when the display duration corresponding to each preview image frame fluctuates within a set range, or is not greater than the corresponding VSync signal period, the send-to-display times of the preview image frames may also be determined to be uniform.
For the implementation of this part, reference may be made to the description of fig. 8 and 10 below, which is not repeated here.
According to the first aspect, or any implementation thereof, determining the display duration for which each preview image frame is shown on the image preview interface comprises: when a first VSync-HW signal is received and a first preview image frame is obtained from the composition link, recording the current system time as a first system time; when a second VSync-HW signal is received and a second preview image frame is obtained from the composition link, recording the current system time as a second system time; and determining, according to the first system time and the second system time, the display duration for which the first preview image frame is shown on the image preview interface.
For the implementation of this part, reference may be made to the description of fig. 8 and 10 below, which is not repeated here.
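As a minimal illustrative sketch of the send-to-display check (assuming a timestamp is recorded each time a VSync-HW signal arrives and a composed frame is handed over, and using the relaxed criterion above that no display duration should exceed one VSync period; names are illustrative):

```kotlin
// Hedged sketch: vsyncHwTimesNs[i] is the system time recorded when the i-th VSync-HW
// signal arrives and the i-th composed preview frame is handed to the display driver.
fun sendToDisplayUniform(vsyncHwTimesNs: List<Long>, vsyncPeriodNs: Long): Boolean {
    // Display duration of frame i = timestamp of frame i+1 minus timestamp of frame i.
    val durations = vsyncHwTimesNs.zipWithNext { first, second -> second - first }
    return durations.all { it <= vsyncPeriodNs }
}
```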
According to the first aspect, or any implementation thereof, determining, according to the timing information of the preview image frames in the composition link, whether the composition times of the preview image frames are uniform comprises: determining the composition time of each preview image frame in the composition link; when the composition times corresponding to all preview image frames are the same, determining that the composition times of the preview image frames are uniform; otherwise, determining that the composition times of the preview image frames are not uniform.
In some implementations, when the composition time corresponding to each preview image frame fluctuates within a set range, or is not greater than the corresponding VSync signal period, the composition times of the preview image frames may also be determined to be uniform.
For the implementation of this part, reference may be made to the description of fig. 8 and 11 below, which is not repeated here.
According to the first aspect, or any implementation thereof, determining the composition time of each preview image frame in the composition link comprises: when a first VSync-SF signal is received and a first preview image frame is fetched from the buffer queue, recording the current system time as a third system time; the composition link composes the first preview image frame before a second VSync-SF signal arrives, the second VSync-SF signal being the first VSync-SF signal received after the composition of the first preview image frame has finished; when the second VSync-SF signal is received, recording the current system time as a fourth system time; and determining the composition time of the first preview image frame in the composition link according to the third system time and the fourth system time.
For the implementation of this part, reference may be made to the description of fig. 8 and 11 below, which is not repeated here.
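As a minimal illustrative sketch of the composition-link check (assuming the third and fourth system times described above are recorded for each frame, and using the relaxed criterion that no composition time should exceed one VSync-SF period; names are illustrative):

```kotlin
// Hedged sketch: dequeueTimesNs[i] is the time at which frame i was taken from the
// buffer queue on a VSync-SF tick (third system time); doneVsyncTimesNs[i] is the time
// of the first VSync-SF signal received after frame i finished composing (fourth
// system time). Composition time = fourth system time - third system time.
fun compositionTimesUniform(
    dequeueTimesNs: List<Long>,
    doneVsyncTimesNs: List<Long>,
    vsyncPeriodNs: Long
): Boolean {
    val compositionTimes = dequeueTimesNs.zip(doneVsyncTimesNs) { start, end -> end - start }
    return compositionTimes.all { it <= vsyncPeriodNs }
}
```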
According to the first aspect, or any implementation thereof, during display of the image preview interface the method further comprises: monitoring the refresh frequency of the image preview interface; and when the refresh frequency exceeds a set refresh-frequency threshold, determining that a preview stutter anomaly has occurred in the image preview interface.
In this way, preview stutter anomalies in the image preview interface are detected against the actual refresh frequency, so that detection follows an objective standard rather than relying on the user's perception.
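As a minimal illustrative sketch of such a monitor, written in the interval domain (an expected refresh rate gives an expected frame interval, and an anomaly is flagged when an observed interval between interface refreshes exceeds it by an assumed margin; the exact threshold comparison used by the method may differ, and the names and margin are illustrative):

```kotlin
// Hedged sketch: refreshTimesNs are the times at which the preview interface was
// refreshed; expectedFps is the configured refresh rate of the preview stream.
fun hasStutterAnomaly(refreshTimesNs: List<Long>, expectedFps: Double, margin: Double = 1.5): Boolean {
    val expectedIntervalNs = 1_000_000_000.0 / expectedFps
    return refreshTimesNs.zipWithNext { prev, next -> next - prev }
        .any { it > expectedIntervalNs * margin }
}
```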
According to the first aspect, or any implementation thereof, the preview stutter cause further includes the preview image frame at which the stutter anomaly occurred and the duration of the stutter anomaly.
In this way, a corresponding optimization strategy can conveniently be formulated for a specific stutter anomaly according to the recorded cause, so that the optimized behavior better meets the user's needs.
According to the first aspect, or any implementation thereof, the method further comprises: storing the preview stutter cause corresponding to the preview stutter anomaly.
In this way, a user (a developer, tester, maintainer, etc.) can conveniently retrieve, from a specified path on the first terminal device, the stutter causes recorded while an application with a shooting function was running on the first terminal device, and can then optimize, according to those causes, the processing links, hardware and so on along the preview path that caused the stutter anomaly.
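As a minimal illustrative sketch of persisting a stutter record so that it can later be pulled from a specified path on the device (the directory name, fields and log format are assumptions for illustration only):

```kotlin
import java.io.File

// Hedged sketch: each record captures the stuttered frame, the identified processing
// link or hardware, and the duration of the anomaly; the path and format are illustrative.
data class StutterRecord(val frameId: Long, val cause: String, val durationMs: Long)

fun saveStutterRecord(appFilesDir: File, record: StutterRecord) {
    val logFile = File(appFilesDir, "preview_stutter/causes.log")
    logFile.parentFile?.mkdirs() // create the specified directory if it does not exist
    logFile.appendText("frame=${record.frameId} cause=${record.cause} durationMs=${record.durationMs}\n")
}
```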
According to the first aspect, or any implementation thereof, the method further comprises: in response to a received preview-stutter-cause request, outputting the preview stutter cause to a second terminal device, and/or displaying the preview stutter cause on a display interface of the first terminal device.
For example, in some implementations, the second terminal device may be a server, or may be a client device such as a notebook computer, a tablet computer or a mobile phone, which is not limited in this application.
In this way, after the preview-stutter-cause request is received, the cause is output and/or displayed directly, so that a user (a developer, tester, maintainer, etc.) can easily learn the specific reason why an application with a shooting function installed on the first terminal device exhibited a stutter anomaly in its image preview interface, and can then optimize the processing link or hardware recorded in the stutter cause, with reference to the preview image frame at which the anomaly occurred, the duration of the anomaly, and so on.
For example, in some implementations, the optimization performed according to the preview stutter cause may be adjusting the frame rate, the sampling frequency and so on at nodes such as frame output and composition, for instance adjusting frame output and composition to the same frame rate as in the scenarios shown in fig. 14 and 15 below, or making the composition frame rate lower than the frame-output frame rate, so as to keep the picture in the preview interface smooth.
For example, in other implementations, when the frame-output frame rate and the composition frame rate are already the same, the optimization performed according to the preview stutter cause may be improving the uniformity of frame return, so as to keep the picture in the preview interface smooth.
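As a small illustration of the frame-rate alignment mentioned above (the helper and the values are assumptions, not taken from the claims):

```kotlin
// Hedged sketch: keep the composition frame rate no higher than the frame-output rate
// (and no higher than the panel refresh rate), per the alignment strategy above.
fun chooseCompositionFps(frameOutputFps: Double, panelRefreshFps: Double): Double =
    minOf(frameOutputFps, panelRefreshFps)

// Example: chooseCompositionFps(30.0, 60.0) returns 30.0, so composition follows the
// 30 fps camera stream instead of running ahead of it.
```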
In a second aspect, the present application provides a terminal device. The terminal device includes a memory and a processor that are coupled to each other; the memory stores program instructions which, when executed by the processor, cause the terminal device to perform the method of the first aspect or any possible implementation of the first aspect.
The second aspect and any implementation thereof correspond to the first aspect and its implementations, respectively. For the technical effects of the second aspect and any implementation thereof, reference may be made to the technical effects of the first aspect and its corresponding implementations, which are not repeated here.
In a third aspect, the present application provides a computer-readable medium for storing a computer program, the computer program comprising instructions for performing the method of the first aspect or any possible implementation of the first aspect.
The third aspect and any implementation thereof correspond to the first aspect and its implementations, respectively. For the technical effects of the third aspect and any implementation thereof, reference may be made to the technical effects of the first aspect and its corresponding implementations, which are not repeated here.
In a fourth aspect, the present application provides a computer program comprising instructions for performing the method of the first aspect or any possible implementation of the first aspect.
The fourth aspect and any implementation thereof correspond to the first aspect and its implementations, respectively. For the technical effects of the fourth aspect and any implementation thereof, reference may be made to the technical effects of the first aspect and its corresponding implementations, which are not repeated here.
In a fifth aspect, the present application provides a chip comprising a processing circuit and transceiver pins. The transceiver pins and the processing circuit communicate with each other via an internal connection path; the processing circuit performs the method of the first aspect or any possible implementation of the first aspect, controlling the receive pin to receive signals and the transmit pin to transmit signals.
The fifth aspect and any implementation thereof correspond to the first aspect and its implementations, respectively. For the technical effects of the fifth aspect and any implementation thereof, reference may be made to the technical effects of the first aspect and its corresponding implementations, which are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an exemplary terminal device;
fig. 2 is a schematic diagram of a software architecture of an exemplary terminal device;
FIG. 3 is a schematic diagram illustrating functional modules and hardware interactions involved in a preview stream display process;
Fig. 4 is a schematic diagram illustrating the correspondence between image frames and time when rendering neither times out nor drops frames;
Figs. 5a to 5c are schematic diagrams illustrating changes in the display content of the display interface when rendering neither times out nor drops frames;
Fig. 6 is a schematic diagram illustrating the correspondence between image frames and time when rendering times out and frames are dropped;
Figs. 7a and 7b are schematic diagrams illustrating changes in the display content of the display interface when rendering times out and frames are dropped;
Fig. 8 is a schematic diagram illustrating the relationship between the detected nodes and the preview stutter cause when determining the preview stutter cause;
Fig. 9 is a schematic diagram of image frames output by the camera driver;
Fig. 10 is a schematic diagram illustrating some displayed image frames and the corresponding display time intervals;
Fig. 11 is a schematic diagram illustrating the relationship between some image frames and the corresponding composition times;
Fig. 12 is a schematic diagram illustrating the times at which some image frames processed by the CameraProvider are added to the CameraBufferQueue;
Fig. 13 is a schematic diagram of a frame-return curve obtained by converting the times shown in fig. 12;
Fig. 14 is a schematic diagram illustrating the times at which some image frames processed by the CameraProvider are added to the CameraBufferQueue;
Fig. 15 is a schematic diagram illustrating the relationship between displayed image frames and the corresponding display time intervals;
Fig. 16 is a schematic diagram illustrating the relationship between displayed image frames and the corresponding display time intervals after optimization.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone.
The terms first and second and the like in the description and in the claims of embodiments of the present application are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, the plurality of processing units refers to two or more processing units; the plurality of systems means two or more systems.
In order to better understand the technical solution provided by this application, before the solution itself is described, the hardware structure of a terminal device with a shooting function (such as a mobile phone, a tablet computer or a touch-screen PC) to which this application is applicable is described with reference to the accompanying drawings.
Referring to fig. 1, the terminal device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
By way of example, in some implementations, the sensor module 180 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., which are not further illustrated herein.
Based on the above sensors, when the user performs an operation on the display screen 194 while using the terminal device, such as a tap, a double tap, a slide, or a continuous slide without lifting the finger, the operation currently being performed, the position it acts on, and the touch information reported for that position can be precisely determined. In this way, the icon of the application targeted by the current operation can be accurately identified; for example, when it is determined that the user has tapped the icon of the camera application shown on the display screen 194, the camera application is started and its preview interface is displayed.
Furthermore, it should be noted that the processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
It is understood that the controller can serve as the nerve center and command center of the terminal device 100. In practical applications, the controller generates operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
It should be noted that, a memory may be further provided in the processor 110 for storing instructions and data. In some implementations, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
For example, in some implementations, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
With continued reference to fig. 1, the exemplary charge management module 140 is operable to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging implementations, the charge management module 140 may receive a charging input of the wired charger through the USB interface 130. In some wireless charging implementations, the charging management module 140 may receive wireless charging input through a wireless charging coil of the terminal device 100. The charging management module 140 may also supply power to the terminal device 100 through the power management module 141 while charging the battery 142.
With continued reference to fig. 1, an exemplary power management module 141 is used to connect the battery 142, the charge management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other implementations, the power management module 141 may also be provided in the processor 110. In other implementations, the power management module 141 and the charge management module 140 may also be disposed in the same device.
With continued reference to fig. 1, exemplary wireless communication functions of the terminal device 100 may be implemented by an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the terminal device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other implementations, the antenna may be used in conjunction with a tuning switch.
With continued reference to fig. 1, the mobile communication module 150 may provide an exemplary solution for wireless communication including 2G/3G/4G/5G, etc. applied on the terminal device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some implementations, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some implementations, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
In addition, the modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some implementations, the modem processor may be a stand-alone device. In other implementations, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
With continued reference to fig. 1, exemplary wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied on terminal device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In addition, it should be noted that the terminal device 100 implements a display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
With continued reference to FIG. 1, the exemplary display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some implementations, the terminal device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
In addition, it should be noted that the terminal apparatus 100 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display 194, an application processor, and the like.
In addition, the ISP is used to process data fed back from the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some implementations, the ISP may be provided in the camera 193.
In addition, it is also noted that the camera 193 is used for capturing still images or videos. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some implementations, the terminal device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Specifically, in the technical solution provided in this embodiment of the present application, the picture displayed in the preview interface of the camera application, or of another third-party application with a shooting function, is obtained by rendering and composing the image frames collected by the camera 193 after image processing (detection-algorithm processing, color adjustment, etc.).
In addition, the digital signal processor is used to process digital signals, and may process other digital signals in addition to digital image signals. For example, when the terminal device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Furthermore, it should be noted that video codecs are used for compressing or decompressing digital video. The terminal device 100 may support one or more video codecs. In this way, the terminal device 100 can play or record video in various encoding formats, for example: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
With continued reference to fig. 1, an exemplary external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the terminal device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
With continued reference to fig. 1, by way of example, internal memory 121 may be used to store computer-executable program code that includes instructions. The processor 110 executes various functional applications of the terminal device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data (such as audio data, phonebook, etc.) created during use of the terminal device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
Specifically, in the technical solution provided in this embodiment of the present application, the determined preview stutter cause may be stored, in the form of a stutter-cause record, under a specified directory in the internal memory 121 for subsequent retrieval.
In addition, it should be noted that the terminal device 100 may implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
In addition, it should be noted that the audio module 170 is configured to convert digital audio information into an analog audio signal output, and also configured to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some implementations, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
With continued reference to FIG. 1, exemplary keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be a touch key. The terminal device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the terminal device 100.
With continued reference to FIG. 1, exemplary, motor 191 may generate a vibration alert. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
With continued reference to fig. 1, the indicator 192 may be, for example, an indicator light, may be used to indicate a state of charge, a change in charge, may be used to indicate a message, missed call, notification, or the like.
As to the hardware structure of the terminal device 100, it should be understood that the terminal device 100 shown in fig. 1 is only one example, and in a specific implementation, the terminal device 100 may have more or fewer components than shown in the drawings, may combine two or more components, or may have different component configurations. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
In order to better understand the software structure of the terminal device 100 shown in fig. 1, the software structure of the terminal device 100 will be described below. Before explaining the software structure of the terminal device 100, an architecture that the software system of the terminal device 100 can employ will be first described.
Specifically, in practical applications, the software system of the terminal device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
Furthermore, it is understood that software systems currently used by mainstream terminal devices include, but are not limited to, the Windows system, the Android system, and the iOS system. For convenience of explanation, the embodiments of this application take an Android system with a layered architecture as an example to illustrate the software structure of the terminal device 100.
In addition, in specific implementations the scheme for determining the cause of preview stutter provided in the embodiments of this application is also applicable to other systems.
Referring to fig. 2, a software architecture block diagram of a terminal device 100 according to an embodiment of the present application is shown.
As shown in fig. 2, the layered architecture of the terminal device 100 divides the software into several layers, each of which has a clear role and division of labor. The layers communicate with each other through software interfaces. In some implementations, the Android system is divided into five layers, which are, from top to bottom: the application layer (APP), the application framework layer (FWK), the Android Runtime (ART) and system libraries, the hardware abstraction layer (HAL), and the kernel layer.
The application layer may include a series of application packages, among other things. As shown in fig. 2, the application package may include applications such as cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, video, etc., which are not further listed herein and are not limiting in this application.
Wherein the application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. In some implementations, these programming interfaces and programming frameworks can be described as functions. As shown in FIG. 2, the application framework layer may include functions of a window manager, a content provider, a view system, a camera service, a cross-process synchronization cache module, a display composition module (SurfaceFlinger), and the like, which are not specifically recited herein, but are not limiting.
It should be noted that, in this embodiment, the cross-process synchronization cache module is specifically configured to create a BlastBufferQueue (BBQ). The BBQ is essentially a buffer (Buffer) that provides a communication bridge between the camera application and SurfaceFlinger to realize cross-process communication, and provides a synchronization interface to synchronize the Buffer submitted by the camera application to SurfaceFlinger.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The camera service is used for responding to a request of the camera application (or another third-party application with a shooting function) and calling a camera (including a front camera and/or a rear camera), so that after the camera application is started, the called camera continuously collects image frames, and the collected image frames are transmitted upward, in the form of an image stream, to the camera hardware abstraction layer service process (CameraProvider) of the hardware abstraction layer for processing.
Accordingly, the camera provider transmits the processed image frames to the camera service, and the camera service caches the received image frames to a cache queue corresponding to the camera application.
The BBQ is used for taking out the image frames to be processed from the cache queue corresponding to the camera application and synchronizing, to the display composition module (SurfaceFlinger), the timestamp at which each taken-out image frame was added to the CameraBufferQueue, and SurfaceFlinger performs the synthesis processing. Finally, SurfaceFlinger outputs the synthesized content to the hardware synthesizer (HardWare Composer, HWC) in the HAL layer, the HWC transmits the synthesized content to the display driver, and the display driver in turn drives the display screen to display it.
The Android Runtime includes a core library and virtual machines. The Android Runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that need to be called by the Java language, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional (3D) graphics processing Libraries (e.g., openGL ES), two-dimensional (2D) graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
It will be appreciated that the 2D graphics engine described above is a drawing engine for 2D drawing.
The hardware abstraction layer may include HALs corresponding to various hardware, such as an audio HAL corresponding to the audio module, a sensor HAL corresponding to the sensor module, a display HAL corresponding to the display screen, and so on. In particular, in this embodiment, image processing of the shot content involves the camera-related HALs, such as the camera hardware abstraction layer service process shown in fig. 2, namely the CameraProvider, as well as the above-mentioned hardware synthesizer (HWC), which receives the content synthesized by SurfaceFlinger, and the virtual signal generation module.
The camera provider is used for detecting the image frames acquired by the camera according to a detection algorithm corresponding to the currently selected shooting mode, such as face detection, scene detection, object detection, smiling face detection, etc., which are not listed here, and the embodiment is not limited in this regard.
The virtual signal generation module (hereinafter referred to as DispSync) is configured to allocate, according to the vertical synchronization (vertical synchronization, VSync) signal generated by the hardware synthesizer (the hardware VSync signal, i.e., the VSync-HW signal), a corresponding software signal to the camera application, specifically the VSync-APP signal, and a corresponding software signal to SurfaceFlinger, specifically the VSync-SF signal. The use of the VSync-HW, VSync-APP, and VSync-SF signals is described below in the preview stream display flow and will not be repeated here.
With continued reference to FIG. 2, the kernel layer in the Android system is the layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, a sensor driver, and the like.
For example, the camera driver may be configured to issue an image preview request issued by the camera application through the camera service to the camera, and further drive the camera to collect image frames according to the image preview request issued continuously.
Illustratively, the camera driver may also be configured to transmit the raw image stream continuously reported by the camera to the camera provider in the HAL layer for processing.
Illustratively, the display driver is used to drive the display screen to display the synthesized image by the hardware synthesizer.
As to the software structure of the terminal device 100, it will be understood that the layers and the components included in the layers in the software structure shown in fig. 2 do not constitute a specific limitation on the terminal device 100. In other embodiments of the present application, terminal device 100 may include more or fewer layers than shown, and more or fewer components may be included in each layer, as the present application is not limited.
For ease of understanding, some concepts related to the embodiments of the present application are described below by way of example for reference.
1. Frame: refers to a single picture of the minimum unit in the interface display. A frame is understood to mean a still picture, and displaying a plurality of successive frames in rapid succession may create the illusion of object motion.
2. Frame rate: refers to the number of frames in which a picture is refreshed in 1 second, and can also be understood as the number of times a graphics processor in the terminal device refreshes a picture per second. A high frame rate may result in a smoother and more realistic animation. The more frames per second, the smoother the displayed motion.
It should be noted that, before the frame is displayed on the interface, the process of drawing, rendering, synthesizing, etc. is usually required.
3. Frame drawing: refers to picture drawing of the display interface. The display interface may be composed of one or more views; each view may be drawn by a visual control of the view system, and each view is composed of sub-views. A sub-view corresponds to a widget in the view; for example, one sub-view corresponds to a symbol in a picture view.
4. Frame rendering: refers to performing coloring operations on the drawn view, adding 3D effects, and the like. For example, the 3D effect may be a lighting effect, a shadow effect, a texture effect, etc.
5. Frame synthesis: refers to the process of combining one or more rendered views into a display interface.
6. Vertical synchronization (VSync) signal: a signal used to control the initiation of processes such as frame drawing and rendering, synthesis, and sending for display.
It should be noted that, in order to ensure the smoothness of display and avoid the phenomenon of display blocking, the terminal device generally performs display based on the VSync signal, so as to synchronize the processes of drawing, rendering, synthesizing, refreshing and displaying the screen.
It will be appreciated that the VSync signal is a periodic signal, and the VSync signal period may be set according to the screen refresh rate. For example, when the screen refresh rate is 60Hz, the VSync signal period may be 16.6ms, that is, the terminal device generates a control signal every 16.6ms to trigger the VSync signal period. As another example, at a screen refresh rate of 90Hz, the VSync signal period may be 11.1ms, i.e., the terminal device generates a control signal every 11.1ms to trigger the VSync signal period.
In addition, it should be noted that the VSync signal includes software VSync signals, such as the VSync-APP signal and the VSync-SF signal described above, and hardware VSync signals, such as the VSync-HW signal described above. The VSync-APP signal is used to trigger the drawing and rendering flow; the VSync-SF signal is used to trigger the synthesis flow; the VSync-HW signal is used to trigger the screen display refresh flow. Typically, the software VSync and hardware VSync signals remain periodically synchronized. Taking a change between 60Hz and 120Hz as an example, if the VSync-HW signal is switched from 60Hz to 120Hz, the VSync-APP signal and the VSync-SF signal are synchronously switched from 60Hz to 120Hz as well.
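To make the relationship between the screen refresh rate and the VSync signal period concrete, a minimal illustrative calculation is given below by way of example only; the class and method names are hypothetical and do not correspond to actual interfaces of the Android system.

```java
// Minimal sketch: the VSync signal period is the reciprocal of the screen refresh rate,
// matching the examples above (60 Hz -> about 16.6 ms, 90 Hz -> about 11.1 ms).
public class VsyncPeriodSketch {
    static double periodMs(int refreshRateHz) {
        return 1000.0 / refreshRateHz;
    }

    public static void main(String[] args) {
        for (int hz : new int[] {60, 90, 120}) {
            System.out.printf("refresh rate %d Hz -> VSync period %.2f ms%n", hz, periodMs(hz));
        }
    }
}
```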
The preview stream display process will be described below by taking an application having a photographing function as a camera application of the terminal device.
Referring to fig. 3, by way of example, when an operation acting on the camera application is received, in response to the operation, during the starting of the camera application, a process corresponding to the camera application (hereinafter described as the camera process) is created; the camera service, the cache queue corresponding to the camera application (CameraBufferQueue), the BBQ, SurfaceFlinger, the CameraProvider, the HWC, DispSync, and the like are instantiated; and the layers of the controls to be displayed, such as the layers of control 1, control 2, control 3, control 4, and the like in fig. 3, are created in the camera process.
With continued reference to fig. 3, illustratively, the HWC periodically generates the VSync-HW signal according to the screen refresh frequency, and DispSync allocates a corresponding VSync-APP signal to the user interface (UI) thread corresponding to the camera application according to the periodically generated VSync-HW signal and periodically sends the generated VSync-APP signal to the UI thread of the camera application, so that the UI thread can trigger drawing and rendering processing after receiving the VSync-APP signal, such as drawing and rendering for control 1, control 2, control 3, control 4, and the like.
With continued reference to fig. 3, illustratively, DispSync further allocates a corresponding VSync-SF signal to SurfaceFlinger according to the periodically generated VSync-HW signal and periodically sends the generated VSync-SF signal to SurfaceFlinger, so that SurfaceFlinger can acquire an image frame' from the CameraBufferQueue through the BBQ after receiving the VSync-SF signal, and then perform synthesis processing on the acquired image frame'.
With continued reference to fig. 3, the image frames' buffered in the CameraBufferQueue are, specifically, the image frames in the original image stream collected by the camera after being processed by the CameraProvider. The original image stream is obtained by the camera continuously collecting images according to the image preview requests (Request) generated by the camera application, as shown in fig. 3.
For example, the image preview request may be generated by the camera application, after the camera application is started, according to the shooting mode it is currently in.
It can be understood that the detection algorithms and image processing logic corresponding to different shooting modes may be different, so that an image preview request is generated according to the shooting mode, and when the image preview request is transmitted to the camera provider through the camera service, the camera provider can know which detection and processing are performed on the image frames in the original image stream acquired by the camera according to the image preview request.
With continued reference to fig. 3, after receiving an image preview Request transmitted by a camera service, the camera provider transmits the image preview Request to a camera driver, and then the camera driver issues the image preview Request to the camera, and drives the camera to collect image frames according to the continuously issued image preview Request (Request).
With continued reference to fig. 3, after the camera acquires an image frame according to a continuously issued image preview Request (Request), the camera continuously reports the image frame in the original image stream to the camera driver, and the camera driver transmits the image frame to the camera provider according to a fixed frame rate, and the camera provider performs corresponding detection and processing on the image frame.
Taking the currently selected shooting mode being the portrait mode as an example, if the detection algorithm corresponding to the portrait mode involves a face detection algorithm, the CameraProvider can perform face detection on the image frame based on the face detection algorithm, and further determine feature point information of the face in the currently processed image frame, such as coordinate information of the position of the face, so as to frame the face with a set shape according to the determined coordinate information. Accordingly, the image frame with the face finally framed is the image frame' shown in fig. 3.
Illustratively, the CameraProvider transmits the processed image frame' to the camera service, which adds the image frame' to the CameraBufferQueue, i.e., the queue (enqueue) operation in fig. 3.
From the above description, it can be seen that the synthesis by SurfaceFlinger of the image frames' buffered in the CameraBufferQueue is triggered based on the VSync-SF signal periodically generated by DispSync. Therefore, after receiving the VSync-SF signal, SurfaceFlinger acquires the buffered image frame' from the CameraBufferQueue through the BBQ, i.e., the acquire operation in fig. 3, and the BBQ synchronizes to SurfaceFlinger the timestamp at which each acquired image frame' was added to the CameraBufferQueue, i.e., the Apply/setTransactionState operation in fig. 3.
With continued reference to fig. 3, illustratively, if SurfaceFlinger obtains a buffered image frame' from the CameraBufferQueue through the BBQ, the image frame' is subjected to synthesis processing to obtain a synthesized image corresponding to the image frame', and the obtained synthesized image is transmitted to the HWC. Meanwhile, SurfaceFlinger transmits an instruction for destroying (release in fig. 3) the image frame' to the BBQ, the BBQ transmits the destroy instruction to the CameraBufferQueue, and the CameraBufferQueue notifies the CameraService to call the dequeue function so as to delete the image frame' cached in the CameraBufferQueue.
It will be appreciated that the dequeue function removes the element at the head of the specified queue, i.e., it deletes the image frame' at the head of the queue. Since SurfaceFlinger acquires the buffered image frame' from the head of the CameraBufferQueue through the BBQ, after the image is synthesized, the used image frame' can be deleted from the CameraBufferQueue by calling the dequeue function in the above manner.
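To make the enqueue/acquire/release interaction around the CameraBufferQueue easier to follow, a simplified, hypothetical sketch is given below; it is not the actual BufferQueue implementation of the Android framework, and all names and values are illustrative only.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified, illustrative model of the queue behavior described above: the camera
// service enqueues processed frames at the tail, SurfaceFlinger (through the BBQ)
// acquires the frame at the head for synthesis, and after synthesis the used head
// frame is removed, corresponding to the dequeue described above.
public class FrameBufferQueueSketch {
    private final Deque<Long> queue = new ArrayDeque<>(); // enqueue timestamps of pending frames (ms)

    void enqueue(long enqueueTimeMs) {   // camera service adds a processed frame'
        queue.addLast(enqueueTimeMs);
    }

    Long acquireHead() {                 // BBQ reads the head frame and its timestamp for SurfaceFlinger
        return queue.peekFirst();
    }

    void releaseHead() {                 // after synthesis, the used head frame is deleted
        queue.pollFirst();
    }

    public static void main(String[] args) {
        FrameBufferQueueSketch q = new FrameBufferQueueSketch();
        q.enqueue(0);
        q.enqueue(33);
        System.out.println("head frame enqueued at " + q.acquireHead() + " ms");      // 0 ms
        q.releaseHead();                 // the head frame is removed after it has been synthesized
        System.out.println("next head frame enqueued at " + q.acquireHead() + " ms"); // 33 ms
    }
}
```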
With continued reference to FIG. 3, the HWC may illustratively send the composite image issued by SurfaceFlinger to a display driver, which drives the display to display the corresponding frame of the composite image.
Taking control 1 in the camera application as an example of a surface view (SurfaceView) control for displaying the preview image, the picture obtained according to the above processing is finally displayed in control 1 shown on the display screen.
It will be appreciated from the above description that the synthesis operation on the image frame' by the SurfaceFlinger thread (SF thread) is triggered based on the VSync-SF signal, while the operation of finally driving the display screen to display the picture corresponding to the image frame' is triggered based on the VSync-HW signal, and the interval between these two signals is usually fixed, such as one VSync signal period. Referring to fig. 4, for example, it is assumed that the time interval between any two adjacent time points of t1 to t8 is the same, such as one VSync signal period. If the SF thread receives a VSync-SF signal at each time point from t1 to t8, the corresponding image frames' are acquired through the BBQ for synthesis.
With continued reference to fig. 4, illustratively, in the absence of frame loss, the synthesis of each frame is completed within a fixed duration, such as within one VSync signal period: the synthesis of frame 1 is completed in [t1, t2), the synthesis of frame 2 in [t2, t3), the synthesis of frame 3 in [t3, t4), the synthesis of frame 4 in [t4, t5), the synthesis of frame 5 in [t5, t6), the synthesis of frame 6 in [t6, t7), the synthesis of frame 7 in [t7, t8), and so on. The VSync-HW signal lags the VSync-SF signal by one VSync signal period, so a VSync-HW signal is received at time point t2; at this time, since the SF thread has completed the synthesis of frame 1, the hardware synthesizer outputs the synthesized frame 1 received from the SF thread and sends it to the display driver for display, that is, the display driver drives the display screen to display the picture corresponding to frame 1 in the VSync signal period [t2, t3).
With continued reference to fig. 4, after the next VSync-HW signal is received at time t3, since the SF thread has completed the synthesis of frame 2, the hardware synthesizer receives the synthesized frame 2 from the SF thread and sends it to the display driver for display, i.e., the display driver drives the display screen to display the picture corresponding to frame 2 in the VSync signal period [t3, t4). Based on this principle, the display driver drives the display screen to display the picture corresponding to frame 3 in the VSync signal period [t4, t5), the picture corresponding to frame 4 in [t5, t6), the picture corresponding to frame 5 in [t6, t7), the picture corresponding to frame 6 in [t7, t8), and so on. The corresponding changes of the display interface of the terminal device (for example, a mobile phone) are shown in fig. 5a to 5c.
Referring to fig. 5a (1), an interface 10a for a cell phone is shown by way of example. In this embodiment, taking the interface 10a as a desktop of the mobile phone as an example, one or more application icons, such as an icon of a clock application, an icon of a calendar application, an icon of a gallery application, an icon of a memo application, an icon of a file management application, an icon of an email application, an icon of a music application, an icon of a calculator application, an icon of a video application, an icon of a recorder application, an icon of a weather application, an icon of a browser application, an icon of a setting application, an icon of an address book application, an icon of a telephone application, an icon of an information application, an icon (control) 10a-1 of a camera application, and the like, may be displayed on the exemplary interface 10a. Wherein, when the user clicks the control 10a-1 in the interface 10a, the photographing function can be implemented using the camera application.
With continued reference to fig. 5a (1), for example, when the user clicks the control 10a-1 in the interface 10a, the mobile phone responds to the operation behavior, and recognizes that the control corresponding to the clicking operation of the user is the control 10a-1 of the camera application, and further invokes the corresponding interface in the application framework layer to start the camera application, and drives the camera to acquire (collect) the image by invoking the camera driver of the kernel layer. At this time, the mobile phone switches from the interface 10a to the interface 10b shown in fig. 5a (2).
Referring to fig. 5a (2), for example, one or more controls may be included in the interface 10b, such as a control 10b-1 for displaying preview images, a control 10b-2 for triggering photographing, a control 10b-3 for switching front and rear cameras, and a control 10b-4 for selecting a photographing mode.
With continued reference to FIG. 5a (2), one or more sets of capture modes, such as aperture mode, night view mode, portrait mode, photo mode, video mode, smiling face mode, etc., may be provided in the control 10b-4, for example.
For example, if there are other modes of shooting, when all the modes of shooting cannot be displayed in the area of the control 10b-4, a "more" option may be provided, and when the user operates the option, the mobile phone responds to the operation, and a selection list may be popped up on the control 10b-4 for the user to slide to select other modes of shooting.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment. In practical applications, the user interface form may be set according to needs, and is not limited herein.
Illustratively, after the camera application is started, if the SF thread completes the synthesis of frame 1 in [t1, t2), of frame 2 in [t2, t3), of frame 3 in [t3, t4), of frame 4 in [t4, t5), of frame 5 in [t5, t6), of frame 6 in [t6, t7), and of frame 7 in [t7, t8), then in [t2, t3) the picture 1 corresponding to frame 1 is normally displayed in the control 10b-1 for displaying the preview image, as shown in (2) of fig. 5a. Accordingly, in [t3, t4), the picture 2 corresponding to frame 2 is displayed in the control 10b-1, as shown in (1) of fig. 5b. Accordingly, in [t4, t5), the picture 3 corresponding to frame 3 is displayed in the control 10b-1, as shown in (2) of fig. 5b. Accordingly, in [t5, t6), the picture 4 corresponding to frame 4 is displayed in the control 10b-1, as shown in (1) of fig. 5c. Accordingly, in [t6, t7), the picture 5 corresponding to frame 5 is displayed in the control 10b-1, as shown in (2) of fig. 5c. Thereby, the display of the preview image on the preview interface of the camera application is realized.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
However, fig. 4 and fig. 5a to 5c only show the ideal situation, i.e., no frame loss and no timeout of the synthesis processing. In practice, as users' shooting requirements keep rising, the number of shooting pixels increases and the detection and processing steps involved in the shooting modes grow, so the duration of the image processing flow becomes longer and the send-display time becomes uneven. Moreover, during the use of the terminal device, damage to hardware such as the camera and the display screen may also make the transmission and display of image frames unstable, so that the preview interface appears stuck.
As shown in fig. 6, a schematic diagram of how the SF thread and the display driver process image frames in the case of an SF-thread synthesis timeout and frame loss is exemplarily shown. Again, the time interval between any two adjacent time points from t1 to t8 is the same, for example one VSync signal period. In [t1, t2) the SF thread synthesizes frame 1 normally, and in [t2, t3) it synthesizes frame 2 normally; however, in [t3, t4) frame 3 is not yet synthesized, and its synthesis is completed only after t4 and before t5, so frame 4, which should originally have been synthesized at t4, is lost, while the subsequent frames 5, 6, and 7 are processed normally as shown in fig. 4.
For the synthesis processing of the image frames' by the SF thread shown in fig. 6, illustratively, if the SF thread completes the synthesis of frame 1 in [t1, t2), of frame 2 in [t2, t3), of frame 3 in [t3, t5), of frame 5 in [t5, t6), of frame 6 in [t6, t7), and of frame 7 in [t7, t8), then, as shown in fig. 6, the picture 1 corresponding to frame 1 is displayed in the control 10b-1 for displaying the preview image in [t2, t3), as shown in (1) of fig. 7a. In [t3, t4), the control 10b-1 displays the picture 2 corresponding to frame 2; and since the SF thread has not completed the synthesis of frame 3 when the VSync-HW signal is received at time point t4, the control 10b-1 still displays the picture 2 corresponding to frame 2 in [t4, t5), i.e., the picture 2 corresponding to frame 2 is displayed in the control 10b-1 during the whole of [t3, t5), as shown in (2) of fig. 7a.
With continued reference to fig. 6, upon receiving the VSync-SF signal at time t5, the SF thread can begin synthesizing frame 5, because the synthesis of frame 3 has already been completed. Correspondingly, the display driver can acquire the picture 3 corresponding to frame 3 synthesized by the SF thread at time point t5, and drive the display screen to display the picture corresponding to frame 3 in [t5, t6).
With continued reference to fig. 6, since the SF thread completes the synthesis of frame 5 within [t5, t6), the display driver can acquire the picture 5 corresponding to frame 5 synthesized by the SF thread at time point t6, and drive the display screen to display the picture corresponding to frame 5 in [t6, t7).
That is, in the scenario shown in fig. 6, 7a, and 7b, the picture displayed in the control 10b-1 of the interface 10b will be stuck (the picture corresponding to frame 2 lasts for two VSync signal periods) and will jump (from picture 3 to picture 5). However, there is currently no specific quantitative standard for the effect of video preview and photo preview; most results are evaluated through subjective feeling, so different people hold different views of the same preview image. As a result, such problems are not found in time in the testing, research and development, and production stages, and terminal devices and camera applications with these problems are put on the market, which in turn affects the consumer experience.
In view of this, the present application provides a method for determining the cause of preview blocking. Each processing node, from the camera outputting frames (the collected image frames being transmitted to the CameraProvider through the camera driver) to the final display on the display screen, is detected and quantified, and the cause of picture blocking and jumping is then determined according to the quantification result of each processing node. This ensures the accuracy of the determined preview blocking cause, so that before the product is released an optimization strategy can be formulated according to the accurate preview blocking cause and optimization performed, thereby reducing blocking, improving fluency, and improving the user experience.
In addition, based on the method for determining the preview blocking cause provided in the embodiment of the present application, in the after-sales stage after the product is released, after-sales personnel can also determine the accurate preview blocking cause according to the method, locate the problem in time, and maintain the terminal device or update and iterate the camera application with the shooting function and the system of the terminal device.
As can be seen from the description of the preview stream display flow shown in fig. 3, from collection to final display in the preview interface, an image frame mainly passes through: the processing link in which the camera transmits the collected image frames to the CameraProvider through the camera driver (hereinafter referred to as the frame-out link); the link in which the CameraProvider processes the original image frames; the processing link in which the camera service adds the image frames processed by the CameraProvider to the CameraBufferQueue (hereinafter referred to as the buffer link); the link in which SurfaceFlinger extracts the image frames' from the CameraBufferQueue through the BBQ and synthesizes them (hereinafter referred to as the synthesis link); and the link in which the hardware synthesizer transmits the frames synthesized by SurfaceFlinger to the display driver so as to drive the display screen to display the synthesized frames (hereinafter referred to as the send-display link). Therefore, the method for determining the preview blocking cause provided in the embodiment of the present application mainly starts from these 4 links to determine the preview blocking cause leading to picture blocking and jumping in the preview interface. That is, the preview blocking cause is mainly determined by the detection and quantification results of the 4 links, namely frame-out link detection, buffer link detection, send-display link detection, and synthesis link detection.
Referring to fig. 8, a schematic diagram of detecting and determining the reason for the preview jam for the above 4 links is shown.
Illustratively, the detection of the frame-out link specifically means determining whether a frame has been lost.
It should be noted that the camera samples at a fixed sampling frequency (e.g., 30 fps), which is absolutely uniform. If a sampled image frame is not taken out in time by the camera driver and transmitted to the CameraProvider, the subsequent image frame will overwrite the existing but not-yet-taken image frame, which causes frame loss. After frame loss occurs, even if the subsequent links are processed normally, the preview image displayed on the preview interface will suffer picture blocking because of the missing image frame. As shown in fig. 9, if the camera application issues a Request1 for acquiring a preview image at time T1, after Request1 reaches the camera driver of the kernel layer, the camera driver drives the camera to collect image frames according to Request1 and the fixed sampling frequency, and outputs the collected image frames to the CameraProvider of the HAL layer for processing.
With continued reference to fig. 9, illustratively, the camera driver outputs the currently collected frame 1 to the CameraProvider, and the camera then continues to collect image frames, such as frame 2, frame 3, etc., at the fixed collection frequency. While the image frames are being continuously collected, the CameraProvider detects and processes frame 1 transmitted by the camera driver based on the detection algorithm, image processing algorithm, and the like corresponding to the current shooting mode, and transmits the processed frame 1 (hereinafter referred to as frame 1') to the camera service of the FWK layer; the camera service adds frame 1' to the CameraBufferQueue, and after receiving the VSync-SF signal, the SF thread subsequently obtains frame 1' from the CameraBufferQueue through the BBQ, and frame 1' is transmitted to the SF thread for synthesis processing.
With continued reference to fig. 9, after completing the synthesis of the frame 1', the SF thread transmits the frame 1' after the synthesis to the hardware synthesizer, and then the hardware synthesizer transmits the frame to the display driver, and the display driver drives the display screen to display the picture corresponding to the frame 1', that is, the picture corresponding to the frame 1' is displayed in the preview interface of the camera application.
With continued reference to fig. 9, illustratively, after the camera driver outputs frame 1, the camera continues to collect at the fixed sampling frequency and uploads frame 2 to the camera driver. However, before the picture corresponding to the synthesized frame 1' is displayed on the preview interface of the camera application, the camera driver has received frame 2 but has not yet received a new preview image request such as Request2. Therefore, frame 2 is not transmitted to the CameraProvider; by the time Request2 is received, frame 3 has already been received and frame 2 is discarded, so after Request2 is received it is frame 3 that is transmitted to the CameraProvider for processing, finally synthesized by the SF thread, and then sent for display.
Based on this principle, whether the frame-out link loses frames can be determined by judging the continuity of the image frames output by the camera driver.
Accordingly, if it is determined that frame loss has occurred, it can be determined that the preview blocking causing the picture jam is at least due to frame loss at the camera driver (the display is not smooth); otherwise, if no frame loss occurs, the picture blocking is unrelated to the camera driver, i.e., there is no problem in the frame-out link.
Illustratively, the continuity of the image frames output by the camera driver may, in some implementations, be determined from the timestamps of two adjacently output image frames and the sampling frequency followed by the camera.
It will be appreciated that each time the camera driver outputs an image frame, a corresponding timestamp is added to it, and since the sampling frequency followed by the camera is fixed, the time interval between any two adjacent frames uploaded by the camera to the camera driver is fixed, for example T. Therefore, each time the camera driver transmits an image frame to the CameraProvider, the timestamp added to the currently transmitted image frame, i.e., current_time, and the timestamp corresponding to the adjacent image frame transmitted before it, i.e., history_time, are acquired. It is then determined whether the time interval T' between history_time and current_time is greater than T.
Accordingly, if T' is greater than T, it is considered that one or several image frames were lost between history_time and current_time. Otherwise, no frame loss exists in the frame-out link, and the current picture blocking is unrelated to the camera driver.
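By way of example only, the following is a minimal sketch of the frame-loss judgment described above. It assumes that the timestamps added by the camera driver to the output image frames have been collected in advance (for example, from a trace); the class name, the tolerance value, and the example data are hypothetical.

```java
// Minimal sketch of frame-out link detection: if the interval T' between two adjacently
// output timestamps exceeds the fixed sampling interval T (plus a small tolerance for
// timestamp jitter), one or more frames were lost in the frame-out link.
public class OutFrameLinkDetector {
    // timestampsMs: timestamps of the frames output by the camera driver, in output order.
    // sampleIntervalMs: fixed interval T given by the sampling frequency (e.g. 33.33 ms at 30 fps).
    static boolean hasFrameLoss(double[] timestampsMs, double sampleIntervalMs, double toleranceMs) {
        for (int i = 1; i < timestampsMs.length; i++) {
            double interval = timestampsMs[i] - timestampsMs[i - 1]; // T' = current_time - history_time
            if (interval > sampleIntervalMs + toleranceMs) {
                return true; // at least one frame lost between these two outputs
            }
        }
        return false;
    }

    public static void main(String[] args) {
        double[] trace = {0.0, 33.3, 66.7, 133.3, 166.7}; // hypothetical driver output timestamps (ms)
        System.out.println("frame loss detected: " + hasFrameLoss(trace, 33.33, 5.0)); // true: gap 66.7 -> 133.3
    }
}
```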
With continued reference to fig. 8, when there is no frame loss, that is, there is no problem in the frame-out link, the send-display link may be further detected. The send-display link detection may, for example, determine whether the send-display time is uniform.
In some implementations, in the case that there is no problem in the frame-out link, the hardware synthesizer may record, for the image frames synthesized by the SF thread and delivered to the display driver, the display duration of each image frame when the display driver drives the display screen to display it, so as to determine whether the send-display time is uniform according to the display duration corresponding to each sent image frame.
Referring to fig. 10, a schematic diagram of some of the displayed image frames and their corresponding send-display time intervals is illustrated. Illustratively, in fig. 10 the abscissa indicates the image frames to be displayed, and the ordinate indicates the send-display time interval (for example, in ms). The send-display time intervals of the 19th, 82nd, and 172nd frames, i.e., frame 19, frame 82, and frame 172, are about 66.66 ms, while the send-display time intervals of the other frames are about 33.33 ms.
It will be appreciated that if the schematic diagram between the image frames and the corresponding send-display time intervals is a straight line, for example all image frames have a send-display time interval of about 33.33 ms, the send-display is uniform, i.e., there is no problem in the send-display link. On the contrary, if the diagram is not a straight line, as in fig. 10 where the send-display time intervals of frame 19, frame 82, and frame 172 are larger than the normal 33.33 ms, the picture displayed in the current preview interface is stuck when these three frames are displayed. In this case, to determine whether the preview blocking cause lies in the display screen hardware, and given that the frame-out link is normal, the synthesis link needs to be further detected.
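A sketch of the send-display link detection is given below by way of example, under the assumption that the per-frame display intervals recorded by the hardware synthesizer are already available; the class name, the threshold, and the example data are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of send-display link detection: flag the frames whose send-display
// interval clearly exceeds the expected interval (e.g. about 33.33 ms at 30 fps),
// which is how frames such as 19, 82 and 172 in the example above would be found.
public class SendDisplayDetector {
    static List<Integer> stuckFrames(double[] displayIntervalsMs, double expectedMs, double toleranceMs) {
        List<Integer> stuck = new ArrayList<>();
        for (int i = 0; i < displayIntervalsMs.length; i++) {
            if (displayIntervalsMs[i] > expectedMs + toleranceMs) {
                stuck.add(i); // this frame stayed on screen too long -> visible jam
            }
        }
        return stuck;
    }

    public static void main(String[] args) {
        double[] intervals = {33.3, 33.4, 66.7, 33.2, 33.3}; // hypothetical per-frame display intervals (ms)
        System.out.println("stuck frame indices: " + stuckFrames(intervals, 33.33, 10.0)); // [2]
    }
}
```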
Specifically, if the frames at which blocking occurs in the diagram of image frames versus send-display time intervals for the send-display link are different from the frames at which blocking occurs in the diagram of image frames versus synthesis time for the synthesis link, the preview blocking cause of the stuck frames in the send-display diagram comes from the display screen. That is, in fig. 8, when the send-display time is not uniform but the synthesis time is uniform, the preview blocking comes from the display screen.
With continued reference to fig. 8, in the case where the send-display time is not uniform, the synthesis link may be further detected. The synthesis link detection may, for example, determine whether the synthesis time is uniform.
For example, in some implementations, when the frame-out link has no problem and the send-display link has already been detected, the synthesis time taken by the SF thread to synthesize each image frame can be recorded, and whether the synthesis time is uniform is then determined according to the synthesis time corresponding to each synthesized image frame.
Referring to fig. 11, a schematic diagram of some of the image frames and their corresponding synthesis times is illustrated. Illustratively, in fig. 11 the abscissa is the image frames subjected to synthesis processing, and the ordinate is the synthesis time (for example, in ms). The synthesis time corresponding to the 80th and 169th frames, i.e., frame 80 and frame 169, is about 66.66 ms, while the synthesis time corresponding to the other frames is between 30 ms and 40 ms, tending to 33.33 ms.
It can be understood that if the schematic diagram between the image frames and the corresponding synthesis times is a straight line, for example the synthesis times of all image frames are between 30 ms and 40 ms and tend to 33.33 ms, the synthesis is uniform, that is, the synthesis link has no problem. On the contrary, if the diagram is not a straight line, as in fig. 11 where the synthesis times of frame 80 and frame 169 fall outside the normal synthesis time interval of 30 ms to 40 ms, the picture in the current preview interface is stuck at these two frames. As can be seen from a comparison of fig. 10 and fig. 11, the frames at which blocking occurs in fig. 10 are different from those in fig. 11, so it can be determined that the preview blocking caused by the three frames 19, 82, and 172 shown in fig. 10 comes from the display screen.
For example, if the frames at which blocking occurs in the diagram of image frames versus send-display time intervals for the send-display link in fig. 10 were the same as the frames at which blocking occurs in the diagram of image frames versus synthesis time for the synthesis link in fig. 11, the blocking of the picture in the current preview interface would not come from the display screen, i.e., the display screen would be normal.
With continued reference to fig. 11, since the frames that are stuck in fig. 11 are different from those in fig. 10, it is necessary in this case to determine whether the preview blocking cause of the two frames, frame 80 and frame 169 in fig. 11, comes from the synthesis link, so the buffer queue needs to be further detected.
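The comparison between the send-display link and the synthesis link described above can be expressed as a simple set comparison. The sketch below is illustrative only; the class name is hypothetical, and the frame numbers are those from the examples above.

```java
import java.util.Set;

// Illustrative comparison of the quantified stuck-frame positions of two links.
// If the send-display link and the synthesis link report different stuck frames,
// the jams seen at send-display time are attributed to the display screen; if they
// report the same frames, the display screen is not the cause.
public class StuckFrameComparison {
    static boolean sameStuckFrames(Set<Integer> sendDisplayStuck, Set<Integer> synthesisStuck) {
        return sendDisplayStuck.equals(synthesisStuck);
    }

    public static void main(String[] args) {
        Set<Integer> sendDisplay = Set.of(19, 82, 172); // stuck frames of the send-display link (example above)
        Set<Integer> synthesis = Set.of(80, 169);       // stuck frames of the synthesis link (example above)
        boolean displayScreenSuspected = !sameStuckFrames(sendDisplay, synthesis);
        System.out.println("display screen suspected: " + displayScreenSuspected); // true
    }
}
```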
With continued reference to fig. 8, in the case where the synthesis time is not uniform, the buffer queue may be further detected. The buffer queue detection may, for example, determine whether the enqueuing time of the image frames into the buffer queue is uniform.
The BBQ is used to synchronize the image frames in the CameraBufferQueue to the SF thread; that is, the information of the image frames that the BBQ acquires from the CameraBufferQueue and synchronizes to the SF thread is the same as when those image frames were added to the CameraBufferQueue. The CameraBufferQueue is thus the last buffer queue before the frames that the CameraProvider in the HAL layer returns via the camera service reach the SF thread. Therefore, in some implementations, when there is no problem in the frame-out link and the send-display link but the synthesis time is uneven, the buffer link is detected, that is, the enqueuing time at which each image frame, after being processed by the CameraProvider, is added by the camera service to the CameraBufferQueue, i.e., the timestamp added to each image frame when it is added to the CameraBufferQueue. Because this timestamp is synchronized to the SF thread by the BBQ, by judging the enqueuing time interval between two adjacent image frames in the CameraBufferQueue, i.e., judging whether the enqueuing time intervals of every two adjacent image frames are uniform, it can be determined, in the case where the synthesis time has been found to be non-uniform when the preview is abnormal, whether the cause of the non-uniform synthesis time is a problem of the synthesis link itself or a problem of frame return.
It can be understood that, because the detection algorithms and image processing algorithms corresponding to different shooting modes differ, and the shooting objects also differ, the processing time of the CameraProvider differs as well, so different image frames are output after CameraProvider processing, and added to the CameraBufferQueue, at different times. As shown in fig. 12, a time diagram of adding some of the image frames processed by the CameraProvider to the CameraBufferQueue is exemplarily shown. Illustratively, in fig. 12 the abscissa is the image frames added to the CameraBufferQueue after being processed by the CameraProvider, and the ordinate is the time interval (for example, in ms) between each frame being added to the CameraBufferQueue and its adjacent preceding frame, i.e., the enqueuing time interval.
With continued reference to fig. 12, it is not difficult to find that, because the length of time the CameraProvider needs to process each image frame differs, the enqueuing time intervals at which frames are added to the CameraBufferQueue also differ. Therefore, the influence on the synthesis flow can hardly be seen directly from the frame-return curve shown in fig. 12. Thus, in some implementations, the frame-return curve shown in fig. 12 may be fitted to obtain a frame-return curve similar in pattern to those shown in fig. 10 and fig. 11.
For example, regarding the fitting of the frame-return curve shown in fig. 12: when the SF thread receives a VSync-SF signal (this VSync-SF signal is regarded as the first VSync-SF signal), it periodically detects, according to the time corresponding to the currently received VSync-SF signal and the issuing period followed by the VSync-SF signal (such as one VSync signal period), whether there is an image frame to be processed in the CameraBufferQueue; that is, when the image frames cached in the CameraBufferQueue are acquired through the BBQ, it is first determined whether there is an image frame to be processed in the CameraBufferQueue.
Accordingly, if an image frame to be processed is present in the CameraBufferQueue, then within the synthesis period corresponding to the current VSync-SF signal, i.e., before the next VSync-SF signal (determined according to the first VSync-SF signal and the fixed VSync signal period, hereinafter referred to as the second VSync-SF signal) is received, the image frame at the head of the queue is taken out of the CameraBufferQueue through the BBQ for synthesis processing. After that image frame is processed, when the second VSync-SF signal arrives, if an image frame to be processed is again detected in the CameraBufferQueue, the difference between the enqueuing times of the image frame detected at the first VSync-SF signal and the image frame detected at the second VSync-SF signal is used as the ideal synthesis time corresponding to the image frame detected at the first VSync-SF signal.
Accordingly, after the synthesis of the image frame detected at the first VSync-SF signal is completed, the second VSync-SF signal can be regarded as a new first VSync-SF signal, and the VSync-SF signal that next satisfies the synthesis condition after the new first VSync-SF signal can be regarded as a new second VSync-SF signal, and so on according to this processing logic. Based on this processing logic, the frame-return curve shown in fig. 12 can be converted into the frame-return curve shown in fig. 13.
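Under one possible simplified reading of the fitting described above, it can be sketched as follows: the VSync-SF ticks are simulated at a fixed period, the frame detected at each tick is the earliest enqueued frame whose enqueue timestamp has already arrived, and the ideal synthesis time of a frame is taken as the enqueue-time difference to the next detected frame. The class name and the example data are hypothetical; this is an illustrative sketch rather than the actual fitting procedure.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, hypothetical reconstruction of the "ideal synthesis time" fitting:
// walk the VSync-SF ticks, detect at each tick the next frame whose enqueue timestamp
// has already arrived, and use the enqueue-time difference between two consecutively
// detected frames as the ideal synthesis time of the earlier one.
public class BufferLinkFitting {
    static List<Double> idealSynthesisTimesMs(double[] enqueueTimesMs, double vsyncPeriodMs) {
        List<Double> detectedEnqueueTimes = new ArrayList<>();
        int next = 0;
        // Scan VSync-SF ticks until every enqueued frame has been detected once.
        for (double tick = 0; next < enqueueTimesMs.length; tick += vsyncPeriodMs) {
            if (enqueueTimesMs[next] <= tick) {
                detectedEnqueueTimes.add(enqueueTimesMs[next]); // frame pending at this tick
                next++;
            }
        }
        List<Double> idealTimes = new ArrayList<>();
        for (int i = 1; i < detectedEnqueueTimes.size(); i++) {
            idealTimes.add(detectedEnqueueTimes.get(i) - detectedEnqueueTimes.get(i - 1));
        }
        return idealTimes;
    }

    public static void main(String[] args) {
        // Hypothetical enqueue timestamps: the third frame returns late (67.6 ms after the second).
        double[] enqueue = {0.0, 33.3, 100.9, 134.2, 167.5};
        System.out.println(idealSynthesisTimesMs(enqueue, 33.33)); // the late frame stands out (~67.6 ms)
    }
}
```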
Referring to fig. 13, illustratively, the abscissa is the image frames added to the CameraBufferQueue after being processed by the CameraProvider, and the ordinate is the ideal synthesis time described above (for example, in ms). The ideal synthesis time corresponding to the 80th and 169th frames, i.e., frame 80 and frame 169, is about 66.66 ms, while the ideal synthesis time corresponding to the other frames is between 30 ms and 40 ms, tending to 33.33 ms.
It will be appreciated that if the schematic diagram between the image frames and the corresponding ideal synthesis times is a straight line, for example the ideal synthesis times of all image frames are between 30 ms and 40 ms and tend to 33.33 ms, the time at which the image frames are added to the CameraBufferQueue is uniform; in that case, if the picture is stuck, the problem lies in the synthesis link. In contrast, if the diagram is not a straight line, as in fig. 13 where the ideal synthesis times of frame 80 and frame 169 fall outside the normal interval of 30 ms to 40 ms, the image frames are added to the CameraBufferQueue at non-uniform times; in that case, if the picture is stuck, the blocking is caused by frame return, i.e., the image frames output by the CameraProvider have a problem in the buffer link. As can be seen from a comparison between fig. 11 and fig. 13, the frames at which blocking occurs in fig. 11 are the same as those in fig. 13, so it can be determined that the blocking caused by frame 80 and frame 169 in fig. 11 is not caused by the synthesis link itself: since the stuck frames of the synthesis link in fig. 11 are the same as those in fig. 13, the cause is the non-uniform time of adding image frames to the CameraBufferQueue, that is, a frame-return problem.
For example, if the frame-return curve shown in fig. 13 were a straight line, that is, if the time at which the image frames are added to the CameraBufferQueue were uniform, the blocking caused by the two frames, frame 80 and frame 169 shown in fig. 11, would be a problem of the synthesis link itself and would be unrelated to the buffer link, i.e., not caused by frame return.
Therefore, by detecting the above 4 links and quantifying, in the form of curves, the blocking positions (the image frames at which blocking occurs) appearing in each link, the preview blocking cause of a given blocking position can be traced back to its source quickly and accurately along the processing order of the image frames in the preview stream display flow over the 4 links: for a blocking position that newly appears in a certain link, the preview blocking cause of that frame comes from that link, thereby accurately determining the preview blocking cause corresponding to each instance of blocking.
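By way of illustration only, the overall decision flow over the 4 links (as described above and shown in fig. 8) can be summarized as the following schematic routine. It assumes that the per-link detections have already been performed and reduced to boolean results; all names are hypothetical and the routine is a sketch of the reasoning, not an actual interface.

```java
// Schematic summary of the decision flow described above: each link has been quantified
// beforehand, and a jam is attributed to the first link, in processing order, that
// newly introduces the corresponding stuck position.
public class PreviewJamCause {
    enum Cause { CAMERA_DRIVER_FRAME_LOSS, DISPLAY_SCREEN, SYNTHESIS_LINK, FRAME_RETURN_BUFFER_LINK, NONE }

    static Cause determine(boolean outFrameLoss,
                           boolean sendDisplayUniform,
                           boolean synthesisUniform,
                           boolean enqueueUniform) {
        if (outFrameLoss) {
            return Cause.CAMERA_DRIVER_FRAME_LOSS;  // frame-out link: frames lost at the camera driver
        }
        if (sendDisplayUniform) {
            return Cause.NONE;                      // no jam visible in the send-display link
        }
        if (synthesisUniform) {
            return Cause.DISPLAY_SCREEN;            // jam appears only at send-display time
        }
        if (enqueueUniform) {
            return Cause.SYNTHESIS_LINK;            // enqueue is even, so synthesis itself is slow
        }
        return Cause.FRAME_RETURN_BUFFER_LINK;      // uneven frame return from the CameraProvider
    }

    public static void main(String[] args) {
        // Example matching the figures above: uneven send-display, synthesis, and enqueue
        // times with no frame loss -> the jam is attributed to frame return in the buffer link.
        System.out.println(determine(false, false, false, false));
    }
}
```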
Further, based on the method for determining the preview jam reason provided by the embodiment of the present application, after determining the preview jam reason, a link that causes a jam in the preview streaming display process may be optimized according to the preview jam reason.
For example, when the frame-out link, i.e., the frequency at which the camera driver outputs image frames, is 30 fps, and the synthesis link, i.e., the frequency at which the SF thread samples image frames for synthesis processing, is 60 fps, the image frames processed by the CameraProvider and added to the CameraBufferQueue by the camera service and the corresponding times (the frame-return curve) are as shown in fig. 14.
For example, in the case where the frequency of the outgoing frames is 30fps and the frequency of the synthesized frames is 60fps, the relationship between the image frames to be displayed and the corresponding display time intervals may be as shown in fig. 15.
Based on the results shown in fig. 14 and fig. 15, when the frame-out frequency is 30 fps and the synthesis frequency is 60 fps, the synthesis frequency and the frame-out frequency do not match. Thus, in the case where the preview blocking cause is determined to come from the frame-out link and the synthesis link, and the blocking in this scenario is caused by the frequency mismatch between the two, in some implementations the synthesis frequency may be modified from 60 fps to 30 fps. In this way, based on the method for determining the preview blocking cause provided in the embodiment of the present application, the relationship between the image frames to be displayed and the corresponding send-display time intervals is re-acquired, for example as shown in fig. 16.
It can be understood that, as described above for the send-display link detection, when the schematic diagram of the relationship between the sent image frames and the corresponding send-display time intervals is a straight line, the current link has no stuck frames, so the picture displayed on the preview interface will not be stuck and the picture is smooth.
That is, according to the preview blocking cause, the phenomenon of non-uniform synthesis caused by small fluctuations in frame return can be solved by always keeping the frame-out frequency and the synthesis frequency consistent.
In fig. 14, 15, and 16, the abscissa indicates the number of frames of an image frame, and the ordinate indicates the corresponding time.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
For example, in other implementations, if, according to the method for determining the preview blocking cause provided in the embodiment of the present application, the frame-return times are determined to be 0 ms, 33.33 ms, 67.6 ms, and so on, then, in the case where the frame-out frequency is 30 fps and the synthesis frequency is also 30 fps, non-uniform synthesis still occurs under such frame-return times because of the large fluctuation of the frame return, even though the frame-out frequency and the synthesis frequency are consistent. Assuming that each VSync signal period is 33.33 ms, for the above frame-return times, the frame-return time of the third image frame is 67.6 ms, which exceeds 66.66 ms (two VSync signal periods), so instead of being synthesized in the third VSync signal period the third image frame is synthesized only in the fourth VSync signal period, which appears as blocking on the display. Because the preview blocking cause here is that uneven frame return leads to uneven synthesis, i.e., a frame-return problem occurs in the buffer link, the CameraProvider can be optimized based on the determined preview blocking cause: for example, the resolution (size) of the image frames output by the camera driver is reduced so that only key information is retained, and the down-sampled image frames are then detected and processed accordingly. In this way the amount of computation per image frame is reduced, each image frame can be processed within a fixed period, and the frame-return interval is kept uniform.
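The effect of such a frame-return sequence can be checked with a small illustrative calculation. It assumes that 0 ms, 33.33 ms, and 67.6 ms are the cumulative times at which the three frames are returned (enqueued), that the VSync period is 33.33 ms, and that a frame returned exactly at a VSync-SF tick can still be synthesized in the period starting at that tick; the class name and these assumptions are hypothetical.

```java
// Small illustrative calculation for the example above: with return times of 0 ms,
// 33.33 ms and 67.6 ms and a 33.33 ms VSync period, the third frame misses the tick at
// 66.66 ms and is therefore synthesized only in the fourth VSync signal period instead
// of the third, which is seen as blocking on the preview interface.
public class FrameReturnCheck {
    public static void main(String[] args) {
        double vsyncPeriodMs = 33.33;
        double[] returnTimesMs = {0.0, 33.33, 67.6}; // cumulative time at which each frame is returned
        for (int i = 0; i < returnTimesMs.length; i++) {
            // Index of the first VSync-SF tick at or after the return time; synthesis runs in
            // the VSync period that starts at that tick (period numbering starts at 1).
            int tickIndex = (int) Math.ceil(returnTimesMs[i] / vsyncPeriodMs);
            System.out.printf("frame %d returned at %.2f ms -> synthesized in VSync period %d%n",
                    i + 1, returnTimesMs[i], tickIndex + 1);
        }
    }
}
```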
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
Furthermore, it is understood that, in order to implement the above functions, the terminal device comprises corresponding hardware and/or software modules for performing the respective functions. The steps of the algorithms of the examples described in connection with the embodiments disclosed herein may be implemented in hardware or in a combination of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends on the particular application and the design constraints imposed on the solution. Those skilled in the art may use different approaches to implement the described functionality for each particular application in conjunction with the embodiments, but such implementations should not be considered to be beyond the scope of this application.
In addition, it should be noted that, in an actual application scenario, the method for determining the preview blocking reason provided in the foregoing embodiments and implemented by the terminal device may also be performed by a chip system included in the terminal device, where the chip system may include a processor. The chip system may be coupled to a memory, so that when the chip system runs, it invokes a computer program stored in the memory to implement the steps performed by the terminal device. The processor in the chip system may be an application processor or a non-application processor.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing computer instructions which, when executed on a terminal device, cause the terminal device to execute the related method steps to implement the method for determining the preview blocking reason in the foregoing embodiments.
In addition, an embodiment of the present application further provides a computer program product which, when run on a terminal device, causes the terminal device to execute the related steps to implement the method for determining the preview blocking reason in the foregoing embodiments.
In addition, an embodiment of the present application further provides a chip (which may also be a component or module) that may include one or more processing circuits and one or more transceiver pins; the transceiver pins and the processing circuits communicate with each other through an internal connection path, and the processing circuits execute the related method steps to implement the method for determining the preview blocking reason in the foregoing embodiments, so as to control the receiving pin to receive signals and control the transmitting pin to transmit signals.
In addition, as can be seen from the foregoing description, the terminal device, the computer-readable storage medium, the computer program product, and the chip provided in the embodiments of the present application are all used to perform the corresponding methods provided above; therefore, the beneficial effects they can achieve may refer to the beneficial effects of the corresponding methods provided above and are not repeated here.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents, and such modifications and replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Claims (16)
1. A method for determining a preview blocking reason, characterized in that the method is applied to a first terminal device and comprises the following steps:
displaying an image preview interface of a target application, wherein the target application is an application with a shooting function;
in the process of displaying the image preview interface, acquiring time information of a preview image frame in each processing link of a preview image frame circulation path after the preview image frame, acquired based on an image preview request continuously issued by the target application, is output from a camera;
when a preview blocking abnormality occurs on the image preview interface, determining, according to the time information of the preview image frame in each processing link, a preview blocking reason causing the preview blocking abnormality;
wherein, according to the circulation order of the preview image frames in the path, the processing links comprise a frame-out link, a buffer link, a synthesis link, and a send-display link;
the determining, according to the time information of the preview image frame in each processing link, a preview blocking reason causing the preview blocking abnormality comprises:
determining, according to the time information of the preview image frame circulating in the frame-out link, whether a frame loss phenomenon exists in the preview image frames output by the frame-out link near the preview image frame corresponding to the preview blocking abnormality when the preview blocking abnormality occurs;
when the frame loss phenomenon exists in the preview image frames output by the frame-out link near the preview image frame corresponding to the preview blocking abnormality, determining that the preview blocking reason causing the preview blocking abnormality comes from the frame-out link;
when the frame loss phenomenon does not exist in the preview image frames output by the frame-out link near the preview image frame corresponding to the preview blocking abnormality, determining, according to the time information of the preview image frame circulating in the send-display link, whether the send-display time of the preview image frames is uniform;
when the send-display time of the preview image frames is uniform, determining that the preview blocking reason causing the preview blocking abnormality comes from a display screen of the first terminal device;
when the send-display time of the preview image frames is not uniform, determining, according to the time information of the preview image frame circulating in the synthesis link, whether the synthesis time of the preview image frames is uniform;
when the synthesis time of the preview image frames is uniform, determining that the preview blocking reason causing the preview blocking abnormality comes from the send-display link;
when the synthesis time of the preview image frames is not uniform, determining, according to the time information of the preview image frame circulating in the buffer link, whether the enqueue time of the preview image frames added to a buffer queue is uniform;
when the enqueue time of the preview image frames added to the buffer queue is uniform, determining that the preview blocking reason causing the preview blocking abnormality comes from the synthesis link;
and when the enqueue time of the preview image frames added to the buffer queue is not uniform, determining that the preview blocking reason causing the preview blocking abnormality comes from the frame-out link.
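The branching order of claim 1 can be summarized by the following sketch. It is illustrative only: the function and parameter names are assumptions, and the four uniformity checks themselves are defined in the later claims.

```python
# Illustrative sketch of the branching order in claim 1; names are assumptions.

def locate_blocking_cause(frame_loss_near_stutter: bool,
                          send_display_time_uniform: bool,
                          synthesis_time_uniform: bool,
                          enqueue_time_uniform: bool) -> str:
    if frame_loss_near_stutter:          # frames dropped around the stutter
        return "frame-out link"
    if send_display_time_uniform:        # send-display is even, so look at the panel
        return "display screen"
    if synthesis_time_uniform:           # synthesis is even but send-display is not
        return "send-display link"
    if enqueue_time_uniform:             # enqueueing is even but synthesis is not
        return "synthesis link"
    return "frame-out link"              # uneven enqueueing traces back to frame return

# Example: no frame loss, uneven send-display and synthesis, uniform enqueueing.
print(locate_blocking_cause(False, False, False, True))   # synthesis link
```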
2. The method according to claim 1, wherein the operation performed by the buffer link is adding a preview image frame output by a camera hardware abstraction layer service process to the buffer queue;
the determining, according to the time information of the preview image frame circulating in the buffer link, whether the enqueue time of the preview image frames added to the buffer queue is uniform comprises:
obtaining the enqueue time of each preview image frame added to the buffer queue;
determining an enqueue time interval between two adjacent preview image frames according to the enqueue time corresponding to each preview image frame;
determining a theoretical synthesis time according to the enqueue time interval and a VSync signal period followed by the synthesis link;
and determining, according to the theoretical synthesis time, whether the enqueue time of the preview image frames added to the buffer queue is uniform.
3. The method according to claim 2, wherein the determining a theoretical synthesis time according to the enqueue time interval and a VSync signal period followed by the synthesis link comprises:
periodically detecting whether a preview image frame exists in the buffer queue according to the time corresponding to a received VSync-SF signal and the VSync signal period corresponding to the VSync-SF signal, wherein the VSync-SF signal is used for instructing the synthesis link to acquire preview image frames from the buffer queue for synthesis;
and, for the detection of the buffer queue in each detection period, when a preview image frame exists in the buffer queue, synthesizing the preview image frame, and taking the difference of the enqueue time intervals corresponding to the preview image frames detected in the buffer queue in the first detection period after the synthesis of the preview image frame as the theoretical synthesis time.
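A simplified sketch of the idea behind claims 2 and 3 follows. It is illustrative only: it collapses the theoretical-synthesis-time computation into a VSync-SF tick index, and the helper names are assumptions. Each enqueue time is mapped onto the VSync-SF grid, and enqueueing is treated as uneven when consecutive frames do not land at consecutive ticks.

```python
import math

# Simplified, illustrative sketch of the idea behind claims 2 and 3 (assumption:
# the theoretical-synthesis-time computation is collapsed into a VSync-SF tick index).

def first_synthesis_ticks(enqueue_times_ms, vsync_period_ms=33.33):
    """Index of the first VSync-SF tick at or after each frame's enqueue time."""
    return [math.ceil(t / vsync_period_ms) for t in enqueue_times_ms]

def enqueue_uniform(enqueue_times_ms, vsync_period_ms=33.33) -> bool:
    ticks = first_synthesis_ticks(enqueue_times_ms, vsync_period_ms)
    # Uniform when consecutive frames can be synthesized at consecutive VSync-SF ticks.
    return all(b - a == 1 for a, b in zip(ticks, ticks[1:]))

print(enqueue_uniform([0.0, 33.3, 66.6]))   # True: each frame lands one VSync-SF tick apart
print(enqueue_uniform([0.0, 33.3, 67.6]))   # False: 67.6 ms > 66.66 ms, the third frame slips
```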
4. The method according to claim 1, wherein the frame-out link refers to a link in which a camera driver outputs the preview image frames acquired by the camera according to the image preview request;
the determining, according to the time information of the preview image frame circulating in the frame-out link, whether a frame loss phenomenon exists in the preview image frames output by the frame-out link near the preview image frame corresponding to the preview blocking abnormality when the preview blocking abnormality occurs comprises:
near the preview image frame corresponding to the preview blocking abnormality, for every two adjacent preview image frames, acquiring the output times respectively corresponding to the two adjacent preview image frames output by the camera driver;
determining a first time interval corresponding to the two adjacent preview image frames according to the output times respectively corresponding to the two adjacent preview image frames;
and determining, according to the first time interval, whether the frame loss phenomenon exists in the preview image frames output by the camera driver.
5. The method according to claim 4, wherein the determining, according to the first time interval, whether the frame loss phenomenon exists in the preview image frames output by the camera driver comprises:
determining whether the first time intervals corresponding to every two adjacent preview image frames are the same;
when the first time intervals corresponding to every two adjacent preview image frames are the same, determining that the preview image frames output by the camera driver are continuous;
when the preview image frames output by the camera driver are continuous, determining that the frame loss phenomenon does not exist in the preview image frames output by the camera driver;
when the first time intervals corresponding to every two adjacent preview image frames are different, determining that the preview image frames output by the camera driver are discontinuous;
and when the preview image frames output by the camera driver are discontinuous, determining that the frame loss phenomenon exists in the preview image frames output by the camera driver.
6. The method according to claim 5, wherein when the first time intervals corresponding to every two adjacent preview image frames are different, the method further comprises:
determining whether the first time interval is greater than a first time interval threshold, wherein the first time interval threshold is determined based on a sampling frequency followed by the camera when acquiring the preview image frames;
when the first time interval is not greater than the first time interval threshold, determining that the preview image frames output by the camera driver are continuous;
and when the first time interval is greater than the first time interval threshold, determining that the preview image frames output by the camera driver are discontinuous.
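The check of claims 4 to 6 can be sketched as follows. It is illustrative only: the jitter allowance and helper names are assumptions; the threshold is derived from the camera's sampling frequency, roughly 33.33 ms at 30 fps plus some tolerated jitter.

```python
# Illustrative sketch of the frame-loss check in claims 4 to 6; names and the
# jitter allowance are assumptions.

def has_frame_loss(output_times_ms, sampling_fps=30.0, jitter_ms=5.0) -> bool:
    threshold_ms = 1000.0 / sampling_fps + jitter_ms        # first time interval threshold
    intervals = [b - a for a, b in zip(output_times_ms, output_times_ms[1:])]
    if len(set(intervals)) <= 1:
        return False                                         # identical intervals: continuous output
    return any(iv > threshold_ms for iv in intervals)        # a gap wider than one sampling period

print(has_frame_loss([0.0, 33.3, 66.6, 99.9]))       # False: continuous camera-driver output
print(has_frame_loss([0.0, 33.3, 100.1, 133.4]))     # True: a frame was dropped around 66 ms
```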
7. The method according to claim 1, wherein the determining, according to the time information of the preview image frame circulating in the send-display link, whether the send-display time of the preview image frames is uniform comprises:
determining the display duration of each preview image frame displayed on the image preview interface;
when the display durations corresponding to the preview image frames are the same, determining that the send-display time of the preview image frames is uniform;
otherwise, determining that the send-display time of the preview image frames is not uniform.
8. The method according to claim 7, wherein the determining the display duration of each preview image frame displayed on the image preview interface comprises:
when a first VSync-HW signal is received and a first preview image frame is acquired from the synthesis link, recording the current system time to obtain a first system time;
when a second VSync-HW signal is received and a second preview image frame is acquired from the synthesis link, recording the current system time to obtain a second system time;
and determining, according to the first system time and the second system time, the display duration of the first preview image frame displayed on the image preview interface.
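A minimal sketch of claims 7 and 8 follows. It is illustrative only: the timestamp source, helper names, tolerance, and example values are assumptions. The display duration of each frame is the gap between the system times recorded at consecutive VSync-HW acquisitions, and the send-display time is uniform when these gaps match.

```python
# Minimal sketch of claims 7 and 8; names, tolerance, and data are assumptions.
# Each entry is the system time recorded when a VSync-HW signal arrives and a
# preview frame is acquired from the synthesis link.

def display_durations(vsync_hw_times_ms):
    """How long each frame stays on screen: the gap between consecutive recorded times."""
    return [b - a for a, b in zip(vsync_hw_times_ms, vsync_hw_times_ms[1:])]

def send_display_uniform(vsync_hw_times_ms, tolerance_ms=1.0) -> bool:
    durations = display_durations(vsync_hw_times_ms)
    return all(abs(d - durations[0]) <= tolerance_ms for d in durations)

print(send_display_uniform([0.0, 16.7, 33.3, 50.0]))   # True: even send-display times
print(send_display_uniform([0.0, 16.7, 50.0, 66.7]))   # False: one frame stayed up two periods
```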
9. The method according to claim 1, wherein the determining, according to the time information of the preview image frame circulating in the synthesis link, whether the synthesis time of the preview image frames is uniform comprises:
determining the synthesis time of each preview image frame in the synthesis link;
when the synthesis times corresponding to the preview image frames are the same, determining that the synthesis time of the preview image frames is uniform;
otherwise, determining that the synthesis time of the preview image frames is not uniform.
10. The method according to claim 9, wherein the determining the synthesis time of each preview image frame in the synthesis link comprises:
when a first VSync-SF signal is received and a first preview image frame is acquired from the buffer queue, recording the current system time to obtain a third system time;
synthesizing, by the synthesis link, the first preview image frame before a second VSync-SF signal arrives, wherein the second VSync-SF signal is the first VSync-SF signal received after the synthesis link finishes synthesizing the first preview image frame;
when the second VSync-SF signal is received, recording the current system time to obtain a fourth system time;
and determining, according to the third system time and the fourth system time, the synthesis time of each preview image frame in the synthesis link.
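A minimal sketch of claims 9 and 10 follows. It is illustrative only: the pairing of timestamps, helper names, tolerance, and example values are assumptions. The synthesis time of each frame is the gap between the third and fourth system times recorded on the VSync-SF signals, and synthesis is uniform when these gaps match.

```python
# Minimal sketch of claims 9 and 10; names, tolerance, and data are assumptions.
# Each pair is (third system time: VSync-SF arrives and the frame is taken from the
# buffer queue, fourth system time: the first VSync-SF after synthesis finishes).

def synthesis_times(vsync_sf_pairs_ms):
    return [end - start for start, end in vsync_sf_pairs_ms]

def synthesis_uniform(vsync_sf_pairs_ms, tolerance_ms=1.0) -> bool:
    times = synthesis_times(vsync_sf_pairs_ms)
    return all(abs(t - times[0]) <= tolerance_ms for t in times)

print(synthesis_uniform([(0.0, 16.7), (16.7, 33.3), (33.3, 50.0)]))   # True: even synthesis times
print(synthesis_uniform([(0.0, 16.7), (16.7, 50.0), (50.0, 66.7)]))   # False: one synthesis overran
```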
11. The method according to claim 1, wherein, in the process of displaying the image preview interface, the method further comprises:
monitoring the refresh frequency of the image preview interface;
and when the refresh frequency is greater than a set refresh frequency threshold, determining that the preview blocking abnormality occurs on the image preview interface.
12. The method according to claim 1, wherein the preview blocking reason further comprises the preview image frame causing the preview blocking abnormality and a duration of the preview blocking abnormality.
13. The method according to any one of claims 1 to 12, further comprising:
and storing the preview blocking reason corresponding to the preview blocking abnormality.
14. The method according to claim 13, wherein the method further comprises:
in response to a received preview blocking reason request, outputting the preview blocking reason to a second terminal device, and/or displaying the preview blocking reason on a display interface of the first terminal device.
15. A terminal device, characterized in that the terminal device comprises: a memory and a processor, the memory being coupled to the processor; wherein the memory stores program instructions that, when executed by the processor, cause the terminal device to perform the method for determining a preview blocking reason according to any one of claims 1 to 13.
16. A computer-readable storage medium comprising a computer program which, when run on a terminal device, causes the terminal device to perform the method for determining a preview blocking reason according to any one of claims 1 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211629927.2A CN116708753B (en) | 2022-12-19 | 2022-12-19 | Method, device and storage medium for determining preview blocking reason |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116708753A CN116708753A (en) | 2023-09-05 |
CN116708753B (en) | 2024-04-12
Family
ID=87822764
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211629927.2A Active CN116708753B (en) | 2022-12-19 | 2022-12-19 | Method, device and storage medium for determining preview blocking reason |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116708753B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117061825B (en) * | 2023-10-12 | 2024-01-26 | 深圳云天畅想信息科技有限公司 | Method and device for detecting bad frames of streaming media video and computer equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6519164B2 (en) * | 2014-12-11 | 2019-05-29 | ブラザー工業株式会社 | INFORMATION PROCESSING APPARATUS, RECORDING SYSTEM, PRINT PROGRAM, AND EXTERNAL PROGRAM |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109391847A (en) * | 2017-08-08 | 2019-02-26 | 中国电信股份有限公司 | The monitoring method and monitoring device of Streaming Media Caton |
WO2021052292A1 (en) * | 2019-09-18 | 2021-03-25 | 华为技术有限公司 | Video acquisition method and electronic device |
WO2022127787A1 (en) * | 2020-12-18 | 2022-06-23 | 华为技术有限公司 | Image display method and electronic device |
CN113014962A (en) * | 2021-02-08 | 2021-06-22 | 西安万像电子科技有限公司 | Jam reminding method and device |
CN112887754A (en) * | 2021-04-28 | 2021-06-01 | 武汉星巡智能科技有限公司 | Video data processing method, device, equipment and medium based on real-time network |
CN113395512A (en) * | 2021-05-27 | 2021-09-14 | 北京达佳互联信息技术有限公司 | Stuck detection method and device, stuck detection server and storage medium |
WO2022252924A1 (en) * | 2021-05-31 | 2022-12-08 | 华为技术有限公司 | Image transmission and display method and related device and system |
CN113810596A (en) * | 2021-07-27 | 2021-12-17 | 荣耀终端有限公司 | Time-delay shooting method and device |
CN114928806A (en) * | 2022-04-27 | 2022-08-19 | 深圳市长丰影像器材有限公司 | Audio real-time monitoring replacement method, device and equipment for microphone system |
CN115394428A (en) * | 2022-08-03 | 2022-11-25 | 宁波大学科学技术学院 | Multi-model cooperative patient state monitoring method |
Also Published As
Publication number | Publication date |
---|---|
CN116708753A (en) | 2023-09-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |