CN113840091B - Image processing method, apparatus, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number
CN113840091B
Authority
CN
China
Prior art keywords: image, parameter information, frame, application layer, queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111274607.5A
Other languages
Chinese (zh)
Other versions
CN113840091A (en)
Inventor
张光辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111274607.5A
Publication of CN113840091A
Application granted
Publication of CN113840091B
Current legal status: Active


Classifications

    • H04N23/62 Control of parameters via user interfaces
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/80 Camera processing pipelines; Components thereof
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. Multi-frame images are acquired from a hardware abstraction layer through an image reader in an application layer. If the application layer determines that the transmission mode of the image parameter information is a first transmission mode, the application layer acquires the image parameter information of the multi-frame images from a first preset queue, which is a shared queue used for transmitting image parameter information between the application layer and the hardware abstraction layer. Based on the image parameter information of the multi-frame images, the application layer processes each image frame to generate target image frames. Because transmission through a shared queue has a small fluctuation in transmission time, the fluctuation of the time difference between receiving an image frame and its parameter information is also small, so the time intervals between the generated target image frames are relatively uniform. This avoids frequent stuttering and lack of smoothness in a preview image or video formed from the target image frames.

Description

Image processing method, apparatus, electronic device, and computer-readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer readable storage medium.
Background
With the development of electronic devices, more and more users capture images with them. When processing an image, an electronic device needs to receive both the image frame and the image parameter information of that frame, and the pixel information and the parameter information must belong to the same frame. The electronic device can then perform image processing on the frame based on the received frame and its parameter information to generate a processed image frame.
However, when capturing video or generating preview images, the conventional method suffers from a time difference between the moments at which the electronic device receives an image frame and its parameter information, and this time difference fluctuates over a wide range. As a result, a preview image or video formed from the processed images often stutters and is not smooth.
Disclosure of Invention
The embodiments of the application provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium, which can reduce frequent stuttering in a preview image or video formed from the images and improve its smoothness.
In one aspect, an image processing method is provided, applied to an electronic device on which an android system runs. The method includes:
Acquiring multi-frame images from the hardware abstraction layer through an image reader in the application layer;
if the transmission mode of the image parameter information is determined to be a first transmission mode by the application layer, acquiring the image parameter information of the multi-frame image from a first preset queue by the application layer; the first preset queue is a shared queue used for transmitting image parameter information between the application layer and the hardware abstraction layer;
and processing each image frame in the multi-frame image to generate a target image frame by the application layer based on the image parameter information of the multi-frame image.
In another aspect, an image processing method is provided, applied to an android system. The method includes:
controlling a hardware abstraction layer to match acquired multi-frame images with the image parameter information of the multi-frame images to generate multiple sets of matching results, each matching result including an image frame and image parameter information that match each other;
if the application layer determines that the transmission mode of the matching results is a third transmission mode, acquiring the multiple sets of matching results from a second preset queue through the application layer; the second preset queue is a shared queue used for transmitting matching results between the application layer and the hardware abstraction layer;
and processing, through the application layer, each image frame in the multiple sets of matching results based on the matching results to generate target image frames.
In another aspect, an image processing apparatus is provided, applied to an electronic device on which an android system runs. The apparatus includes:
a multi-frame image acquisition module, configured to acquire multi-frame images from a hardware abstraction layer through an image reader in the application layer;
a shared queue module, configured to determine a first target transmission mode of the image parameter information through the application layer and, if the first target transmission mode is a first transmission mode, acquire the image parameter information of the multi-frame images from a first preset queue through the application layer, the first preset queue being a shared queue used for transmitting image parameter information between the application layer and the hardware abstraction layer;
and an image processing module, configured to process, through the application layer, each image frame in the multi-frame images based on the image parameter information of the multi-frame images to generate target image frames.
In another aspect, there is provided an electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the image processing method as described above.
In another aspect, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, implements the steps of the image processing method as described above.
According to the above image processing method, multi-frame images are acquired from the hardware abstraction layer through the image reader in the application layer. If the application layer determines that the transmission mode of the image parameter information is the first transmission mode, the application layer acquires the image parameter information of the multi-frame images from the first preset queue, which is a shared queue used for transmitting image parameter information between the application layer and the hardware abstraction layer. Finally, the application layer processes each image frame in the multi-frame images based on the image parameter information to generate target image frames.
The image reader transmits the multi-frame images between the hardware abstraction layer and the application layer through a shared queue, and because the first preset queue is likewise a shared queue for the image parameter information, both the images and their parameter information travel between the two layers through shared queues. The time difference between the moments at which the electronic device receives an image frame and its parameter information is therefore small; and because shared-queue transmission times fluctuate little, the fluctuation of that time difference is also small. Consequently, when the application layer processes each image frame based on the image parameter information to generate target image frames, the time intervals between the generated frames are uniform, which avoids frequent stuttering and lack of smoothness in a preview image or video formed from the target image frames.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a diagram of an application environment for an image processing method in one embodiment;
FIG. 2 is a schematic diagram of a software architecture of a conventional image processing method in one embodiment;
FIG. 3 is a schematic diagram of a transmission time axis of multi-frame images and image parameter information in a conventional image processing method according to an embodiment;
FIG. 4 is a flow chart of an image processing method in one embodiment;
FIG. 5 is a flowchart of an image processing method in another embodiment;
FIG. 6 is a flowchart of an image processing method in yet another embodiment;
FIG. 7 is a schematic diagram of a transmission time axis of multi-frame images and image parameter information according to an embodiment;
FIG. 8 is a flowchart of an image processing method in one embodiment;
FIG. 9 is a software architecture diagram of transferring image parameter information between an application layer and a hardware abstraction layer in one embodiment;
FIG. 10 is a flowchart of an image processing method in another embodiment;
FIG. 11 is a diagram of a software architecture for transferring matching results between an application layer and a hardware abstraction layer in one embodiment;
FIG. 12 is a flowchart of an image processing method in another embodiment;
FIG. 13 is a block diagram showing the structure of an image processing apparatus in one embodiment;
FIG. 14 is a block diagram showing the structure of an image processing apparatus in another embodiment;
FIG. 15 is a schematic structural diagram of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, a first preset queue may be referred to as a second preset queue, and similarly, a second preset queue may be referred to as a first preset queue without departing from the scope of the present application. Both the first preset queue and the second preset queue are preset queues, but they are not the same preset queue.
Fig. 1 is a schematic view of an application environment of an image processing method in an embodiment. As shown in fig. 1, the application environment includes an electronic device 120 on which an android system runs. The software framework of the Camera application in the android system includes an application layer 142 (Camera Applications), an application framework layer 144 (Camera Framework), and a hardware abstraction layer 146 (Camera HAL), where HAL is short for Hardware Abstraction Layer. When the multi-frame images and the image parameter information are transmitted from the Camera HAL layer to the Camera Applications layer, the images (image) and the image parameter information (metadata) travel upward along two independent branches. The process of transmitting the multi-frame images upward from the Camera HAL layer to the Camera Applications layer and the process of transmitting the image parameter information upward are independent of each other and do not interact.
Fig. 2 is a schematic diagram of the software architecture of a conventional image processing method. The method is applied to an electronic device on which an android system runs. The software framework of the Camera application in the android system includes an application layer 142 (Camera Applications), an application framework layer 144 (Camera Framework), and a hardware abstraction layer 146 (Camera HAL). In the conventional method, the multi-frame images are acquired from the hardware abstraction layer through the image reader ImageReader in the application layer. The ImageReader transmits the multi-frame images (image) between the hardware abstraction layer and the application layer through a shared queue, which is faster and more stable than transmission through a callback. For the image parameter information (metadata), however, the application layer acquires the information from the hardware abstraction layer through a callback. The callback involves a cross-process operation: because different processes cannot directly access each other's memory space, the callback must copy the image parameter information across processes between the application layer and the hardware abstraction layer. In this way, the application layer can obtain the image parameter information from the hardware abstraction layer through the callback. However, the cross-process copy takes a long time, and its speed varies with the CPU load; under different CPU loads the copy runs at different speeds, so the time it consumes fluctuates over a wide range. Acquiring the image parameter information through a callback therefore yields a long transmission duration with a large fluctuation range. As a result, there is a time difference between the moments at which the electronic device receives an image frame and its parameter information, and this time difference fluctuates widely.
When processing an image, the electronic device needs to receive both the image frame (the pixel information of the frame) and the image parameter information, and the two must belong to the same frame. Therefore, after receiving the image frames (Image) and the image parameter information (metadata), the application layer matches them and generates multiple sets of matched image frames and parameter information, that is, a matching queue. The electronic device can then apply an image processing algorithm to each frame based on its matched parameter information to generate a processed image frame.
Because the time difference between receiving an image frame and its parameter information fluctuates widely, the time intervals between the processed frames that the electronic device generates from consecutive frames are uneven, sometimes long and sometimes short. Eventually, a preview image or video formed from the processed images often stutters and is not smooth. Fig. 3 is a schematic diagram of the transmission time axis of multi-frame images and image parameter information in the conventional image processing method. As can be seen from fig. 3, there is a time difference between the transmission times of an image frame and its parameter information, and the time difference fluctuates over a wide range, which makes the intervals between processed frames uneven and leads to the frequent stuttering described above.
To solve the above problems, an embodiment of the present application provides an image processing method, described by taking the electronic device of fig. 1 as an example, on which an android system (hereinafter referred to as the system) runs. Fig. 4 is a flowchart of an image processing method in an embodiment, comprising steps 420 to 460.
In step 420, multi-frame images are acquired from the hardware abstraction layer through the image reader in the application layer.
The image reader ImageReader is a class in the application layer of the native android system used to acquire image data directly from the hardware abstraction layer. The ImageReader transmits the multi-frame images (image) between the hardware abstraction layer and the application layer through a shared queue, so the transmission is fast and stable. The multi-frame images include images generated by continuous shooting, for example multi-frame preview images or the frames of a video, which is not limited in this application. The multi-frame images comprise a plurality of image frames, each frame being specifically the pixel information of the frame (such as RGB or RGBW information), generally image data in RAW format, which is not limited in this application.
In the android system, the ImageReader class is called in the application layer, and the multi-frame images are acquired from a preset queue through the ImageReader class. The preset queue is a shared queue used for transmitting multi-frame images between the application layer and the hardware abstraction layer. Specifically, the hardware abstraction layer is controlled through the ImageReader class to transmit the multi-frame images to the application framework layer. The application framework layer is then controlled to perform a dequeue operation on a buffer in the preset queue. Finally, the multi-frame images are written into the buffer, an enqueue operation is performed on the buffer into which the images were written, and the preset queue is updated. In this way, the multi-frame images can be acquired from the preset queue in real time through the ImageReader class.
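As an illustration of this path, the sketch below shows how an application might pull frames from the shared queue that backs android.media.ImageReader, which is a real android API; the resolution, pixel format, and queue depth are assumptions for illustration, not values taken from the patent.

    import android.graphics.ImageFormat;
    import android.media.Image;
    import android.media.ImageReader;

    final class FrameSource {
        // Width, height, format, and maxImages are illustrative assumptions.
        static ImageReader createReader() {
            ImageReader reader = ImageReader.newInstance(1920, 1080,
                    ImageFormat.RAW_SENSOR, /* maxImages */ 4);
            reader.setOnImageAvailableListener(r -> {
                // acquireNextImage() dequeues the next frame from the shared
                // queue; no cross-process copy of the pixel data is needed.
                try (Image image = r.acquireNextImage()) {
                    if (image != null) {
                        long ts = image.getTimestamp(); // generation timestamp, used later for matching
                        // ... hand the frame and ts to the processing pipeline ...
                    }
                }
            }, /* handler */ null);
            return reader;
        }
    }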
Step 440: if the application layer determines that the transmission mode of the image parameter information is a first transmission mode, the application layer acquires the image parameter information of the multi-frame images from a first preset queue. The first transmission mode is transmission through a shared queue, and the first preset queue is a shared queue used for transmitting image parameter information between the application layer and the hardware abstraction layer.
The image parameter information may include shooting parameter information or shooting scene information. For example, the shooting parameter information includes at least one of the size of the captured image, an exposure parameter, a white balance parameter, an ISO (sensitivity) parameter, and the like, which is not limited in this application. The shooting scene information includes at least one of shooting address information, shooting time information, and scene information, which is not limited in this application.
The software architectures of different vendors may differ in how the image parameter information is transmitted. For example, some vendors use a callback mode to acquire the image parameter information of the multi-frame images from a buffer of the hardware abstraction layer, while others use a shared queue, acquiring the information from the shared queue through the application layer.
To allow reuse across the software architectures of different vendors, a step of determining the transmission mode of the image parameter information through the application layer is added. Specifically, based on system configuration parameters, the application layer determines whether the current android system supports the shared-queue mode or supports only the callback mode native to android for acquiring the image parameter information of the multi-frame images from the hardware abstraction layer. The shared-queue mode is taken as the first transmission mode and the native android callback mode as the second transmission mode; the first transmission mode is faster and more stable than the second.
Therefore, if the application layer determines that the current android system supports the shared-queue mode, that is, the first transmission mode, this mode is preferred: the application layer acquires the image parameter information of the multi-frame images from the first preset queue, the shared queue used for transmitting image parameter information between the application layer and the hardware abstraction layer. The first preset queue is a circular queue or a chained queue, which is not limited in this application. A circular queue is a sequential queue whose element storage is logically viewed as a ring, connected end to end; a chained queue is implemented with a linked list, a singly linked list with restricted operations. In other words, the first preset queue resides in the application framework layer and can be accessed by both the hardware abstraction layer and the application layer, so the application layer acquires the image parameter information of the multi-frame images from it.
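The patent does not name the configuration parameter that the application layer consults, so the following Java sketch only illustrates, under assumed identifiers, how the first transmission mode could be preferred whenever the system reports support for it.

    // All identifiers here are assumptions; the patent only specifies that
    // the decision is made from system configuration parameters.
    enum MetadataTransport {
        SHARED_QUEUE, // first transmission mode
        CALLBACK      // second transmission mode
    }

    final class TransportSelector {
        static MetadataTransport choose() {
            // Hypothetical vendor flag; a real system would read its own
            // system property or feature-configuration entry here.
            boolean sharedQueueSupported = Boolean.parseBoolean(
                    System.getProperty("vendor.camera.metadata.sharedqueue", "false"));
            // Prefer the shared queue: faster, with a narrower latency
            // fluctuation than the native callback path.
            return sharedQueueSupported ? MetadataTransport.SHARED_QUEUE
                                        : MetadataTransport.CALLBACK;
        }
    }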
In step 460, each image frame in the multi-frame images is processed through the application layer, based on the image parameter information of the multi-frame images, to generate target image frames.
After the multi-frame images have been acquired from the hardware abstraction layer through the ImageReader class and the image parameter information has been acquired from the first preset queue through the application layer, the android system can process each image frame through the application layer based on the image parameter information to generate target image frames.
Specifically, the application layer matches the image parameter information of the multi-frame images with the multi-frame images to generate multiple sets of matching results, each containing an image frame and image parameter information that match each other, that is, a one-to-one pairing between a frame and its parameter information. The application layer then processes each image frame based on the parameter information in its matching result to generate a target image frame, and a preview image or video is finally formed from the target image frames. For example, the pixel information of a frame is matched with the shooting parameter information or shooting scene information of the same frame, so the frame can be processed based on that information; processing each frame this way yields a plurality of target image frames, from which a preview image or video can be obtained.
In this embodiment of the application, the multi-frame images are acquired from the hardware abstraction layer through the image reader in the application layer, which transmits them between the two layers through a shared queue. If the application layer determines that the transmission mode of the image parameter information is the first transmission mode, it acquires the parameter information from the first preset queue, which is likewise a shared queue between the application layer and the hardware abstraction layer. Because both the multi-frame images and the image parameter information are transmitted through shared queues, the time difference between the electronic device receiving an image frame and its parameter information is small; and because shared-queue transmission times fluctuate little, the fluctuation of this time difference is also small. Processing each image frame through the application layer based on its parameter information therefore generates target image frames at uniform time intervals, avoiding frequent stuttering and lack of smoothness in a preview image or video formed from them.
The previous embodiment described acquiring the image parameter information of the multi-frame images from the first preset queue through the application layer when the transmission mode is the first transmission mode. This embodiment further describes that the image parameter information is stored in a first preset buffer in the form of the first preset queue; the first preset buffer is a shared buffer used for transmitting image parameter information between the application layer and the hardware abstraction layer.
The first preset queue is a queue in the application framework layer and serves as the shared queue between the application layer and the hardware abstraction layer for transmitting image parameter information. The image parameter information is stored in the first preset buffer in the form of the first preset queue; the first preset buffer comprises a plurality of sub-buffers, each of which is a shared buffer used for transmitting image parameter information between the application layer and the hardware abstraction layer.
In general, the system acquires the image parameter information from the hardware abstraction layer in sequence and, based on the acquisition order, writes each piece into a sub-buffer of the first preset buffer in queue order, one piece per sub-buffer. For example, the parameter information of the first frame is written into sub-buffer 1, that of the second frame into sub-buffer 2, and so on; once the first preset buffer has been fully written, the parameter information of the next frame is written back into sub-buffer 1, overwriting the original content. Because the pieces of image parameter information differ in size, the write start position and the data size of each piece must be recorded in the queue when it is written into a sub-buffer, so that the application layer can later read each piece from the first preset buffer accurately and completely.
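A minimal Java sketch of this layout, with assumed names: variable-sized metadata blobs are appended to a single shared buffer while the queue records each entry's write start position and data size, one record per piece of parameter information.

    import java.nio.ByteBuffer;
    import java.util.ArrayDeque;
    import java.util.Deque;

    final class SharedMetadataBuffer {
        // One record per piece of parameter information: where it starts
        // in the shared buffer and how many bytes it occupies.
        record Entry(int offset, int length) {}

        private final ByteBuffer buffer;                        // first preset buffer (shared)
        private final Deque<Entry> queue = new ArrayDeque<>();  // first preset queue
        private int writePos = 0;

        SharedMetadataBuffer(int capacity) {
            buffer = ByteBuffer.allocateDirect(capacity);
        }

        // Write one piece of image parameter information into the next
        // sub-buffer; wrap to the start and overwrite old content once
        // the buffer has been fully written, as described above.
        void write(byte[] metadata) {
            if (writePos + metadata.length > buffer.capacity()) {
                writePos = 0;
            }
            buffer.position(writePos);
            buffer.put(metadata);
            queue.addLast(new Entry(writePos, metadata.length));
            writePos += metadata.length;
        }
    }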
In this embodiment, given that the image parameter information is stored in the first preset buffer in the form of the first preset queue and that the first preset buffer is a shared buffer between the application layer and the hardware abstraction layer for transmitting image parameter information, the first preset buffer is further divided into sub-buffers, each storing one piece of image parameter information. The system can therefore acquire the image parameter information of the multi-frame images from the first preset buffer through the application layer; that is, through the shared buffer, the image parameter information is transmitted across processes between the application layer and the hardware abstraction layer. Unlike the conventional method, the information need not be transmitted through a callback and need not be copied across processes between the application layer and the hardware abstraction layer, so the transmission is not affected by the CPU load. In other words, buffer copies are avoided when the image parameter information is passed between processes, which saves memory and power and improves the performance of the data transmission process.
Therefore, transmitting the image parameter information through the shared buffer shortens the time difference between the system receiving an image frame and its parameter information, and the fluctuation range of that time difference is also small.
Because the image parameter information is stored in the first preset buffer in the form of the first preset queue, and the first preset buffer is a shared buffer between the application layer and the hardware abstraction layer for transmitting image parameter information, one embodiment details the step of acquiring the image parameter information of the multi-frame images from the first preset queue through the application layer, including:
acquiring the image parameter information of the multi-frame images from the first preset buffer through the application layer.
Specifically, the system acquires the image parameter information from the hardware abstraction layer in sequence and, based on the acquisition order, writes each piece into a sub-buffer of the first preset buffer in queue order, one piece per sub-buffer. Because the pieces differ in size, the write start position and the data size of each piece are recorded in the queue at write time. Therefore, when acquiring the image parameter information of the multi-frame images from the first preset queue, the application layer first identifies each sub-buffer in the first preset buffer: according to the write start positions and data sizes recorded in the queue, each sub-buffer storing a piece of image parameter information can be identified in the first preset buffer.
Then, after the sub-buffers storing the pieces of parameter information have been identified in the first preset buffer, the application layer acquires the image parameter information of the multi-frame images from each sub-buffer.
In this embodiment of the application, according to the write start positions and data sizes recorded in the queue, the application layer can identify each sub-buffer in the first preset buffer accurately and completely. The application layer can therefore acquire the image parameter information of the multi-frame images from the first preset buffer accurately and completely, avoiding missed information and read errors.
Based on the embodiment shown in fig. 4, in one embodiment, as shown in fig. 5, before the image parameter information of the multi-frame image is obtained from the first preset queue by the application layer in step 440, the method includes:
in step 520, the control hardware abstraction layer transmits the image parameter information of the multi-frame image to the application framework layer.
After the image parameter information of the multi-frame images is sequentially acquired from the image processing chip, the hardware abstraction layer stores the image parameter information of the multi-frame images in a buffer area of the hardware abstraction layer. The system controls the hardware abstract layer to transmit the image parameter information of the multi-frame images stored in the buffer area to the application framework layer.
In step 540, the application framework layer is controlled to perform a dequeue operation on a queue element in the first preset queue.
In step 560, the image parameter information of the multi-frame images is written into the queue element, an enqueue operation is performed on the queue element into which the information has been written, and the first preset queue is updated.
After the hardware abstraction layer transmits the image parameter information of the multi-frame images stored in its own buffer to the application framework layer, the system is triggered to control the application framework layer to store that information through the queue.
Specifically, the system first controls the application framework layer to perform a dequeue operation on a queue element in the first preset queue. It then writes the image parameter information of the multi-frame images into the queue element, performs an enqueue operation on the element into which the information has been written, and updates the first preset queue. The dequeue operation takes a queue element out of the first preset queue; the enqueue operation returns the element to it. The process can thus be understood as follows: after the application framework layer takes a queue element out of the first preset queue, the image parameter information is written into the element and the element is returned to the queue, thereby updating the first preset queue.
Each time the hardware abstraction layer obtains one piece of image parameter information, the system controls it to transmit the information from its buffer to the application framework layer. After the application framework layer takes a queue element out of the first preset queue, the piece of information is written into the element and the element is returned to the queue, updating the first preset queue.
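The dequeue-write-enqueue cycle described above can be sketched with plain Java blocking queues standing in for the framework-layer queue; the names are illustrative and not the patent's implementation.

    import java.nio.ByteBuffer;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    final class MetadataRelay {
        // Empty queue elements waiting to be filled, and the first preset
        // queue of filled elements that the application layer reads from.
        private final BlockingQueue<ByteBuffer> freeSlots = new ArrayBlockingQueue<>(4);
        private final BlockingQueue<ByteBuffer> filledSlots = new ArrayBlockingQueue<>(4);

        MetadataRelay(int slotSize) {
            for (int i = 0; i < 4; i++) {
                freeSlots.add(ByteBuffer.allocateDirect(slotSize));
            }
        }

        // One iteration of the loop: dequeue an element, write one piece of
        // parameter information into it, enqueue it back, updating the queue.
        void transferOne(byte[] metadataFromHal) throws InterruptedException {
            ByteBuffer slot = freeSlots.take(); // dequeue operation
            slot.clear();
            slot.put(metadataFromHal);          // write the parameter information
            slot.flip();
            filledSlots.put(slot);              // enqueue operation
        }
    }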
In this embodiment of the application, the system first controls the hardware abstraction layer to transmit the image parameter information of the multi-frame images to the application framework layer; then, after the application framework layer takes a queue element out of the first preset queue, one piece of parameter information is written into the element and the element is returned to the queue, updating it. Executed in a loop, this transfers the image parameter information of the multi-frame images from the hardware abstraction layer into the first preset queue of the application framework layer, from which the application layer can then acquire it. The system thus lets the application layer acquire the image parameter information from the hardware abstraction layer through a shared buffer: there is no need for callback transmission as in the conventional method, that is, no need to copy the information across processes between the application layer and the hardware abstraction layer, and the transmission is not affected by the CPU load. Transmitting the image parameter information through the shared buffer therefore shortens the time difference between the system receiving an image frame and its parameter information, and the fluctuation range of that difference is small.
On the basis of the embodiment shown in fig. 4, in one embodiment, as shown in fig. 6, there is provided an image processing method, further including:
step 480, if the transmission mode of the image parameter information is determined to be the second transmission mode by the application layer, the control application layer acquires the image parameter information of the multi-frame image from the buffer area of the hardware abstraction layer by a callback mode; the second transmission mode is a mode of transmitting through a callback mode.
The software architecture of different vendors may be different for the transmission process of the image parameter information. For example, some manufacturers acquire image parameter information of multi-frame images from a buffer area of a hardware abstraction layer by adopting a callback mode (callback mode). Some manufacturers adopt a sharing queue (or called sharing buffer) mode, and acquire image parameter information of multi-frame images from the sharing queue through an application layer.
In order to realize multiplexing among software architectures of different manufacturers, a step of determining a transmission mode of image parameter information through an application layer is added. Specifically, the application layer determines whether the current android system can support a mode of adopting a shared queue or a callback mode of only supporting the original android system based on system configuration parameters to acquire image parameter information of multi-frame images from the hardware abstraction layer. The method comprises the steps of taking a shared queue mode as a first transmission mode, taking a native callback mode of an android system as a second transmission mode, wherein the transmission speed and stability of the first transmission mode are higher than those of the second transmission mode.
Therefore, if the application layer determines that the current android system only supports the callback mode of the original android system, namely the second transmission mode, the system control application layer acquires image parameter information of the multi-frame image from the buffer zone of the hardware abstraction layer through the callback mode. If the transmission mode of the image parameter information is determined to be the first transmission mode by the application layer, the application layer preferentially acquires the image parameter information of the multi-frame image from the first preset queue. Therefore, multiplexing among software architectures of different manufacturers is realized, and the software architectures of multiple manufacturers are compatible.
The callback method involves cross-process operation, and since different processes cannot directly access respective memory spaces, the callback method needs to cross Cheng Kaobei image parameter information between an application layer and a hardware abstraction layer. In this way, the application layer can obtain the image parameter information from the hardware abstraction layer in a callback mode.
In the embodiment of the application, in order to realize multiplexing among software architectures of different manufacturers, a step of determining a transmission mode of image parameter information through an application layer is added. The method comprises the steps that whether the current android system can support a mode of adopting a shared queue or only support a callback mode of the original android system is determined through an application layer, and image parameter information of multi-frame images is obtained from a hardware abstraction layer. If the application layer determines that the current android system only supports the callback mode of the original android system, namely the second transmission mode, the system control application layer acquires image parameter information of multi-frame images from a buffer zone of the hardware abstraction layer in the callback mode. If the transmission mode of the image parameter information is determined to be the first transmission mode by the application layer, the application layer preferentially acquires the image parameter information of the multi-frame image from the first preset queue. Therefore, multiplexing among software architectures of different manufacturers is realized, and the software architectures of multiple manufacturers are compatible.
In step 480, controlling the application layer to acquire the image parameter information of the multi-frame images from the buffer of the hardware abstraction layer through a callback includes:
copying the image parameter information of the multi-frame images in the buffer of the hardware abstraction layer into a buffer of the application layer;
and controlling the application layer to acquire the image parameter information of the multi-frame images from the buffer of the application layer.
Specifically, when the application layer is controlled to acquire the image parameter information of the multi-frame images from the buffer of the hardware abstraction layer through a callback, the information in the hardware abstraction layer's buffer is first copied into the application layer's buffer. After sequentially acquiring the image parameter information of the multi-frame images from the image processing chip, the hardware abstraction layer stores it in its own buffer. Because the application layer cannot directly access the buffer of the hardware abstraction layer, that is, the process of the application layer and the process of the hardware abstraction layer are isolated from each other, the system must copy the image parameter information from the hardware abstraction layer's buffer into the application layer's buffer. The application layer can then read the image parameter information of the multi-frame images directly from its own buffer.
In this embodiment of the application, because of the inter-process isolation between the process of the application layer and the process of the hardware abstraction layer, the system must copy the image parameter information of the multi-frame images from the hardware abstraction layer's buffer into the application layer's buffer; the application layer can then read the information directly from its own buffer. The image parameter information is thus transmitted from the hardware abstraction layer to the application layer by cross-process copy.
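The native android callback path the patent refers to is plausibly the camera2 result callback, in which the metadata reaches the application only after this cross-process copy; a minimal sketch using that real API:

    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CaptureRequest;
    import android.hardware.camera2.CaptureResult;
    import android.hardware.camera2.TotalCaptureResult;

    final class CallbackMetadataSource {
        // By the time onCaptureCompleted runs, the parameter information has
        // already been copied from the HAL process into this application's
        // own buffer, which is the cross-process copy discussed above.
        final CameraCaptureSession.CaptureCallback metadataCallback =
                new CameraCaptureSession.CaptureCallback() {
                    @Override
                    public void onCaptureCompleted(CameraCaptureSession session,
                                                   CaptureRequest request,
                                                   TotalCaptureResult result) {
                        Long timestampNs = result.get(CaptureResult.SENSOR_TIMESTAMP);
                        // ... match against the frame carrying the same generation timestamp ...
                    }
                };
    }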
Based on the embodiment shown in fig. 4, the specific implementation of step 460, processing each image frame in the multi-frame images through the application layer based on the image parameter information to generate target image frames, is described in detail and includes:
matching, through the application layer, the image parameter information of the multi-frame images with the multi-frame images to generate multiple corresponding sets of matching results, each containing an image frame and image parameter information that match each other;
and processing, through the application layer, each image frame based on the image parameter information in its matching result to generate a target image frame.
Specifically, the ImageReader class is first called in the application layer, and the multi-frame images are acquired from the preset queue through it. Next, the application layer determines whether the current android system supports the shared-queue mode or only the native callback mode for acquiring the image parameter information of the multi-frame images from the hardware abstraction layer. If only the native callback mode (the second transmission mode) is supported, the system controls the application layer to acquire the information from the buffer of the hardware abstraction layer through a callback; if the first transmission mode is supported, the application layer preferentially acquires the information from the first preset queue.
Finally, after the multi-frame images and their image parameter information have been acquired, the application layer matches the parameter information with the images to generate multiple sets of matching results, each containing an image frame and parameter information matched one to one. Specifically, the matching can be based on the generation timestamps of the image parameter information and of the multi-frame images: if the two generation timestamps match, the image frame and the parameter information are considered to match each other, and a set of matching results is generated from them. In this way, the image parameter information of the multi-frame images is matched with the images in sequence, generating multiple sets of matching results.
Fig. 7 is a schematic diagram of the transmission time axis of the multi-frame images and the image parameter information in an embodiment; in fig. 7 the image parameter information is transmitted through the shared queue. As can be seen from fig. 7, there is little or no time difference between the transmission times of an image frame and its parameter information, and the time difference fluctuates little or not at all. The system therefore receives an image frame and its parameter information almost simultaneously, and the application layer can promptly process the frame based on the parameter information to generate a target image frame. The time intervals at which target image frames are generated from consecutive frames are thus relatively uniform, that is, the frame intervals are even, which solves the problem of frequent stuttering and lack of smoothness in a preview image or video formed from the target image frames.
The image parameter information includes shooting parameter information or shooting scene information. Specifically, when the application layer processes an image frame based on the parameter information in its matching result, it applies an image processing algorithm driven by the shooting parameter information or shooting scene information, so the target image frame achieves a better shooting effect. For example, the shooting parameters of the frame may be adjusted based on the shooting parameter information, or the shooting parameters or shooting style of the frame may be adjusted based on the shooting scene information, which is not limited in this application.
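The patent leaves the concrete image processing algorithm open, so the following is only an illustrative sketch of parameter-driven processing: a toy gain correction derived from an assumed exposure field of the matched parameter information.

    final class FrameProcessor {
        // Assumed shape of one piece of matched parameter information; the
        // patent only says it carries shooting parameter or scene information.
        record Metadata(float exposure, String sceneType) {}

        // Generate the target image frame from a frame's raw pixels and its
        // matched parameter information (toy gain compensation for brevity).
        static byte[] process(byte[] rawPixels, Metadata meta) {
            float gain = meta.exposure() < 0.5f ? 2.0f : 1.0f; // underexposed: boost
            byte[] out = new byte[rawPixels.length];
            for (int i = 0; i < rawPixels.length; i++) {
                out[i] = (byte) Math.min(255, (int) ((rawPixels[i] & 0xFF) * gain));
            }
            return out;
        }
    }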
In this embodiment of the application, the application layer matches the image parameter information of the multi-frame images with the images to generate multiple sets of matching results, each containing an image frame and parameter information that match each other. After the parameter information corresponding to each frame has been acquired, the application layer can promptly process the frame based on it to generate a target image frame, and a preview image or video is then generated from the sequentially produced target image frames.
In one embodiment, matching the image parameter information of the multi-frame images with the multi-frame images through the application layer to generate multiple sets of matching results includes:
for each image frame in the multi-frame images, acquiring a first generation timestamp of the image frame through the application layer;
for each piece of parameter information of the multi-frame images, acquiring a second generation timestamp of the piece of parameter information through the application layer;
and if the first generation timestamp and the second generation timestamp are the same, generating a matching result through the application layer based on the image frame corresponding to the first generation timestamp and the parameter information corresponding to the second generation timestamp.
Fig. 7 is a schematic diagram of the transmission time axis of the multi-frame images and the image parameter information in an embodiment. The first generation timestamps of the multi-frame images are obtained together with the images when they are acquired from the hardware abstraction layer through the image reader in the application layer. Therefore, when matching the image parameter information with the multi-frame images through the application layer, the first generation timestamp t1' of an image frame can be obtained directly through the application layer.
When the application layer acquires the image parameter information of the multi-frame images from the first preset queue, the second generation timestamp of each piece of parameter information is acquired as well, so for each piece the second generation timestamp t1 can be obtained directly through the application layer. The application layer then judges whether the first generation timestamp t1' and the second generation timestamp t1 are the same, or whether their difference is within a preset range. If they are the same or the difference is within the preset range, the image frame Image1 corresponding to the first generation timestamp and the parameter information Metadata1 corresponding to the second generation timestamp match each other, and a matching result is generated through the application layer based on Image1 and Metadata1. Similarly, matching results are generated in sequence from image frame Image2 and image parameter information Metadata2, from Image3 and Metadata3, and from Image4 and Metadata4.
In the embodiment of the application, when the application layer matches the image parameter information of the multi-frame images with the multi-frame images to generate a plurality of groups of matching results, the matching is performed on the first generation time stamp of each image frame and the second generation time stamp of each piece of image parameter information. Matching on the generation time stamps improves the accuracy of the resulting pairing between image frames and image parameter information.
In a specific embodiment, an image processing method is provided and applied to an android system, wherein a software Framework of a Camera application in the android system includes an application layer 142 (Camera Applications), an application Framework layer 144 (Camera Framework), and a hardware abstraction layer 146 (Camera HAL). As shown in fig. 8, the method includes:
Step 802: calling an ImageReader class in the application layer, and acquiring multi-frame images from a preset queue through the ImageReader class;
Step 804: determining, by the application layer based on system configuration parameters, whether the current android system supports the shared-queue mode or only the native android callback mode for acquiring the image parameter information of the multi-frame images from the hardware abstraction layer;
Step 806: if the application layer determines that the current android system supports the shared-queue (also called shared-buffer) mode, controlling the hardware abstraction layer to transmit the image parameter information of the multi-frame images to the application framework layer;
Referring to FIG. 9, a software architecture diagram for transferring image parameter information between the application layer and the hardware abstraction layer is shown in one embodiment. The application layer determines, based on system configuration parameters, whether the current android system supports the shared-queue mode or only the native android callback mode for acquiring the image parameter information of the multi-frame images from the hardware abstraction layer. If the application layer determines that the shared-queue mode is supported, it transmits the transmission-mode parameters of the image parameter information (metadata) of the multi-frame images to the application framework layer, and the image parameter information is then transmitted through the shared queue (shared buffer) according to those parameters.
Step 808: controlling the application framework layer to execute a dequeuing operation on a queue element in the first preset queue;
As shown in fig. 9, the application framework layer is controlled to perform the dequeue operation on the circular queue BufferQueue, taking out a queue element backed by a sub-buffer (Buffer).
Step 810: writing the image parameter information of the multi-frame images into the queue element, executing an enqueuing operation on the queue element holding the image parameter information, and updating the first preset queue.
As shown in fig. 9, the image parameter information is written into the queue element, and the queue element holding the image parameter information of the multi-frame images is then enqueued, i.e., returned to the circular queue BufferQueue. The first preset queue is thereby updated.
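The dequeue-write-enqueue cycle can be illustrated with a toy Java version. Two in-process queues stand in for the shared BufferQueue; in the real architecture the elements live in memory shared between the hardware abstraction layer and the application, which this sketch (with assumed names) does not model:

    import java.nio.ByteBuffer;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    final class MetadataBufferQueue {
        private final BlockingQueue<ByteBuffer> freeElements;   // elements available for writing
        private final BlockingQueue<ByteBuffer> filledElements; // elements holding metadata

        MetadataBufferQueue(int capacity, int elementSize) {
            freeElements = new ArrayBlockingQueue<>(capacity);
            filledElements = new ArrayBlockingQueue<>(capacity);
            for (int i = 0; i < capacity; i++) {
                freeElements.add(ByteBuffer.allocateDirect(elementSize));
            }
        }

        // Framework side: dequeue an element, write the image parameter
        // information into it, and enqueue it back, updating the queue.
        // Assumes each metadata payload fits within elementSize.
        void publish(byte[] metadata) throws InterruptedException {
            ByteBuffer element = freeElements.take(); // dequeuing operation
            element.clear();
            element.put(metadata);
            element.flip();
            filledElements.put(element);              // enqueuing operation
        }

        // Application side: consume one element and recycle it into the ring.
        byte[] consume() throws InterruptedException {
            ByteBuffer element = filledElements.take();
            byte[] out = new byte[element.remaining()];
            element.get(out);
            freeElements.put(element);
            return out;
        }
    }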
Step 812; acquiring image parameter information of a multi-frame image from a first preset buffer zone through an application layer;
step 814; for each image frame in a multi-frame image, acquiring a first generation time stamp of the image frame through an application layer;
step 816; acquiring a second generation time stamp of the parameter information through an application layer aiming at each parameter information of the multi-frame image;
Step 818; if the first generation time stamp is the same as the second generation time stamp, generating a matching result by the application layer based on the image frame corresponding to the first generation time stamp and the parameter information corresponding to the second generation time stamp.
Step 820; and processing the image frames to generate target image frames by an application layer based on the image parameter information in the matching result.
Step 822; if the transmission mode of the image parameter information is determined to be a callback mode (callback) by the application layer, the control application layer acquires the image parameter information of the multi-frame image from a buffer zone of the hardware abstraction layer by the callback mode; step 814 is entered.
In the embodiment of the application, in order to realize multiplexing among software architectures of different manufacturers, a step of determining the transmission mode of the image parameter information through the application layer is added. The application layer determines whether the current android system supports the shared-queue mode or only the native android callback mode for acquiring the image parameter information of the multi-frame images from the hardware abstraction layer. If the application layer determines that only the native callback mode, i.e., the second transmission mode, is supported, the system controls the application layer to acquire the image parameter information from the buffer of the hardware abstraction layer through the callback. If the application layer determines that the first transmission mode is available, the application layer preferentially acquires the image parameter information of the multi-frame images from the first preset queue. In this way, multiplexing among the software architectures of different manufacturers is realized, and the software architectures of multiple manufacturers are compatible.
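The selection itself reduces to a simple preference order. A sketch under the assumption that the capability is exposed as a single boolean derived from the system configuration parameters (the flag and all names here are hypothetical):

    enum MetadataTransport { SHARED_QUEUE, CALLBACK }

    final class TransportSelector {
        // Prefer the shared-queue path (first transmission mode) when the
        // vendor build supports it; otherwise fall back to the native
        // android callback path (second transmission mode).
        static MetadataTransport select(boolean supportsSharedQueue) {
            return supportsSharedQueue
                    ? MetadataTransport.SHARED_QUEUE
                    : MetadataTransport.CALLBACK;
        }
    }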
In one embodiment, as shown in fig. 10, there is provided an image processing method applied to an android system, the method including:
Step 1020, controlling the hardware abstraction layer to match the acquired multi-frame images with the image parameter information of the multi-frame images to generate a plurality of groups of matching results; the matching results include image frames and image parameter information that match each other.
Referring to FIG. 11, a software architecture diagram for transferring matching results between an application layer and a hardware abstraction layer is shown in one embodiment. The software Framework structure of the Camera application in the android system comprises an application layer (Camera Applications), an application Framework layer (Camera Framework) and a hardware abstraction layer (Camera HAL). Firstly, the hardware abstraction layer obtains the multi-frame images and the image parameter information of the multi-frame images from the image processing chip through different threads respectively, and then stores the multi-frame images and the image parameter information of the multi-frame images into a buffer zone of the hardware abstraction layer.
Secondly, the hardware abstraction layer is controlled to match the acquired multi-frame images with the image parameter information of the multi-frame images to generate a plurality of groups of matching results. The matching may be performed on the generation time stamps of the image parameter information and of the multi-frame images: if the two generation time stamps match, the image frame and the image parameter information are considered to match each other, and a set of matching results is generated from them. In this way, the image parameter information of the multi-frame images is sequentially matched with the multi-frame images, producing a plurality of sets of matching results (Image and Metadata).
Step 1040, if the transmission mode of the matching result is determined to be the third transmission mode by the application layer, obtaining multiple groups of matching results from the second preset queue by the application layer; the second preset queue is a shared queue used for transmitting a matching result between the application layer and the hardware abstraction layer.
The software architecture of different vendors may be different for the transmission process of the matching results. For example, some vendors use callback (callback) to obtain matching results from the buffer of the hardware abstraction layer. Some manufacturers adopt a shared queue mode, and a matching result is obtained from the shared queue through an application layer.
In order to realize multiplexing among software architectures of different manufacturers, a step of determining a transmission mode of a matching result through an application layer is added. Specifically, the application layer determines whether the current android system can support a mode of adopting a shared queue or a callback mode of only supporting the original android system based on system configuration parameters to acquire a matching result from the hardware abstraction layer. When the matching result is transmitted, a mode of transmitting by adopting a shared queue is used as a third transmission mode, a mode of transmitting by adopting a callback mode of the android system is used as a fourth transmission mode, and the transmission speed and the stability of the third transmission mode are higher than those of the fourth transmission mode.
Therefore, if the application layer determines that the current android system supports the shared-queue mode, the third transmission mode is adopted preferentially, i.e., the application layer acquires the matching results from the second preset queue. The second preset queue is a shared queue used for transmitting matching results between the application layer and the hardware abstraction layer; it may be a circular queue or a chained queue, which is not limited in this application. A circular queue is a sequential queue logically joined end to end, viewing the storage for the queue elements as a ring; a chained queue is implemented with a linked list, i.e., a singly linked list on which the permitted operations are restricted. In other words, the second preset queue is a queue in the application framework layer that both the hardware abstraction layer and the application layer can access. Thus, the matching results are obtained from the second preset queue through the application layer.
Step 1060, processing, through the application layer, each image frame in the plurality of sets of matching results based on the plurality of sets of matching results to generate target image frames.
Specifically, after the application layer obtains multiple sets of matching results from the second preset queue, the application layer may process each image frame in the multiple sets of matching results to generate a target image frame based on the multiple sets of matching results. Each matching result comprises image frames and image parameter information which are matched with each other. Specifically, for each matching result, the application layer processes the image frames in the matching result based on the image parameter information in the matching result to generate target image frames. Finally, a preview image or video is formed based on the target image frame. For example, the pixel information of the same image frame is matched with the shooting parameter information or shooting scene information of the same image frame, so that the same image frame can be subjected to image processing based on the shooting parameter information or shooting scene information of the same image frame to generate a plurality of target image frames.
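As a loose illustration, the per-match processing reduces to a loop over the matched pairs; the Match record and processFrame below are placeholders, since the patent leaves the concrete per-frame algorithm open:

    import java.util.ArrayList;
    import java.util.List;

    final class TargetFrameGenerator {
        // Placeholder for one matched pair of frame and parameter information.
        static final class Match { Object imageFrame; Object imageParameterInfo; }

        // For each matching result, process its frame with its own parameters.
        static List<Object> toTargetFrames(List<Match> matchResults) {
            List<Object> targetFrames = new ArrayList<>();
            for (Match m : matchResults) {
                // e.g. tune the frame according to the shooting parameter
                // or scene information carried in imageParameterInfo
                targetFrames.add(processFrame(m.imageFrame, m.imageParameterInfo));
            }
            return targetFrames; // consumed in order as the preview stream or video
        }

        private static Object processFrame(Object frame, Object params) {
            return frame; // stand-in for the actual per-frame processing
        }
    }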
In the embodiment of the application, the hardware abstraction layer is controlled to match the acquired multi-frame images with the image parameter information of the multi-frame images to generate a plurality of groups of matching results, each comprising mutually matched image frames and image parameter information. If the application layer determines that the transmission mode of the matching results is the third transmission mode, the application layer obtains the plurality of groups of matching results from the second preset queue, a shared queue used for transmitting matching results between the application layer and the hardware abstraction layer. Finally, the application layer processes each image frame in the plurality of groups of matching results based on those results to generate target image frames. Because the multi-frame images and their image parameter information are matched in the hardware abstraction layer, the matching step is executed before the information is transmitted; on the one hand this spares the application layer from matching them itself, reducing its computing load, and on the other hand it avoids transmission errors in the information and hence errors in the matching results. In addition, because the application layer acquires the plurality of groups of matching results through the shared queue, the transmission is faster and the fluctuation range of the transmission time is smaller, so the target image frames are generated at smaller and more uniform time intervals. This avoids the frequent stuttering and unsmooth playback of a preview image or video formed from the target image frames.
In the previous embodiment, it is described that if the transmission mode of the matching result determined by the application layer is the third transmission mode, the application layer obtains multiple sets of matching results from the second preset queue. In this embodiment, it is further described that the plurality of sets of matching results are stored in the second preset buffer in the form of a second preset queue; the second preset buffer zone is a shared buffer zone used for transmitting a matching result between the application layer and the hardware abstraction layer;
obtaining, by the application layer, a plurality of sets of matching results from a second preset queue, including:
and obtaining a plurality of groups of matching results from the second preset buffer zone through the application layer.
The second preset queue is a queue in the application framework layer and is a shared queue used for transmitting a matching result between the application layer and the hardware abstraction layer. The matching result is stored in a second preset buffer zone in the form of a second preset queue, the second preset buffer zone comprises a plurality of sub-buffer zones, and each sub-buffer zone is a shared buffer zone used for transmitting the matching result between the application layer and the hardware abstraction layer.
Specifically, the system sequentially acquires matching results from the hardware abstraction layer, sequentially writes the matching results into sub-buffers in the second preset buffer according to the queue form based on the acquisition order, and stores one matching result in each sub-buffer. Because the size of each matching result is not consistent, when the matching result is written into the sub-buffer, the starting position of writing and the data size of the matching result also need to be recorded in the queue. Therefore, when the application layer acquires the matching result of the multi-frame images from the second preset queue, first, each sub-buffer in the second preset buffer is identified. Specifically, according to the writing start position recorded in the queue and the data size of the matching result, each sub-buffer storing each matching result can be identified from the second preset buffer.
Secondly, after the sub-buffers storing the matching results are identified from the second preset buffer, the application layer can acquire the matching results of the multi-frame images from the sub-buffers.
In this embodiment of the present application, according to the write start position recorded in the queue and the data size of the matching result, each sub-buffer may be accurately and completely identified from the second preset buffer by the application layer. Therefore, the application layer can accurately and completely acquire the matching result of the multi-frame images from the second preset buffer zone, and the condition of information omission or information reading errors is avoided.
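A minimal Java sketch of this bookkeeping follows: each queue record carries the write start position and the data size of one matching result, so a variable-size result can be located and recovered exactly. The Record class and method names are assumptions of the sketch:

    import java.nio.ByteBuffer;

    final class SubBufferIndex {
        static final class Record {
            final int writeStart; // start position recorded in the queue
            final int dataSize;   // size of this matching result
            Record(int writeStart, int dataSize) {
                this.writeStart = writeStart;
                this.dataSize = dataSize;
            }
        }

        // Writer side: append one serialized matching result, returning the
        // record that later identifies its sub-buffer.
        static Record write(ByteBuffer sharedBuffer, byte[] matchResult) {
            int start = sharedBuffer.position();
            sharedBuffer.put(matchResult);
            return new Record(start, matchResult.length);
        }

        // Reader side: recover exactly one matching result from its record,
        // without disturbing the writer's position.
        static byte[] read(ByteBuffer sharedBuffer, Record record) {
            byte[] out = new byte[record.dataSize];
            ByteBuffer view = sharedBuffer.duplicate();
            view.position(record.writeStart);
            view.get(out);
            return out;
        }
    }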
In one embodiment, before the multiple sets of matching results are obtained from the second preset queue by the application layer, the method includes:
the control hardware abstraction layer transmits each group of matching results to the application framework layer;
controlling the application framework layer to execute dequeuing operation on the queue elements in the second preset queue;
and writing each group of matching results into the queue element, executing the enqueuing operation on the queue element written with the matching results, and updating the second preset queue.
The hardware abstraction layer sequentially acquires matching results from the image processing chip and stores them in its own buffer. The system then controls the hardware abstraction layer to transmit the matching results stored in its own buffer to the application framework layer.
After the hardware abstraction layer transmits a matching result stored in its own buffer to the application framework layer, the system is triggered to control the application framework layer to store that matching result through the queue.
Specifically, the system first controls the application framework layer to execute the dequeuing operation on a queue element in the second preset queue; the matching result is then written into the queue element, the queue element holding the matching result is enqueued, and the second preset queue is updated. The dequeuing operation takes a queue element out of the second preset queue, and the enqueuing operation returns the taken-out queue element to it. The procedure can thus be understood as follows: after the system controls the application framework layer to take one queue element out of the second preset queue, the matching result is written into that element, and the element holding the matching result is returned to the second preset queue, thereby updating it.
After the hardware abstraction layer obtains a matching result, the system controls the hardware abstraction layer to transmit the matching result stored in its own buffer to the application framework layer. The application framework layer then takes a queue element out of the second preset queue, writes the matching result into it, and returns the element holding the matching result to the second preset queue, thereby updating the queue.
In the embodiment of the application, the system first controls the hardware abstraction layer to transmit the matching result to the application framework layer; the application framework layer then takes a queue element out of the second preset queue, writes the matching result into it, and returns it, updating the queue. Executed in a loop, this transfers the matching results from the hardware abstraction layer into the second preset queue of the application framework layer, from which the application layer can then acquire them. In this way, the system controls the application layer to obtain the matching results from the hardware abstraction layer through a shared buffer, without transmitting them through a callback as in the traditional method; that is, the matching results need not be cross-process copied between the application layer and the hardware abstraction layer, and the transfer is not affected by CPU load. Transmitting the matching results through the shared buffer therefore shortens the time difference between the system receiving an image frame and receiving its matching result, and keeps the fluctuation range of that time difference small.
On the basis of fig. 10, in one embodiment, as shown in fig. 12, there is further provided an image processing method, further including:
Step 1080, if the transmission mode of the matching result is determined to be the fourth transmission mode by the application layer, controlling the application layer to acquire a plurality of groups of matching results from the buffer area of the hardware abstraction layer by a callback mode; the fourth transmission mode is a mode of transmitting through a callback mode.
The software architecture of different vendors may be different for the transmission process of the matching results. For example, some vendors use callback (callback) to obtain matching results from the buffer of the hardware abstraction layer. Some manufacturers adopt a shared queue mode, and a matching result is obtained from the shared queue through an application layer.
In order to realize multiplexing among software architectures of different manufacturers, a step of determining the transmission mode of the matching results through the application layer is added. Specifically, the application layer determines, based on system configuration parameters, whether the current android system supports the shared-queue mode or only the native android callback mode for acquiring the matching results from the hardware abstraction layer. When matching results are transmitted between the hardware abstraction layer and the application layer, the shared-queue mode serves as the third transmission mode and the native android callback mode serves as the fourth transmission mode, and the transmission speed and stability of the third transmission mode are higher than those of the fourth transmission mode.
Therefore, if the application layer determines that the current android system only supports the native android callback mode, i.e., the fourth transmission mode, the system controls the application layer to obtain the matching results from the buffer of the hardware abstraction layer through the callback. If the application layer determines that the transmission mode of the matching results is the shared-queue mode, the application layer preferentially acquires the matching results from the second preset queue. In this way, multiplexing among the software architectures of different manufacturers is realized, and the software architectures of multiple manufacturers are compatible.
The callback method involves cross-process operation; since different processes cannot directly access each other's memory space, the callback method must cross-process copy the matching results between the application layer and the hardware abstraction layer. In this way, the application layer can obtain the matching results from the hardware abstraction layer through the callback.
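For contrast with the shared-queue path, a minimal sketch of the callback path; MatchResultCallback is an assumed stand-in for the vendor callback interface, and only the copy-on-delivery behaviour mirrors the text above:

    // Assumed callback interface; in practice delivery crosses a process
    // boundary (e.g. via Binder), which this sketch does not model.
    interface MatchResultCallback {
        void onMatchResult(byte[] resultFromHal);
    }

    final class CallbackReceiver implements MatchResultCallback {
        private byte[] appLayerCopy;

        @Override
        public void onMatchResult(byte[] resultFromHal) {
            // Crossing the process boundary forces a copy: the hardware
            // abstraction layer's memory is not directly addressable here,
            // so the payload is duplicated into the application layer's buffer.
            appLayerCopy = resultFromHal.clone();
        }
    }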
In the embodiment of the application, in order to realize multiplexing among software architectures of different manufacturers, a step of determining the transmission mode of the matching results through the application layer is added. The application layer determines whether the current android system supports the shared-queue mode or only the native android callback mode for acquiring the matching results from the hardware abstraction layer. If the application layer determines that only the native callback mode, i.e., the fourth transmission mode, is supported, the system controls the application layer to obtain the matching results from the buffer of the hardware abstraction layer through the callback. If the application layer determines that the transmission mode of the matching results is the shared-queue mode, the application layer preferentially acquires the matching results from the second preset queue. In this way, multiplexing among the software architectures of different manufacturers is realized, and the software architectures of multiple manufacturers are compatible.
In one embodiment, as shown in fig. 13, there is provided an image processing apparatus 1300 applied to an android system, the apparatus comprising:
a multi-frame image acquisition module 1320, configured to acquire multi-frame images from the hardware abstraction layer through an image reader in the application layer;
the image parameter information obtaining module 1340 is configured to determine, by using the application layer, a first target transmission mode of image parameter information, and if the first target transmission mode is determined to be the first transmission mode, obtain, by using the application layer, image parameter information of a plurality of frames of images from a first preset queue; the first transmission mode is a mode of transmitting through a shared queue; the first preset queue is a shared queue used for transmitting image parameter information between the application layer and the hardware abstraction layer;
the first image processing module 1360 is configured to process, by the application layer, each image frame in the multi-frame image based on the image parameter information of the multi-frame image to generate a target image frame.
In one embodiment, the image parameter information is stored in a first preset buffer in the form of a first preset queue; the first preset buffer zone is a shared buffer zone used for transmitting image parameter information between the application layer and the hardware abstraction layer.
In one embodiment, the image parameter information obtaining module 1340 is further configured to obtain, by the application layer, image parameter information of multiple frames of images from the first preset buffer.
In one embodiment, there is provided an image processing apparatus, further including: the first preset queue updating module is used for controlling the hardware abstraction layer to transmit the image parameter information of the multi-frame images to the application framework layer; controlling an application framework layer to execute dequeuing operation on queue elements in a first preset queue; writing the image parameter information of the multi-frame image into a queue element, executing the enqueuing operation on the queue element written with the image parameter information of the multi-frame image, and updating the first preset queue.
In one embodiment, there is provided an image processing apparatus, further including:
the first callback module is used for controlling the application layer to acquire the image parameter information of the multi-frame image from the buffer zone of the hardware abstraction layer in a callback mode if the transmission mode of the image parameter information is determined to be a second transmission mode by the application layer; the second transmission mode is a mode of transmitting through a callback mode.
In one embodiment, the first callback module is configured to copy image parameter information of a multi-frame image in a buffer area of the hardware abstraction layer to a buffer area of the application layer; and controlling the application layer to acquire the image parameter information of the multi-frame images from the buffer area of the application layer.
In one embodiment, the first image processing module 1360 includes:
the matching unit is used for matching the image parameter information of the multi-frame images with the multi-frame images through the application layer to generate a plurality of groups of corresponding matching results; the matching result comprises mutually matched image frames and image parameter information;
and the target image frame generation unit is used for processing the image frames to generate target image frames based on the image parameter information in the matching result through the application layer.
In one embodiment, a matching unit is configured to obtain, for each image frame in the multi-frame image, a first generation timestamp of the image frame through an application layer; acquiring a second generation time stamp of the parameter information through an application layer aiming at each parameter information of the multi-frame image; if the first generation time stamp is the same as the second generation time stamp, the application layer is utilized to generate a matching result based on the image frame corresponding to the first generation time stamp and the parameter information corresponding to the second generation time stamp.
In one embodiment, as shown in fig. 14, there is provided an image processing apparatus 1400 applied to an electronic device, on which an android system is running, the apparatus including:
a multi-group matching result generating module 1420, configured to control the hardware abstraction layer to match the acquired multi-frame image and the image parameter information of the multi-frame image to generate a multi-group matching result; the matching result comprises mutually matched image frames and image parameter information;
The multiple-group matching result transmission module 1440 is configured to obtain multiple groups of matching results from the second preset queue through the application layer if the transmission mode of the matching results is determined to be the third transmission mode through the application layer; the third transmission mode is a mode of transmitting through a shared queue; the second preset queue is a shared queue used for transmitting a matching result between the application layer and the hardware abstraction layer;
the second image processing module 1460 is configured to process, through the application layer, each image frame in the plurality of sets of matching results based on the plurality of sets of matching results to generate target image frames.
In one embodiment, the plurality of sets of matching results are stored in a second preset buffer in the form of a second preset queue; the second preset buffer zone is a shared buffer zone used for transmitting a matching result between the application layer and the hardware abstraction layer;
and the multi-group matching result transmission module is also used for acquiring a plurality of groups of matching results from the second preset buffer zone through the application layer.
In one embodiment, there is provided an image processing apparatus, further including: the second preset queue updating module is used for controlling the hardware abstraction layer to transmit each group of matching results to the application framework layer; controlling the application framework layer to execute dequeuing operation on the queue elements in the second preset queue; and writing each group of matching results into the queue element, executing the enqueuing operation on the queue element written with the matching results, and updating the second preset queue.
In one embodiment, there is provided an image processing apparatus, further including: the second callback module is used for controlling the application layer to acquire a plurality of groups of matching results from the buffer area of the hardware abstraction layer in a callback mode if the transmission mode of the matching results is determined to be a fourth transmission mode by the application layer; the fourth transmission mode is a mode of transmitting through a callback mode.
In one embodiment, the first predetermined queue is a circular queue or a chained queue.
In one embodiment, the image parameter information includes shooting parameter information or shooting scene information.
It should be understood that, although the steps in the above-described flowcharts are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts may comprise a plurality of sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with at least part of the sub-steps or stages of other steps.
The above-described division of the respective modules in the image processing apparatus is merely for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to accomplish all or part of the functions of the above-described image processing apparatus.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not repeated here. The respective modules in the above-described image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
Fig. 15 is a schematic diagram of the internal structure of an electronic device in one embodiment. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, or a wearable device. The electronic device includes a processor and a memory connected by a system bus. The processor may comprise one or more processing units and may be a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or the like. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided in the embodiments. The internal memory provides a cached operating environment for the operating system and the computer program in the non-volatile storage medium.
Each module in the image processing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and its program modules may be stored in the memory of the electronic device. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are performed.
Embodiments of the present application also provide a computer-readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of an image processing method.
Embodiments of the present application also provide a computer program product containing instructions that, when run on a computer, cause the computer to perform an image processing method.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Non-volatile memory may include ROM (Read-Only Memory), PROM (Programmable ROM), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), or flash memory. Volatile memory may include RAM (Random Access Memory), which acts as an external cache. By way of illustration and not limitation, RAM is available in a variety of forms, such as SRAM (Static RAM), DRAM (Dynamic RAM), SDRAM (Synchronous Dynamic RAM), DDR SDRAM (Double Data Rate Synchronous Dynamic RAM), ESDRAM (Enhanced Synchronous Dynamic RAM), SLDRAM (Synchronous Link Dynamic RAM), RDRAM (Rambus Dynamic RAM), and DRDRAM (Direct Rambus Dynamic RAM).
The foregoing examples represent only a few embodiments of the present application and are described in relative detail, but they are not thereby to be construed as limiting the scope of the present application. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (16)

1. An image processing method, applied to an electronic device, on which an android system is running, the method comprising:
acquiring multi-frame images from the hardware abstraction layer through an image reader in the application layer;
if the transmission mode of the image parameter information is determined to be a first transmission mode by the application layer, acquiring the image parameter information of the multi-frame image from a first preset queue by the application layer; the first transmission mode is a mode of transmitting through a shared queue; the first preset queue is a shared queue used for transmitting image parameter information between the application layer and the hardware abstraction layer;
and processing each image frame in the multi-frame image to generate a target image frame by the application layer based on the image parameter information of the multi-frame image.
2. The method according to claim 1, wherein the image parameter information is stored in the first preset buffer in the form of the first preset queue; the first preset buffer zone is a shared buffer zone used for transmitting the image parameter information between the application layer and the hardware abstraction layer.
3. The method according to claim 2, wherein the obtaining, by the application layer, the image parameter information of the multi-frame image from the first preset queue includes:
and acquiring the image parameter information of the multi-frame image from the first preset buffer zone through the application layer.
4. A method according to claim 3, comprising, prior to said obtaining, by the application layer, image parameter information of the multi-frame image from a first preset queue:
controlling the hardware abstraction layer to transmit the image parameter information of the multi-frame image to an application framework layer;
controlling the application framework layer to execute dequeuing operation on the queue elements in the first preset queue;
writing the image parameter information of the multi-frame image into the queue element, executing enqueuing operation on the queue element written with the image parameter information of the multi-frame image, and updating the first preset queue.
5. The method according to claim 1, wherein the method further comprises:
if the transmission mode of the image parameter information is determined to be a second transmission mode by the application layer, the application layer is controlled to acquire the image parameter information of the multi-frame image from a buffer area of the hardware abstraction layer in a callback mode; the second transmission mode is a mode of transmission through a callback mode.
6. The method according to claim 5, wherein the controlling the application layer to obtain the image parameter information of the multi-frame image from the buffer area of the hardware abstraction layer in a callback manner includes:
copying the image parameter information of the multi-frame images in the buffer area of the hardware abstraction layer to the buffer area of the application layer;
and controlling the application layer to acquire the image parameter information of the multi-frame image from the buffer area of the application layer.
7. The method according to claim 1, wherein the processing, by the application layer, each image frame in the multi-frame image based on the image parameter information of the multi-frame image to generate a target image frame includes:
matching the image parameter information of the multi-frame image with the multi-frame image through the application layer to generate a plurality of groups of corresponding matching results; the matching result comprises mutually matched image frames and the image parameter information;
And processing the image frames to generate target image frames by the application layer based on the image parameter information in the matching result.
8. The method of claim 7, wherein the matching, by the application layer, the image parameter information of the multi-frame image with the multi-frame image to generate a plurality of sets of matching results, comprises:
for each image frame in the multi-frame image, acquiring a first generation time stamp of the image frame through the application layer;
acquiring a second generation time stamp of the parameter information through the application layer aiming at each parameter information of the multi-frame image;
and if the first generation time stamp is the same as the second generation time stamp, generating the matching result by the application layer based on the image frame corresponding to the first generation time stamp and the parameter information corresponding to the second generation time stamp.
9. The method of claim 1, wherein the first predetermined queue is a circular queue or a chained queue.
10. An image processing method, applied to an electronic device, on which an android system is running, the method comprising:
The control hardware abstraction layer matches the acquired multi-frame images with the image parameter information of the multi-frame images to generate a plurality of groups of matching results; the matching result comprises mutually matched image frames and the image parameter information;
if the transmission mode of the matching result is determined to be a third transmission mode by the application layer, acquiring a plurality of groups of matching results from a second preset queue by the application layer; the third transmission mode is a mode of transmitting through a shared queue; the second preset queue is a shared queue used for transmitting the matching result between the application layer and the hardware abstraction layer;
and processing each image frame in the multiple groups of matching results based on the multiple groups of matching results through the application layer to generate a target image frame.
11. The method of claim 10, wherein the plurality of sets of match results are stored in the second preset buffer in the form of the second preset queue; the second preset buffer zone is a shared buffer zone used for transmitting the matching result between the application layer and the hardware abstraction layer;
the obtaining, by the application layer, the plurality of sets of matching results from a second preset queue includes:
And acquiring the multiple groups of matching results from the second preset buffer zone through the application layer.
12. The method of claim 11, comprising, prior to said obtaining, by the application layer, the plurality of sets of matching results from a second preset queue:
controlling the hardware abstraction layer to transmit each group of matching results to an application framework layer;
controlling the application framework layer to execute dequeuing operation on the queue elements in the second preset queue;
and writing each group of matching results into the queue element, executing enqueuing operation on the queue element written with the matching results, and updating the second preset queue.
13. The method according to claim 10, wherein the method further comprises:
if the transmission mode of the matching result is determined to be a fourth transmission mode by the application layer, the application layer is controlled to acquire a plurality of groups of matching results from a buffer area of the hardware abstraction layer by a callback mode; the fourth transmission mode is a mode of transmitting in a callback mode.
14. An image processing apparatus, for application to an android system, the apparatus comprising:
the multi-frame image acquisition module is used for acquiring multi-frame images from the hardware abstraction layer through an image reader in the application layer;
The image parameter information acquisition module is used for determining a first target transmission mode of the image parameter information through the application layer, and if the first target transmission mode is determined to be the first transmission mode, acquiring the image parameter information of the multi-frame image from a first preset queue through the application layer; the first preset queue is a shared queue used for transmitting image parameter information between the application layer and the hardware abstraction layer;
and the image processing module is used for processing each image frame in the multi-frame image to generate a target image frame based on the image parameter information of the multi-frame image through the application layer.
15. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the image processing method according to any of claims 1 to 13.
16. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 13.
CN202111274607.5A 2021-10-29 2021-10-29 Image processing method, apparatus, electronic device, and computer-readable storage medium Active CN113840091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111274607.5A CN113840091B (en) 2021-10-29 2021-10-29 Image processing method, apparatus, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN113840091A (en) 2021-12-24
CN113840091B (en) 2023-07-18

Family

ID=78966721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111274607.5A Active CN113840091B (en) 2021-10-29 2021-10-29 Image processing method, apparatus, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113840091B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713913A (en) * 2015-12-09 2017-05-24 腾讯科技(深圳)有限公司 Video picture frame sending method and device and video picture frame receiving method and device
CN110266951A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processor, image processing method, filming apparatus and electronic equipment
CN111314606A (en) * 2020-02-21 2020-06-19 Oppo广东移动通信有限公司 Photographing method and device, electronic equipment and storage medium
CN111491102A (en) * 2020-04-22 2020-08-04 Oppo广东移动通信有限公司 Detection method and system for photographing scene, mobile terminal and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109167930A (en) * 2018-10-11 2019-01-08 Oppo广东移动通信有限公司 Image display method, device, electronic equipment and computer readable storage medium
CN110141861B (en) * 2019-01-29 2023-10-24 腾讯科技(深圳)有限公司 Control method, device and terminal
CN110062161B (en) * 2019-04-10 2021-06-25 Oppo广东移动通信有限公司 Image processor, image processing method, photographing device, and electronic apparatus
CN109963083B (en) * 2019-04-10 2021-09-24 Oppo广东移动通信有限公司 Image processor, image processing method, photographing device, and electronic apparatus
CN111832366B (en) * 2019-04-22 2024-04-02 富联精密电子(天津)有限公司 Image recognition apparatus and method
CN111510629A (en) * 2020-04-24 2020-08-07 Oppo广东移动通信有限公司 Data display method, image processor, photographing device and electronic equipment

Also Published As

Publication number Publication date
CN113840091A (en) 2021-12-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant