CN110418061B - Image processing method, image processor, photographing device and electronic equipment - Google Patents


Info

Publication number
CN110418061B
CN110418061B (application CN201910789931.7A)
Authority
CN
China
Prior art keywords
algorithm
memory
future
snapshot
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910789931.7A
Other languages
Chinese (zh)
Other versions
CN110418061A (en)
Inventor
李小朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910789931.7A priority Critical patent/CN110418061B/en
Publication of CN110418061A publication Critical patent/CN110418061A/en
Application granted granted Critical
Publication of CN110418061B publication Critical patent/CN110418061B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processor, a photographing device, and electronic equipment. The image processing method comprises the following steps: an algorithm post-processing module calculates the current required memory occupied by all current snapshot tasks in a queue; the algorithm post-processing module calculates the future required memory that a future snapshot task would occupy; and an application program module determines whether to execute the future snapshot task according to the current required memory, the future required memory, and the remaining memory of the electronic equipment. Because the decision is based on these three quantities, the number of snapshot images can be adjusted dynamically and flexibly according to the memory usage of the electronic equipment, which avoids both the stuttering caused by excessive memory occupation and the waste of the equipment's memory, while improving the user's shooting experience.

Description

Image processing method, image processor, photographing device and electronic equipment
Technical Field
The present disclosure relates to the field of imaging technologies, and in particular, to an image processing method, an image processor, a photographing apparatus, and an electronic device.
Background
A mobile phone can provide a snapshot (burst capture) function, so that a user can shoot continuously, improving the user experience. Each snapshot requires a certain amount of time for the background to process the captured RAW image, and when special effects such as beauty or filter processing must be applied to the RAW image, completing the snapshot takes even longer. When the user snaps faster than the background can process, the phone's memory is continuously consumed, so the phone stutters or other background applications are forcibly closed. In the related art, the problem of high memory occupation can be avoided by limiting the number of snapshot images, but this approach has low flexibility.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processor, a photographing device and electronic equipment.
The image processing method of the embodiment of the application comprises the following steps: the algorithm post-processing module calculates the current required memory occupied by all current snapshot tasks in the queue; the algorithm post-processing module calculates future required memory required to be occupied by a future snapshot task; and the application program module judges whether to execute the future snapshot task according to the current required memory, the future required memory and the residual memory of the electronic equipment.
The image processor of the embodiment of the application comprises an algorithm post-processing module and an application program module. The algorithm post-processing module is used for: calculating the current required memory occupied by all current snapshot tasks in the queue; and calculating the future required memory occupied by the future snapshot task. And the application program module is used for judging whether to execute the future snapshot task according to the current required memory, the future required memory and the residual memory of the electronic equipment.
The photographing device of the embodiment of the application comprises an image processor and an image sensor. The image sensor is connected with the image processor. The image processor comprises an algorithm post-processing module and an application program module. The algorithm post-processing module is used for: calculating the current required memory occupied by all current snapshot tasks in the queue; and calculating the future required memory occupied by the future snapshot task. And the application program module is used for judging whether to execute the future snapshot task according to the current required memory, the future required memory and the residual memory of the electronic equipment.
The electronic equipment of the embodiment of the application comprises a photographing device and a shell. The photographing device is combined with the shell. The photographing device comprises an image processor and an image sensor. The image sensor is connected with the image processor. The image processor comprises an algorithm post-processing module and an application program module. The algorithm post-processing module is used for: calculating the current required memory occupied by all current snapshot tasks in the queue; and calculating the future required memory occupied by the future snapshot task. And the application program module is used for judging whether to execute the future snapshot task according to the current required memory, the future required memory and the residual memory of the electronic equipment.
The electronic equipment and the image processor can determine whether to execute a future snapshot task according to the current required memory occupied by the current snapshot tasks, the future required memory that the future snapshot task would occupy, and the remaining memory of the electronic equipment. The number of images to snapshot can thus be adjusted dynamically and flexibly according to the memory usage of the electronic equipment, avoiding both the stuttering caused by excessive memory occupation and the waste of the equipment's memory, while improving the user's shooting experience.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 and 2 are schematic structural views of electronic devices according to some embodiments of the present application.
FIG. 3 is a schematic diagram of a camera in accordance with certain embodiments of the present application.
Fig. 4 is a schematic diagram of a snapshot process in the related art.
FIG. 5 is a schematic diagram of a snapshot process in accordance with certain embodiments of the present application.
Fig. 6 is a flowchart illustrating an image processing method according to an embodiment of the present application.
Fig. 7 is a flowchart illustrating an image processing method according to another embodiment of the present application.
Fig. 8 is a flowchart illustrating an image processing method according to another embodiment of the present application.
Fig. 9 is a flowchart illustrating an image processing method according to still another embodiment of the present application.
Fig. 10 is a flowchart illustrating an image processing method according to still another embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 and fig. 2, an electronic device 100 is provided. The electronic device 100 may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (a smart watch, a smart bracelet, a smart helmet, smart glasses, etc.), a virtual reality device, and the like. In the present application the electronic device 100 is described as a mobile phone, but its form is not limited to a mobile phone. The electronic device 100 includes a photographing device 30, a housing 40, and a display 50. The photographing device 30 includes an image processor 10 and an image sensor 20. The image processor 10 is connected to the image sensor 20.
The photographing device 30 is combined with the housing 40. In one example, the housing 40 includes a main body 43 and a movable bracket 41; driven by a driving device, the movable bracket 41 can move relative to the main body 43, for example sliding into the main body 43 (as shown in fig. 2) or out of it (as shown in fig. 1). The image sensor 20 of the photographing device 30 can be mounted on the movable bracket 41, so that the movement of the movable bracket 41 retracts the photographing device 30 into the main body 43 or extends it out, while the image processor 10 is accommodated in the space formed by the housing 40. One or more collection windows are formed in the housing 40, and the image sensor 20 is aligned with a collection window so that it can receive light from the external environment to generate an original image. When the user needs to use the photographing device 30, the user can trigger the movable bracket 41 to slide out of the main body 43, driving the image sensor 20 out of the main body 43; when the photographing device 30 is not needed, the movable bracket 41 can be triggered to slide into the main body 43, retracting the image sensor 20. In another example, a through hole is opened in the front or back of the housing 40; the image sensor 20 is installed in the housing 40 and aligned with the through hole, the photographing device 30 receives light passing through the through hole to generate an original image, and the image processor 10 is accommodated in the space formed by the housing 40.
In another example, the image sensor 20 of the photographing device 30 is mounted in the housing 40 below the display screen 50, which has no through hole; the photographing device 30 receives light passing through the display screen 50 to generate an original image, that is, the photographing device 30 serves as an under-display camera, and the image processor 10 is accommodated in the space formed by the housing 40.
Referring to fig. 3, the photographing device 30 includes an image processor 10 and an image sensor 20. The image processor 10 is connected to the image sensor 20. The image sensor 20 includes an image acquisition unit 22 (sensor) and a RAW image data unit 24 (IFE), and the number of image sensors 20 may be one or more. The image acquisition unit 22 is configured to receive light to obtain image data (a RAW image). The RAW image data unit 24 is used to transmit the image data acquired by the image acquisition unit 22 to the image processor 10. The RAW image data unit 24 may also process the RAW image acquired by the image acquisition unit 22 and output the processed RAW image to the image processor 10.
The image processor 10 includes a hardware abstraction module 12, an application program module 14 (APP), and an algorithm post-processing module 16 (APS).
The hardware abstraction module 12 is configured to receive a RAW image, convert the RAW image into a YUV image, and transmit the RAW image and/or the YUV image. The hardware abstraction module 12 may be connected to the image sensor 20. Specifically, the hardware abstraction module 12 may include a buffer unit (buffer queue) 122 connected to the image sensor 20, a RAW-to-RGB processing unit (BPS) 124, and a denoising and YUV post-processing unit (Image Process Engine, IPE) 126 connected to the application module 14. The buffer unit 122 is used for buffering the RAW image from the image sensor 20 and transmitting the RAW image to the algorithm post-processing module 16 through the application module 14. The RAW-to-RGB processing unit 124 is configured to convert the RAW image from the buffer unit 122 into an RGB image. The denoising and YUV post-processing unit 126 is configured to process the RGB image to obtain a YUV image and transmit the YUV image to the algorithm post-processing module 16 through the application module 14. The hardware abstraction module 12 may also transmit metadata of the image data, the metadata including 3A information (automatic exposure control AE, automatic focus control AF, automatic white balance control AWB), picture information (e.g., image width and height), exposure parameters (aperture size, shutter speed, and sensitivity), etc. The metadata may assist in implementing post-photographing processing of the RAW image and/or the YUV image (e.g., at least one of beauty, filter, rotation, watermark, blurring, HDR, and multi-frame processing). Illustratively, the metadata includes sensitivity (ISO) information, according to which the brightness of the RAW image and/or the YUV image can be adjusted, thereby implementing post-photographing processing related to the adjusted brightness.
The application module 14 is used to interface with the hardware abstraction module 12. The application module 14 may be configured to generate control commands according to user input and send the control commands to the image sensor 20 through the hardware abstraction module 12 to control the image sensor 20 accordingly. The application module 14 can run in 64-bit mode, and the static link library (lib) of the image processing algorithms used for post-photographing processing can be configured as 64-bit to improve running speed. After receiving the RAW image and/or the YUV image transmitted by the hardware abstraction module 12, the application module 14 may perform post-photographing processing on the RAW image and/or the YUV image itself, or may transmit the RAW image and/or the YUV image to the algorithm post-processing module 16 for post-photographing processing. It is also possible for the application module 14 to perform some post-photographing processing (e.g., beauty, filter, rotation, watermark, blurring, etc.) while the algorithm post-processing module 16 performs the rest (e.g., HDR processing, multi-frame processing, etc.). In the embodiment of the present application, the application module 14 transmits the RAW image and/or the YUV image to the algorithm post-processing module 16 for post-photographing processing.
The algorithm post-processing module 16 is connected to the hardware abstraction module 12 through the application module 14. At least one image processing algorithm (for example, at least one of a beauty processing algorithm, a filter processing algorithm, a rotation processing algorithm, a watermark processing algorithm, a blurring processing algorithm, an HDR processing algorithm, and a multi-frame processing algorithm) is stored in the algorithm post-processing module 16, and the algorithm post-processing module 16 is configured to process the RAW image and/or the YUV image with these algorithms to implement post-photographing processing. Since the post-photographing processing of the RAW image and/or the YUV image is realized by the algorithm post-processing module 16, no process changes are needed in the algorithm architecture of the hardware abstraction module 12; only external compatibility is required, which reduces design difficulty. And because post-photographing processing is handled by the algorithm post-processing module 16, that module's function is more single-purpose and focused, enabling fast porting, simple addition of new image processing algorithms, and similar benefits.
When the photographing device 30 is in the snapshot mode (i.e., the user clicks the shutter multiple times in succession to control the photographing device 30 to continuously acquire multiple images), the application module 14 generates multiple control instructions according to the user's input, and may sequentially send them to the image sensor 20 through the hardware abstraction module 12 to control the image sensor 20 to acquire one RAW image per control instruction. In one example, as shown in fig. 3 and 4, after the image sensor 20 receives a control instruction and acquires a RAW image, the image sensor 20 transmits the RAW image to the hardware abstraction module 12. The hardware abstraction module 12 either transmits the RAW image directly to the algorithm post-processing module 16 through the application module 14, or converts the RAW image into a YUV image and transmits the YUV image to the algorithm post-processing module 16 through the application module 14. The RAW image (or the converted YUV image) undergoes post-photographing processing in the algorithm post-processing module 16, and the post-photographing processing performed on each RAW image (or converted YUV image) is one snapshot task (the "task" shown in fig. 4). The application module 14 must wait for the current snapshot task to complete, that is, until the RAW image (or converted YUV image) has been processed and the algorithm post-processing module 16 has transmitted the processed image back to the application module 14, before it transmits the next control instruction to the image sensor 20 to obtain a new RAW image.
However, this control method increases the time required for each snapshot and seriously degrades the user's photographing experience. As shown in fig. 5, to improve the photographing experience, a queue 164 is added in the algorithm post-processing module 16 for storing snapshot tasks. After each RAW image (or converted YUV image) is transmitted to the application module 14, the application module 14 generates a snapshot task requiring post-photographing processing of that image and transmits the snapshot task to the algorithm post-processing module 16, which stores each received snapshot task in the queue 164. Now, after the image sensor 20 receives a control command and acquires a RAW image, the application module 14 may transmit the next control command to the image sensor 20 without waiting for the previous image's post-photographing processing to complete. As shown in fig. 5, since acquiring a RAW image takes less time than post-photographing processing, when the user clicks the shutter several times in succession, multiple snapshot tasks (hereinafter, snapshot tasks already stored in the queue 164 are referred to as current snapshot tasks) may accumulate in the queue 164. The current snapshot tasks are stored in the queue 164 in the order they were formed: the earliest (such as task1 in fig. 5) is stored at the head of the queue, and the latest at the tail.
The algorithm post-processing module 16 may process one current snapshot task at a time or multiple current snapshot tasks at a time. When processing one at a time, it handles the current snapshot tasks in the queue 164 sequentially: following the first-in-first-out principle of the queue 164, it takes one current snapshot task from the head of the queue for processing, and when that task is taken out, the adjacent current snapshot task moves to the head; for example, after task1 in fig. 5 is taken out, task2 is at the head of the queue. In this way, the algorithm post-processing module 16 processes the current snapshot tasks in the order they were formed. When processing multiple current snapshot tasks at once, the algorithm post-processing module 16 takes several current snapshot tasks out of the queue 164 at a time. For example, assuming the algorithm post-processing module 16 can process 3 current snapshot tasks at a time, it first takes 3 current snapshot tasks in order from the head of the queue 164 (such as task1, task2, and task3 in fig. 5) and then processes them. After these 3 tasks are taken out, the current snapshot task adjacent to the third one (task4 in fig. 5) is at the head of the queue.
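The batch take-out behavior described above can be sketched in a few lines. This is a minimal illustration of FIFO batching only; the `take_batch` helper, the task names, and the batch size of 3 are assumptions for illustration, not part of the patent.

```python
from collections import deque

# Snapshot-task queue (the patent's "queue 164"); task names are illustrative.
queue = deque(["task1", "task2", "task3", "task4", "task5"])

def take_batch(q, batch_size):
    """Take up to batch_size current snapshot tasks from the head (FIFO)."""
    batch = []
    while q and len(batch) < batch_size:
        batch.append(q.popleft())
    return batch

batch = take_batch(queue, 3)  # task1, task2, task3 leave the head in order
# task4 is now at the head of the queue, matching the fig. 5 description
```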
When processing each current snapshot task, the algorithm post-processing module 16 invokes all image processing algorithms related to that task to perform post-photographing processing on its RAW image (or converted YUV image). The image processing algorithms related to each current snapshot task are determined by user input, and there may be one or more of them. For example, if the user enables the beauty function when shooting with the photographing device 30, the image processing algorithms of the corresponding current snapshot task include the beauty processing algorithm; if the user enables the beauty, filter, and HDR functions, the algorithms of the corresponding current snapshot task include the beauty, filter, and HDR processing algorithms. Different current snapshot tasks may involve the same or different image processing algorithms. For example, if the user enables beauty for the first snapshot and filter and blurring for the second, the first task's algorithms include the beauty processing algorithm, while the second task's algorithms include the filter and blurring processing algorithms.
The algorithm post-processing module 16 occupies memory of the electronic device 100 when invoking an image processing algorithm to process the RAW image (or converted YUV image) of a current snapshot task. When the user clicks the shutter faster than current snapshot tasks can be processed, current snapshot tasks continuously accumulate in the queue 164, and the memory occupied by the algorithm post-processing module 16 for image processing grows. Continuous memory occupation by the photographing application can cause the electronic device 100 to stutter or force applications running in the background to close. In one example, the problem of large memory footprint can be avoided by limiting the number of snapshot images. It should be understood, however, that, first, different electronic devices 100 may be configured with different amounts of memory (memory here means the operating memory of the electronic device 100); if the number of snapshot images is limited in the same way on a device with more memory, memory is wasted and the user's snapshot experience suffers. Second, even for the same electronic device 100, the remaining memory at different times may differ, so directly limiting the number of snapshot images can also waste memory.
The image processor 10 of the embodiment of the present application can dynamically adjust the number of snapshot images. The algorithm post-processing module 16 may calculate the current required memory occupied by all current snapshot tasks in the queue 164 and the future required memory that a future snapshot task would occupy. The application module 14 then determines whether to execute the future snapshot task according to the current required memory, the future required memory, and the remaining memory of the electronic device 100.
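As a rough sketch of this decision: the patent states only the three inputs, so the specific comparison below, requiring the current plus future memory to fit within the remaining memory, is an assumed reading rather than the claimed formula, and the function name is hypothetical.

```python
def should_execute_future_task(current_required, future_required, remaining):
    """Return True if the future snapshot task may be executed.

    Assumed rule: the memory needed by all current snapshot tasks plus the
    future snapshot task must not exceed the device's remaining memory.
    """
    return current_required + future_required <= remaining

# With 200 MB remaining, a future task needing 50 MB on top of 120 MB of
# queued work would be allowed; one needing 150 MB would be rejected.
allowed = should_execute_future_task(120, 50, 200)
rejected = should_execute_future_task(120, 150, 200)
```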
Specifically, the algorithm post-processing module 16 pre-calculates the budget memory that each image processing algorithm needs to occupy when running, and stores the budget memory of each image processing algorithm. Illustratively, before the electronic device 100 leaves the factory, the manufacturer designs a simulation program; the algorithm post-processing module 16 simulates running each image processing algorithm with this program and records the memory occupied during each algorithm's run. In one example, the algorithm post-processing module 16 measures the memory occupied by each image processing algorithm during a single run and uses it as that algorithm's budget memory. For example, for the beauty processing algorithm, the algorithm post-processing module 16 simulates running it once, records the memory occupied, and uses that memory as the beauty processing algorithm's budget memory; for the watermark processing algorithm, it simulates running it once, records the memory occupied, and uses that as the watermark processing algorithm's budget memory, and so on. In another example, the algorithm post-processing module 16 measures the memory occupied during multiple runs of each image processing algorithm to obtain multiple memory values, and calculates the budget memory from those values.
For example, for the beauty processing algorithm, the algorithm post-processing module 16 may simulate running it multiple times and record the memory occupied in each run, obtaining multiple memory values for the beauty processing algorithm. The algorithm post-processing module 16 then calculates the beauty processing algorithm's budget memory from these values. In one example, it computes the average of the values and uses the average as the budget memory. In another example, it selects the largest value as the budget memory. In yet another example, it selects the median value as the budget memory. In the embodiment of the present application, for each image processing algorithm, the algorithm post-processing module 16 selects the largest of the recorded memory values as that algorithm's budget memory. It can be understood that when the same image processing algorithm runs at different times, the memory occupied may differ.
For example, for a multi-frame processing algorithm, the first run may only need to fuse 4 images while the second run needs to fuse 8, so the memory occupied in the second run is larger than in the first. The largest recorded value represents the peak memory occupation the image processing algorithm can reach. Using the largest value as the budget memory therefore ensures that when the algorithm post-processing module 16 actually invokes the image processing algorithm, the memory it occupies at run time will not exceed the budget memory.
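The peak-based budgeting just described can be summarized in a few lines. The recorded values (in MB) are made up for illustration; the patent expresses this symbolically, and the mean and median alternatives from the preceding paragraph are computed alongside for comparison.

```python
import statistics

# Memory (MB, illustrative) recorded over simulated runs of one algorithm,
# e.g. a multi-frame algorithm fusing 4 images in one run and 8 in another.
recorded = [120, 240, 180]

budget_max = max(recorded)                   # the embodiment's choice: peak usage
budget_mean = statistics.mean(recorded)      # alternative mentioned in the text
budget_median = statistics.median(recorded)  # another alternative
```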
After the post-algorithm processing module 16 stores the budget memories of the respective image processing algorithms, during snapshot capture the post-algorithm processing module 16 calculates the currently required memory occupied by all the current snapshot tasks in the queue 164 and the future required memory that a future snapshot task would occupy. The application module 14 then determines whether to execute the future snapshot task according to the currently required memory, the future required memory, and the remaining memory of the electronic device 100. Executing the future snapshot task means that the application module 14 continues to respond to user input by generating a control command to control the image sensor 20 to obtain a RAW image; the application module 14 receives the RAW image (or the converted YUV image) corresponding to the control command, generates the snapshot task, and transmits the snapshot task to the post-algorithm processing module 16, which receives the snapshot task and stores it in the queue 164.
For the calculation of the currently required memory, the post-algorithm processing module 16 obtains all image processing algorithms related to each snapshot task, calculates the memory required by each snapshot task according to those algorithms, and finally calculates the currently required memory from the memories required by the plurality of snapshot tasks. For example, assume that four current snapshot tasks are stored in the queue 164, namely task1, task2, task3, and task4. The image processing algorithm involved in task1 is the beauty processing algorithm (assuming its budget memory is M_beauty); the image processing algorithms involved in task2 are the beauty processing algorithm and the filter processing algorithm (assuming its budget memory is M_filter); the image processing algorithms involved in task3 and task4 are each the beauty processing algorithm, the filter processing algorithm, and the multi-frame processing algorithm (assuming its budget memory is M_multiframe). The post-algorithm processing module 16 first calculates the memory M1 occupied by all image processing algorithms involved in task1, the memory M2 for task2, the memory M3 for task3, and the memory M4 for task4, where M1 = M_beauty, M2 = M_beauty + M_filter, and M3 = M4 = M_beauty + M_filter + M_multiframe. Subsequently, the post-algorithm processing module 16 calculates, from M1, M2, M3, and M4, the currently required memory M_current occupied by all the current snapshot tasks in the queue 164, where M_current = M1 + M2 + M3 + M4.
Thus, according to the calculation method shown in this example, the algorithm post-processing module 16 can calculate the currently required memory required to be occupied by all the current snapshot tasks in the queue 164.
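The worked example above (summing M1 through M4 into M_current) can be sketched as follows. The budget values and all names are hypothetical, chosen only to mirror the example.

```python
# Hypothetical per-algorithm budget memories, in MB (assumed values):
BUDGET = {"beauty": 100, "filter": 50, "multiframe": 200}

def task_memory(algorithms):
    """Memory required by one snapshot task: sum of its algorithms' budgets."""
    return sum(BUDGET[a] for a in algorithms)

def current_required_memory(queue):
    """Currently required memory: sum over every task in the queue."""
    return sum(task_memory(algs) for algs in queue)

queue = [
    ["beauty"],                          # task1 -> M1
    ["beauty", "filter"],                # task2 -> M2
    ["beauty", "filter", "multiframe"],  # task3 -> M3
    ["beauty", "filter", "multiframe"],  # task4 -> M4
]
print(current_required_memory(queue))  # M1+M2+M3+M4 = 100+150+350+350 = 950
```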
For the calculation of the future required memory, the post-algorithm processing module 16 first obtains all image processing algorithms related to the future snapshot task and then calculates the future required memory according to those algorithms. However, since the future snapshot task has not been generated yet, the image processing algorithms it involves are unknown, so the post-algorithm processing module 16 first needs to determine which image processing algorithms the future snapshot task will involve. In one example, the post-algorithm processing module 16 may determine all image processing algorithms involved in the future snapshot task in a predictive manner. Specifically, the post-algorithm processing module 16 may first obtain a predetermined number of current snapshot tasks near the tail of the queue 164; subsequently, the post-algorithm processing module 16 determines all image processing algorithms related to the future snapshot task according to all image processing algorithms related to that predetermined number of current snapshot tasks. For example, suppose 4 current snapshot tasks are stored in the queue 164, namely task1, task2, task3, and task4, stored in order, with task1 at the head of the queue and task4 at the tail. The image processing algorithm involved in task1 is the beauty processing algorithm; the image processing algorithms involved in task2 are the beauty processing algorithm and the filter processing algorithm; and the image processing algorithms involved in task3 and task4 are each the beauty processing algorithm, the filter processing algorithm, and the multi-frame processing algorithm.
Then, the post-algorithm processing module 16 may first obtain the 2 current snapshot tasks nearest the tail of the queue 164, i.e., task3 and task4. Subsequently, the post-algorithm processing module 16 may determine all image processing algorithms involved in the future snapshot task based on all image processing algorithms involved in task3 and task4. In this example, the image processing algorithms involved in task3 and task4 are each the beauty processing algorithm (assuming its budget memory is M_beauty), the filter processing algorithm (assuming its budget memory is M_filter), and the multi-frame processing algorithm (assuming its budget memory is M_multiframe); the post-algorithm processing module 16 therefore considers that all image processing algorithms involved in the future snapshot task are also the beauty processing algorithm, the filter processing algorithm, and the multi-frame processing algorithm. After determining all image processing algorithms involved in the future snapshot task, the post-algorithm processing module 16 calculates the future required memory M_future that the future snapshot task needs to occupy: M_future = M_beauty + M_filter + M_multiframe. Thus, according to the calculation manner shown in this example, the post-algorithm processing module 16 can calculate the future required memory required to be occupied by the future snapshot task.
It should be noted that the predetermined number may be 1, 3, 4, 5, 6, 10, etc., in addition to the 2 shown in the above example, and is not limited herein. When the predetermined number is 1, the post-algorithm processing module 16 selects the current snapshot task at the tail of the queue and then determines all image processing algorithms related to the future snapshot task according to all image processing algorithms related to that task; when the predetermined number is greater than 1, the post-algorithm processing module 16 selects that number of consecutively stored current snapshot tasks from the tail of the queue and determines all image processing algorithms related to the future snapshot task accordingly. It can be understood that the closer a current snapshot task is to the tail of the queue, the more recently it was generated; estimating the image processing algorithms of the future snapshot task from those of the most recently generated snapshot tasks makes the estimate more accurate.
In addition, in the above example, the image processing algorithms involved in the two current snapshot tasks task3 and task4 are identical, so all image processing algorithms involved in either task3 or task4 can be regarded directly as all image processing algorithms involved in the future snapshot task. When different current snapshot tasks involve different image processing algorithms (for example, task3 involves the beauty processing algorithm, the filter processing algorithm, and the multi-frame processing algorithm, while task4 involves the beauty processing algorithm, the filter processing algorithm, a blurring processing algorithm, and a watermark processing algorithm), then: (1) the post-algorithm processing module 16 may take the intersection of all image processing algorithms involved in the selected current snapshot tasks as all image processing algorithms involved in the future snapshot task; taking task3 and task4 as an example, the intersection is the beauty processing algorithm and the filter processing algorithm, so the future snapshot task involves those two algorithms; or (2) the post-algorithm processing module 16 may take the union of all image processing algorithms involved in the selected current snapshot tasks as all image processing algorithms involved in the future snapshot task; taking task3 and task4 as an example, the union is the beauty processing algorithm, the filter processing algorithm, the multi-frame processing algorithm, the blurring processing algorithm, and the watermark processing algorithm, so the future snapshot task involves those five algorithms.
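The tail-of-queue prediction, including the intersection option (1) and the union option (2), might be sketched as follows. All names and the queue contents are illustrative assumptions.

```python
def predict_future_algorithms(queue, n=2, combine="union"):
    """Predict a future task's algorithms from the n tasks nearest the queue tail."""
    tail = queue[-n:]  # the n most recently generated current snapshot tasks
    sets = [set(algs) for algs in tail]
    if combine == "intersection":
        return set.intersection(*sets)  # option (1): algorithms common to all tail tasks
    return set.union(*sets)             # option (2): every algorithm seen in any tail task

queue = [
    ["beauty"],
    ["beauty", "filter"],
    ["beauty", "filter", "multiframe"],                # task3
    ["beauty", "filter", "blur", "watermark"],         # task4
]
print(sorted(predict_future_algorithms(queue, combine="intersection")))
# -> ['beauty', 'filter']
print(sorted(predict_future_algorithms(queue, combine="union")))
# -> ['beauty', 'blur', 'filter', 'multiframe', 'watermark']
```

The intersection favors a conservative (smaller) future-memory estimate, while the union favors a safe (larger) one; the description above leaves the choice open.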
After the currently required memory and the future required memory are determined, the application module 14 can determine whether to execute the future snapshot task according to the currently required memory, the future required memory, and the remaining memory of the electronic device 100. In one example, the application module 14 first calculates the ratio of the sum of the currently required memory and the future required memory to the remaining memory, and then determines whether that ratio is greater than a predetermined ratio; when the ratio is greater than the predetermined ratio, the application module 14 does not execute the future snapshot task, and when the ratio is less than the predetermined ratio, the application module 14 executes the future snapshot task. Specifically, assume that the currently required memory is M_current, the future required memory is M_future, and the remaining memory is M_remaining; the ratio is then V = (M_current + M_future) / M_remaining. The application module 14 then determines whether the ratio V is greater than a predetermined ratio V0. When the ratio V is greater than the predetermined ratio V0, executing the future snapshot task would occupy too much memory, which may cause the electronic device 100 to stall or force background applications to shut down; therefore, when the ratio V is greater than the predetermined ratio V0, the application module 14 does not execute the future snapshot task.
When the ratio V is less than or equal to the predetermined ratio V0, the electronic device 100 still has enough memory available for the image processor 30, and even if the application module 14 executes the future snapshot task, the problem of excessive memory occupation will not occur; therefore, when the ratio V is less than or equal to the predetermined ratio V0, the application module 14 can execute the future snapshot task. The predetermined ratio V0 may take a value in the range [0.5, 0.9]; for example, V0 may be 0.5, 0.55, 0.6, 0.64, 0.72, 0.8, 0.87, or 0.9.
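The decision rule comparing V = (M_current + M_future) / M_remaining against V0 can be expressed as a small helper. The function name is an assumption, and the 0.8 threshold is one of the example V0 values listed above.

```python
def should_execute_future_task(m_current, m_future, m_remaining, v0=0.8):
    """Execute the future snapshot task only while the memory ratio stays within v0."""
    v = (m_current + m_future) / m_remaining
    return v <= v0

# Using the memory figures (in MB) from the earlier hypothetical examples:
print(should_execute_future_task(950, 350, 2000))  # V = 0.65 -> True
print(should_execute_future_task(950, 350, 1500))  # V ~= 0.87 -> False
```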
Except for the first control instruction, which the application module 14 generates according to the user's first input, the application module 14 performs the operation of determining whether to execute a future snapshot task every time a user input is received (i.e., the second input and every input thereafter). In this way, the memory occupation of the electronic device 100 can be monitored in real time, and the number of snapshot images can be dynamically adjusted according to that occupation.
In summary, the electronic device 100 and the image processor 10 of the present application can determine whether to execute the future snapshot task according to the current required memory required to be occupied by the current snapshot task, the future required memory required to be occupied by the future snapshot task, and the remaining memory of the electronic device 100, so that the number of the snapshot images can be dynamically and flexibly adjusted according to the memory occupation condition of the electronic device 100, and thus, not only can the problem that the electronic device 100 is stuck in operation due to excessive memory occupation be avoided, but also the problem that the memory of the electronic device 100 is wasted can be avoided, and meanwhile, the shooting experience of the user can be improved.
In addition, the budget memory of each image processing algorithm is determined according to the memory occupation of the image processing algorithm in the state of reaching the highest peak. In the process of actually executing the snapshot task, whether the state that the image processing algorithms related to the snapshot task reach the peak or not during running is unknown, and then the memory occupation of each image processing algorithm in the state that the image processing algorithm reaches the peak is used as the budget memory of the image processing algorithm, so that the problem that the memory occupation of the electronic device 100 is too much during the execution of the snapshot task can be avoided to the greatest extent, and the smooth running of the electronic device 100 is ensured.
Moreover, since the post-photographing processing of the RAW image and/or the YUV image can be realized by the algorithm post-processing module 16, the process truncation is not required on the algorithm architecture of the hardware abstraction module 12 itself, and only the external compatibility is required, so that the design difficulty is reduced. And because the post-photographing processing is realized by the algorithm post-processing module 16, the function of the algorithm post-processing module 16 is more single and more focused, thereby achieving the effects of fast transplantation, simple expansion of new image processing algorithms and the like.
Referring to fig. 3, in some embodiments, the algorithm post-processing module 16 may further include an encoding unit 162, and the encoding unit 162 is configured to convert the YUV image into a JPG image (or a JPEG image, etc.). Specifically, when the YUV image is processed by the post-algorithm processing module 16, the encoding unit 162 may directly encode the YUV image to form a JPG image, thereby increasing the output speed of the image. When the RAW image is processed by the post-algorithm processing module 16, the post-algorithm processing module 16 may transmit the RAW image processed to realize post-photographing processing back to the hardware abstraction module 12 through the application module 14, for example, back to the RAW to RGB processing unit 124, the RAW to RGB processing unit 124 may be configured to convert the RAW image processed by the post-algorithm processing module 16 to realize post-photographing processing and transmitted back through the application module 14 into an RGB image, the noise reduction and YUV post-processing unit 126 may convert the RGB image into a YUV image, and the YUV image may be transmitted to the encoding unit 162 of the post-algorithm processing module 16 again to convert the YUV image into a JPG image. In some embodiments, the algorithm post-processing module 16 may also transmit the RAW image processed to implement the post-photographing processing back to the buffer unit 122 through the application module 14, and the transmitted RAW image passes through the RAW to RGB processing unit 124 and the noise reduction and YUV post-processing unit 126 to form a YUV image, and then is transmitted to the encoding unit 162 to form the JPG image. After the JPG image is formed, an algorithmic post-processing module 16 may be used to transfer the JPG image to memory for storage.
Referring to fig. 3 and 6, the present application further provides an image processing method. The image processing method may be implemented by the image processor 10. The image processing method comprises:
011: the algorithm post-processing module 16 calculates the current required memory occupied by all the current snapshot tasks in the queue;
012: the algorithm post-processing module 16 calculates the future required memory required to be occupied by the future snapshot task; and
013: the application module 14 determines whether to execute the future snapshot task according to the currently required memory, the future required memory, and the remaining memory of the electronic device 100.
The execution process of step 011, step 012 and step 013 is the same as the process of executing the snapshot function by the image processor 10, and is not described herein again.
Referring to fig. 3 and 7, in one example, the image processing method includes:
021: the algorithm post-processing module 16 calculates a budget memory required to be occupied by each image processing algorithm during operation;
022: the algorithm post-processing module 16 calculates the current required memory occupied by all the current snapshot tasks in the queue;
023: the algorithm post-processing module 16 calculates the future required memory required to be occupied by the future snapshot task; and
024: the application module 14 determines whether to execute the future snapshot task according to the currently required memory, the future required memory, and the remaining memory of the electronic device 100.
The execution processes of step 021, step 022, step 023 and step 024 are the same as the process of executing the snapshot function by the image processor 10, and are not described herein again.
Referring to fig. 3 and 8, in another example, the image processing method includes:
031: the algorithm post-processing module 16 calculates the memory occupied by each image processing algorithm during operation for multiple times to obtain multiple memories;
032: the algorithm post-processing module 16 calculates a budget memory required to be occupied by each image processing algorithm during operation according to the plurality of memories;
033: the algorithm post-processing module 16 calculates the current required memory occupied by all the current snapshot tasks in the queue;
034: the algorithm post-processing module 16 calculates the future required memory required to be occupied by the future snapshot task; and
035: the application module 14 determines whether to execute the future snapshot task according to the currently required memory, the future required memory, and the remaining memory of the electronic device 100.
The execution processes of step 031, step 032, step 033, step 034, and step 035 are the same as the process of executing the snapshot function by the image processor 10, and are not described herein again.
Referring to fig. 3 and 9, in yet another example, the image processing method includes:
041: the algorithm post-processing module 16 acquires all image processing algorithms related to each snapshot task;
042: the algorithm post-processing module 16 calculates the internal memory occupied by each snapshot task according to all image processing algorithms related to each snapshot task;
043: the algorithm post-processing module 16 calculates the currently required memory according to the memories required to be occupied by the plurality of snapshot tasks;
044: the algorithm post-processing module 16 acquires all image processing algorithms related to a future snapshot task;
045: the algorithm post-processing module 16 calculates a future required memory according to all image processing algorithms related to a future snapshot task;
046: the application program module 14 calculates the ratio of the sum of the currently required memory and the future required memory to the remaining memory;
047: the application module 14 determines whether the ratio is greater than a predetermined ratio;
048: when the ratio is greater than the predetermined ratio, the application module 14 does not perform the future snapshot task;
049: when the ratio is less than the predetermined ratio, the application module 14 performs the future snapshot task.
The execution processes of step 041, step 042, step 043, step 044, step 045, step 046, step 047, step 048, and step 049 are the same as the process of executing the snapshot function by the image processor 10, and are not described herein again.
Referring to fig. 3 and 10, in yet another example, the image processing method includes:
051: the algorithm post-processing module 16 acquires all image processing algorithms related to each snapshot task;
052: the algorithm post-processing module 16 calculates the internal memory occupied by each snapshot task according to all image processing algorithms related to each snapshot task;
053: the algorithm post-processing module 16 calculates the currently required memory according to the memories required to be occupied by the plurality of snapshot tasks;
054: the algorithm post-processing module 16 acquires a preset number of current snapshot tasks from the queue, wherein the preset number of current snapshot tasks are close to the tail of the queue;
055: the algorithm post-processing module 16 determines all image processing algorithms related to a future snapshot task according to all image processing algorithms related to a preset number of current snapshot tasks;
056: the algorithm post-processing module 16 calculates a future required memory according to all image processing algorithms related to a future snapshot task;
057: the application program module 14 calculates the ratio of the sum of the currently required memory and the future required memory to the remaining memory;
058: the application module 14 determines whether the ratio is greater than a predetermined ratio;
059: when the ratio is greater than the predetermined ratio, the application module 14 does not execute the future snapshot task;
060: when the ratio is less than the predetermined ratio, the application module 14 performs the future snapshot task.
The execution processes of step 051, step 052, step 053, step 054, step 055, step 056, step 057, step 058, step 059 and step 060 are the same as the process of the image processor 10 executing the snapshot function, and are not described herein again.
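As a summary, steps 051 through 060 can be chained into one end-to-end sketch, reusing the hypothetical budget values and names from the earlier examples. This is only an illustration of the flow under stated assumptions, not the patented implementation.

```python
# Hypothetical per-algorithm budget memories, in MB (assumed values):
BUDGET = {"beauty": 100, "filter": 50, "multiframe": 200}

def decide_future_snapshot(queue, m_remaining, n=2, v0=0.8):
    # Steps 051-053: currently required memory of all tasks in the queue.
    m_current = sum(sum(BUDGET[a] for a in task) for task in queue)
    # Steps 054-055: predict the future task's algorithms from the n tail tasks
    # (union variant shown; intersection is the other option described above).
    predicted = set().union(*(set(task) for task in queue[-n:]))
    # Step 056: future required memory from the predicted algorithms.
    m_future = sum(BUDGET[a] for a in predicted)
    # Steps 057-060: execute only if the ratio does not exceed v0.
    return (m_current + m_future) / m_remaining <= v0

queue = [["beauty"], ["beauty", "filter"],
         ["beauty", "filter", "multiframe"],
         ["beauty", "filter", "multiframe"]]
print(decide_future_snapshot(queue, m_remaining=2000))  # V = 0.65 -> True
print(decide_future_snapshot(queue, m_remaining=1500))  # V ~= 0.87 -> False
```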
In the description herein, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (14)

1. An image processing method, comprising:
the algorithm post-processing module calculates the current required memory occupied by all current snapshot tasks in the queue;
the algorithm post-processing module calculates future required memory required to be occupied by a future snapshot task according to the current snapshot task in the queue; and
the application program module judges whether to execute the future snapshot task according to the current required memory, the future required memory and the residual memory of the electronic equipment;
wherein, the calculation of the current required memory occupied by all the current snapshot tasks in the queue by the algorithm post-processing module comprises the following steps:
the algorithm post-processing module acquires all image processing algorithms related to each snapshot task;
the algorithm post-processing module calculates the memory occupied by each snapshot task according to all image processing algorithms related to each snapshot task; and
the algorithm post-processing module calculates the current required memory according to the memories required to be occupied by the plurality of snapshot tasks.
2. The image processing method of claim 1, wherein a plurality of image processing algorithms are stored in the algorithm post-processing module, the image processing method further comprising:
and the algorithm post-processing module calculates the budget memory required to be occupied by each image processing algorithm during operation.
3. The image processing method according to claim 2, wherein the calculating a budget memory required to be occupied by each image processing algorithm in operation by the algorithm post-processing module comprises:
the algorithm post-processing module calculates the memory occupied by each image processing algorithm during operation for multiple times to obtain multiple memories; and
the algorithm post-processing module calculates the budget memory required to be occupied by each image processing algorithm during operation according to the plurality of memories.
4. The image processing method according to claim 1, wherein the algorithm post-processing module calculates future required memory required to be occupied by a future snapshot task, comprising:
the algorithm post-processing module acquires all image processing algorithms related to the future snapshot task; and
the algorithm post-processing module calculates the future required memory according to all image processing algorithms related to the future snapshot task.
5. The image processing method according to claim 4, wherein the algorithm post-processing module obtains all image processing algorithms involved in the future snapshot task, and comprises:
the algorithm post-processing module acquires a preset number of current snapshot tasks from the queue, wherein the preset number of current snapshot tasks are close to the tail of the queue; and
the algorithm post-processing module determines all image processing algorithms related to the future snapshot task according to all image processing algorithms related to the predetermined number of current snapshot tasks.
6. The image processing method according to claim 1, wherein the determining, by the application module, whether to execute the future snapshot task according to the currently required memory, the future required memory, and a remaining memory of the electronic device includes:
the application program module calculates the ratio of the sum of the current required memory and the future required memory to the residual memory;
the application program module judges whether the ratio is larger than a preset ratio or not;
when the ratio is greater than the predetermined ratio, the application program module does not execute the future snapshot task;
when the ratio is less than the predetermined ratio, the application program module executes the future snapshot task.
7. An image processor, characterized in that the image processor comprises:
an algorithm post-processing module to:
calculating the current required memory occupied by all current snapshot tasks in the queue; and
calculating future required memory required to be occupied by a future snapshot task according to the current snapshot task in the queue; and
the application program module is used for judging whether to execute the future snapshot task according to the current required memory, the future required memory and the residual memory of the electronic equipment;
the algorithm post-processing module is further configured to:
acquiring all image processing algorithms related to each snapshot task;
calculating the memory occupied by each snapshot task according to all image processing algorithms related to each snapshot task; and
calculating the current required memory according to the memories required to be occupied by the plurality of snapshot tasks.
8. The image processor of claim 7, wherein a plurality of image processing algorithms are stored in the algorithm post-processing module, and wherein the algorithm post-processing module is further configured to calculate a budget memory required to be occupied by each of the image processing algorithms during operation.
9. The image processor of claim 8, wherein the post-algorithm processing module is further configured to:
calculating the memory occupied by each image processing algorithm during running for multiple times to obtain multiple memories; and
calculating the budget memory required to be occupied by each image processing algorithm during operation according to the plurality of memories.
10. The image processor of claim 7, wherein the post-algorithm processing module is further configured to:
acquiring all image processing algorithms related to the future snapshot task; and
calculating the future required memory required to be occupied by the future snapshot task according to all image processing algorithms related to the future snapshot task.
11. The image processor of claim 10, wherein the post-algorithm processing module is further configured to:
obtaining a preset number of current snapshot tasks from the queue, wherein the preset number of current snapshot tasks are close to the tail of the queue; and
determining all image processing algorithms related to the future snapshot task according to all image processing algorithms related to the predetermined number of current snapshot tasks.
12. The image processor of claim 7, wherein the application module is further configured to:
calculate the ratio of the sum of the current required memory and the future required memory to the remaining memory;
determine whether the ratio is greater than a predetermined ratio;
not execute the future snapshot task when the ratio is greater than the predetermined ratio; and
execute the future snapshot task when the ratio is less than the predetermined ratio.
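The decision in claim 12 reduces to a single ratio test; the sketch below uses an illustrative predetermined ratio of 0.8 (the patent does not fix a value, nor the behavior when the ratio is exactly equal to it):

```python
def should_execute_future_task(current_mem, future_mem, remaining_mem,
                               predetermined_ratio=0.8):
    """Execute the future snapshot task only if (current + future) memory
    stays under the predetermined fraction of the remaining memory."""
    ratio = (current_mem + future_mem) / remaining_mem
    return ratio < predetermined_ratio

print(should_execute_future_task(200, 100, 500))  # True  (ratio 0.6)
print(should_execute_future_task(400, 100, 500))  # False (ratio 1.0)
```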
13. A photographing apparatus, comprising:
the image processor of any one of claims 7-12; and
an image sensor connected with the image processor.
14. An electronic device, comprising:
the photographing apparatus of claim 13; and
a housing, the photographing apparatus being combined with the housing.
CN201910789931.7A 2019-08-26 2019-08-26 Image processing method, image processor, photographing device and electronic equipment Active CN110418061B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910789931.7A CN110418061B (en) 2019-08-26 2019-08-26 Image processing method, image processor, photographing device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910789931.7A CN110418061B (en) 2019-08-26 2019-08-26 Image processing method, image processor, photographing device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110418061A CN110418061A (en) 2019-11-05
CN110418061B true CN110418061B (en) 2021-04-23

Family

ID=68369010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910789931.7A Active CN110418061B (en) 2019-08-26 2019-08-26 Image processing method, image processor, photographing device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110418061B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696025B (en) * 2020-06-11 2023-03-24 西安电子科技大学 Image processing device and method based on reconfigurable memory computing technology
CN111862617B (en) * 2020-06-12 2022-06-03 浙江大华技术股份有限公司 License plate recognition method, device and system and computer equipment
CN114612287A (en) * 2022-03-18 2022-06-10 北京小米移动软件有限公司 Image processing method, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6084633A (en) * 1991-12-16 2000-07-04 Fuji Photo Film Co., Ltd. Digital electronic still-video camera, and method of controlling same
CN104735359A (en) * 2015-04-09 2015-06-24 广东欧珀移动通信有限公司 Mobile terminal camera running method and device
CN106815078A (en) * 2016-12-30 2017-06-09 广东欧珀移动通信有限公司 A kind of internal memory control method and equipment
CN106937052A (en) * 2017-03-29 2017-07-07 维沃移动通信有限公司 The processing method and mobile terminal of a kind of view data
JP2018011120A (en) * 2016-07-11 2018-01-18 キヤノン株式会社 Imaging apparatus, control method for imaging apparatus, and program
CN108206913A (en) * 2017-07-17 2018-06-26 北京市商汤科技开发有限公司 A kind of image-pickup method, device, embedded system and storage medium
CN108289172A (en) * 2018-01-20 2018-07-17 深圳天珑无线科技有限公司 Adjust the method, device and mobile terminal of shooting correlation function

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8798386B2 (en) * 2010-04-22 2014-08-05 Broadcom Corporation Method and system for processing image data on a per tile basis in an image sensor pipeline
GB2515573A (en) * 2013-06-28 2014-12-31 Univ Manchester Data processing system and method
CN105005485B (en) * 2014-04-15 2018-07-20 联想移动通信科技有限公司 A kind of method, apparatus and terminal that limitation application memory occupies
CN108282696A (en) * 2018-02-07 2018-07-13 北京易讯理想科技有限公司 A kind of hardware resource distribution method that sequence frame image plays


Also Published As

Publication number Publication date
CN110418061A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN113592887B (en) Video shooting method, electronic device and computer-readable storage medium
CN110086967B (en) Image processing method, image processor, photographing device and electronic equipment
CN109922322B (en) Photographing method, image processor, photographing device and electronic equipment
EP3723359A1 (en) Image processing apparatus, method for image processing, and electronic device
CN110418061B (en) Image processing method, image processor, photographing device and electronic equipment
WO2020057198A1 (en) Image processing method and device, electronic device and storage medium
CN110996012B (en) Continuous shooting processing method, image processor, shooting device and electronic equipment
CN110177214B (en) Image processor, image processing method, photographing device and electronic equipment
CN109005366A (en) Camera module night scene image pickup processing method, device, electronic equipment and storage medium
CN110290288B (en) Image processor, image processing method, photographing device, and electronic apparatus
WO2020207192A1 (en) Image processor, image processing method, photography apparatus, and electronic device
CN110300240B (en) Image processor, image processing method, photographing device and electronic equipment
CN109729274B (en) Image processing method, image processing device, electronic equipment and storage medium
WO2020259250A1 (en) Image processing method, image processor, photographing apparatus, and electronic device
CN107615745B (en) Photographing method and terminal
CN115633262B (en) Image processing method and electronic device
CN111147695B (en) Image processing method, image processor, shooting device and electronic equipment
CN111314606B (en) Photographing method and device, electronic equipment and storage medium
CN110401800B (en) Image processing method, image processor, photographing device and electronic equipment
CN111193867B (en) Image processing method, image processor, photographing device and electronic equipment
CN111510629A (en) Data display method, image processor, photographing device and electronic equipment
CN102082909B (en) Digital photographing apparatus and control the method for this digital photographing apparatus
CN116723383B (en) Shooting method and related equipment
CN110602359B (en) Image processing method, image processor, photographing device and electronic equipment
CN113994660B (en) Intelligent flash intensity control system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant