CN113132637A - Image processing method, image processing chip, application processing chip and electronic equipment - Google Patents


Info

Publication number
CN113132637A
CN113132637A (application number CN202110418292.0A)
Authority
CN
China
Prior art keywords
image
frame
data
processing chip
cached
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110418292.0A
Other languages
Chinese (zh)
Other versions
CN113132637B (en)
Inventor
朱文波
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110418292.0A
Publication of CN113132637A
Application granted
Publication of CN113132637B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/765 Interface circuits between an apparatus for recording and another apparatus
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to an image processing method, an image processing chip, an application processing chip, and an electronic device. The image processing method comprises the following steps: acquiring continuous frame images, and performing variation parameter analysis on the continuous frame images to divide the current frame image into a plurality of image areas; selectively caching each image area of the current frame image to obtain cached image data; and, when a photographing request is received, sending the cached image data to an application processing chip. On the basis of guaranteeing the photographing effect, this reduces the amount of cached data and the pressure of data transmission, thereby reducing system resource occupation and improving image transmission processing efficiency.

Description

Image processing method, image processing chip, application processing chip and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing chip, an application processing chip, and an electronic device.
Background
Current photographing schemes fall into two main categories: the traditional (non-ZSL) scheme and ZSL (Zero Shutter Lag) photographing. The two schemes share a common point: image data is stored as a multi-frame cache of whole-frame data. When a photographing request arrives, the cached data frames are transmitted to the back end, which performs frame selection to pick a suitable image frame for subsequent image processing and generates the final photographed image. However, because whole frames of data are stored and transmitted, a large amount of system resources is occupied and the efficiency of image transmission processing suffers.
Disclosure of Invention
Therefore, it is necessary to provide an image processing method, an image processing chip, an application processing chip, and an electronic device that can reduce the amount of cached data and the pressure of data transmission while guaranteeing the photographing effect, thereby reducing system resource occupation and improving image transmission processing efficiency.
An image processing method applied to an image processing chip comprises the following steps:
acquiring continuous frame images, and performing variation parameter analysis on the continuous frame images to divide the current frame image into a plurality of image areas;
selectively caching each image area of the current frame image to obtain cached image data;
and when receiving a photographing request, sending the cached image data to an application processing chip.
An image processing chip comprising:
a memory;
the neural network processor is used for acquiring continuous frame images, performing variation parameter analysis on the continuous frame images to divide the current frame image into a plurality of image areas, and controlling the memory to selectively cache each image area of the current frame image to obtain cached image data;
the first data transmission interface is used for transmitting the cached image data;
the memory is used for storing the cache image data and sending the cache image data to the application processing chip through the first data transmission interface when receiving the photographing request.
An image processing method applied to an application processing chip comprises the following steps:
receiving cached image data, wherein the cached image data is obtained by the image processing chip performing variation parameter analysis on the acquired continuous frame images to divide the current frame image into a plurality of image areas and selectively caching each image area of the current frame image;
when a photographing request is received, performing frame selection processing on the cached image data to obtain a selected data frame;
and performing fusion splicing and image post-processing according to the selected data frame to obtain a photographed image.
An application processing chip comprising:
the second data transmission interface is used for receiving the cached image data, wherein the cached image data is obtained by the image processing chip performing variation parameter analysis on the acquired continuous frame images to divide the current frame image into a plurality of image areas and selectively caching each image area of the current frame image;
and the central processing unit is used for carrying out frame selection processing on the cached image data when receiving the photographing request to obtain a selected data frame, and carrying out fusion splicing and image post-processing according to the selected data frame to obtain a photographed image.
An electronic device, comprising:
an image processing chip as described above; and/or
The application processing chip as described above.
According to the image processing method, the image processing chip, the application processing chip, and the electronic device, the image processing chip acquires continuous frame images, performs variation parameter analysis on them to divide the current frame image into a plurality of image areas, selectively caches each image area of the current frame image to obtain cached image data, and sends the cached image data to the application processing chip when a photographing request is received. This reduces the amount of data cached and transmitted, which in turn reduces system resource occupation and improves image transmission processing efficiency; and because the image areas are obtained through variation parameter analysis of the continuous frame images, the photographing effect is guaranteed.
Drawings
FIG. 1 is a flow chart of an image processing method applied to an image processing chip according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of a plurality of image regions according to one embodiment of the invention;
FIG. 3 is a diagram illustrating an image processing chip according to an embodiment of the present invention;
FIG. 4 is a flow chart of an image processing method applied to an application processing chip according to one embodiment of the present invention;
FIG. 5a is a schematic diagram of fusion splicing and post-processing of selected data frames according to an embodiment of the present invention;
FIG. 5b is a schematic diagram of fusion splicing and post-processing of selected data frames according to another embodiment of the present invention;
FIG. 5c is a schematic diagram of fusion splicing and post-processing of selected data frames according to yet another embodiment of the present invention;
FIG. 6 is a diagram illustrating an architecture of an application processing chip according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the invention;
FIG. 8 is a flow chart of a method of taking a picture according to one embodiment of the present invention;
FIG. 9a is a diagram illustrating selective buffering of each image region of a current frame image according to an embodiment of the present invention;
FIG. 9b is a diagram illustrating selective buffering of each image region of a current frame image according to another embodiment of the present invention;
FIG. 9c is a diagram illustrating selective buffering of each image region of a current frame image according to another embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an electronic device according to another embodiment of the invention;
fig. 11 is a schematic structural diagram of a photographing apparatus according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that ZSL refers to zero-shutter-lag photographing, which is mainly used to eliminate the shutter delay of the conventional photographing manner, so that the user obtains an image captured at the moment of pressing the shutter. In a typical current ZSL scheme, after the camera starts previewing, the camera sensor begins to output data frames; the latest several frames (e.g., 5 frames, 8 frames, etc.) are kept as snapshot frame data in a buffer queue of a buffer area such as DDR (Double Data Rate synchronous dynamic random access memory). When a photographing request is received, the buffered data frames are transmitted to the back end, which performs frame selection to pick a suitable image frame for subsequent image processing and generates the final photographed image. However, because the current ZSL scheme obtains the photographed image frame by processing whole images, it cannot reduce the cached data amount and data transmission pressure while guaranteeing the photographing effect.
Fig. 1 is a flowchart of an image processing method applied to an image processing chip according to an embodiment of the present invention, and referring to fig. 1, the image processing method applied to the image processing chip may include:
step S101, acquiring a continuous frame image, and performing a variation parameter analysis on the continuous frame image to divide the current frame image into a plurality of image regions.
Specifically, the image processing chip may acquire the continuous frame images output by the camera sensor and perform variation parameter analysis on them to divide the current frame image into a plurality of image regions. That is, the variation parameters of the image content of successive frames are statistically analyzed to distinguish and segment a plurality of image regions having different variation parameters.
Optionally, as shown in fig. 2, the plurality of image regions include a high-variation region, a micro-variation region, and a background region. The high-variation region is a region whose content changes with high frequency, such as a human face; the micro-variation region is a region whose content changes, but with low frequency, such as other parts of a human body; the background region is a region whose content does not change, or changes only rarely, such as a building.
In one embodiment, performing variation parameter analysis on the continuous frame images to segment the current frame image into a plurality of image regions includes: comparing and analyzing the current frame image against the cached image frames to obtain image regions with different variation parameters; and segmenting the current frame image according to these image regions to obtain the plurality of image regions.
It should be noted that the first several image frames obtained by the image processing chip may be stored as whole frames to serve as the cached image frames; each subsequent new frame is then compared against them, analyzed, and segmented. For example, the first 5 image frames are cached whole, and the 6th and later frames are compared, analyzed, and segmented.
Specifically, after the camera starts previewing, the camera sensor outputs image frames, which may be RAW data frames. The image processing chip receives the image frames and performs a caching operation; the number of cached frames depends on a preset number, such as 3, 5, or 7 frames. When the camera sensor outputs a new image frame, the image processing chip compares the current frame image against the cached image frames to distinguish a plurality of image regions with different variation parameters in the current frame image, such as a high-variation region, a micro-variation region, and a background region, and performs image segmentation on the current frame image based on these regions.
More specifically, after the camera starts to preview, the camera sensor outputs an image frame, the image processing chip receives the image frame, and performs whole frame buffering on the first 5 frames, and keeps buffering of the whole 5 frames all the time. When the camera sensor outputs a 6 th frame, the image processing chip compares and analyzes the 6 th frame image with the previous 5 frames of images which are cached to distinguish different variation areas in the 6 th frame image, and carries out image segmentation processing on the 6 th frame image based on the variation areas; when the camera sensor outputs a 7 th frame, the image processing chip compares and analyzes the 7 th frame image with the previous 5 frames of images which are cached to distinguish different variation areas in the 7 th frame image, and carries out image segmentation processing on the 7 th frame image based on the variation areas; and performing image segmentation processing on each subsequent frame of image in sequence.
Further, when the current frame image is compared with the cached image frames to distinguish a plurality of image regions with different variation parameters, a reference feature point (or a smaller image area) is first selected for a preliminary comparison and judgment, and the change of the edge of the object where the reference feature point is located is then used for further identification and judgment (edge identification can be realized based on algorithms such as gray-value analysis, according to the prior art, and is not detailed here). Point-level results thus extend to lines, and lines to the whole image, providing a basis for judging and segmenting the regions of the whole image. In this way, exploiting the image processing chip's early access to the image data and its high computation speed, the acquired image data is identified based on feature points, the variation trend and speed of different regions in the whole image are analyzed, and the image is segmented into the plurality of image regions of the current frame image. Of course, whether a change occurs and its degree may also be determined from the change of a characteristic object, followed by the corresponding image segmentation; this is not limited here.
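As a hedged illustration of this region classification (not the patent's feature-point-and-edge algorithm): one can score how often each block of the current frame differs from its counterpart in the cached frames, then bucket blocks into the three region types. The block representation, the mean-absolute-difference metric, and all thresholds below are assumptions of the sketch:

```python
# Illustrative sketch only: classify a block of the current frame by how
# often its content differs from the cached frames. The patent instead
# uses reference feature points plus edge changes; the metric and the
# thresholds here are assumptions.

def block_change_rate(current_block, cached_blocks, diff_threshold=10):
    """Fraction of cached frames whose copy of this block differs noticeably."""
    changed = 0
    for past_block in cached_blocks:
        mean_abs_diff = sum(abs(a - b) for a, b in zip(current_block, past_block))
        mean_abs_diff /= len(current_block)
        if mean_abs_diff > diff_threshold:
            changed += 1
    return changed / len(cached_blocks)

def classify_block(change_rate, high_threshold=0.6, micro_threshold=0.2):
    """Map a change rate to one of the three region types from the text."""
    if change_rate >= high_threshold:
        return "high_variation"
    if change_rate >= micro_threshold:
        return "micro_variation"
    return "background"
```

A real implementation would run this per block over the whole frame and merge adjacent blocks of the same class into regions.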
Step S102, selectively caching each image area of the current frame image to obtain cached image data.
Specifically, after obtaining a plurality of image regions of the current frame image by image segmentation, each image region of the current frame image may be selectively cached to obtain cached image data, which serves as the snapshot frame data. That is, in the present application, not all frames are stored whole; most or all of them are stored selectively, so the amount of stored data is effectively reduced and resource occupation is lowered.
It should be noted that, during selective caching, each newly arriving image frame replaces the earliest cached frame, ensuring that the buffer area always holds only a fixed number of image frames (e.g., 5 frames, 7 frames, etc.) rather than growing continuously as new frames arrive.
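The fixed-size replacement behavior described above maps naturally onto a bounded queue; a minimal sketch, with frame contents reduced to integers for brevity:

```python
from collections import deque

# A deque with maxlen behaves like the cache queue described above:
# once full, appending a new frame silently evicts the oldest one,
# so the buffer always holds at most the 5 most recent frames.
frame_buffer = deque(maxlen=5)
for frame_id in range(8):          # frames 0..7 arrive in order
    frame_buffer.append(frame_id)

print(list(frame_buffer))          # the 5 most recent frames: [3, 4, 5, 6, 7]
```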
As an example, each image area of the current frame image is selectively buffered by the image processing chip. Further, selectively caching each image region of the current frame image, including: and caching each frame of the high-variation area, caching each first preset number of frames of the micro-variation area, and caching each second preset number of frames of the background area, wherein the first preset number of frames is less than the second preset number of frames.
Specifically, when obtaining a plurality of image regions of the current frame image, the image processing chip may selectively buffer each image region, for example, buffer each frame of a high variation region; caching the micro-variation area every other first preset number of frames; and buffering the background area every second preset number of frames (or buffering when the change occurs). That is, in the region with higher change frequency, the amount of buffered data is larger, and conversely, the amount of buffered data is smaller, so that the amount of data to be stored is reduced on the basis of ensuring the data quality.
For example, assume the DDR cache queue of the current image processing chip can cache 8 frames, the first preset number is 3 frames, and the second preset number is 5 frames. Then the high-variation region of every one of the 8 frames is cached in the DDR cache queue, the micro-variation regions of frames 1 and 5 are cached, and the background regions of frames 1 and 7 are cached, thereby reducing the cached data amount.
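The caching schedule in this example can be sketched as follows. Reading "every first/second preset number of frames" as "cache once, then skip that many frames" reproduces the frame numbers given above (micro-variation at frames 1 and 5, background at frames 1 and 7); that reading, the region names, and the 1-based numbering are assumptions of the sketch:

```python
def regions_to_cache(frame_number, micro_skip=3, background_skip=5):
    """Regions of (1-based) frame `frame_number` that are written to the cache.

    The high-variation region is cached for every frame; the micro-variation
    region is cached and then skipped for `micro_skip` frames; the background
    region is cached and then skipped for `background_skip` frames.
    """
    cached = ["high_variation"]
    if (frame_number - 1) % (micro_skip + 1) == 0:        # frames 1, 5, 9, ...
        cached.append("micro_variation")
    if (frame_number - 1) % (background_skip + 1) == 0:   # frames 1, 7, 13, ...
        cached.append("background")
    return cached

schedule = {n: regions_to_cache(n) for n in range(1, 9)}  # the 8-frame example
```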
Therefore, in the application, the image processing chip selectively caches each image area of the current frame image to obtain the cached image data, so that the resource occupation of the image processing chip can be reduced.
It should be noted that the image-region segmentation criteria and the caching intervals for each image region (such as the first and second preset numbers of frames) may be adapted in real time to the shooting scene and to changes in system power consumption and performance. For example, when system power consumption is low and system performance is good, the frame update interval can be reduced accordingly (e.g., shortening the interval for the micro-variation region) and the number of segmented image regions increased, making the matching result more accurate. As another example, when transmission bandwidth is sufficient, the cached data amount of a single image frame can be increased, i.e., more image data is stored and less is segmented away and discarded. In addition, when dividing image regions, the segmentation granularity can be dynamically adjusted according to the characteristics of the target: for a small region whose content is relatively fixed even though the larger region containing it varies, the region granularity can be reduced accordingly, further reducing the cached data amount.
And step S103, when the photographing request is received, sending the cached image data to the application processing chip.
Specifically, when the user takes a picture through the camera APP, a photographing request is triggered. The image processing chip receives the photographing request (either directly or forwarded by the application processing chip) and, based on it, sends the cached image data to the application processing chip, which performs the corresponding processing to obtain the photographed image.
Furthermore, whenever the image data cached by the image processing chip is updated, the cached image data may be sent to the application processing chip, so that upon receiving a photographing request the application processing chip can immediately process the latest data, speeding up processing and reducing transmission pressure.
It should be noted that, in the present application, the image processing chip and the application processing chip may perform data transmission through MIPI (Mobile Industry Processor Interface) or PCIe (Peripheral Component Interconnect express), and the specific details are not limited herein.
In some embodiments, when the photographing request is received, the current frame image is further acquired, processed to obtain a preview frame image, and sent to the application processing chip.
That is to say, in the application, the application processing chip may directly obtain the photographed image based on the cached image data, or may correct the obtained photographed image based on the image frame corresponding to the actual photographing time after the photographed image is obtained, so as to obtain a more accurate image effect.
Specifically, the image processing chip acquires the image frame corresponding to the photographing time and, after segmentation, transmits both the pre-segmentation and post-segmentation versions to the application processing chip. After obtaining the photographed image from the cached image data, the application processing chip can then run one correction pass against the pre-segmentation frame to obtain a more accurate image effect. Of course, to reduce the data amount of the pre-segmentation frame transmitted to the application processing chip, the image processing chip may first resize it, i.e., change the image size before transmission, improving the photographed-image effect while adding as little data transmission pressure as possible.
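The Resize step can be sketched as a simple integer-factor downscale. Nearest-neighbour subsampling is an assumption here, since the patent does not specify the scaling algorithm:

```python
def downscale(frame, factor):
    """Nearest-neighbour downscale of a 2-D pixel grid by an integer factor."""
    return [row[::factor] for row in frame[::factor]]

# 8x8 synthetic frame; each pixel value encodes its (row, column) position.
full_frame = [[col + 10 * row for col in range(8)] for row in range(8)]
small_frame = downscale(full_frame, 2)    # 4x4 result, a quarter of the data
```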
In summary, in the image processing method applied to the image processing chip, continuous frame images are acquired and subjected to variation parameter analysis to divide the current frame image into a plurality of image areas; each image area of the current frame image is selectively cached to obtain cached image data; and the cached image data is sent to the application processing chip when a photographing request is received. This reduces the amount of data cached and transmitted, thereby reducing system resource occupation and improving image transmission processing efficiency, while the variation parameter analysis of the continuous frame images ensures the photographing effect.
Fig. 3 is a schematic structural diagram of an image processing chip according to an embodiment of the present invention, and referring to fig. 3, the image processing chip 100 includes: a neural network processor 110, a memory 120, and a first data transfer interface 130.
The neural network processor 110 is configured to obtain a continuous frame image, perform variable parameter analysis on the continuous frame image to divide a current frame image into a plurality of image regions, and control the memory 120 to selectively cache each image region of the current frame image to obtain cached image data; the first data transmission interface 130 is used for transmitting the buffered image data; the memory 120 is configured to store the buffered image data, and when receiving the photographing request, send the buffered image data to the application processing chip 200 through the first data transmission interface 130.
In some embodiments, the neural network processor 110 is specifically configured to compare and analyze the current frame image with the cached image frame, obtain image regions with different variation parameters, and segment the current frame image according to the image regions with different variation parameters, so as to obtain a plurality of image regions.
In some embodiments, the plurality of image regions include a high-variation region, a micro-variation region, and a background region, wherein the neural network processor 110 is specifically configured to control the memory 120 to cache the high-variation region every frame, cache the micro-variation region every first preset number of frames, and cache the background region every second preset number of frames, where the first preset number of frames is smaller than the second preset number of frames.
In some embodiments, the neural network processor 110 is further configured to, when receiving the photographing request, obtain a current frame image, and process the current frame image to obtain a preview frame image; the first data transmission interface 130 is also used to transmit the preview frame image to the application processing chip 200.
It should be noted that, for the description of the image processing chip in the present application, please refer to the description of the image processing method applied to the image processing chip in the present application, and details are not repeated here.
Fig. 4 is a flowchart of an image processing method applied to an application processing chip according to an embodiment of the present invention, and referring to fig. 4, the image processing method applied to the application processing chip includes:
step S401, receiving buffered image data, where the buffered image data is obtained by performing a parameter variation analysis on the obtained continuous frame image by the image processing chip to divide the current frame image into a plurality of image areas, and performing selective buffering on each image area of the current frame image.
Specifically, the image processing chip acquires the continuous frame images output by the camera sensor, performs variation parameter analysis on them to divide the current frame image into a plurality of image regions (including a high-variation region, a micro-variation region, and a background region), and selectively caches each image region of the current frame image to obtain cached image data (see the foregoing for the detailed process, not repeated here). The cached image data may then be sent to the application processing chip, which receives it for subsequent processing.
Step S402, when the photographing request is received, frame selection processing is carried out on the cache image data, and a selected data frame is obtained.
Specifically, when a user takes a picture through the camera APP, a shooting request is triggered, the application processing chip receives the shooting request, and frame selection processing is performed on the cached image data based on the shooting request, that is, a required data frame is obtained from the cached image data, so as to obtain a selected data frame.
Further, as an example, the frame selection processing is performed on the buffered image data, and includes: acquiring actual photographing time; and carrying out frame selection processing on the cached image data according to the actual photographing time.
Specifically, when receiving a photographing request, the application processing chip calculates the actual photographing time and selects the corresponding data frame from the cached image data according to that time; alternatively, the actual photographing time is transmitted to the image processing chip, which performs the selection. When selecting, the data frame whose timestamp equals or is closest to the actual photographing time is taken as the final selected data frame. Because the high-variation region carries the largest share of an image frame's data, the final selected data frame is a high-variation region; that is, the image data of the high-variation region of the frame equal or closest in time to the actual photographing time is used as the selected data frame.
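The time-based selection reduces to a nearest-timestamp lookup over the cache; the `(timestamp, frame_data)` layout below is an assumption of the sketch:

```python
def select_frame(cached_frames, shot_time):
    """Return the cached entry whose timestamp equals or is closest to shot_time."""
    return min(cached_frames, key=lambda entry: abs(entry[0] - shot_time))

# Timestamps in milliseconds; frame payloads abbreviated to labels.
cache = [(100, "f1"), (133, "f2"), (166, "f3"), (200, "f4")]
chosen = select_frame(cache, 160)   # 166 ms is nearest to the 160 ms request
```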
As another example, performing frame selection processing on the cached image data includes: acquiring the actual photographing time and the desired photographing effect; and performing frame selection processing on the cached image data according to the actual photographing time and the desired photographing effect. The desired photographing effect may be an effect set by the user at the time of photographing.
Specifically, when receiving a photographing request, the application processing chip calculates the actual photographing time, obtains the desired photographing effect, and selects the corresponding data frame from the cached image data according to both; alternatively, the actual photographing time and the desired photographing effect are transmitted to the image processing chip, which performs the selection. In this case, the data frames whose capture times are the same as, or closest to, the actual photographing time are selected first, and then the one that best meets the desired photographing effect is chosen from them as the final selected data frame. As before, because the data volume of the high variation region of an image frame is the largest, the final selected data frame is the high variation region; that is, the image data corresponding to the high variation region of the chosen image frame is taken as the selected data frame.
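The two-step selection (time first, then effect) can be sketched as follows; the dictionary fields and the sharpness-based scoring function are hypothetical placeholders for whatever effect metric the system actually uses:

```python
def select_frame_with_effect(cache, shot_time_ms, effect_score, k=3):
    """First keep the k frames whose capture times are closest to the
    actual photographing time, then return the one that best meets the
    desired photographing effect according to effect_score."""
    nearest = sorted(cache, key=lambda f: abs(f["t"] - shot_time_ms))[:k]
    return max(nearest, key=effect_score)

frames = [
    {"t": 0, "sharpness": 0.7},
    {"t": 33, "sharpness": 0.9},
    {"t": 66, "sharpness": 0.5},
]
# among the 2 frames nearest to 40 ms (t=33 and t=66), t=33 is sharpest
best = select_frame_with_effect(frames, 40, lambda f: f["sharpness"], k=2)
```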
Of course, frame selection processing may also be performed on the cached image data using other indexes or manners to obtain the selected data frame; this may be implemented with existing techniques and is not described here again.
It should be noted that, in the present application, the image processing chip and the application processing chip may perform data transmission through MIPI or PCIe, and the details are not limited herein. It should be noted that, the photographing requests in the above examples are all received and obtained by the application processing chip, and the photographing requests may also be received and obtained by the image processing chip, which is not limited herein.
Step S403, fusion splicing and image post-processing are performed according to the selected data frame to obtain a photographed image.
Specifically, the selected data frames can be subjected to fusion splicing and image post-processing by the application processing chip to obtain the photographed image.
As an example, referring to fig. 5a, performing fusion splicing and image post-processing according to the selected data frame includes: performing fusion splicing according to the selected data frames to obtain a spliced image; post-processing the spliced image to obtain a first post-processed image; and performing format conversion processing on the first post-processed image and inserting label information to generate the photographed image.
Specifically, the selected data frames may be fused and spliced to obtain a spliced image; the spliced image is then post-processed, for example with beautification and anti-shake algorithms; finally, the post-processed image is format-converted and label information is inserted to generate a complete photographed image in the desired format. In this way, the photographed image is obtained by splicing first and processing afterwards.
It should be noted that a photographed image from an Android camera generally includes an image main body, a thumbnail and label information. The image main body and the thumbnail have the same data format, such as JPEG, and differ only in size, so both parts go through format conversion processing, such as JPEG encoding (transcoding from YUV color coding to JPEG data). The label information mainly includes the width, height and exposure parameters of the image and is inserted last.
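As an illustrative sketch of this layout (the field names are assumptions; a real Android JPEG container packs these parts at the byte level rather than in a dictionary):

```python
def assemble_photo(main_jpeg, thumb_jpeg, width, height, exposure_us):
    """Combine the format-converted image main body and thumbnail with
    the label information, which is inserted last."""
    return {
        "main": main_jpeg,        # full-size body, e.g. JPEG encoded from YUV
        "thumbnail": thumb_jpeg,  # same data format as the body, smaller size
        "tags": {"width": width, "height": height, "exposure_us": exposure_us},
    }

photo = assemble_photo(b"\xff\xd8...", b"\xff\xd8..", 4000, 3000, 10000)
```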
As another example, referring to fig. 5b, performing fusion splicing and image post-processing according to the selected data frame includes: carrying out first post-processing on the selected data frame to obtain a second post-processed image; performing fusion splicing according to the second post-processing image to obtain a spliced image; performing second post-processing on the spliced image to obtain a third post-processed image; and performing format conversion processing on the third post-processed image and inserting label information to generate a photographed image.
Specifically, the selected data frame may first be post-processed to obtain post-processed images; these are then fused and spliced; the spliced image is post-processed a second time; finally, format conversion processing and label information insertion are performed to generate a complete photographed image in the desired format. In this way, the photographed image is obtained by performing image fusion and splicing within the post-processing flow.
As another example, referring to fig. 5c, performing fusion splicing and image post-processing according to the selected data frame includes: performing first post-processing on the selected data frame to obtain a second post-processed image; performing second post-processing on the second post-processed image to obtain a fourth post-processed image; performing fusion splicing according to the fourth post-processed image to obtain a spliced image; and performing format conversion processing on the spliced image and inserting label information to generate the photographed image.
Specifically, the selected data frame may be post-processed twice to obtain post-processed images; the post-processed images are then fused and spliced, and format conversion processing and label information insertion are performed on the spliced image to generate a complete photographed image in the desired format. In this way, the photographed image is obtained by post-processing first and performing image fusion and splicing afterwards.
Therefore, in the present application, the photographed image can be obtained either by fusing and splicing the images first and then post-processing (as shown in fig. 5 a), or by post-processing first and then fusing and splicing (as shown in fig. 5b and fig. 5 c). For the latter, adjusting the order of the post-processing algorithms, or performing image fusion and splicing within the post-processing flow, prevents certain steps, such as JPEG encoding, from affecting the texture of the image. The latter also saves post-processing resources: the image data portions that do not need to be retransmitted each time only need to be post-processed once, and the post-processing algorithms do not need to be run again after every image fusion. This not only saves system resources but also speeds up image data processing, since the amount of image data requiring post-processing is indeed smaller. For example, during continuous shooting, the post-processed image data corresponding to the micro variation region and the background region is already obtained in the first shot; in the second shot, if the micro variation region and the background region have not been updated, only the updated high variation region is post-processed, so the post-processing of the micro variation region and the background region is skipped, saving system resources and accelerating image data processing.
It should be noted that, in the present application, the choice may be adapted in real time according to the shooting scene and changes in system power consumption and performance. For example, when the shooting scene is continuous shooting or system performance is poor, the mode shown in fig. 5b or fig. 5c is adopted; when system power consumption is low and system performance is good, the mode shown in fig. 5a is adopted.
Further, in some embodiments, performing fusion splicing according to the selected data frame includes: acquiring, from the cached image data and according to the selected data frame, the image data corresponding to the micro variation region and the background region that match the selected data frame; and performing image fusion and splicing according to the image data corresponding to the high variation region and the image data corresponding to the matched micro variation region and background region.
Specifically, after obtaining the selected data frame, that is, the image data corresponding to the high variation region, the application processing chip may first obtain, from the cached image data, the image data corresponding to the micro variation region and the background region that match it, for example by selecting the micro variation region and background region data of the cached frame closest to the high variation region; image fusion and splicing are then performed on the high variation region data together with the matched micro variation region and background region data.
More specifically, assuming that the selected data frame is the image data corresponding to the high variation region of the current image frame, the image data corresponding to the micro variation region cached a first preset number of frames earlier, for example 3 frames, and the image data corresponding to the background region cached a second preset number of frames earlier, for example 5 frames, may be selected. Image fusion and splicing are then performed on the high variation region data, the micro variation region data and the background region data to obtain a spliced image, and finally the spliced image is post-processed to obtain the photographed image.
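The region composition itself can be sketched with boolean masks (the NumPy arrays, mask shapes and pixel values here are illustrative assumptions):

```python
import numpy as np

def fuse_regions(high, micro, background, high_mask, micro_mask):
    """Compose a full frame: pixels in the high variation region come from
    the selected data frame, pixels in the micro variation region from a
    frame cached a few frames earlier, and the rest from the cached
    background region."""
    out = background.copy()
    out[micro_mask] = micro[micro_mask]
    out[high_mask] = high[high_mask]
    return out

bg = np.zeros((4, 4), dtype=np.uint8)          # cached background region
mi = np.full((4, 4), 128, dtype=np.uint8)      # cached micro variation data
hi = np.full((4, 4), 255, dtype=np.uint8)      # selected high variation data
hi_mask = np.zeros((4, 4), dtype=bool); hi_mask[:2, :2] = True
mi_mask = np.zeros((4, 4), dtype=bool); mi_mask[2:, :] = True
frame = fuse_regions(hi, mi, bg, hi_mask, mi_mask)
```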
Further, when a photographing request is received, the preview frame image sent by the image processing chip is received, and the spliced image is corrected according to the preview frame image.
That is to say, in the application, the selected data frames can be directly fused and spliced to obtain a complete image, or the selected data frames can be fused and spliced first, and then the spliced image is corrected based on the image frame corresponding to the actual photographing time, so that a more accurate image effect is obtained.
Specifically, the image processing chip acquires the image frame corresponding to the photographing moment and transmits this frame, before segmentation, to the application processing chip, so that after the image splicing processing the application processing chip can perform a correction pass based on the complete image frame and obtain a more accurate image effect. Of course, to reduce the data amount of the un-segmented image frame transmitted to the application processing chip, the image processing chip may first perform a cropping operation on it, that is, an image size changing operation, and then transmit it, so as to improve the photographed image effect without increasing the data transmission pressure as far as possible.
In summary, according to the image processing method applied to the application processing chip in the embodiment of the present invention, the cached image data sent by the image processing chip is received; when a photographing request is received, frame selection processing is performed on the cached image data to obtain a selected data frame; and fusion splicing and image post-processing are performed according to the selected data frame to obtain the photographed image. The amount of cached and transmitted data can thus be reduced, which reduces system resource occupation and improves image transmission and processing efficiency; and because the image regions are obtained by variation parameter analysis based on continuous frame images, the photographing effect can be guaranteed.
Fig. 6 is a schematic structural diagram of an application processing chip according to an embodiment of the present invention, and referring to fig. 6, the application processing chip 200 includes: a second data transmission interface 210 and a central processor 220.
The second data transmission interface 210 is configured to receive buffered image data, where the buffered image data is obtained by performing parameter variation analysis on the obtained continuous frame image by the image processing chip 100 to divide the current frame image into a plurality of image regions, and selectively buffering each image region of the current frame image; the central processing unit 220 is configured to perform frame selection processing on the cached image data when receiving the photographing request, obtain a selected data frame, and perform fusion splicing and image post-processing according to the selected data frame, so as to obtain a photographed image.
In some embodiments, the plurality of image regions include a high variation region, a micro variation region, and a background region, wherein the central processing unit 220 is specifically configured to obtain, according to the selected data frame, image data corresponding to the micro variation region and the background region that match the selected data frame from the buffered image data, and perform image fusion and stitching according to the image data corresponding to the high variation region and the image data corresponding to the micro variation region and the background region that match the selected data frame.
In some embodiments, the second data transmission interface 210 is further configured to receive a preview frame image sent by the image processing chip 100 when receiving the photographing request; the central processing unit 220 is further configured to perform a correction process on the stitched image according to the preview frame image.
It should be noted that, for the description of the application processing chip in the present application, please refer to the description of the image processing method applied to the application processing chip in the present application, and details are not repeated here.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the invention, and referring to fig. 7, the electronic device 1000 includes the image processing chip 100 and/or the application processing chip 200.
It should be noted that the electronic device 1000 in the present application may be a device with a photographing function, such as a mobile phone, a tablet computer, and a digital camera.
The electronic equipment can reduce the data amount cached and transmitted through the image processing chip and/or the application processing chip, further reduce the occupation of system resources and improve the image transmission processing efficiency, and the image area is obtained by analyzing the variation parameters based on the continuous frame image, so that the photographing effect can be ensured.
Fig. 8 is a flowchart of a photographing method according to an embodiment of the invention, and referring to fig. 8, the photographing method may include:
in step S801, a continuous frame image is acquired, and a variation parameter analysis is performed on the continuous frame image to divide the current frame image into a plurality of image regions.
Specifically, referring to fig. 9a to 9c, the continuous frame image output by the camera sensor may be acquired by the image processing chip, and the continuous frame image may be subjected to variation parameter analysis to divide the current frame image into a plurality of image areas. That is, the variation parameters of the image contents of successive frames are statistically analyzed to distinguish and segment a plurality of image regions having different variation parameters.
Optionally, as shown in fig. 2, the plurality of image regions include a high variation region, a micro variation region and a background region. The high variation region refers to a region that changes with high frequency, such as a human face; the micro variation region refers to a region that changes but with low frequency, such as the rest of a human body; the background region refers to a region that does not change, or changes only very rarely, such as a building.
In one embodiment, performing variation parameter analysis on the continuous frame images to segment the current frame image into a plurality of image regions includes: comparing and analyzing the current frame image with the cached image frames to obtain image regions with different variation parameters; and segmenting the current frame image according to the image regions with different variation parameters to obtain the plurality of image regions.
It should be noted that the first several image frames obtained by the image processing chip may be stored as whole frames to serve as the cached image frames; subsequent new image frames are then compared with, analyzed against and segmented based on these cached image frames. For example, the first 5 frames are cached as whole frames, and the 6th and subsequent frames are compared, analyzed and segmented.
Specifically, after the camera starts previewing, the camera sensor outputs an image frame, which may be a RAW data frame, the image processing chip receives the image frame and performs a buffering operation, and the amount of buffered data depends on a preset number, such as 3 frames, 5 frames or 7 frames. When the camera sensor outputs a new image frame, the image processing chip compares the current frame image with the cached image frame for analysis so as to distinguish a plurality of image areas with different variation parameters in the current frame image, such as a high variation area, a micro variation area and a background area, and performs image segmentation processing on the current frame image based on the image areas.
More specifically, after the camera starts to preview, the camera sensor outputs an image frame, the image processing chip receives the image frame, and performs whole frame buffering on the first 5 frames, and keeps buffering of the whole 5 frames all the time. When the camera sensor outputs a 6 th frame, the image processing chip compares and analyzes the 6 th frame image with the previous 5 frames of images which are cached to distinguish different variation areas in the 6 th frame image, and carries out image segmentation processing on the 6 th frame image based on the variation areas; when the camera sensor outputs a 7 th frame, the image processing chip compares and analyzes the 7 th frame image with the previous 5 frames of images which are cached to distinguish different variation areas in the 7 th frame image, and carries out image segmentation processing on the 7 th frame image based on the variation areas; and performing image segmentation processing on each subsequent frame of image in sequence.
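A simplified block-level version of this comparison might look as follows; the 8×8 block size, grayscale frames and change thresholds are assumptions for illustration only:

```python
import numpy as np

def classify_blocks(current, cached, block=8, hi_thresh=0.5, lo_thresh=0.1):
    """Label each block of the current frame by how often it differs from
    the cached frames: 'high', 'micro' or 'background'."""
    h, w = current.shape
    labels = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            cur = current[y:y + block, x:x + block].astype(np.int32)
            # fraction of cached frames in which this block changed noticeably
            changed = sum(
                np.abs(cur - c[y:y + block, x:x + block]).mean() > 5
                for c in cached
            ) / len(cached)
            if changed >= hi_thresh:
                labels[(y, x)] = "high"
            elif changed >= lo_thresh:
                labels[(y, x)] = "micro"
            else:
                labels[(y, x)] = "background"
    return labels

# 5 static cached whole frames; the current (6th) frame changes only in
# its top-left block
cached = [np.zeros((16, 16), dtype=np.uint8) for _ in range(5)]
current = np.zeros((16, 16), dtype=np.uint8)
current[:8, :8] = 255
labels = classify_blocks(current, cached)
```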
Further, when the current frame image is compared and analyzed with the cached image frames to distinguish the plurality of image regions with different variation parameters, a reference feature point (or a small image region) is first selected for a preliminary comparison and judgment, and the change of the edge of the object containing the reference feature point is then used for further identification and judgment (edge identification can be realized based on algorithms such as gray value thresholds; it can be implemented with existing techniques and is not detailed here). In this way the judgment extends from points to lines and from lines to surfaces, providing a basis for judging and segmenting the regions of the whole image. Thus, exploiting the facts that the image processing chip obtains the image data first and computes quickly, the obtained image data is identified based on feature points, the variation trend and speed of different regions in the whole image are analyzed, and the image is segmented into the plurality of image regions of the current frame image. Of course, whether a change occurs and its degree may also be determined from the change of a characteristic object, followed by the corresponding image segmentation processing, which is not limited here.
Step S802, selectively caching each image area of the current frame image to obtain cached image data.
Specifically, after the plurality of image regions of the current frame image are obtained by image segmentation, each image region of the current frame image may be selectively cached to obtain cached image data, which serves as the frame data for snapshot photographing. That is, in the present application, not all frames are stored as whole frames; most or all of them are stored selectively, which effectively reduces the amount of stored data and lowers resource occupation.
It should be noted that, when selectively caching to obtain the cached image data, each newly arriving image frame replaces the earliest one, so the cache area always holds only a fixed number of image frames (e.g. 5 frames, 7 frames, etc.) rather than growing continuously as new frames arrive.
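This fixed-size replacement behavior is that of a ring buffer; in Python it can be sketched with `collections.deque` (the frame count of 5 is just an example):

```python
from collections import deque

# Keep at most 5 cached frames; when a new frame arrives, the earliest
# one is replaced automatically, so the cache never grows.
cache = deque(maxlen=5)
for frame_id in range(8):
    cache.append(frame_id)

print(list(cache))   # [3, 4, 5, 6, 7] -- only the newest 5 frames remain
```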
As an example, referring to fig. 9a, each image region of the current frame image is selectively cached by the image processing chip. Further, selectively caching each image region of the current frame image includes: caching the high variation region every frame, caching the micro variation region every first preset number of frames, and caching the background region every second preset number of frames, where the first preset number of frames is smaller than the second preset number of frames.
Specifically, when obtaining a plurality of image regions of the current frame image, the image processing chip may selectively buffer each image region, for example, buffer each frame of a high variation region; caching the micro-variation area every other first preset number of frames; and buffering the background area every second preset number of frames (or buffering when the change occurs). That is, in the region with higher change frequency, the amount of buffered data is larger, and conversely, the amount of buffered data is smaller, so that the amount of data to be stored is reduced on the basis of ensuring the data quality.
For example, assuming that the cache queue of the DDR of the current image processing chip can cache 8 frames, the first preset number of frames is 3 and the second preset number of frames is 5, the high variation region of every one of the 8 frame images is cached in the DDR cache queue, the micro variation regions of the 1st and 5th frame images are cached in the DDR cache queue, and the background regions of the 1st and 7th frame images are cached in the DDR cache queue, thereby reducing the amount of cached data.
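The per-region caching cadence can be sketched as a predicate over 1-based frame numbers; the skip counts of 3 and 5 follow the example above, and the exact cadence convention (cache, then skip N frames) is an assumption:

```python
def should_cache(frame_no, region, micro_skip=3, bg_skip=5):
    """Decide whether a region of frame `frame_no` (1-based) is cached:
    the high variation region every frame, the micro variation region
    with `micro_skip` frames skipped between caches, and the background
    region with `bg_skip` frames skipped between caches."""
    if region == "high":
        return True
    if region == "micro":
        return (frame_no - 1) % (micro_skip + 1) == 0
    if region == "background":
        return (frame_no - 1) % (bg_skip + 1) == 0
    return False

micro_frames = [f for f in range(1, 9) if should_cache(f, "micro")]
bg_frames = [f for f in range(1, 9) if should_cache(f, "background")]
```

With these defaults, among 8 frames the micro variation region is cached at frames 1 and 5 and the background region at frames 1 and 7, matching the DDR example above.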
As another example, referring to fig. 9b, each image region of the current frame image is selectively cached by the back-end application processing chip; the selective caching includes: receiving the high variation region every frame, receiving the micro variation region every first preset number of frames, and receiving the background region every second preset number of frames, so as to selectively cache each image region of the current frame image, where the first preset number of frames is smaller than the second preset number of frames.
Specifically, after obtaining the plurality of image regions of the current frame image, the image processing chip may selectively transmit each image region to the back-end application processing chip, which then selectively caches each image region of the current frame image. For example, the image processing chip transmits the high variation region of every frame to the back-end application processing chip for caching; transmits the micro variation region every first preset number of frames for caching; and transmits the background region every second preset number of frames for caching. That is, the higher the change frequency, the more data is transmitted and cached, and conversely the less, thereby reducing the amount of data to be transmitted and stored while ensuring data quality.
For example, assuming that the cache queue of the DDR of the back-end application processing chip can cache 8 frames, the first preset number of frames is 3 and the second preset number of frames is 5, the high variation region of every one of the 8 frame images is transmitted by the image processing chip to the back-end application processing chip and cached in the DDR cache queue, the micro variation regions of the 1st and 5th frame images are transmitted and cached likewise, and the background regions of the 1st and 7th frame images are transmitted and cached likewise, thereby reducing the amount of data transmitted and cached. Because the high variation region, with its high real-time requirement, is transmitted and cached in real time while the micro variation region and the background region, with lower real-time requirements, are transmitted and cached at intervals, data quality is ensured while the amount of data to be transmitted and stored is reduced.
It should be noted that, in this example, the image processing chip may store every frame as a whole frame, merely partitioned by region when stored; it then performs selective transmission to the back-end application processing chip, and the back-end application processing chip performs the selective caching. Although this approach increases the data update and storage pressure on the image processing chip to some extent compared with the other approaches, it gives the back end more flexibility: the back end can more flexibly request certain image data from the image processing chip, for example when the back-end algorithm is adjusted, more front-end data is available to choose from.
Therefore, in the application, the image processing chip can selectively cache each image area of the current frame image to obtain the cached image data, and the back-end application processing chip can selectively cache each image area of the current frame image to obtain the cached image data. For the former, the resource occupation of an image processing chip can be reduced; for the latter, the resource occupation of a back-end application processing chip can be reduced, the data transmission efficiency can be improved, and the flexibility of a back-end algorithm can be increased.
It should be noted that the image region segmentation indexes and the intervals for selectively caching each image region (such as the first preset number of frames and the second preset number of frames) may be adapted in real time according to the shooting scene and changes in system power consumption and performance. For example, when system power consumption is low and system performance is good, the frame update interval can be reduced accordingly, e.g. shortening the interval for the micro variation region, and the number of segmented image regions can be increased, making the matching result more accurate. As another example, when the transmission bandwidth is sufficient, the amount of data cached for a single image frame can be increased accordingly, i.e. more image data is stored and less is segmented away and discarded. In addition, when dividing image regions, the granularity of the division can be dynamically adjusted according to the characteristics of the target: for a small region whose content is relatively fixed even though the larger region containing it varies, the region granularity can be reduced accordingly, further reducing the amount of cached data.
Step S803, when the photographing request is received, performing frame selection processing on the cached image data to obtain a selected data frame.
Specifically, when a user takes a picture through the camera APP, a shooting request is triggered, the back-end application processing chip receives the shooting request, and frame selection processing is performed on the cached image data based on the shooting request, that is, a required data frame is obtained from the cached image data, so as to obtain a selected data frame.
As an example, referring to fig. 9a, the image processing chip obtains the buffered image data and sends the buffered image data to the back-end application processing chip, so that the back-end application processing chip performs frame selection processing on the buffered image data when receiving the photographing request.
That is to say, after the image processing chip obtains the cached image data through image segmentation and selective caching, it transmits the cached image data to the back-end application processing chip, and when the back-end application processing chip receives a photographing request, it performs frame selection processing on the cached image data. Because the image processing chip caches selectively, the amount of data is small, so during transmission the data transmission pressure is effectively reduced, the data transmission efficiency is improved, and the data processing pressure on the back-end application processing chip is also reduced.
Furthermore, whenever the image data cached by the image processing chip is updated, the cached image data is sent to the back-end application processing chip, so that upon receiving a photographing request the back-end application processing chip can immediately perform frame selection on the latest data, which speeds up frame selection and reduces transmission pressure.
As another example, referring to fig. 9b, the back-end application processing chip itself caches the image data to obtain the cached image data, and performs frame selection processing on the cached image data when receiving the photographing request.
That is to say, after the image processing chip obtains a plurality of image areas of the current frame image through image segmentation and selectively transmits the image areas to the back-end application processing chip, the back-end application processing chip performs the caching to obtain the cached image data, and performs frame selection processing on it when receiving a photographing request. Because the image processing chip selectively transmits only the differing variation regions of each frame of image to the back-end application processing chip, rather than all the data, the transmitted data volume is small, so the data transmission pressure can be effectively reduced, the data transmission efficiency improved, and the data processing pressure of the back-end application processing chip reduced.
As another example, referring to fig. 9c, when a photographing request is received, the image processing chip performs frame selection processing on the cached image data, and sends the selected data frame to the back-end application processing chip, so that the back-end application processing chip performs fusion splicing and image post-processing on the selected data frame.
Specifically, when receiving a photographing request, the back-end application processing chip transmits the photographing request to the image processing chip; the image processing chip performs frame selection processing on the cached image data and transmits the selected data frame obtained after the frame selection processing, together with the cached image data, to the back-end application processing chip, which then performs fusion splicing and image post-processing on the selected data frame. Because the image processing chip transmits only the frame-selected data and the selectively cached image data, the data volume is small, so the data transmission pressure can be effectively reduced, the data transmission efficiency improved, and the data processing pressure of the back-end application processing chip reduced.
Thus, in the present application, the frame selection processing may be performed either by the image processing chip or by the back-end application processing chip. In either mode, the selective data caching effectively reduces the data cache amount and the data transmission pressure (the bandwidth occupied by transmission), thereby effectively reducing system resource occupation and improving data transmission and processing efficiency; at the same time, the smaller data volume speeds up the back-end algorithm processing, which markedly improves the user experience.
Further, as an example, the frame selection processing is performed on the buffered image data, and includes: acquiring actual photographing time; and carrying out frame selection processing on the cached image data according to the actual photographing time.
Specifically, when receiving a photographing request, the back-end application processing chip calculates the actual photographing time and selects the corresponding data frame from the cached image data according to the actual photographing time; alternatively, the actual photographing time is transmitted to the image processing chip, and the image processing chip selects the corresponding data frame from the cached image data according to the actual photographing time. When selecting the corresponding data frame from the cached image data according to the actual photographing time, the data frame whose timestamp is the same as, or closest to, the actual photographing time can be used as the final selected data frame. Because the high variation region of an image frame carries the most data, the final selected data frame is the high variation region; that is, the image data corresponding to the high variation region of the image frame whose timestamp is the same as or closest to the actual photographing time is used as the selected data frame.
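As a minimal sketch of this time-based selection (the dictionary representation of a cached frame and the 30 fps timing here are assumptions for illustration):

```python
def select_frame(cached_frames, capture_time):
    """Pick the cached frame whose timestamp is the same as, or closest
    to, the actual photographing time."""
    return min(cached_frames, key=lambda f: abs(f["timestamp"] - capture_time))

# High variation region entries are cached once per frame, here at ~30 fps:
frames = [{"timestamp": t / 30.0, "region": "high"} for t in range(30)]
chosen = select_frame(frames, capture_time=0.42)
```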
As another example, the frame selection processing is performed on the buffered image data, and includes: acquiring actual photographing time and an expected photographing effect; and carrying out frame selection processing on the cached image data according to the actual photographing time and the expected photographing effect. Wherein the desired photographing effect may be an effect set by the user at the time of photographing.
Specifically, when receiving a photographing request, the back-end application processing chip calculates the actual photographing time, obtains the desired photographing effect, and selects the corresponding data frame from the cached image data according to both; alternatively, the actual photographing time and the desired photographing effect are transmitted to the image processing chip, and the image processing chip selects the corresponding data frame from the cached image data according to them. When selecting the corresponding data frame according to the actual photographing time and the desired photographing effect, the data frames whose timestamps are the same as, or closest to, the actual photographing time are selected first, and the one among them that best matches the desired photographing effect is then taken as the final selected data frame. Because the high variation region of an image frame carries the most data, the final selected data frame is the high variation region; that is, the image data corresponding to the high variation region of the best-matching image frame is used as the selected data frame.
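The two-stage selection (time first, then desired effect) could look like the following sketch; the `score` callback standing in for the desired-effect metric and the time `tolerance` are assumptions for illustration:

```python
def select_frame_with_effect(cached_frames, capture_time, score, tolerance=0.02):
    """First keep the frames at or nearest the actual photographing time,
    then pick the candidate that best matches the desired effect."""
    best_gap = min(abs(f["timestamp"] - capture_time) for f in cached_frames)
    candidates = [f for f in cached_frames
                  if abs(f["timestamp"] - capture_time) <= best_gap + tolerance]
    return max(candidates, key=score)

# Hypothetical cached entries with a per-frame sharpness metric:
frames = [{"timestamp": 0.40, "sharpness": 0.7},
          {"timestamp": 0.43, "sharpness": 0.9},
          {"timestamp": 0.80, "sharpness": 1.0}]
best = select_frame_with_effect(frames, 0.42, score=lambda f: f["sharpness"])
```

The frame at 0.80 s is sharper but too far from the capture time, so it is excluded in the first stage.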
Of course, other indexes or manners may also be used to perform frame selection processing on the cached image data to obtain a selected data frame, which may be specifically implemented by using the prior art and will not be described herein again.
It should be noted that, in the present application, the image processing chip and the back-end application processing chip may perform data transmission through MIPI or PCIe, and the details are not limited herein. It should be noted that, the photographing requests in the above examples are all received and obtained by the back-end application processing chip, and the photographing requests may also be received and obtained by the image processing chip, which is not limited herein.
And step S804, performing fusion splicing and image post-processing according to the selected data frame to obtain a photographed image.
Specifically, the selected data frames can be subjected to fusion splicing and image post-processing by the application processing chip to obtain the photographed image.
As an example, referring to fig. 5a, the fusion splicing and the image post-processing according to the selected data frame include: performing fusion splicing according to the selected data frames to obtain a spliced image; post-processing the spliced image to obtain a first post-processed image; and performing format conversion processing on the first post-processed image and inserting label information to generate a photographed image.
Specifically, the selected data frames may be fused and spliced to obtain a spliced image, the spliced image may be post-processed, such as beautifying and anti-shaking, the post-processed image may be format-converted and tag information may be inserted, and finally a complete photographed image of an expected format may be generated. Therefore, the photographed image can be obtained through a mode of splicing first and then processing.
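The stitch-first order of fig. 5a can be sketched as a simple pipeline; the stage functions are placeholders for the actual stitching, post-processing, and encoding algorithms, which this sketch does not attempt to reproduce:

```python
def photograph_stitch_first(selected_frames, stitch, post_process,
                            to_jpeg, insert_tags):
    """Fig. 5a order: fuse/stitch -> post-process -> format conversion
    (e.g. YUV -> JPEG) -> insert tag information."""
    stitched = stitch(selected_frames)
    processed = post_process(stitched)   # e.g. beautify, anti-shake
    encoded = to_jpeg(processed)         # JPEG encode
    return insert_tags(encoded)          # width/height/exposure tags

# Toy run with placeholder stages operating on strings:
image = photograph_stitch_first(
    ["high", "micro", "background"],
    stitch=lambda parts: "+".join(parts),
    post_process=lambda img: img + "|post",
    to_jpeg=lambda img: img + "|jpeg",
    insert_tags=lambda img: img + "|tags",
)
```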
It should be noted that, in general, a photographed image from an Android camera includes an image main body, a thumbnail and tag information. The image main body and the thumbnail share the same data format, such as JPEG, and differ only in size, so both parts go through format conversion processing, such as JPEG encoding (transcoding from YUV color coding to JPEG data). The tag information mainly includes information such as the width, height and exposure parameters of the image, and is finally inserted into the image file.
As another example, referring to fig. 5b, performing fusion splicing and image post-processing according to the selected data frame includes: carrying out first post-processing on the selected data frame to obtain a second post-processed image; performing fusion splicing according to the second post-processing image to obtain a spliced image; performing second post-processing on the spliced image to obtain a third post-processed image; and performing format conversion processing on the third post-processed image and inserting label information to generate a photographed image.
Specifically, the selected data frame may be post-processed once to obtain a post-processed image, the post-processed images may be fused and spliced, the spliced image may be post-processed once, the post-processed image may be format-converted and tag information may be inserted, and finally, a complete photographed image in an expected format may be generated. Therefore, the photographed image can be obtained by image fusion and splicing in the post-processing process.
As another example, referring to fig. 5c, the fusion splicing and the image post-processing according to the selected data frame include: carrying out first post-processing on the selected data frame to obtain a second post-processed image; performing secondary post-processing on the second post-processed image to obtain a fourth post-processed image; performing fusion splicing according to the fourth post-processing image to obtain a spliced image; and carrying out format conversion processing on the spliced image and inserting label information to generate a photographed image.
Specifically, the selected data frame may be post-processed twice to obtain a post-processed image, the post-processed images may be fused and spliced, format conversion processing and label information insertion may be performed on the spliced image, and finally, a complete photographed image in an expected format may be generated. Therefore, the photographed image can be obtained by performing post-processing and then performing image fusion and splicing.
Therefore, in the present application, the photographed image can be obtained either by fusing and splicing the images first and then performing post-processing (as shown in fig. 5a), or by performing post-processing first and then fusing and splicing (as shown in figs. 5b and 5c). For the latter, by adjusting the order of the post-processing algorithms, or by performing image fusion splicing within the post-processing flow, certain processing steps, such as JPEG encoding, can be prevented from affecting the texture of the image. The latter approach also saves post-processing resources: image data that does not need to be retransmitted only has to be post-processed once, and does not need to be post-processed again after each image fusion, which not only saves system resources but also speeds up image data processing (the amount of image data requiring post-processing is indeed smaller). For example, in continuous shooting, the post-processed image data for the micro variation region and the background region is already obtained in the first shot, so in the second shot, if these regions have not been updated, only the updated high variation region needs post-processing; the post-processing of the micro variation region and the background region is omitted, saving system resources and accelerating image data processing.
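The reuse of already post-processed regions in continuous shooting can be sketched with a small cache keyed by region name and version; the names, the version counter, and the string stand-in for post-processing are all illustrative assumptions:

```python
post_process_calls = 0
processed_cache = {}

def post_process_region(name, version, data):
    """Post-process a region only when its (name, version) is not cached."""
    global post_process_calls
    key = (name, version)
    if key not in processed_cache:
        post_process_calls += 1
        processed_cache[key] = data + "|post"  # stand-in for real post-processing
    return processed_cache[key]

# Shot 1: all three regions are post-processed.
for name in ("high", "micro", "background"):
    post_process_region(name, version=1, data=name)

# Shot 2: only the high variation region was updated, so only it is
# re-processed; micro/background results are reused from the cache.
post_process_region("high", version=2, data="high2")
post_process_region("micro", version=1, data="micro")
post_process_region("background", version=1, data="background")
```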
It should be noted that, in the present application, the choice may also be adapted in real time according to the shooting scene and changes in system power consumption and performance: for example, when the shooting scene is continuous shooting or system performance is poor, the mode shown in fig. 5b or fig. 5c is adopted; when system power consumption is low and system performance is good, the mode shown in fig. 5a is adopted.
Further, as an example, the selected data frame is the image data corresponding to a high variation region, and performing fusion splicing according to the selected data frame includes: according to the selected data frame, acquiring from the cached image data the image data corresponding to the micro variation region and the background region that match the selected data frame; and performing image fusion splicing according to the image data corresponding to the high variation region and the image data corresponding to the matched micro variation region and background region.
Specifically, after obtaining the selected data frame, that is, the image data corresponding to the high variation region, the application processing chip may first obtain the image data corresponding to the micro variation region and the background region, which are matched with the image data corresponding to the high variation region, from the buffered image data according to the image data corresponding to the high variation region, for example, select the image data corresponding to the micro variation region and the background region corresponding to the frame closest to the high variation region, and then perform image fusion and stitching according to the image data corresponding to the high variation region and the image data corresponding to the micro variation region and the background region, which are matched with the image data corresponding to the high variation region.
More specifically, assuming the selected data frame is the image data corresponding to the high variation region of the current image frame, the image data corresponding to the micro variation region cached a first preset number of frames earlier, for example 3 frames, and the image data corresponding to the background region cached a second preset number of frames earlier, for example 5 frames, may be selected. Image fusion splicing is then performed according to the image data corresponding to the high variation region, the micro variation region and the background region to obtain a spliced image, and finally the spliced image is post-processed to obtain the photographed image.
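The matching step (picking, for each slower-updating region, the most recent cached entry at or before the current frame index) can be sketched as follows; the dictionary layout of the per-region caches is an assumption for illustration:

```python
def latest_at_or_before(region_cache, frame_idx):
    """Return the cached entry with the largest frame index <= frame_idx."""
    return region_cache[max(k for k in region_cache if k <= frame_idx)]

def assemble_frame(high_cache, micro_cache, bg_cache, frame_idx):
    """Combine the current high variation region with the matched
    micro variation and background regions for fusion splicing."""
    return {
        "high": high_cache[frame_idx],
        "micro": latest_at_or_before(micro_cache, frame_idx),
        "background": latest_at_or_before(bg_cache, frame_idx),
    }

high = {i: f"high@{i}" for i in range(10)}          # cached every frame
micro = {i: f"micro@{i}" for i in range(0, 10, 3)}  # cached every 3rd frame
bg = {i: f"bg@{i}" for i in range(0, 10, 5)}        # cached every 5th frame
frame = assemble_frame(high, micro, bg, frame_idx=7)
```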
Further, when a photographing request is received, a current frame image is obtained, the current frame image is cut to obtain a preview frame image, and the preview frame image is sent to the back-end application processing chip, so that the back-end application processing chip can correct the spliced image according to the preview frame image.
That is to say, in the application, the selected data frames can be directly fused and spliced to obtain a complete image, or the selected data frames can be fused and spliced first, and then the spliced image is corrected based on the image frame corresponding to the actual photographing time, so that a more accurate image effect is obtained.
Specifically, the image processing chip acquires the image frame corresponding to the photographing moment and transmits this pre-segmentation image frame to the back-end application processing chip, so that the back-end application processing chip can perform one correction pass based on the complete image frame after the image splicing processing, thereby obtaining a more accurate image effect. Of course, to reduce the data amount when transmitting the pre-segmentation image frame to the back-end application processing chip, the image processing chip may first perform a cropping operation on it, that is, an image resizing operation, and then transmit it, so as to improve the photographed image effect without increasing the data transmission pressure as much as possible.
In summary, according to the photographing method of the embodiments of the present invention, continuous frame images are acquired and subjected to variation parameter analysis to divide the current frame image into a plurality of image regions; each image region of the current frame image is selectively cached to obtain cached image data; then, when a photographing request is received, frame selection processing is performed on the cached image data to obtain a selected data frame, and fusion splicing and image post-processing are performed according to the selected data frame to obtain the photographed image. Compared with a conventional ZSL photographing scheme, because the segmented image is stored selectively, the required storage space is far smaller than that of conventional processing algorithms, which also reduces the pressure of data back-transmission. Furthermore, during back-end algorithm processing, only the partial image data transmitted to the back end needs to be processed, saving back-end computing power; the freed computing power can be applied by the related algorithms to improving the image effect, so that both the processing speed and the effect of the back-end algorithms are improved, better meeting user needs and improving the user experience.
In one embodiment, a computer-readable storage medium is provided, on which a photographing program is stored, which when executed by a processor implements the aforementioned photographing method.
According to the computer-readable storage medium of the embodiment of the invention, by the photographing method, the amount of cached and transmitted data can be reduced, so that the system resource occupation is reduced and the image transmission processing efficiency is improved.
In one embodiment, an electronic device is provided, which includes a memory, a processor, and a photographing program stored in the memory and executable on the processor, and when the processor executes the photographing program, the photographing method is implemented.
It should be noted that the electronic device may be a mobile phone, a tablet computer, a digital camera, or another device with a photographing function. The internal structure of the electronic device may be as shown in fig. 10: the electronic device 2000 includes a memory 2010, a processor 2020, a camera 2030 and a display 2040, and the processor 2020 may further include an image processing chip 2021 and an application processing chip 2022. The image processing chip 2021 and the application processing chip 2022 in the processor 2020 perform a series of processing on the continuous frame images collected by the camera 2030 according to the aforementioned photographing method to obtain photographed images, which are displayed on the display screen 2040 and can be stored in the memory 2010. It can be understood that the image processing chip 2021 and the application processing chip 2022 have internal memories, which store the photographing program as well as data generated during image processing, such as the cached image data. In addition, those skilled in the art will appreciate that the structure shown in fig. 10 is a block diagram of only the portion of the structure related to the present application and does not constitute a limitation on the electronic device to which the present application is applied; a specific electronic device may include more or fewer components than shown in the drawings, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a photographing apparatus, as shown in fig. 11, the photographing apparatus 3000 including: an image processing chip 3010 and an application processing chip 3020.
The image processing chip 3010 is configured to obtain a continuous frame image, perform variable parameter analysis on the continuous frame image to divide a current frame image into a plurality of image regions, and perform selective caching on each image region of the current frame image to obtain cached image data; the application processing chip 3020 is configured to receive the cached image data, perform frame selection processing on the cached image data when receiving the photographing request, obtain a selected data frame, and perform fusion splicing and image post-processing according to the selected data frame to obtain a photographed image.
In one embodiment, the image processing chip 3010 is specifically configured to: comparing and analyzing the current frame image and the cached image frame to obtain image areas with different variation parameters; and segmenting the current frame image according to the image areas with different variation parameters to obtain a plurality of image areas.
In one embodiment, the plurality of image regions includes a high variation region, a micro variation region, and a background region.
In one embodiment, the image processing chip 3010 is specifically configured to: and caching each frame of the high-variation area, caching each first preset number of frames of the micro-variation area, and caching each second preset number of frames of the background area, wherein the first preset number of frames is less than the second preset number of frames.
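A minimal sketch of this selective caching rule; the frame-index modulo scheduling is an assumption about how "every preset number of frames" is counted, and the region names mirror the three regions described above:

```python
def should_cache(region, frame_idx, micro_interval=3, bg_interval=5):
    """High variation regions are cached every frame; micro variation and
    background regions only every micro_interval / bg_interval frames
    (micro_interval < bg_interval)."""
    if region == "high":
        return True
    if region == "micro":
        return frame_idx % micro_interval == 0
    if region == "background":
        return frame_idx % bg_interval == 0
    raise ValueError(f"unknown region: {region}")
```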
In one embodiment, the image processing chip 3010 is further configured to send the buffered image data to the application processing chip 3020 when there is an update in the buffered image data.
In one embodiment, the selected data frame is the image data corresponding to a high variation region, and the application processing chip 3020 is specifically configured to: according to the selected data frame, acquire from the cached image data the image data corresponding to the micro variation region and the background region that match the selected data frame; and perform image fusion splicing according to the image data corresponding to the high variation region and the image data corresponding to the matched micro variation region and background region.
In one embodiment, the image processing chip 3010 is further configured to, upon receiving a photographing request sent by the application processing chip 3020, obtain the current frame image, perform cropping processing on the current frame image to obtain a preview frame image, and send the preview frame image to the application processing chip 3020, so that the application processing chip 3020 performs correction processing on the spliced image according to the preview frame image.
In one embodiment, the application processing chip 3020 is specifically configured to: performing fusion splicing according to the selected data frames to obtain a spliced image; post-processing the spliced image to obtain a first post-processed image; and performing format conversion processing on the first post-processed image and inserting label information to generate a photographed image.
In another embodiment, the application processing chip 3020 is specifically configured to: carrying out first post-processing on the selected data frame to obtain a second post-processed image; performing fusion splicing according to the second post-processing image to obtain a spliced image; performing second post-processing on the spliced image to obtain a third post-processed image; and performing format conversion processing on the third post-processed image and inserting label information to generate a photographed image.
In yet another embodiment, the application processing chip 3020 is specifically configured to: carrying out first post-processing on the selected data frame to obtain a second post-processed image; performing second post-processing on the second post-processed image to obtain a fourth post-processed image; performing fusion splicing according to the fourth post-processing image to obtain a spliced image; and carrying out format conversion processing on the spliced image and inserting label information to generate a photographed image.
It should be noted that, for the specific limitation of the photographing apparatus, reference may be made to the above limitation on the photographing method, and details are not described herein again. The modules in the photographing device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An image processing method applied to an image processing chip is characterized by comprising the following steps:
acquiring continuous frame images, and performing variable parameter analysis on the continuous frame images to divide a current frame image into a plurality of image areas;
selectively caching each image area of the current frame image to obtain cached image data;
and when a photographing request is received, sending the cached image data to an application processing chip.
2. The image processing method of claim 1, wherein performing a varying parameter analysis on the successive frame images to segment the current frame image into a plurality of image regions comprises:
comparing and analyzing the current frame image and the cached image frame to obtain image areas with different variation parameters;
and segmenting the current frame image according to the image areas with different variation parameters to obtain the plurality of image areas.
3. The image processing method according to claim 1 or 2, wherein the plurality of image regions include a high variation region, a micro variation region, and a background region, and wherein selectively buffering each image region of the current frame image comprises:
and caching each frame of the high-variation area, caching every other first preset number of frames of the micro-variation area, and caching every other second preset number of frames of the background area, wherein the first preset number of frames is smaller than the second preset number of frames.
4. The image processing method of claim 1, wherein when the photographing request is received, the current frame image is further acquired, processed to obtain a preview frame image, and sent to the application processing chip.
5. An image processing chip, comprising:
a memory;
the neural network processor is used for acquiring continuous frame images, performing variable parameter analysis on the continuous frame images to divide a current frame image into a plurality of image areas, and controlling the memory to selectively cache each image area of the current frame image to obtain cached image data;
a first data transmission interface for transmitting the cached image data;
the memory is used for storing the cache image data and sending the cache image data to an application processing chip through the first data transmission interface when a photographing request is received.
6. An image processing method applied to an application processing chip is characterized by comprising the following steps:
receiving cache image data, wherein the cache image data is obtained by performing variable parameter analysis on the obtained continuous frame images by an image processing chip so as to divide the current frame image into a plurality of image areas and selectively caching each image area of the current frame image;
when a photographing request is received, performing frame selection processing on the cached image data to obtain a selected data frame;
and performing fusion splicing and image post-processing according to the selected data frame to obtain a photographed image.
7. The image processing method of claim 6, wherein the plurality of image regions comprise a high variation region, a micro variation region and a background region, and wherein performing fusion splicing according to the selected data frame comprises:
according to the selected data frame, acquiring image data corresponding to a micro-variation area and a background area which are matched with the selected data frame from the cached image data;
and carrying out image fusion splicing according to the image data corresponding to the high variation region, the micro variation region matched with the selected data frame and the image data corresponding to the background region.
8. The image processing method according to claim 6 or 7, wherein when receiving a photographing request, the image processing method further receives a preview frame image sent by the image processing chip, and performs correction processing on the stitched image according to the preview frame image.
9. An application processing chip, comprising:
a second data transmission interface configured to receive cached image data, wherein the cached image data is obtained by an image processing chip performing variable-parameter analysis on acquired consecutive frame images to divide a current frame image into a plurality of image regions and selectively caching each image region of the current frame image; and
a central processing unit configured to, when a photographing request is received, perform frame selection processing on the cached image data to obtain a selected data frame, and perform fusion stitching and image post-processing according to the selected data frame to obtain a photographed image.
10. An electronic device, comprising:
the image processing chip of claim 5; and/or
The application processing chip of claim 9.
CN202110418292.0A 2021-04-19 2021-04-19 Image processing method, image processing chip, application processing chip and electronic equipment Active CN113132637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110418292.0A CN113132637B (en) 2021-04-19 2021-04-19 Image processing method, image processing chip, application processing chip and electronic equipment

Publications (2)

Publication Number Publication Date
CN113132637A true CN113132637A (en) 2021-07-16
CN113132637B CN113132637B (en) 2023-04-07

Family

ID=76777488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110418292.0A Active CN113132637B (en) 2021-04-19 2021-04-19 Image processing method, image processing chip, application processing chip and electronic equipment

Country Status (1)

Country Link
CN (1) CN113132637B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979481A (en) * 2022-05-23 2022-08-30 深圳市海创云科技有限公司 5G ultra-high-definition video monitoring system and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105208292A (en) * 2015-11-16 2015-12-30 联想(北京)有限公司 Photographic processing method and system
US20180213150A1 (en) * 2017-01-24 2018-07-26 Qualcomm Incorporated Adaptive buffering rate technology for zero shutter lag (zsl) camera-inclusive devices
CN108989832A (en) * 2017-05-31 2018-12-11 腾讯科技(深圳)有限公司 A kind of image processing method and its equipment, storage medium, terminal
CN109089053A (en) * 2018-10-23 2018-12-25 Oppo广东移动通信有限公司 Image transfer method, device, electronic equipment and storage medium
CN111275846A (en) * 2018-12-04 2020-06-12 比亚迪股份有限公司 Data record generation method and device, electronic equipment and storage medium
CN111385475A (en) * 2020-03-11 2020-07-07 Oppo广东移动通信有限公司 Image acquisition method, photographing device, electronic equipment and readable storage medium


Similar Documents

Publication Publication Date Title
US11006046B2 (en) Image processing method and mobile terminal
US10334153B2 (en) Image preview method, apparatus and terminal
CN107155067B (en) Camera control method and device, terminal and storage medium
US11431915B2 (en) Image acquisition method, electronic device, and non-transitory computer readable storage medium
CN112073648B (en) Video multi-picture synthesis method and device, computer equipment and storage medium
EP3062509A1 (en) Terminal, image processing method, and image acquisition method
CN108989832B (en) Image data processing method and equipment, storage medium and terminal thereof
CN110149554B (en) Video image processing method and device, electronic equipment and storage medium
CN110602412B (en) IPC, image processing device, image processing system and method
CN110213498B (en) Image generation method and device, electronic equipment and computer readable storage medium
CN110650291A (en) Target focus tracking method and device, electronic equipment and computer readable storage medium
CN113132637B (en) Image processing method, image processing chip, application processing chip and electronic equipment
CN110881108B (en) Image processing method and image processing apparatus
EP3876534A1 (en) Coding data processing method and apparatus, computer device, and storage medium
CN110958399A (en) High dynamic range image HDR realization method and related product
WO2020168807A1 (en) Image brightness adjusting method and apparatus, computer device, and storage medium
CN111445487A (en) Image segmentation method and device, computer equipment and storage medium
CN110049247B (en) Image optimization method and device, electronic equipment and readable storage medium
CN111462021A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113422967B (en) Screen projection display control method and device, terminal equipment and storage medium
US10992901B2 (en) Method, apparatus, device and storage medium for controlling video playback speed
CN110475044B (en) Image transmission method and device, electronic equipment and computer readable storage medium
CN110401845B (en) First screen playing method and device, computer equipment and storage medium
CN114339306A (en) Live video image processing method and device and server
CN114143471A (en) Image processing method, system, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant