CN115457098A - Image processing method, device, system, electronic equipment and readable storage medium - Google Patents

Image processing method, device, system, electronic equipment and readable storage medium

Info

Publication number
CN115457098A
CN115457098A
Authority
CN
China
Prior art keywords
frame image
image
processor
metadata
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211078415.1A
Other languages
Chinese (zh)
Inventor
樊明兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weiguang Co ltd
Original Assignee
Zeku Technology Shanghai Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zeku Technology Shanghai Corp Ltd filed Critical Zeku Technology Shanghai Corp Ltd
Priority to CN202211078415.1A priority Critical patent/CN115457098A/en
Publication of CN115457098A publication Critical patent/CN115457098A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image processing method, apparatus, system, electronic device, storage medium and computer program product. The method comprises the following steps: acquiring image data information of a first frame image, the first frame image having been received by a first processor; performing depth information processing based on the image data information of the first frame image to obtain depth information of the first frame image; and, after the first processor starts receiving a second frame image and before the first processor finishes receiving the second frame image, updating the depth information of the first frame image into the metadata of the second frame image. The metadata of the second frame image is sent by the first processor to a receiving end, to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image. The method can improve the accuracy of the obtained depth information.

Description

Image processing method, device, system, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method, an apparatus, a system, an electronic device, and a computer-readable storage medium.
Background
With the development of image processing technology, depth images are used in various fields, such as image segmentation, edge detection, image registration, three-dimensional reconstruction and image blurring. A depth image directly reflects the geometric shape of the visible surface of a scene, and can therefore improve the image processing effect. For example, blurring an image using the depth data in a depth image makes the blurred result more natural and improves the blurring effect. However, the image data output by the photosensitive element can easily become mismatched with the image data actually used for depth estimation, so the accuracy of the depth information obtained by depth estimation is low.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an image processing system, electronic equipment and a computer readable storage medium, and can improve the accuracy of obtained depth information.
An image processing method applied to a second processor, the method comprising:
acquiring image data information of a first frame image; the first frame image is received and obtained by a first processor;
processing depth information based on image data information of the first frame image to obtain depth information of the first frame image;
updating the depth information of the first frame image into metadata of a second frame image after the first processor starts receiving the second frame image and before the first processor finishes receiving the second frame image;
and the metadata of the second frame image is used for being sent to a receiving end through the first processor, so as to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
An image processing apparatus, the apparatus comprising:
the image data information acquisition module is used for acquiring the image data information of the first frame image; the first frame image is received and obtained by a first processor;
the depth information processing module is used for carrying out depth information processing on the basis of the image data information of the first frame image to obtain the depth information of the first frame image;
a depth information updating module, configured to update the depth information of the first frame image into metadata of a second frame image after the first processor starts receiving the second frame image and before the first processor finishes receiving the second frame image;
and the metadata of the second frame image is used for being sent to a receiving end through the first processor, so as to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
An electronic device comprising a memory storing a computer program and a processor implementing the steps of the above image processing method when executing the computer program.
A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above image processing method.
A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the above image processing method.
The second processor acquires the image data information of the first frame image received by the first processor and performs depth information processing based on it. After the first processor starts receiving the second frame image, and before the first processor finishes receiving the second frame image, the second processor updates the obtained depth information of the first frame image into the metadata of the second frame image; after the metadata of the second frame image is sent to the receiving end by the first processor, it instructs the receiving end to perform processing according to the depth information of the first frame image in that metadata and the corresponding first frame image. In this image processing process, the image data information on which the second processor performs depth information processing always belongs to an image whose reception has already finished, which ensures that the image data information of the same image is consistent across different processing stages and thus improves the accuracy of the obtained depth information.
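The frame-offset scheme described above, in which the depth computed from a completed frame travels in the metadata of the next frame being received, can be sketched as a minimal simulation. This is illustrative only, not the patented implementation: the names `Frame`, `compute_depth` and `pipeline`, and the placeholder depth computation, are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    index: int
    pixels: list                              # image data received by the first processor
    metadata: dict = field(default_factory=dict)

def compute_depth(pixels):
    # Placeholder for the second processor's depth estimation.
    return [p * 0.5 for p in pixels]

def pipeline(frames):
    """Simulate the claimed flow: the depth of the previously completed
    frame is written into the metadata of the frame currently being
    received, before its reception finishes."""
    out = []
    prev_depth = None
    prev_index = None
    for frame in frames:
        # Reception of `frame` has started; before it ends, update its
        # metadata with the depth of the previous, fully received frame.
        if prev_depth is not None:
            frame.metadata["depth_info"] = {"frame_index": prev_index,
                                            "depth": prev_depth}
        out.append(frame)
        # Reception finished: the second processor can now run depth
        # estimation on consistent, complete image data.
        prev_depth = compute_depth(frame.pixels)
        prev_index = frame.index
    return out

frames = [Frame(0, [2, 4]), Frame(1, [6, 8]), Frame(2, [1, 3])]
result = pipeline(frames)
```

Because depth estimation only ever runs on a frame whose reception has finished, the image data seen by the depth step cannot change mid-computation, which is the consistency property the summary above emphasizes.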
A method of image processing, the method comprising:
receiving, by a first processor, a first frame image;
acquiring, by a second processor, image data information of the first frame image; performing, by the second processor, depth information processing based on the image data information of the first frame image; and, after the first processor starts receiving a second frame image and before the first processor finishes receiving the second frame image, updating the obtained depth information of the first frame image into metadata of the second frame image;
and sending the metadata of the second frame image to the receiving end through the first processor to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
An image processing apparatus, the apparatus comprising:
the first frame image receiving module is used for receiving a first frame image through the first processor;
the depth information processing module is used for acquiring, through the second processor, the image data information of the first frame image, performing depth information processing based on the image data information of the first frame image through the second processor, and, after the first processor starts receiving the second frame image and before the first processor finishes receiving it, updating the obtained depth information of the first frame image into the metadata of the second frame image;
and the metadata sending module is used for sending the metadata of the second frame image to the receiving end through the first processor, so as to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
An electronic device comprising a memory storing a computer program and a processor implementing the steps of the above image processing method when executing the computer program.
A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above image processing method.
A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the above image processing method.
In the image processing method, apparatus, electronic device, storage medium and computer program product, the second processor acquires the image data information of the first frame image received by the first processor and performs depth information processing based on it. After the first processor starts receiving the second frame image, and before the first processor finishes receiving the second frame image, the second processor updates the obtained depth information of the first frame image into the metadata of the second frame image; the metadata of the second frame image is then sent to the receiving end by the first processor, instructing the receiving end to perform processing according to the depth information of the first frame image in that metadata and the corresponding first frame image. In this image processing process, the image data information on which the second processor performs depth information processing always belongs to an image whose reception has already finished, which ensures that the image data information of the same image is consistent across different processing stages, improves the accuracy of the obtained depth information, and thereby improves the image processing effect when processing is performed based on the depth information.
An image processing method applied to a first processor, the method comprising:
receiving a first frame image and sending the first frame image to a receiving end;
based on receiving the second frame image, sending the metadata of the second frame image to the receiving end to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image;
and the second processor updates the obtained depth information of the first frame image into the metadata of the second frame image after the second frame image starts to be received and before reception of the second frame image is finished.
An image processing apparatus, the apparatus comprising:
the first frame image receiving module is used for receiving the first frame image and sending the first frame image to a receiving end;
the second frame image metadata sending module is used for sending the metadata of the second frame image to the receiving end based on the received second frame image so as to indicate the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image;
and the second processor is used for updating the obtained depth information of the first frame image into the metadata of the second frame image after the second frame image starts to be received and before reception of the second frame image is finished.
An electronic device comprising a memory storing a computer program and a processor implementing the steps of the above image processing method when executing the computer program.
A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above image processing method.
A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the above image processing method.
According to the image processing method, apparatus, electronic device, storage medium and computer program product, the first processor receives the first frame image and sends it to the receiving end, and, upon receiving the second frame image, sends the metadata of the second frame image to the receiving end so as to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image. The second processor updates the obtained depth information of the first frame image into the metadata of the second frame image after the second frame image starts to be received and before its reception is finished. In this image processing process, the image data information on which the second processor performs depth information processing always belongs to an image that the first processor has finished receiving, which ensures that the image data information of the same image is consistent across different processing stages, improves the accuracy of the obtained depth information, and thereby improves the image processing effect when processing is performed based on the depth information.
An image processing system, the system comprising:
a first processor for receiving a first frame image; based on receiving the second frame image, sending the metadata of the second frame image to the receiving end to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image;
a second processor for acquiring image data information of the first frame image; performing depth information processing based on the image data information of the first frame image to obtain depth information of the first frame image; and updating the depth information of the first frame image into the metadata of the second frame image after the first processor starts receiving the second frame image and before the first processor finishes receiving the second frame image;
the scheduling processor is used for controlling the first processor to receive the first frame image and the second frame image and controlling the first processor to send the metadata of the second frame image to the receiving end;
and the scheduling processor is also used for controlling the second processor to perform depth information processing based on the image data information of the first frame image and updating the depth information of the first frame image into the metadata of the second frame image.
In the image processing system, the scheduling processor controls the first processor to receive the first frame image and send it to the receiving end, and, upon the first processor receiving the second frame image, to send the metadata of the second frame image to the receiving end so as to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image. The scheduling processor controls the second processor to acquire the image data information of the first frame image received by the first processor, perform depth information processing based on it, and update the obtained depth information of the first frame image into the metadata of the second frame image before the first processor finishes receiving the second frame image. In this image processing process, the image data information on which the second processor performs depth information processing always belongs to an image that the first processor has finished receiving, which ensures that the image data information of the same image is consistent across different processing stages, improves the accuracy of the obtained depth information, and thereby improves the image processing effect when processing is performed based on the depth information.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a diagram of an exemplary environment in which a method for image processing is implemented;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a flow diagram of depth information processing steps in one embodiment;
FIG. 4 is a flow chart of a method of image processing in another embodiment;
FIG. 5 is a timing diagram of an image processing method in one embodiment;
FIG. 6 is a flowchart of an image processing method in yet another embodiment;
FIG. 7 is a schematic illustration of a blurring process in one embodiment;
FIG. 8 is a diagram illustrating a scheduling process of an image processing method according to an embodiment;
FIG. 9 is a timing chart of an image processing method in another embodiment;
FIG. 10 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 11 is a block diagram showing a configuration of an image processing apparatus according to another embodiment;
FIG. 12 is a block diagram showing a configuration of an image processing apparatus in another embodiment;
FIG. 13 is a block diagram showing a schematic configuration of an image processing system according to an embodiment;
FIG. 14 is a diagram illustrating the internal architecture of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image processing method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. The scheduling processor 104 communicates with the first processor 102 and the second processor 106, respectively, for example via a network. In addition, the first processor 102 and the second processor 106 may also communicate with each other, such as via a network. The data storage system may store data that the respective processor needs to process; it may be integrated on the respective processor, or placed on the cloud or another network server. In application, image processing may be implemented based on communication between the first processor 102 and the second processor 106. Specifically, the second processor 106 acquires the image data information of the first frame image received by the first processor 102 and performs depth information processing based on it. After the first processor 102 starts receiving the second frame image, and before the first processor 102 finishes receiving it, the second processor 106 updates the obtained depth information of the first frame image into the metadata of the second frame image. After the metadata of the second frame image is transmitted by the first processor 102 to the receiving end, it instructs the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
When the image processing method is implemented based on the first processor 102, the first processor 102 receives the first frame image and transmits it to the receiving end. Upon receiving the second frame image, the first processor 102 transmits the metadata of the second frame image to the receiving end, to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image. The depth information of the first frame image is obtained by the second processor 106 through depth information processing based on the image data information of the first frame image, and the second processor 106 updates the obtained depth information into the metadata of the second frame image after the first processor 102 starts receiving the second frame image and before that reception ends.
When the image processing method is implemented based on the scheduling processor 104, the scheduling processor 104 acquires, through the second processor 106, the image data information of the first frame image received by the first processor 102, and the second processor 106 performs depth information processing based on that image data information. When the second frame image is received through the first processor 102, and before the first processor 102 finishes receiving it, the second processor 106 updates the obtained depth information of the first frame image into the metadata of the second frame image. The scheduling processor 104 then transmits, through the first processor 102, the metadata of the second frame image to the receiving end, to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
The first processor 102, the scheduling processor 104, and the second processor 106 may be processors in an electronic device, the electronic device may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart car-mounted devices, and the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like. The first processor 102, the scheduling processor 104 and the second processor 106 may also be implemented by servers, and the servers may be implemented by independent servers or a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, an image processing method is provided, and the method is described by taking the second processor in fig. 1 as an example, where the second processor may be a processor in an electronic device or a server. In this embodiment, the method includes the steps of:
step 202, acquiring image data information of a first frame image; the first frame image is received by the first processor.
The first frame image is received by the first processor, and may specifically be received by the first processor from a photosensitive element of the camera. When the camera shoots, imaging is realized through the photosensitive element, and a captured optical signal is converted into an electric signal capable of being processed. When the camera shoots, a photosensitive element of the camera sends a shot image to the first processor, and the shot image is received by the first processor and further processed, such as linear correction, noise removal, dead pixel removal, interpolation, white balance, automatic exposure control and the like, so as to improve the imaging quality. The image shot by the camera is sent to the first processor frame by the photosensitive element, and the first processor receives each frame of image. The first frame image is an image currently received by the first processor from a photosensitive element of the camera. The image data information refers to information related to the first frame image, and may specifically include attribute information, storage location information, and the like of the first frame image, and may further include specific image data of the first frame image. The attribute information may be description information for the first frame image, and may include, but is not limited to, a width, a height, or a bit width of the first frame image.
Specifically, when the first frame image needs to be subjected to depth information processing, such as depth estimation, the second processor acquires the image data information of the first frame image. Where the image data information includes attribute information and storage location information of the first frame image, the second processor may acquire the image data of the first frame image based on the storage location information, and perform depth estimation based on that image data and the attribute information to obtain the depth information of the first frame image. The image data information may instead directly include the image data of the first frame image, in which case the second processor performs depth information processing directly on the image data contained in the image data information. In a specific implementation, the second processor may be triggered to acquire the image data information of the first frame image when the first processor finishes receiving the first frame image; that is, once reception of the first frame image is complete, the image data information of the first frame image is processed. For example, upon determining that the first processor has finished receiving the first frame image, the second processor may acquire the image data information of the first frame image from the first processor, or may acquire it from memory.
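The two forms of image data information described above, attribute information plus a storage location that the second processor dereferences to obtain the actual pixels, can be illustrated with a small sketch. The `BUFFER_POOL` dictionary stands in for memory shared between the two processors; its name and the key layout are assumptions, not part of the patent.

```python
BUFFER_POOL = {}  # stand-in for memory shared between the two processors

def first_processor_store(frame_id, pixels, width, height, bit_width):
    """The first processor finishes receiving a frame and publishes its
    image data information: attribute info plus a storage location."""
    addr = f"buf:{frame_id}"            # hypothetical storage location
    BUFFER_POOL[addr] = pixels
    return {"width": width, "height": height,
            "bit_width": bit_width, "storage_location": addr}

def second_processor_fetch(image_data_info):
    """The second processor resolves the storage location to obtain the
    pixel data on which depth estimation will run."""
    return BUFFER_POOL[image_data_info["storage_location"]]

info = first_processor_store("f0", [10, 20, 30, 40],
                             width=2, height=2, bit_width=8)
pixels = second_processor_fetch(info)
```

Passing a storage location instead of the pixels themselves keeps the per-frame handoff small, while the attribute info (width, height, bit width) lets the second processor interpret the buffer correctly.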
And 204, performing depth information processing based on the image data information of the first frame image to obtain the depth information of the first frame image.
The depth information may include depth data of an image, and specifically may include a depth image. A depth image is an image in which the distance from the image acquirer to each point in the scene, that is, the depth, is used as the pixel value; it directly reflects the geometric shape of the visible surface of the scene. A depth image can be converted into point cloud data through coordinate conversion, and regular point cloud data with the necessary information can also be inversely converted into depth image data. In an image frame provided by a depth data stream, each pixel represents the distance, at that particular coordinate in the depth sensor's field of view, from the closest object to the camera plane. In a specific application, the depth information may further include description information of the depth data, that is, attribute information of the depth data, such as the width, height and bit width of the depth image. Based on the depth information of the first frame image, various processes such as image segmentation, edge detection, image registration, three-dimensional reconstruction and image blurring may be performed on the first frame image.
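The coordinate conversion between a depth image and point cloud data mentioned above is commonly done by pinhole back-projection with camera intrinsics fx, fy, cx, cy, where X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy. The patent does not specify the conversion, so this model and the function name below are assumptions for illustration.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a sparse depth image into 3D points.

    depth: dict mapping (u, v) pixel coordinates to depth Z (assumed
    metric). fx, fy are focal lengths in pixels; cx, cy the principal
    point. Returns a list of (X, Y, Z) camera-frame points."""
    points = []
    for (u, v), z in depth.items():
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return points

# A point at the principal point back-projects onto the optical axis;
# a point 320 px to its left lands 1.28 m to the side at 2 m depth.
pts = depth_to_points({(320, 240): 2.0, (0, 240): 2.0},
                      fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```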
Specifically, the second processor performs depth information processing on the image data information of the first frame image, and specifically may perform depth estimation based on the image data information of the first frame image to obtain the depth information of the first frame image. In a specific implementation, the second processor may perform depth calculation on the image data information of the first frame image based on a deep learning algorithm to obtain the depth information of the first frame image, for example, calculate the depth data of the first frame image.
Step 206, after the first processor starts receiving the second frame image and before the first processor finishes receiving the second frame image, updating the depth information of the first frame image into the metadata of the second frame image; and the metadata of the second frame image is used for being sent to the receiving end through the first processor, so as to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
The second frame image is an image which is received next and adjacently after the first frame image is received, namely the second frame image is received when the first frame image is received after being received and triggered again. Metadata is also called intermediate data and relay data, and is data describing data, mainly information describing data attributes. The metadata may be used to support functions such as indicating storage locations, historical data, resource lookups, file records, and the like. Metadata of an image is data describing the image, and may be attribute information of the image. Metadata of an Image may include types of EXIF (Exchangeable Image File format), IPTC (International Press Telecommunications Council), and XMP (Extensible Metadata Platform standard). The EXIF is usually added automatically by a digital camera when taking a picture, such as information of camera model, lens, exposure, picture size, etc.; the IPTC can comprise information such as picture titles, keywords, descriptions, authors, copyright and the like; XMP is a standard for metadata storage and management. Metadata for the image may be generated by the first processor as the image is received. The receiving end is used for processing the image, and for example, various processing such as image segmentation, image blurring, three-dimensional reconstruction and the like can be performed.
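A hedged sketch of writing depth information into per-frame metadata as described above: the key-value layout loosely mimics the EXIF/XMP style mentioned in the text, but every field name here is illustrative rather than taken from the patent or from any metadata standard.

```python
def make_metadata(frame_index, camera_model="example-cam"):
    # Metadata generated by the first processor as the frame is received;
    # camera_model mimics an EXIF-style entry and is a placeholder value.
    return {"frame_index": frame_index, "camera_model": camera_model}

def update_depth_into_metadata(metadata, depth_width, depth_height,
                               depth_bit_width, depth_data):
    # The second processor writes the depth data together with its
    # attribute information (width, height, bit width) into a dedicated
    # slot of the frame's metadata.
    metadata["depth_info"] = {"width": depth_width, "height": depth_height,
                              "bit_width": depth_bit_width,
                              "data": depth_data}
    return metadata

# Depth of the first frame (a 4x4, 16-bit map here) updated into the
# metadata of the second frame (index 1).
meta = update_depth_into_metadata(make_metadata(1), 4, 4, 16, [0] * 16)
```

Carrying the attribute information alongside the raw depth data lets the receiving end interpret the depth buffer without any side channel, which is why the text includes width, height and bit width in the depth information.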
Specifically, after the first processor finishes receiving the first frame image, it may continue to receive the next frame image from the photosensitive element, that is, the second frame image. After the photosensitive element outputs one frame of image data, it needs a vertical blanking period to prepare for the next frame: after the scanning point of the photosensitive element finishes scanning one frame, it must return from the lower right corner of the image to the upper left corner to begin scanning the new frame. The vertical blanking period is thus the time interval between the end of reading out one frame and the start of reading out the next frame by the photosensitive element. During this interval, the photosensitive element outputs no image data and the first processor receives none. When the photosensitive element outputs the second frame image after the vertical blanking period, the first processor correspondingly starts to receive the second frame image.
The second processor updates the depth information of the first frame image into the metadata of the second frame image upon determining that the first processor is receiving the second frame image. Specifically, the second processor may determine the storage location of the metadata of the second frame image and write the depth information of the first frame image into that storage location. This update must be completed before the first processor finishes receiving the second frame image, so that the depth information of the first frame image can be sent out together with the metadata of the second frame image when the first processor sends that metadata to the receiving end, thereby ensuring the processing efficiency of the metadata of the second frame image.
The metadata of the second frame image may be sent to the receiving end by the first processor, so that after the receiving end obtains the metadata of the second frame image, the receiving end obtains the depth information of the first frame image from the metadata of the second frame image, and performs processing, such as image segmentation and image blurring, according to the depth information of the first frame image and the corresponding first frame image. The first frame image may also be sent to the receiving end by the first processor, and specifically, after the first frame image is received by the first processor, the received first frame image may be sent to the receiving end.
In a specific application, when the first processor finishes receiving the first frame image, the second processor, in response to the end of that reception, obtains the image data information of the first frame image and performs depth estimation based on it to obtain the depth information of the first frame image; this depth information may specifically include the depth data of the first frame image and may further include attribute data of that depth data. When the first processor starts receiving the second frame image, the second processor, in response, updates the depth information of the first frame image into the metadata of the second frame image, completing this update before the first processor finishes receiving the second frame image. The first processor may then send the metadata of the second frame image to the receiving end, which may extract the depth information of the first frame image from that metadata and perform processing, such as three-dimensional reconstruction and blurring, according to the depth information of the first frame image and the corresponding first frame image.
In this specific application, the depth information processing performed by the second processor for the first frame image, including both computing the depth information of the first frame image and updating the computed depth information into the metadata of the second frame image, takes place entirely between the moment the first processor finishes receiving the first frame image and the moment it finishes receiving the second frame image. That is, the first processor has already finished receiving the first frame image and no longer changes its image data information, which ensures that the image data information processed by the second processor matches the first frame image received by the first processor, improving the accuracy of the obtained depth information.
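The timing relationship described above can be sketched as an event-driven flow: the end-of-frame (EOF) interrupt of frame N finalizes its data and triggers depth computation, and the start-of-frame (SOF) interrupt of frame N+1 opens the window in which that depth is folded into frame N+1's metadata. All class and field names below are invented for illustration; this is not the patent's API.

```python
# Sketch of the pipeline timing: depth of frame N is computed after its EOF
# interrupt and written into frame N+1's metadata between that frame's SOF
# and EOF interrupts (names hypothetical).

class SecondProcessor:
    def __init__(self):
        self.pending_depth = None  # depth awaiting attachment to the next frame

    def on_end_of_frame(self, frame_id, image_info):
        # EOF of frame N: its data is final, so compute the depth now.
        self.pending_depth = {"source_frame": frame_id, "depth": image_info}

    def on_start_of_frame(self, next_frame_metadata):
        # SOF of frame N+1: update its metadata before its EOF arrives.
        if self.pending_depth is not None:
            next_frame_metadata["depth_info"] = self.pending_depth
            self.pending_depth = None

second = SecondProcessor()
second.on_end_of_frame(frame_id=1, image_info="frame1-depth-data")
meta2 = {"frame_id": 2}          # metadata of the second frame image
second.on_start_of_frame(meta2)  # depth of frame 1 lands in frame 2's metadata
```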
In the image processing method, the second processor acquires the image data information of the first frame image received by the first processor and performs depth information processing based on it; then, based on the first processor receiving the second frame image and before that reception ends, the second processor updates the obtained depth information of the first frame image into the metadata of the second frame image; after the metadata of the second frame image is sent to the receiving end through the first processor, it instructs the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image. Throughout this process, the image data information on which the second processor performs depth information processing always belongs to an image whose reception has already completed, which guarantees the consistency of the image data information of the same image across different processing stages and improves the accuracy of the obtained depth information.
In one embodiment, acquiring the image data information of the first frame image comprises: acquiring the image data information of the first frame image based on an end-of-frame interrupt event triggered by the first processor finishing reception of the first frame image.
The end-of-frame interrupt event is an interrupt event raised when the first processor finishes receiving an image. After the first processor has received all the image data of the first frame image, the photosensitive element of the camera must go through a vertical blanking period before it can produce the second frame image, and the first processor generates the end-of-frame interrupt event of the first frame image, thereby triggering the second processor to process the depth information of the first frame image.
Specifically, when the first processor finishes receiving the first frame image, an end-of-frame interrupt event is triggered, and the second processor acquires the image data information of the first frame image in response. In one specific application, when the end-of-frame interrupt event is triggered, the first processor may directly send a trigger signal to the second processor, causing the second processor to acquire the image data information of the first frame image for depth information processing. In another application, when the end-of-frame interrupt event is triggered, the scheduling processor may, in response, send the trigger signal to the second processor instead. In a specific implementation, the trigger signal sent by the first processor or the scheduling processor may carry the image data information of the first frame image itself, or may carry the storage location information of that image data information, so that the second processor obtains the image data information according to the storage location information.
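The two trigger-signal variants described above (image data information carried inline versus a storage location to fetch it from) can be sketched as below. The signal fields and the `FRAME_STORE` lookup are assumptions for illustration only.

```python
# Hedged sketch of the two EOF trigger-signal variants: the signal either
# carries the image data information directly, or carries only its storage
# location from which the second processor fetches it (names hypothetical).

FRAME_STORE = {  # stand-in for shared memory holding frame descriptors
    "buf0": {"width": 4, "height": 2, "pixels": [1, 2, 3, 4, 5, 6, 7, 8]},
}

def resolve_trigger(signal):
    """Return the image data information referenced by an EOF trigger signal."""
    if "image_info" in signal:                  # variant 1: carried inline
        return signal["image_info"]
    return FRAME_STORE[signal["location"]]      # variant 2: fetch by location

inline = resolve_trigger({"image_info": {"width": 4, "height": 2}})
by_ref = resolve_trigger({"location": "buf0"})
```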
In this embodiment, because the second processor acquires the image data information of the first frame image for depth information processing only after the end-of-frame interrupt event triggered by the first processor finishing reception of the first frame image, it is guaranteed that the first frame image has been fully received and its image data will no longer change. This ensures that the image data information on which the second processor performs depth information processing matches the first frame image received by the first processor, which helps improve the accuracy of the obtained depth information.
In one embodiment, performing depth information processing based on the image data information of the first frame image to obtain the depth information of the first frame image comprises: performing depth estimation based on the image data information of the first frame image to obtain the depth information of the first frame image before the first processor triggers the start-of-frame interrupt event for receiving the second frame image.
The start-of-frame interrupt event is an interrupt event raised when the first processor starts to receive an image. After the photosensitive element of the camera has gone through the vertical blanking period and prepared the second frame image, it outputs the image data of the second frame image to the first processor, and the first processor triggers the start-of-frame interrupt event when it begins receiving the second frame image.
Specifically, the second processor performs depth estimation based on the image data information of the first frame image; in particular, it may perform depth estimation through a pre-trained artificial neural network model to obtain the depth information of the first frame image. This depth estimation is completed before the first processor triggers the start-of-frame interrupt event for receiving the second frame image. That is, before the first processor starts receiving the second frame image, the second processor has completed the depth estimation for the first frame image and obtained its depth information, so that the first processor and the second processor can work on different frame images, isolating the processing of the two processors from each other.
In this embodiment, before the first processor starts receiving the second frame image, that is, before the first processor triggers the start-of-frame interrupt event, the second processor finishes processing the depth information of the first frame image. This allows the first processor and the second processor to work on different frame images, isolating their processing from each other, and ensures that the image data information on which the second processor performs depth information processing matches the first frame image received by the first processor, which helps improve the accuracy of the obtained depth information.
In one embodiment, updating the depth information of the first frame image into the metadata of the second frame image based on the first processor receiving the second frame image and before the first processor finishes receiving the second frame image comprises: based on the first processor triggering the start-of-frame interrupt event for receiving the second frame image, updating the depth information of the first frame image into the metadata of the second frame image before the first processor triggers the end-of-frame interrupt event for receiving the second frame image.
The start-of-frame interrupt event is the interrupt event raised when the first processor starts receiving an image, and the end-of-frame interrupt event is the interrupt event raised when the first processor finishes receiving an image.
Specifically, when the first processor triggers the start-of-frame interrupt event for receiving the second frame image, indicating that the first processor has started receiving the second frame image, the second processor updates the depth information of the first frame image into the metadata of the second frame image, completing this update before the first processor triggers the end-of-frame interrupt event for the second frame image, that is, before the first processor finishes receiving the second frame image. In this way, the depth information of the first frame image can be sent out together with the metadata of the second frame image when the first processor sends that metadata to the receiving end, ensuring the processing efficiency of the metadata of the second frame image.
In this embodiment, the start-of-frame interrupt event triggered by the first processor for receiving the second frame image prompts the second processor to update the depth information of the first frame image into the metadata of the second frame image, and this update is completed before the first processor triggers the end-of-frame interrupt event for the second frame image. Consequently, as soon as the first processor finishes receiving the second frame image, it can send the metadata of the second frame image, carrying the depth information of the first frame image, to the receiving end, which ensures the efficiency of delivering the metadata of the second frame image to the receiving end.
In one embodiment, the first processor comprises an image processor; the second processor comprises a neural network processor.
The image processor, namely the Image Signal Processor (ISP), is used to post-process the signal output by the front-end image sensor; its main functions include linear correction, noise removal, dead pixel removal, interpolation, white balance, and automatic exposure control. A Neural-network Processing Unit (NPU) can run deep learning algorithms to perform depth information processing on an image.
Specifically, the first processor may be an image processor, so that the image processor receives the image data transmitted by the photosensitive element and processes it. The second processor may be a neural network processor, which performs depth information processing on the image received by the first processor to obtain the corresponding depth information. Because the image processor generates the start-of-frame interrupt event and the end-of-frame interrupt event while receiving the image transmitted by the photosensitive element, the control of depth information processing by the neural network processor can be simplified and the efficiency of depth information processing improved.
In this embodiment, the frame start interrupt event and the frame end interrupt event generated by the image processor in the process of receiving the image transmitted by the photosensitive element are used as trigger signals to trigger the neural network processor to perform the depth information processing, so that the control of the neural network processor on the depth information processing is simplified, and the efficiency of the depth information processing is improved.
In one embodiment, acquiring the image data information of the first frame image comprises: determining the state of the neural network processor based on the image processor finishing reception of the first frame image; and acquiring the image data information of the first frame image based on the neural network processor being in a normal working state.
Specifically, when the image processor finishes receiving the first frame image, that is, when the end-of-frame interrupt event is triggered by the image processor finishing that reception, the state of the neural network processor is checked to determine whether it can carry out depth information processing on the first frame image, for example by detecting whether the neural network processor has failed or whether its computing resources are sufficient. If the neural network processor is determined to be in a normal working state, meaning it can perform depth information processing on the first frame image, it acquires the image data information of the first frame image and processes it. In a specific application, if the neural network processor is in an abnormal state and therefore cannot process the depth information of the first frame image, troubleshooting may be prompted; once the neural network processor has been restored to a normal working state, it acquires the image data information of the first frame image for depth information processing.
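The state check described above can be sketched as a small gate in front of the depth-processing step. The state strings, function name, and fault log are all assumptions for illustration, not the patent's interface.

```python
# Illustrative sketch of the NPU state check: the frame is handed to the
# neural network processor only when it reports a normal working state;
# otherwise the fault is recorded so troubleshooting can be prompted.

def try_start_depth_processing(npu_state, frame_info, fault_log):
    """Gate depth processing on the NPU being in a normal working state."""
    if npu_state == "normal":
        return ("processing", frame_info)   # NPU acquires the frame's data info
    fault_log.append(npu_state)             # abnormal: record for troubleshooting
    return ("deferred", None)               # retry after the NPU is restored

log = []
ok = try_start_depth_processing("normal", "frame1-info", log)
bad = try_start_depth_processing("fault", "frame1-info", log)
```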
In this embodiment, based on determining that the neural network processor is in the normal operating state, the neural network processor obtains the image data information of the first frame image to perform the depth information processing, which can ensure that the neural network processor can support the depth information processing and ensure the normal operation of the depth information processing.
In one embodiment, the image data information of the first frame image includes image attribute information and data storage location information of the first frame image. As shown in fig. 3, the depth information processing step of performing depth information processing based on image data information of a first frame image to obtain depth information of the first frame image includes:
step 302, obtaining image data of the first frame image according to the data storage position information.
The image data information of the first frame image comprises image attribute information and data storage position information of the first frame image. The image attribute information of the first frame image is description information for describing the first frame image, such as the height, width or bit width of the first frame image; the data storage position information is a position where the image data of the first frame image is stored, and the specific image data of the first frame image can be acquired according to the data storage position information. The image data refers to each pixel point data in the first frame image, i.e., the first frame image is constituted by the image data.
Specifically, the image data information obtained by the second processor includes image attribute information and data storage position information of the first frame image, and the second processor acquires the image data of the first frame image from a corresponding storage position according to the data storage position information.
Step 304, performing depth estimation according to the image data and the image attribute information of the first frame image to obtain the depth information of the first frame image.
Specifically, the second processor performs depth estimation according to the image data and the image attribute information of the first frame image, for example, image features of the first frame image may be constructed according to the image data and the image attribute information of the first frame image, and the image features are input into a depth estimation model trained in advance to perform depth estimation based on the image features through the depth estimation model, and the depth information of the first frame image is output by the depth estimation model. The depth estimation model can be obtained by training a training sample image carrying a depth information label, wherein the depth information label can be a depth image corresponding to the training sample image, and the depth estimation model obtained by training can perform depth estimation according to the input image characteristics and output a corresponding depth image.
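Steps 302 and 304 can be sketched together: fetch the pixel data via its storage location, use the attribute information (here, the width) to structure it into features, and hand the features to a depth-estimation model. The store, field names, and the trivial stand-in model are all assumptions; the real model would be a trained network as described above.

```python
# Minimal sketch of steps 302-304: fetch image data by storage location,
# combine it with the image attribute information, and run a depth model
# (a trivial stand-in here; the patent uses a trained estimation model).

DATA_STORE = {"addr_1": [10, 20, 30, 40, 50, 60]}  # hypothetical pixel buffer

def build_features(image_info):
    pixels = DATA_STORE[image_info["location"]]     # step 302: fetch by location
    w = image_info["attributes"]["width"]           # use attribute information
    # Reshape the flat pixel buffer into rows using the width attribute.
    return [pixels[i:i + w] for i in range(0, len(pixels), w)]

def depth_model(features):
    # Stand-in for the pre-trained depth-estimation model.
    return [[max(row) - px for px in row] for row in features]

info = {"location": "addr_1", "attributes": {"width": 3, "height": 2}}
depth = depth_model(build_features(info))           # step 304
```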
In this embodiment, the second processor performs depth estimation through the image data and the image attribute information of the first frame image, and may perform depth estimation based on the multi-dimensional information of the first frame image, which is beneficial to improving the accuracy of the obtained depth information.
In one embodiment, updating the depth information of the first frame image into the metadata of the second frame image based on the first processor receiving the second frame image and before the first processor finishes receiving the second frame image comprises: determining the metadata of the second frame image based on the first processor receiving the second frame image; and adding the depth information of the first frame image into the metadata of the second frame image before the first processor finishes receiving the second frame image.
The metadata of the second frame image is data describing the second frame image, and may be attribute information of the second frame image. Different fields may be set in the metadata of the second frame image, and different types of attribute information may be written in the different fields. Each field in the metadata can be flexibly configured according to actual needs, such as length, position, data type, corresponding attribute information type and the like of the set field. The metadata may be obtained by the first processor through corresponding processing when the second frame image is received, for example, during the process of receiving the second frame image, the first processor analyzes the image data of the second frame image to obtain the attribute information of the second frame image, and obtains the metadata of the second frame image based on the attribute information.
Specifically, when it is determined that the first processor is receiving the second frame image, the second processor may determine the metadata of the second frame image; in particular, the second processor may determine the storage location of that metadata and query it to locate the metadata of the second frame image. The second processor then adds the obtained depth information of the first frame image into the metadata of the second frame image, completing this addition before the first processor finishes receiving the second frame image. That is, before the first processor finishes receiving the second frame image, the second processor has finished adding the depth information of the first frame image into the metadata of the second frame image, so that once reception of the second frame image completes, the first processor can promptly send the depth information of the first frame image to the receiving end along with the metadata of the second frame image for the receiving end to process.
In this embodiment, upon determining that the first processor is receiving the second frame image, the second processor determines the metadata of the second frame image and adds the depth information of the first frame image into it, completing this addition before the first processor finishes receiving the second frame image. As a result, once the first processor finishes receiving the second frame image, it can promptly send the depth information of the first frame image to the receiving end along with the metadata of the second frame image, ensuring the efficiency of processing for the first frame image.
In one embodiment, adding the depth information of the first frame image to the metadata of the second frame image before the first processor finishes receiving the second frame image comprises: determining a depth information field from metadata of the second frame image; the depth information of the first frame image is written into the depth information field before the first processor finishes receiving the second frame image.
The metadata of the second frame image includes a depth information field, which is used to store depth information, and specifically may be used to store depth information of the first frame image. The second processor may enable adding the depth information of the first frame image to the metadata of the second frame image by writing the depth information of the first frame image into a depth information field of the metadata of the second frame image.
Specifically, the second processor determines the depth information field in the metadata of the second frame image; in particular, it may locate the depth information field according to the identifier of each field in the metadata. The second processor then writes the depth information of the first frame image into the depth information field, completing this write before the first processor finishes receiving the second frame image.
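The field-level write described above can be sketched as locating the depth information field by its identifier among the metadata fields and writing the first frame's depth into it. The field identifiers and layout are hypothetical; as noted earlier, each field's length, position, and type may be configured as needed.

```python
# Sketch of locating the depth information field by its identifier and
# writing the first frame's depth information into it (identifiers invented).

def write_depth_field(metadata_fields, depth_info):
    """Write depth_info into the field whose identifier marks it as depth."""
    for field in metadata_fields:
        if field["id"] == "DEPTH_INFO":
            field["value"] = depth_info
            return True
    return False  # no depth field configured in this metadata layout

meta2_fields = [
    {"id": "EXPOSURE", "value": 0.01},
    {"id": "DEPTH_INFO", "value": None},  # reserved for the previous frame's depth
]
written = write_depth_field(meta2_fields, {"source_frame": 1, "depth": [3, 5]})
```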
In this embodiment, the second processor writes the depth information of the first frame image into the depth information field of the metadata of the second frame image, thereby adding the depth information of the first frame image into that metadata, so that after receiving the second frame image, the first processor can promptly send the depth information of the first frame image to the receiving end along with the metadata of the second frame image, ensuring the efficiency of processing for the first frame image.
In one embodiment, as shown in fig. 4, an image processing method is provided, which is exemplified by applying the method to the scheduling processor in fig. 1, and the scheduling processor may be a processor in an electronic device or a server. In this embodiment, the method includes the steps of:
Step 402, receiving a first frame image through the first processor.
The first frame image is received by the first processor, and may specifically be received by the first processor from a photosensitive element of the camera. When the camera shoots, the photosensitive element of the camera sends the shot image to the first processor and the first processor receives the shot image.
Specifically, the scheduling processor may trigger the first processor to perform an image receiving job, i.e., control the first processor to receive the first frame image from the photosensitive element of the camera. The first frame image is an image currently received by the first processor from a photosensitive element of the camera.
Step 404, acquiring the image data information of the first frame image through the second processor, performing depth information processing based on the image data information of the first frame image through the second processor, and, based on the first processor receiving the second frame image and before the first processor finishes receiving the second frame image, updating the obtained depth information of the first frame image into the metadata of the second frame image.
The image data information refers to information related to the first frame image, and may specifically include attribute information, storage location information, and the like of the first frame image, and may further include specific image data of the first frame image. The depth information may comprise depth data of the image, and may specifically comprise a depth image, which refers to an image having as pixel values the distances from the image collector to the points in the scene, i.e. the depth, and which directly reflects the geometry of the visible surface of the scene. Metadata of an image is data describing the image, and may be attribute information of the image.
Specifically, when the first processor finishes receiving the first frame image, that is, the first processor completes receiving processing of the first frame image, the scheduling processor may acquire, by the second processor, image data information of the first frame image, where the image data information may include attribute information and storage location information of the first frame image, and the second processor may acquire, based on the storage location information, image data of the first frame image and perform depth estimation based on the image data and the attribute information of the first frame image to obtain depth information of the first frame image. In a specific implementation, the second processor may perform depth calculation on the image data information of the first frame image based on a deep learning algorithm to obtain the depth information of the first frame image, for example, calculate the depth data of the first frame image. The scheduling processor may update the depth information of the first frame image into metadata of the second frame image through the second processor upon determining that the first processor receives the second frame image. Specifically, the storage location of the metadata of the second frame image may be determined by the scheduling processor, the depth information of the first frame image is updated into the storage location of the metadata of the second frame image by the second processor, and the process of updating the depth information of the first frame image into the metadata of the second frame image is completed before the end of receiving the second frame image by the first processor.
Step 406, sending the metadata of the second frame image to the receiving end through the first processor, so as to instruct the receiving end to perform processing according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
Specifically, for the metadata of the second frame image, the scheduling processor may send the metadata to the receiving end through the first processor, so that after the receiving end obtains the metadata of the second frame image, the receiving end obtains the depth information of the first frame image from the metadata of the second frame image, and performs processing, such as image segmentation and image blurring, according to the depth information of the first frame image and the corresponding first frame image. The first frame image may also be sent to the receiving end through the first processor, and specifically, after the first processor completes receiving the first frame image, the scheduling processor controls the first processor to send the received first frame image to the receiving end.
In one particular application, as shown in FIG. 5, the scheduling processor controls the first processor to receive the image, and the first processor receives the first frame image. When determining that the first processor has finished receiving the first frame image, the scheduling processor controls the second processor to perform depth information processing for the first frame image. Specifically, the second processor obtains the image data information of the first frame image and performs depth estimation based on it to obtain the depth information of the first frame image, which may specifically include the depth data of the first frame image and may also include attribute data of that depth data. When the first processor starts receiving the second frame image, the scheduling processor controls the second processor to update the depth information of the first frame image into the metadata of the second frame image, completing this update before the first processor finishes receiving the second frame image. The scheduling processor then controls the first processor to send out the metadata of the second frame image, and the first processor sends it to the receiving end.
The receiving end may extract the depth information of the first frame image from the metadata of the second frame image, and perform processing, such as three-dimensional reconstruction, blurring processing, and the like, according to the depth information of the first frame image and the corresponding first frame image.
In this specific application, the depth information processing that the scheduling processor controls the second processor to perform, including both the calculation of the depth information of the first frame image and the updating of the calculated depth information into the metadata of the second frame image, takes place after the first processor has finished receiving the first frame image and before it finishes receiving the second frame image. Because the first processor has finished receiving the first frame image and no longer changes its image data information, the image data information on which the second processor performs depth information processing is guaranteed to match the first frame image received by the first processor, which improves the accuracy of the obtained depth information.
In the image processing method, the second processor acquires the image data information of the first frame image received by the first processor, performs depth information processing based on that image data information, updates the obtained depth information of the first frame image into the metadata of the second frame image based on the first processor receiving the second frame image, and sends the metadata of the second frame image to the receiving end through the first processor so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image. In this image processing process, the image data information on which the second processor performs depth information processing always comes from an image that the first processor has finished receiving, which ensures the consistency of the image data information of the same image across different processing stages, improves the accuracy of the obtained depth information, and can therefore improve the image processing effect when processing is performed based on the depth information.
In one embodiment, the image processing method further comprises: determining the processing statistical time length of the second processor for performing depth information processing on the received image; determining the receiving interval duration between the first processor receiving two adjacent frames of images based on the processing statistical duration; the receiving interval duration is greater than or equal to the processing statistical duration.
The second processor performs depth information processing, specifically depth estimation, on each received image, and the duration the second processor requires for depth estimation on each frame can be counted; in particular, statistics over the time the second processor has historically consumed for depth estimation yield the processing statistical duration. The processing statistical duration represents the time the second processor consumes when performing depth information processing based on the image data information of an image; its length is related to the computing power of the second processor and to the data size of the processed image data information. The receiving interval duration refers to the interval between the first processor finishing receiving the first frame image and starting to receive the second frame image, that is, the interval between the first processor receiving two adjacent frames of images; during this interval, the first processor does not receive images.
In particular, the scheduling processor may determine a processing statistic duration for the second processor to perform depth information processing on the received image. In a specific application, the scheduling processor may count time consumed by the second processor for performing depth information processing on the image received historically, and obtain a processing statistical duration based on a statistical result, which is used to represent the time consumed by the second processor when performing depth information processing on the image data information based on the image. For example, the scheduling processor may perform weighted average on the time consumed by the second processor for performing depth information processing on the historically received images, resulting in a processing statistical duration. The scheduling processor determines the receiving interval duration of the first processor based on the processing statistical duration of the second processor, wherein the receiving interval duration is greater than or equal to the processing statistical duration. In a specific application, the scheduling processor may set the receiving interval duration of the first processor based on the processing statistical duration of the second processor, so that the receiving interval duration of the first processor is not less than the processing statistical duration of the second processor, and thus the second processor has sufficient time to complete depth information processing during an interval in which the first processor receives two adjacent frames of images, and depth information of the images is obtained.
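One way to derive the processing statistical duration and the receiving interval is sketched below. The recency-favouring weighting and the safety margin are illustrative assumptions; the embodiment only requires a weighted average over historical times and an interval no smaller than the statistic:

```python
def processing_statistic(history_ms, weights=None):
    """Weighted average of the second processor's historical depth-processing times (ms)."""
    if weights is None:
        # Assumed weighting: later (more recent) samples count more.
        weights = list(range(1, len(history_ms) + 1))
    total = sum(w * t for w, t in zip(weights, history_ms))
    return total / sum(weights)

def receive_interval(history_ms, margin_ms=1.0):
    """Receiving interval must be >= the processing statistic; a margin is added here."""
    return processing_statistic(history_ms) + margin_ms
```

For a history of 8, 9, and 10 ms the statistic is weighted toward the latest sample, and the first processor's frame interval would then be configured to at least that value.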
In this embodiment, the scheduling processor determines, according to the processing statistical duration of the second processor, the receiving interval duration between the first processor receiving two adjacent frames of images, so that the receiving interval duration is greater than or equal to the processing statistical duration; the second processor thus has sufficient time, during the interval in which the first processor receives two adjacent frames of images, to complete the depth information processing and obtain the depth information of the images.
In one embodiment, the image processing method further comprises: determining a blurring processing parameter based on image data and depth information of a first frame image; and performing image blurring processing on the first frame image through blurring processing parameters.
The blurring processing parameter refers to a parameter used when blurring an image; with it, the image can be blurred to obtain a blurred image. Specifically, the scheduling processor acquires the image data of the first frame image and determines the blurring processing parameter based on that image data and the depth information. For example, the scheduling processor may segment the first frame image based on the depth information and the image data, determine the front-back relationship of the scene in the first frame image, and generate corresponding blurring parameters to control the degree of blurring of the first frame image. The scheduling processor can then perform image blurring processing on the first frame image based on the obtained blurring processing parameters to obtain the correspondingly blurred image, thereby realizing blurring of the first frame image.
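A minimal numpy sketch of deriving a blurring parameter from depth and applying depth-dependent blur is given below. The linear radius mapping and the box filter are illustrative assumptions, not the patent's algorithm:

```python
import numpy as np

def blur_radius_map(depth, focal_depth, max_radius=5):
    """Blurring parameter: blur radius grows with distance from the focal plane."""
    dist = np.abs(depth - focal_depth)
    scale = dist.max() if dist.max() > 0 else 1.0
    return np.round(max_radius * dist / scale).astype(int)

def box_blur(img, r):
    """Simple box blur with radius r (r == 0 returns the image unchanged)."""
    if r == 0:
        return img.copy()
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def depth_blur(img, depth, focal_depth):
    """Apply, per pixel, the blur level selected by the radius map."""
    radii = blur_radius_map(depth, focal_depth)
    out = img.astype(float).copy()
    for r in np.unique(radii):
        mask = radii == r
        out[mask] = box_blur(img.astype(float), int(r))[mask]
    return out
```

Pixels on the focal plane get radius 0 and stay sharp, while pixels far from it receive the strongest blur, matching the ideal behaviour described for fig. 7.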
In this embodiment, the scheduling processor determines a blurring processing parameter according to the image data and the depth information of the first frame image, and performs image blurring processing on the first frame image based on the blurring processing parameter, so that blurring can be performed by using accurate depth information of the first frame image, an image blurring effect can be ensured, and a blurred image is more natural.
In one embodiment, the image processing method further comprises: and sending the image data of the first frame image to the receiving end through the first processor based on the end of receiving the first frame image through the first processor, so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the image data corresponding to the first frame image.
Wherein the image data of the first frame image is data obtained by the first processor in the image receiving process. The image data of the first frame image is sent to the receiving end, and the receiving end may perform processing, such as image blurring processing, three-dimensional reconstruction processing, and the like, according to the image data of the first frame image and the depth information.
Specifically, when the scheduling processor determines that the first processor has finished receiving the first frame image, for example when the first processor triggers a frame-end interrupt event for the first frame image, this indicates that reception of the first frame image is complete. The scheduling processor then sends the image data of the first frame image to the receiving end through the first processor, so that the receiving end performs processing according to the depth information of the first frame image carried in the metadata of the second frame image and the corresponding image data of the first frame image. Since the image data of the first frame image is sent when the first processor finishes receiving that frame, while its depth information is sent along with the metadata of the second frame image, the image data and the depth information received by the receiving end differ by exactly one frame; this ensures the accuracy of the depth information obtained by the receiving end and helps improve the processing effect of the first frame image based on that depth information.
In this embodiment, when the first processor finishes receiving the first frame image, the scheduling processor sends the image data of the first frame image to the receiving end through the first processor to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the image data corresponding to the first frame image, so that the image data of the first frame image received by the receiving end and the depth information differ by one frame image, which can ensure the accuracy of the depth information obtained by the receiving end, and is beneficial to improving the processing effect of the first frame image based on the depth information.
In one embodiment, the depth information includes a depth image of the first frame image; the image processing method further includes: acquiring attribute information corresponding to the depth image; updating, by the first processor, the attribute information into the metadata of the second frame image based on the receiving of the second frame image by the first processor and before the receiving of the second frame image by the first processor ends.
The depth image is an image whose pixel values are the distances from the image collector to the points in the scene, i.e. the depth, and it directly reflects the geometry of the visible surfaces of the scene. The attribute information corresponding to the depth image is information describing the depth image, such as its width, height, or bit width.
Specifically, the depth information includes a depth image of the first frame image, and the scheduling processor obtains attribute information corresponding to the depth image. The attribute information corresponding to the depth image may be generated by the second processor when performing the depth information processing, or may be obtained by the first processor or the scheduling processor based on the depth image analysis. The scheduling processor updates, by the first processor, attribute information corresponding to the depth image into the metadata of the second frame image in a case where the first processor receives the second frame image, and completes a process of updating the attribute information into the metadata of the second frame image before the end of the reception of the second frame image by the first processor.
Further, sending the metadata of the second frame image to the receiving end through the first processor to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image, including: and sending the metadata of the second frame image to the receiving end through the first processor to instruct the receiving end to process according to the depth image and the attribute information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
The metadata of the second frame image includes the depth image of the first frame image written by the second processor and the attribute information corresponding to the depth image of the first frame image written by the first processor. Specifically, the scheduling processor sends the metadata of the second frame image to the receiving end through the first processor, and after receiving the metadata of the second frame image, the receiving end can extract the depth image and the attribute information of the first frame image from the metadata of the second frame image, so as to perform processing, such as image separation, three-dimensional reconstruction or blurring processing, according to the depth image and the attribute information of the first frame image and the corresponding first frame image.
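The metadata layout implied here can be sketched as a small structure. The field names below are hypothetical illustrations, not the chip's actual metadata format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DepthAttributes:
    """Attribute information describing a depth image (written by the first processor)."""
    width: int
    height: int
    bit_width: int

@dataclass
class FrameMetadata:
    """Metadata of frame N, carrying the depth of frame N-1 alongside its own info."""
    frame_index: int
    depth_image: Optional[list] = None                   # written by the second processor
    depth_attributes: Optional[DepthAttributes] = None   # written by the first processor

    def has_previous_depth(self) -> bool:
        """True once both writers have filled in the previous frame's depth info."""
        return self.depth_image is not None and self.depth_attributes is not None
```

A receiving end would check `has_previous_depth()` before extracting the depth image and its attributes for segmentation, reconstruction, or blurring.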
In this embodiment, the scheduling processor updates the attribute information corresponding to the obtained depth image of the first frame image to the metadata of the second frame image through the first processor, and after controlling the first processor to send the metadata of the second frame image to the receiving end, the receiving end can process the metadata according to the depth image and the attribute information of the first frame image and the corresponding first frame image, and can perform image processing by using the attribute information corresponding to the depth image, which is beneficial to further improving the image processing effect.
The application also provides an application scene, and the application scene applies the image processing method. Specifically, the application of the image processing method in the application scenario is as follows:
when performing three-dimensional reconstruction on an image captured by a camera, depth information of the image needs to be obtained, for example, a depth image corresponding to the image is obtained, so as to perform three-dimensional reconstruction according to the depth image. When the camera shoots an image, the image data is sent to the first processor through the photosensitive element, the first processor receives the image data of the first frame image, and when the first processor finishes receiving the first frame image, the second processor acquires the image data information of the first frame image and carries out depth information processing based on the image data information of the first frame image to obtain the depth information of the first frame image. In a case where the first processor continues to receive the second frame image, the second processor writes the depth information of the first frame image into the metadata of the second frame image, and completes the process of writing the depth information of the first frame image into the metadata of the second frame image before the end of the reception of the second frame image by the first processor. The first processor may transmit the metadata of the second frame image to the receiving end to instruct the receiving end to extract the depth information of the first frame image from the metadata of the second frame image, and perform a process of three-dimensional reconstruction according to the depth information of the first frame image and the corresponding first frame image.
In an embodiment, as shown in fig. 6, an image processing method is provided. The method is described by taking its application to the first processor in fig. 1 as an example; the scheduling processor may be a processor in an electronic device or in a server. In this embodiment, the method includes the following steps:
step 602, receiving a first frame image, and sending the first frame image to a receiving end.
The first frame image is received by the first processor, and may specifically be received by the first processor from a photosensitive element of the camera. When the camera shoots, a photosensitive element of the camera sends a shot image to the first processor, and the shot image is received by the first processor. The receiving end is used for processing the image, such as performing various processes such as image segmentation, image blurring or three-dimensional reconstruction.
Specifically, the first processor performs an image receiving job, that is, the first processor receives a first frame image from a photosensitive element of the camera. The first frame image is an image currently received by the first processor from a photosensitive element of the camera. The first processor sends the received first frame image to the receiving end, and specifically, after the receiving of the first frame image is completed, the first processor sends the first frame image to the receiving end.
Step 604, sending metadata of a second frame image to the receiving end based on receiving the second frame image, so as to instruct the receiving end to process according to depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image; wherein the second processor, in the case that the first processor receives the second frame image, updates the obtained depth information of the first frame image into the metadata of the second frame image before the reception of the second frame image ends.
The metadata of the image is data describing the image, and can be used as attribute information of the image. The metadata of the image may be generated by the first processor as the image is received. The image data information refers to information related to the first frame image, and may specifically include attribute information, storage location information, and the like of the first frame image, and may further include specific image data of the first frame image. The depth information may comprise depth data of the image, and may specifically comprise a depth image, which refers to an image having as pixel values the distances from the image collector to the points in the scene, i.e. the depth, and which directly reflects the geometry of the visible surface of the scene.
Specifically, for the metadata of the second frame image, the first processor sends the metadata to the receiving end, so that after the receiving end obtains the metadata of the second frame image, the receiving end obtains the depth information of the first frame image from the metadata of the second frame image, and performs processing, such as image segmentation and image blurring, according to the depth information of the first frame image and the corresponding first frame image. For the depth information of the first frame image, when the first processor finishes receiving the first frame image, that is, the first processor completes receiving processing of the first frame image, the second processor may acquire image data information of the first frame image, where the image data information may include attribute information and storage location information of the first frame image, and the second processor may acquire image data of the first frame image based on the storage location information and perform depth estimation based on the image data and the attribute information of the first frame image to obtain the depth information of the first frame image. In a specific implementation, the second processor may perform depth calculation on the image data information of the first frame image based on a deep learning algorithm to obtain the depth information of the first frame image, for example, calculate the depth data of the first frame image. Upon determining that the first processor receives the second frame image, the depth information of the first frame image may be updated into the metadata of the second frame image by the second processor. 
Specifically, a storage location of metadata of the second frame image may be determined, the depth information of the first frame image is updated to the storage location of the metadata of the second frame image by the second processor, and the updating of the depth information of the first frame image to the metadata of the second frame image is completed before the end of receiving the second frame image by the first processor.
According to the image processing method, the first processor receives the first frame image and sends it to the receiving end, and, based on receiving the second frame image, sends the metadata of the second frame image to the receiving end so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image. Here the second processor, in the case that the first processor receives the second frame image, updates the obtained depth information of the first frame image into the metadata of the second frame image before the reception of the second frame image ends. In this image processing process, the image data information on which the second processor performs depth information processing always comes from an image that the first processor has finished receiving, which ensures the consistency of the image data information of the same image across different processing stages, improves the accuracy of the obtained depth information, and can therefore improve the image processing effect when processing is performed based on the depth information.
The application also provides an application scene, and the application scene applies the image processing method. Specifically, the application of the image processing method in the application scenario is as follows:
as shown in fig. 7, for blurring processing of an image, theoretically the farther a point lies from the focal plane, the bigger its circle of confusion and the more blurred it appears; conversely, the closer to the focal plane, the sharper the image. The ideal blurring effect is that, although everything outside the depth of field is blurred, the degree of blur differs: regions closer to the focal plane are rendered sharper, and regions farther from the focal plane are rendered more blurred.
Currently, common image blurring schemes fall into two types: those based on one camera and those based on two cameras. Blurring based on one camera is realized by applying a blur algorithm to the regions other than the in-focus subject; however, since only one camera is used, it is difficult to obtain depth information of the scene, so different regions end up with similar degrees of blur after the algorithm runs. Background segmentation can also be performed by deep learning: a neural network trained on a data set segments the foreground from the background, where segmentation determines which objects are the subjects to keep in sharp focus and which belong to the background that needs blurring. However, if blurring is performed only on the basis of the foreground/background segmentation results, the resulting blurred picture is not natural, mainly because, lacking depth data, distance-dependent blurring cannot be applied, and because the segmentation algorithm is imperfect, foreground and background may be misjudged. Blurring based on two cameras constructs a depth image (Depth Map) from the dual shots, infers the front-back relationship of the scene, and controls the degree of blurring accordingly. The dual-camera method can use the viewing-angle difference between the two cameras to calculate the distance of each pixel from the focal plane, and then compute the blur according to that distance.
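The dual-camera distance calculation mentioned here follows the standard stereo relation depth = focal length × baseline / disparity. This is a textbook sketch; the patent does not spell out the formula:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Stereo depth from the viewing-angle difference between two cameras.

    disparity_px: pixel offset of the same point between the two views
    focal_px:     focal length expressed in pixels
    baseline_mm:  distance between the two camera centres in millimetres
    Larger disparity means the point is closer to the cameras.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

With the per-pixel depth known, the blur strength can then be scaled with each pixel's distance from the focal plane, as the paragraph above describes.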
At present, a common way to handle depth calculation with a single camera is to have the image signal processor (ISP) and the neural network processor (NPU) process the images simultaneously: the ISP processes the images while the NPU runs a deep learning algorithm to calculate their depth information. This requires coordinated synchronization of the hardware and software of both the ISP and the NPU. In particular, when the image data output by the photosensitive element (Sensor) changes, for example on DOL (Digital Overlap, a staggered wide-dynamic-range technique) switching or resolution changes, the control becomes more complicated and the data processed by the ISP and the NPU can easily become mismatched, so that the generated calculation result is inaccurate, which greatly affects the processing of subsequent images.
Based on this, in order to reduce the complexity of the software and hardware scheduling, the image processing method provided in this embodiment uses the Start Of Frame (SOF) and End Of Frame (EOF) interrupts generated when the ISP processes an image: specifically, the EOF of image transmission triggers the NPU to start the depth calculation, and at the SOF the depth information is filled into the metadata. Controlling the NPU's depth-image calculation in this way simplifies the processing of the depth image, making the entire depth-image processing simple and reliable. The image processing method provided by this embodiment uses a specific hardware architecture to calculate depth information synchronously while the image is being transmitted, which accelerates the generation of the depth information, and the whole process can be controlled efficiently and stably through a heterogeneous logic control approach.
Specifically, the image processing method provided by this embodiment is based on a scheduling processor; for example, the depth information calculation can be completed based on the hardware architecture of the Explorer chip and the frame output timing of the Sensor. An AON (Always On) Sensor, i.e. a photosensitive element that stays online, outputs image data continuously and is typically used in front-camera scenarios. Inside the Explorer chip there are an ISP and an NPU: the ISP performs normal image processing and the NPU performs deep-learning calculation. The two HW (Hardware) units can run simultaneously without affecting each other. Based on this hardware architecture, in this scene the Explorer chip can split the image data: on the one hand the image is transmitted to the ISP for conventional image processing, and on the other hand it can be transmitted to the NPU, whose deep learning calculates the depth information needed later.
As shown in fig. 8, the image processor ISP takes 21 ms (milliseconds) to receive each frame of image from the photosensitive element Sensor, the time interval between two adjacent frames is 12 ms, and the neural network processor NPU takes 9 ms to calculate the depth information of one frame of image. After the ISP and the NPU start operating, when the ISP finishes receiving the 0th frame image, the NPU is triggered to start calculating the depth information of the 0th frame image, and 9 ms later it obtains the depth data of the 0th frame image, which may specifically include the depth image of the 0th frame. 12 ms after the ISP finishes receiving the 0th frame image, the ISP is triggered to start receiving the 1st frame image, and the NPU is triggered to write the obtained depth data into the metadata of the 1st frame image, so that the metadata of the 1st frame image includes both the metadata information of the 1st frame image and the depth data of the 0th frame image. In addition, the ISP also updates the attribute information of the depth data into the metadata of the 1st frame image. The ISP may send the metadata of the 1st frame image to the receiving end, and the receiving end may extract the depth data of the 0th frame image and the attribute information of that depth data from the metadata in order to process the 0th frame image, specifically to perform blurring processing.
Thus, the depth information is processed by the ISP and the NPU frame after frame: when the ISP finishes receiving the Nth frame image, the NPU is triggered to start calculating the depth information of the Nth frame image, and 9 ms later it obtains the depth data of the Nth frame image, which may specifically include the depth image of the Nth frame. 12 ms after the ISP finishes receiving the Nth frame image, the ISP is triggered to start receiving the (N+1)th frame image, and the NPU is triggered to write the obtained depth data into the metadata of the (N+1)th frame image, so that the metadata of the (N+1)th frame image includes both the metadata information of the (N+1)th frame image and the depth data of the Nth frame image. In addition, the ISP also updates the attribute information of the depth data of the Nth frame image into the metadata of the (N+1)th frame image. The ISP may send the metadata of the (N+1)th frame image to the receiving end, and the receiving end may extract the depth data of the Nth frame image and the attribute information of that depth data from the metadata in order to process the Nth frame image.
Further, in order for the ISP and the NPU to work cooperatively in this scene, the Sensor needs to be configured. In this scene, the Sensor is required to output frames whose SOF-to-EOF interval is 21 ms and whose vblk (vertical blanking) is 12 ms, which gives the NPU enough time to perform the depth calculation. Here, vblk is the time interval from the end of reading out one frame to the start of reading out the next, that is, the interval between the ISP receiving two adjacent frames of images.
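The timing constraint can be checked with simple arithmetic using the figures from this scenario (21 ms readout, 12 ms vblank, 9 ms NPU depth calculation): the depth calculation, triggered at EOF, must finish before the next frame's SOF.

```python
FRAME_READOUT_MS = 21   # SOF-to-EOF interval of one frame (from the Sensor setting)
VBLANK_MS = 12          # EOF-to-next-SOF interval (vblk)
NPU_DEPTH_MS = 9        # time for the NPU to compute one frame's depth

def depth_fits_in_vblank(npu_ms=NPU_DEPTH_MS, vblank_ms=VBLANK_MS):
    """Depth calc starts at EOF; it must end before the next frame's SOF."""
    return npu_ms <= vblank_ms

def frame_period_ms():
    """Total period of one frame at the sensor output."""
    return FRAME_READOUT_MS + VBLANK_MS  # 33 ms, i.e. roughly 30 fps
```

With these numbers the 9 ms calculation fits inside the 12 ms vblank with 3 ms to spare, which is why the depth data is always ready when the next frame's SOF triggers the metadata update.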
Specifically, when the ISP receives the EOF of the first frame image, the NPU is triggered to start working and is informed of the data information and the data storage location of the first frame image, where the data information includes the width, height, bit width, and the like. After receiving the data information and the data storage location of the first frame image, the NPU starts to calculate the corresponding depth data. The estimated time for the NPU to compute one frame's depth information is 9 ms, so the processing completes before the SOF of the second frame. After the NPU finishes processing, it stores the depth data in the designated buffer and notifies the Explorer chip of the related attribute information, which is information describing the depth data. When the SOF of the second frame image is triggered, the Explorer chip updates the depth data of the first frame image and its attribute information into the Metadata Info of the second frame image, and the ISP sends the metadata of the second frame image to the receiving end. The receiving end obtains the depth data of the first frame image and the corresponding attribute information from the Metadata Info of the second frame image, and can then further process the first frame image according to them, for example by blurring processing.
As shown in fig. 9, in the process of depth information processing by the image processor ISP and the neural network processor NPU, the ISP detects the state of the NPU when the frame end interrupt of the first frame image is triggered. If the NPU is in a normal working state, the Explorer chip triggers the NPU to perform the depth calculation, and the NPU feeds back information once the calculation is complete. When the frame start interrupt of the second frame image is triggered, the ISP adds the depth data of the first frame image and the corresponding attribute information to the metadata of the second frame image; when the frame end interrupt of the second frame image is triggered, the ISP sends the metadata of the second frame image to the receiving end. The depth information of the first frame image, comprising the depth data and the corresponding attribute information, is thereby delivered for further processing by the receiving end.
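The frame-interrupt-driven handoff described above can be sketched as a small simulation. All names below are illustrative, and a single integer stands in for a frame's depth data; the sketch only models the ordering of events (EOF of frame N starts the NPU, SOF of frame N+1 attaches the finished depth to that frame's metadata).

```python
# Minimal event-ordering sketch of the ISP/NPU handoff described above.
def run_pipeline(num_frames):
    """Simulate: the EOF of frame N triggers the NPU depth computation;
    at the SOF of frame N+1 the finished depth of frame N is written
    into the metadata of frame N+1, which the ISP sends onward."""
    pending_depth = None        # depth computed by the NPU, awaiting next SOF
    sent_metadata = []
    for n in range(1, num_frames + 1):
        # --- SOF of frame n: attach the previously computed depth, if any ---
        metadata = {"frame": n, "depth_of_frame": pending_depth}
        sent_metadata.append(metadata)
        # --- EOF of frame n: NPU computes the depth of frame n (~9 ms) ---
        pending_depth = n
    return sent_metadata

meta = run_pipeline(3)
# The metadata of frame 2 carries the depth of frame 1: a one-frame lag.
```

This also makes visible the one-frame offset between image data and depth output that the next paragraph uses as a diagnostic.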
In a specific application, the logical relationship between the processing flows of the ISP and the NPU can be confirmed by examining the log. Moreover, the depth information output lags the image data output by exactly one frame, so whether the image processing method provided by this embodiment is in use can be judged from the order in which the image data and the depth information are received.
The image processing method provided by this embodiment uses the hardware characteristics of the Explorer chip and the image transmission flow to realize the usage scenario in which the ISP and the NPU work cooperatively. It makes maximal use of the Explorer chip hardware and of efficient data transmission, and meets a relatively complex scenario requirement with a relatively simple hardware architecture and control scheme. Meanwhile, heterogeneous control logic is used to coordinate the functions of the ISP and the NPU, which greatly simplifies the control logic and enhances its stability. In the image processing method provided by this embodiment, scheduling control combines the hardware architecture of the Explorer chip, the specific output of the photosensitive element Sensor, and the characteristics of image transmission, so that through cooperative operation the Explorer chip outputs the depth information corresponding to an image while processing the image.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the execution order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides an image processing apparatus for implementing the image processing method. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the image processing apparatus provided below can be referred to the limitations of the image processing method in the foregoing, and details are not described here.
In one embodiment, as shown in fig. 10, there is provided an image processing apparatus 1000 including: an image data information acquisition module 1002, a depth information processing module 1004, and a depth information update module 1006, wherein:
an image data information obtaining module 1002, configured to obtain image data information of a first frame image; a first frame image is received and obtained by a first processor;
a depth information processing module 1004, configured to perform depth information processing based on image data information of the first frame image to obtain depth information of the first frame image;
a depth information update module 1006, configured to update the depth information of the first frame image into the metadata of the second frame image based on the first processor receiving the second frame image and before the first processor finishes receiving the second frame image; the metadata of the second frame image is used for being sent to the receiving end through the first processor, so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
In one embodiment, the image data information obtaining module 1002 is further configured to obtain the image data information of the first frame image based on the first processor triggering an end-of-frame interrupt event upon receiving the first frame image.
In one embodiment, the depth information processing module 1004 is further configured to perform depth estimation based on image data information of the first frame image to obtain depth information of the first frame image before the first processor triggers a frame start interrupt event for receiving the second frame image.
In one embodiment, the depth information updating module 1006 is further configured to, based on the first processor triggering a frame start interrupt event of the second frame image, update the depth information of the first frame image into the metadata of the second frame image before the first processor triggers a frame end interrupt event of the second frame image.
In one embodiment, the first processor comprises an image processor; the second processor comprises a neural network processor.
In one embodiment, the image data information obtaining module 1002 is further configured to determine the state of the neural network processor based on the image processor finishing receiving the first frame image, and to acquire the image data information of the first frame image when the neural network processor is in a normal working state.
In one embodiment, the image data information of the first frame image includes image attribute information and data storage location information of the first frame image; the depth information processing module 1004 is further configured to obtain image data of the first frame of image according to the data storage location information; and performing depth estimation according to the image data and the image attribute information of the first frame image to obtain the depth information of the first frame image.
In one embodiment, the depth information update module 1006 is further configured to determine metadata of the second frame image based on receiving the second frame image by the first processor; the depth information of the first frame image is added to the metadata of the second frame image before the first processor finishes receiving the second frame image.
In one embodiment, the depth information update module 1006 is further configured to determine a depth information field from metadata of the second frame image; the depth information of the first frame image is written into the depth information field before the end of the reception of the second frame image by the first processor.
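The metadata update described by these embodiments can be sketched as a simple field write. The field names (`depth_info`, `depth_attr`) are hypothetical; the patent only requires that the metadata of the second frame image carry a depth information field holding the first frame's depth data and attribute information.

```python
# Sketch of writing depth information into a reserved metadata field.
# Field names are hypothetical placeholders for the depth information field.
def update_depth_field(metadata: dict, depth_info: bytes, attr: dict) -> dict:
    """Write the previous frame's depth data and its attribute
    information into the metadata of the current frame."""
    metadata["depth_info"] = depth_info
    metadata["depth_attr"] = attr   # e.g. width, height, bit width of the depth image
    return metadata

# Usage: the metadata of frame 2 is updated with frame 1's depth data.
meta2 = update_depth_field({"frame": 2}, b"\x01\x02", {"w": 640, "h": 480})
```

The receiving end would read the same field names back out of the metadata to recover the depth data and its description.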
In one embodiment, as shown in fig. 11, there is provided an image processing apparatus 1100, including: a first frame image receiving module 1102, a depth information processing module 1104, and a metadata transmitting module 1106, wherein:
a first frame image receiving module 1102, configured to receive, by a first processor, a first frame image;
the depth information processing module 1104 is configured to acquire the image data information of the first frame image through the second processor, perform depth information processing through the second processor based on the image data information of the first frame image, and, based on the first processor receiving the second frame image and before the first processor finishes receiving the second frame image, update the obtained depth information of the first frame image into the metadata of the second frame image;
a metadata sending module 1106, configured to send, by the first processor, the metadata of the second frame image to the receiving end, so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
In one embodiment, the device further comprises an interval duration determination module, configured to determine a processing statistical duration for the second processor to perform depth information processing on the received image; determining the receiving interval duration between the first processor receiving two adjacent frames of images based on the processing statistical duration; the receiving interval duration is greater than or equal to the processing statistical duration.
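The constraint in this embodiment, that the receiving interval must be at least the statistical processing duration, can be sketched as follows. The choice of the maximum observed time as the statistic, and the optional safety margin, are assumptions for illustration; the patent does not fix which statistic is used.

```python
# Sketch of deriving the frame-receiving interval from observed NPU
# depth-processing times, per the constraint interval >= processing duration.
def min_receive_interval(processing_times_ms, margin_ms=0.0):
    """Return the smallest admissible interval between the first
    processor receiving two adjacent frames.

    The processing statistical duration is taken here as the maximum
    observed depth-processing time (an illustrative assumption)."""
    stat = max(processing_times_ms)
    return stat + margin_ms
```

For instance, with observed NPU times around 9 ms and a 3 ms margin, this reproduces the 12 ms vblk used in the scenario above.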
In one embodiment, the image processing device further comprises a blurring processing module for determining a blurring processing parameter based on the image data of the first frame image and the depth information; and performing image blurring processing on the first frame image through blurring processing parameters.
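A toy sketch of depth-guided blurring follows. The patent does not specify the blurring algorithm; the depth-threshold rule, the 1-D "image", and all names here are illustrative only.

```python
# Toy sketch of depth-guided blurring: pixels whose depth exceeds a
# threshold (background) are box-averaged with their neighbors, while
# foreground pixels are kept sharp.
def blur_background(pixels, depths, threshold, radius=1):
    out = []
    n = len(pixels)
    for i, (p, d) in enumerate(zip(pixels, depths)):
        if d <= threshold:              # foreground: keep sharp
            out.append(p)
        else:                           # background: average a small window
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

# Usage: the middle and last pixels are "far" and get blurred.
blurred = blur_background([10, 200, 30], [1, 5, 5], threshold=2)
```

Here the depth information from the second frame's metadata would supply `depths`, and the blurring processing parameter would correspond to `threshold` and `radius`.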
In one embodiment, the apparatus further includes an image data sending module, configured to send, by the first processor, the image data of the first frame image to the receiving end based on the first processor finishing receiving the first frame image, so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding image data of the first frame image.
In one embodiment, the depth information includes a depth image of the first frame image; the apparatus further includes an attribute information updating module, configured to acquire the attribute information corresponding to the depth image and to update, by the first processor, the attribute information into the metadata of the second frame image based on the first processor receiving the second frame image and before the first processor finishes receiving the second frame image; the metadata sending module 1106 is further configured to send, by the first processor, the metadata of the second frame image to the receiving end, so as to instruct the receiving end to process according to the depth image and the attribute information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
In one embodiment, as shown in fig. 12, there is provided an image processing apparatus 1200 including: a first frame image receiving module 1202 and a second frame image metadata transmitting module 1204, wherein:
a first frame image receiving module 1202, configured to receive a first frame image and send the first frame image to a receiving end;
a second frame image metadata sending module 1204, configured to send, based on receiving the second frame image, metadata of the second frame image to a receiving end, so as to instruct the receiving end to process according to depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image;
wherein the second processor updates the obtained depth information of the first frame image into the metadata of the second frame image after the second frame image starts to be received and before the receiving of the second frame image ends.
The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules may be embedded in a hardware form or may be independent of a processor in the electronic device, or may be stored in a memory in the electronic device in a software form, so that the processor calls and executes operations corresponding to the modules.
In one embodiment, as shown in fig. 13, there is provided an image processing system 1300, comprising:
a first processor 1302 for receiving a first frame image; based on receiving the second frame image, sending the metadata of the second frame image to the receiving end to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image;
a second processor 1304, configured to acquire the image data information of the first frame image; perform depth information processing based on the image data information of the first frame image to obtain the depth information of the first frame image; and update the depth information of the first frame image into the metadata of the second frame image based on the first processor receiving the second frame image and before the first processor finishes receiving the second frame image;
a scheduling processor 1306, configured to control the first processor to receive the first frame image and the second frame image, and control the first processor to send metadata of the second frame image to a receiving end;
the scheduling processor 1306 is further configured to control the second processor to perform depth information processing based on the image data information of the first frame image, and to control the second processor to update the depth information of the first frame image into the metadata of the second frame image.
In the image processing system, the scheduling processor controls the first processor to receive the first frame image and send it to the receiving end, and, based on receiving the second frame image, to send the metadata of the second frame image to the receiving end so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image. The scheduling processor controls the second processor to obtain the image data information of the first frame image received by the first processor, to perform depth information processing based on that image data information, and, based on the first processor receiving the second frame image and before the first processor finishes receiving the second frame image, to update the obtained depth information of the first frame image into the metadata of the second frame image. In this image processing process, the image data information processed by the second processor each time is that of an image whose reception by the first processor has already finished, which ensures the consistency of the image data information of the same image across different processing stages, improves the accuracy of the obtained depth information, and thus improves the image processing effect when processing is performed based on the depth information.

In one embodiment, an electronic device is provided, which may be a server or a terminal, and its internal structure diagram may be as shown in fig. 14. The electronic device includes a processor, a memory, an Input/Output (I/O) interface, and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface.
Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the electronic device is used for storing image processing data. The input/output interface of the electronic device is used for exchanging information between the processor and an external device. The communication interface of the electronic device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement an image processing method.
Those skilled in the art will appreciate that the structure shown in fig. 14 is a block diagram of only a portion of the structure relevant to the present application, and does not constitute a limitation on the electronic device to which the present application is applied; a particular electronic device may include more or fewer components than those shown in the drawings, or combine certain components, or have a different arrangement of components.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform an image processing method.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include a Read-Only Memory (ROM), a magnetic tape, a floppy disk, a flash memory, an optical memory, a high-density embedded non-volatile memory, a Resistive Random Access Memory (ReRAM), a Magnetic Random Access Memory (MRAM), a Ferroelectric Random Access Memory (FRAM), a Phase Change Memory (PCM), a graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases involved in the embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application should be subject to the appended claims.

Claims (22)

1. An image processing method applied to a second processor, comprising:
acquiring image data information of a first frame image; the first frame image is obtained by receiving the first frame image by a first processor;
processing depth information based on the image data information of the first frame image to obtain the depth information of the first frame image;
updating depth information of a first frame image into metadata of a second frame image based on receiving the second frame image by the first processor and before the first processor finishes receiving the second frame image;
the metadata of the second frame image is used for being sent to a receiving end through the first processor so as to indicate the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
2. The method of claim 1, wherein the obtaining image data information for the first frame of image comprises:
acquiring the image data information of the first frame image based on the first processor triggering a frame end interrupt event upon receiving the first frame image.
3. The method according to claim 1, wherein the performing depth information processing based on the image data information of the first frame image to obtain the depth information of the first frame image comprises:
and performing depth estimation based on the image data information of the first frame image to obtain the depth information of the first frame image before the first processor triggers and receives a frame start interrupt event of a second frame image.
4. The method of claim 1, wherein the updating the depth information of the first frame image into the metadata of the second frame image based on receiving the second frame image by the first processor and before the receiving of the second frame image by the first processor ends comprises:
based on the first processor triggering a frame start interrupt event of the second frame image, updating the depth information of the first frame image into the metadata of the second frame image before the first processor triggers a frame end interrupt event of the second frame image.
5. The method of any one of claims 1 to 4, wherein the first processor comprises an image processor; the second processor comprises a neural network processor.
6. The method of claim 5, wherein the obtaining image data information of the first frame image comprises:
determining a state of the neural network processor based on an end of receiving the first frame of image by the image processor;
and acquiring image data information of the first frame of image based on the normal working state of the neural network processor.
7. The method according to any one of claims 1 to 4, wherein the image data information of the first frame image includes image attribute information and data storage location information of the first frame image; the processing of depth information based on the image data information of the first frame image to obtain the depth information of the first frame image includes:
acquiring image data of the first frame of image according to the data storage position information;
and performing depth estimation according to the image data of the first frame image and the image attribute information to obtain the depth information of the first frame image.
8. The method of any one of claims 1 to 4, wherein the updating the depth information of the first frame image into the metadata of the second frame image based on receiving the second frame image by the first processor and before the end of receiving the second frame image by the first processor comprises:
based on receiving a second frame image by the first processor, determining metadata for the second frame image;
adding depth information of the first frame image to metadata of the second frame image before the first processor finishes receiving the second frame image.
9. The method of claim 8, wherein adding the depth information of the first frame image to the metadata of the second frame image before the first processor receives the end of the second frame image comprises:
determining a depth information field from metadata of the second frame image;
writing depth information of the first frame image into the depth information field before the first processor finishes receiving the second frame image.
10. An image processing method, comprising:
receiving, by a first processor, a first frame image;
acquiring image data information of the first frame image through a second processor, performing depth information processing on the basis of the image data information of the first frame image through the second processor, receiving a second frame image through the first processor, and updating the obtained depth information of the first frame image into metadata of the second frame image before the end of receiving the second frame image through the first processor;
and sending the metadata of the second frame image to a receiving end through the first processor so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
11. The method of claim 10, further comprising:
determining a processing statistical time length for the second processor to perform depth information processing on the received image;
determining the receiving interval duration between the two adjacent frames of images received by the first processor based on the processing statistical duration; the receiving interval duration is greater than or equal to the processing statistical duration.
12. The method of claim 10, further comprising:
determining a blurring processing parameter based on the image data of the first frame image and the depth information;
and performing image blurring processing on the first frame image through the blurring processing parameter.
13. The method of claim 10, further comprising:
sending, by the first processor, image data of the first frame image to the receiving end based on the end of receiving the first frame image by the first processor, so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the image data corresponding to the first frame image.
14. The method according to any one of claims 10 to 13, wherein the depth information comprises a depth image of the first frame image; the method further comprises the following steps:
acquiring attribute information corresponding to the depth image;
updating, by the first processor, the attribute information into metadata of a second frame image based on receiving the second frame image by the first processor and before the first processor finishes receiving the second frame image;
the sending, by the first processor, the metadata of the second frame image to a receiving end to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image includes:
and sending the metadata of the second frame image to a receiving end through the first processor so as to indicate the receiving end to process according to the depth image and the attribute information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
15. An image processing method applied to a first processor, comprising:
receiving a first frame image and sending the first frame image to a receiving end;
based on receiving a second frame image, sending metadata of the second frame image to the receiving end to indicate the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image;
wherein the second processor updates the obtained depth information of the first frame image into the metadata of the second frame image after the second frame image starts to be received and before the receiving of the second frame image ends.
16. An image processing apparatus characterized by comprising:
the image data information acquisition module is used for acquiring the image data information of the first frame of image; the first frame image is obtained by receiving the first frame image by a first processor;
the depth information processing module is used for carrying out depth information processing on the basis of the image data information of the first frame image to obtain the depth information of the first frame image;
a depth information updating module for updating depth information of the first frame image into metadata of a second frame image based on receiving the second frame image by the first processor and before the end of receiving the second frame image by the first processor;
the metadata of the second frame image is used for being sent to a receiving end through the first processor so as to indicate the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
17. An image processing apparatus characterized by comprising:
the first frame image receiving module is used for receiving a first frame image through the first processor;
the depth information processing module is used for acquiring image data information of the first frame image through a second processor, performing depth information processing on the basis of the image data information of the first frame image through the second processor, receiving a second frame image through the first processor, and updating the obtained depth information of the first frame image into metadata of the second frame image before the first processor finishes receiving the second frame image;
and the metadata sending module is used for sending the metadata of the second frame image to a receiving end through the first processor so as to instruct the receiving end to process according to the depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image.
18. An image processing apparatus characterized by comprising:
the first frame image receiving module is used for receiving a first frame image and sending the first frame image to a receiving end;
a second frame image metadata sending module, configured to send metadata of a second frame image to the receiving end based on receiving the second frame image, so as to instruct the receiving end to process according to depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image;
wherein the second processor updates the obtained depth information of the first frame image into the metadata of the second frame image after the second frame image starts to be received and before the receiving of the second frame image ends.
19. An image processing system, comprising:
a first processor, configured to receive a first frame image, and to send metadata of a second frame image to a receiving end upon receiving the second frame image, so as to instruct the receiving end to perform processing according to depth information of the first frame image in the metadata of the second frame image and the corresponding first frame image;
a second processor, configured to acquire image data information of the first frame image, perform depth information processing based on the image data information of the first frame image to obtain the depth information of the first frame image, and update the depth information of the first frame image into the metadata of the second frame image upon the first processor receiving the second frame image and before the first processor finishes receiving the second frame image; and
a scheduling processor, configured to control the first processor to receive the first frame image and the second frame image, and to control the first processor to send the metadata of the second frame image to the receiving end;
wherein the scheduling processor is further configured to control the second processor to perform depth information processing based on the image data information of the first frame image, and to control the second processor to update the depth information of the first frame image into the metadata of the second frame image.
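The apparatus and system claims above all describe the same one-frame-delayed pipeline: while the first processor is receiving frame N+1, the second processor's depth result for frame N is written into frame N+1's metadata, so depth information always rides one frame behind the image stream. A minimal single-threaded Python sketch of that data flow (all names are illustrative; the patent does not specify an implementation):

```python
# Sketch (hypothetical names, not from the patent) of the claimed scheme:
# the depth computed for frame N is carried in the metadata of frame N+1.

from dataclasses import dataclass, field

@dataclass
class Frame:
    index: int
    pixels: list                     # stand-in for the image data
    metadata: dict = field(default_factory=dict)

def compute_depth(frame: Frame) -> list:
    """Second-processor stand-in: placeholder for real depth estimation."""
    return [p * 2 for p in frame.pixels]

def pipeline(frames):
    """First-processor stand-in: as each frame arrives, attach the depth
    of the previous frame to its metadata before passing it downstream."""
    sent = []
    prev_depth = None
    prev_index = None
    for frame in frames:              # "receiving" frame N+1
        if prev_depth is not None:
            # update depth of frame N into metadata of frame N+1
            frame.metadata["depth_frame_index"] = prev_index
            frame.metadata["depth"] = prev_depth
        sent.append(frame)            # send to the receiving end
        prev_depth = compute_depth(frame)   # second processor works on frame N
        prev_index = frame.index
    return sent

frames = [Frame(i, [i, i + 1]) for i in range(3)]
out = pipeline(frames)
print(out[1].metadata)  # depth of frame 0 rides in frame 1's metadata
```

In the claimed system the two stages run on separate processors under a scheduling processor, so the depth update overlaps with frame reception rather than following it sequentially as in this sketch; the receiving end then pairs each metadata payload with the frame it describes.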
20. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to carry out the steps of the image processing method as claimed in any one of claims 1 to 15.
21. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 15.
22. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, carries out the steps of the method of any one of claims 1 to 15.
CN202211078415.1A 2022-09-05 2022-09-05 Image processing method, device, system, electronic equipment and readable storage medium Pending CN115457098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211078415.1A CN115457098A (en) 2022-09-05 2022-09-05 Image processing method, device, system, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115457098A (en) 2022-12-09

Family

ID=84303784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211078415.1A Pending CN115457098A (en) 2022-09-05 2022-09-05 Image processing method, device, system, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115457098A (en)

Similar Documents

Publication Publication Date Title
CN110430365B (en) Anti-shake method, anti-shake device, computer equipment and storage medium
US11367196B2 (en) Image processing method, apparatus, and storage medium
CN111327887B (en) Electronic device, method of operating the same, and method of processing image of the electronic device
WO2021022989A1 (en) Calibration parameter obtaining method and apparatus, processor, and electronic device
CN107615745A (en) A kind of photographic method and terminal
CN113992850A (en) ISP-based image processing method and device, storage medium and camera equipment
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
US20140300817A1 (en) Avoiding Flash-Exposed Frames During Video Recording
AU2021240231A1 (en) Image synchronization method and apparatus, and device and computer storage medium
CN111800569A (en) Photographing processing method and device, storage medium and electronic equipment
CN108259767B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2021164329A1 (en) Image processing method and apparatus, and communication device and readable storage medium
CN109523456A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN107180417B (en) Photo processing method and device, computer readable storage medium and electronic equipment
CN110443887B (en) Feature point positioning method, device, reconstruction method, system, equipment and medium
WO2023174063A1 (en) Background replacement method and electronic device
US10783704B2 (en) Dense reconstruction for narrow baseline motion observations
CN115457098A (en) Image processing method, device, system, electronic equipment and readable storage medium
CN108495038B (en) Image processing method, image processing device, storage medium and electronic equipment
KR102389916B1 (en) Method, apparatus, and device for identifying human body and computer readable storage
US20190114793A1 (en) Image Registration Method and Apparatus for Terminal, and Terminal
WO2023109871A1 (en) Depth image generation method and apparatus, electronic device, and storage medium
CN116228607B (en) Image processing method and electronic device
CN117294831B (en) Time calibration method, time calibration device, computer equipment and storage medium
CN116091572B (en) Method for acquiring image depth information, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20241014

Address after: 6th Floor, No.1 Chongqing Road, Banqiao District, Xinbei City, Taiwan, China

Applicant after: Weiguang Co.,Ltd.

Country or region after: Samoa

Address before: 200131 room 01, 8th floor (7th floor, real estate registration floor), No. 1, Lane 61, shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant before: Zheku Technology (Shanghai) Co.,Ltd.

Country or region before: China