CN106251279A - A kind of image processing method and terminal - Google Patents

A kind of image processing method and terminal

Info

Publication number
CN106251279A
CN106251279A
Authority
CN
China
Prior art keywords
image
processor
frame
images
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201610685209.5A
Other languages
Chinese (zh)
Inventor
黄晓峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinli Communication Equipment Co Ltd
Original Assignee
Shenzhen Jinli Communication Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinli Communication Equipment Co Ltd filed Critical Shenzhen Jinli Communication Equipment Co Ltd
Priority to CN201610685209.5A
Publication of CN106251279A
Current legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G06T1/60 - Memory management
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses an image processing method and terminal. The method includes: a first processor obtains an original image collected by an image sensor and sends the original image to a second processor through an image frame sequence; the second processor receives the image sent by the first processor through the image frame sequence, performs image processing on the received image, and generates a processed image, where the image received by the second processor includes the original image; the second processor displays the processed image through a display, or saves the processed image through a memory. In the embodiment of the present invention, image collection and image preview or image storage are carried out by two processors, so the working efficiency is high.

Description

Image processing method and terminal
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and a terminal.
Background
At present, more and more terminals and devices have a photographing or video-shooting function; for example, smart phones and tablet computers can take photos and videos, and vehicles such as automobiles record driving images or videos through on-board cameras. Generally, a single processor in a terminal calls a camera, acquires an image through the camera, processes the image, and then calls a display to show the processed image or calls a memory to store it. In this case, one processor must perform multiple tasks at the same time, which easily causes unsmooth image preview.
Disclosure of Invention
The embodiments of the invention provide an image processing method and a terminal in which two processors share image acquisition, image preview, and image storage, giving high working efficiency.
In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
the method comprises the steps that a first processor acquires an original image acquired through an image sensor and sends the original image to a second processor through an image frame sequence;
the second processor receives the images sent by the first processor through the image frame sequence, and performs image processing on the images received by the second processor to generate processed images; the image received by the second processor comprises the original image;
the second processor displays the processed image through a display, or the second processor saves the processed image through a memory.
In another aspect, an embodiment of the present invention provides a terminal, where the terminal includes a first processor and a second processor; the first processor comprises:
an acquisition unit for acquiring an original image acquired by an image sensor;
the first sending unit is used for sending the original image to the second processor through an image frame sequence;
the second processor comprises:
a first receiving unit, configured to receive an image sent by the first processor through the image frame sequence;
the processing unit is used for carrying out image processing on the image received by the second processor to generate a processed image; the image received by the second processor comprises the original image;
and the execution unit is used for displaying the processed image through a display, or the second processor saves the processed image through a memory.
In the method, the first processor obtains an original image collected by an image sensor and sends the original image to the second processor through an image frame sequence; the second processor receives the images sent by the first processor through the image frame sequence and performs image processing on them to generate processed images, the received images including the original image; the second processor then displays a processed image through a display or stores it through a memory. Because the two processors share image acquisition, image preview, and image storage, the working efficiency is high.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic flow chart diagram of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram of an image processing method according to another embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an embodiment of a method for transmitting a fused image through a sequence of image frames according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating an embodiment of transmitting a fused image through a sequence of image frames according to another embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating an implementation of transmitting a fused image through a sequence of image frames according to yet another embodiment of the present invention;
fig. 6 is a schematic block diagram of a terminal according to an embodiment of the present invention;
fig. 7 is a schematic block diagram of a terminal according to another embodiment of the present invention;
fig. 8 is a schematic block diagram of a terminal according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In particular implementations, the terminals described in embodiments of the invention include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or touchpad).
In the discussion that follows, a terminal that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention, where the image processing method shown in fig. 1 may include the following steps:
step S101: the first processor acquires a raw image acquired by an image sensor.
The terminal may include an image sensor, a first processor, a second processor, a signal link, and the like, where the devices in the terminal may be connected through a bus or through other data lines, control lines, and the like.
Specifically, the image sensor in the terminal collects an original image of the current scene in real time and sends it to the first processor through the bus, or through a data line for transmitting image data connected between the first processor and the image sensor; at this point, the first processor has acquired the original image collected by the image sensor.
Step S102: the first processor sends the original image to a second processor through a sequence of image frames.
The image sensor collects original images of the current scene in real time at a preset frequency and transmits them to the first processor at the same frequency, and the first processor may send the original images obtained in real time to the second processor frame by frame at a preset frame rate. The image frame sequence comprises multiple frames of images ordered by the time at which the first processor acquired them. The first processor may send the original images in the image frame sequence to the second processor over a signal link on a first-in-first-out basis, as the sketch below illustrates.
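As a rough illustration of this frame-by-frame forwarding, the following Python sketch queues incoming sensor frames and sends them first-in-first-out at a preset frame rate. All names are hypothetical; the patent does not specify an implementation, and `signal_link` stands in for the link to the second processor.

```python
import time
from collections import deque

class FrameSequenceSender:
    """Sketch of the first processor's FIFO forwarding of the image frame
    sequence over a signal link to the second processor."""

    def __init__(self, send_frame_rate_hz, signal_link):
        self.interval = 1.0 / send_frame_rate_hz  # time budget per frame
        self.queue = deque()   # frames, ordered by acquisition time
        self.link = signal_link

    def on_frame_from_sensor(self, raw_frame):
        # The sensor delivers frames at its preset frequency; queue them as-is.
        self.queue.append(raw_frame)

    def pump_once(self):
        # Send the oldest queued frame (first in, first out), then pace the
        # loop so the sending frame rate stays at the preset value.
        if self.queue:
            self.link.send(self.queue.popleft())
            time.sleep(self.interval)
```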
Step S103: and the second processor receives the image sent by the first processor through the image frame sequence, and performs image processing on the image received by the second processor to generate a processed image. Wherein the image received by the second processor comprises the original image.
It is understood that the raw image collected by the image sensor may be a Bayer image, and the second processor may process the raw image by passing it through an image signal processing pipeline, i.e., an ISP (Image Signal Processing) pipeline. The ISP pipeline may include one or a combination of CFA (Color Filter Array) interpolation, white balance, edge enhancement, color correction, gamma correction, gray-scale conversion, and the like.
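The patent does not fix the ISP stages or their order, so the following sketch only shows the pipeline shape: a chain of array-to-array stages applied in sequence. The two toy stages assume an already-demosaiced 8-bit RGB array; a real pipeline would start from the Bayer data with CFA interpolation.

```python
import numpy as np

def white_balance(img):
    # Toy gray-world white balance: scale each channel toward the global mean.
    channel_means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return np.clip(img * (channel_means.mean() / channel_means), 0, 255)

def gamma_correct(img, gamma=2.2):
    # Standard gamma encoding of linear 8-bit values.
    return 255.0 * (img / 255.0) ** (1.0 / gamma)

def isp_pipeline(image, stages=(white_balance, gamma_correct)):
    # An ISP pipeline is a composition of stages applied in order.
    out = image.astype(np.float32)
    for stage in stages:
        out = stage(out)
    return out.astype(np.uint8)
```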
Step S104: the second processor displays the processed image through a display, or the second processor saves the processed image through a memory.
Specifically, the terminal may include a display and a memory, and the terminal may also be externally connected with the display or the memory. The second processor can display the processed image through a display, so that real-time preview of an original image currently acquired by the image sensor is realized; the processed image can also be saved by a memory, so that the saving of the original image currently acquired by the image sensor is realized.
It is understood that the terminal may be a device with a camera shooting or photographing function, such as a smart phone, a tablet computer, a camera, a car recorder, and the like. The terminal may include two processors, i.e., a first processor and a second processor, wherein the first processor and the second processor may be integrated on one chip or may be integrated on two chips, respectively. The signal link may be a branch of a bus in the terminal or may be provided separately, and the signal link may be used for data transmission between the first processor and the second processor, for example, for transmitting image data. The image sensor may be a camera.
In the embodiment of the invention, the original image acquired by the image sensor is acquired by the first processor, and is sent to the second processor through the image frame sequence; the second processor receives the images sent by the first processor through the image frame sequence, performs image processing on the images received by the second processor, and generates processed images, wherein the images received by the second processor comprise original images; and displaying the processed image through a display, or the second processor saving the processed image through a memory. The method carries out image acquisition, image preview or image storage through the two processors, and has high working efficiency.
Referring to fig. 2, fig. 2 is a schematic flow chart of an image processing method according to another embodiment of the present invention, and the image processing method shown in fig. 2 may include the following steps:
step S201: the first processor acquires a raw image acquired by an image sensor.
The terminal may include an image sensor, a first processor, a second processor, a signal link, and the like, where the devices in the terminal may be connected through a bus or through other data lines, control lines, and the like.
Specifically, the image sensor in the terminal collects an original image of the current scene in real time and sends it to the first processor through the bus, or through a data line for transmitting image data connected between the first processor and the image sensor; at this point, the first processor has acquired the original image collected by the image sensor.
Step S202: and sending the original image to a second processor through an image frame sequence.
The image sensor collects original images of the current scene in real time at a preset frequency and transmits them to the first processor at the same frequency, and the first processor may send the original images obtained in real time to the second processor frame by frame at a preset frame rate. The image frame sequence comprises multiple frames of images ordered by the sequence in which the first processor acquired them. The first processor may send the original images in the image frame sequence to the second processor over a signal link on a first-in-first-out basis.
Step S203: the first processor receives a multi-frame fusion instruction.
The multi-frame fusion instruction may be sent by the second processor, or may be generated by the first processor according to a received photographing instruction or continuous-shooting instruction. The photographing or continuous-shooting instruction may in turn be sent by the second processor, or may be generated when the user inputs a photographing operation on the terminal, for example by pressing a shutter key or touching the display screen.
It is understood that step S203 can be performed at any point before or after step S201; the invention is not limited in this respect.
Step S204: and the first processor performs multi-frame fusion processing on the acquired N frames of original images according to the multi-frame fusion instruction to generate K frames of fusion images, wherein N, K is a positive integer.
Specifically, in order to obtain an image of better quality, it is generally necessary to perform multi-frame fusion processing on multiple acquired original images. The first processor performs multi-frame fusion processing on the acquired N frames of original images according to the multi-frame fusion instruction to generate K frames of fused images, where K may be 1 or a positive integer greater than 1. A specific way of fusing the N original frames into fused frames is to process the acquired N frames through an image fusion algorithm, where the image fusion algorithm is one of a high dynamic range (HDR) rendering algorithm, a super-resolution algorithm, a multi-frame noise reduction algorithm, or a multi-focal-length image fusion algorithm.
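Of the listed algorithms, multi-frame noise reduction has the simplest possible form, so it serves as a sketch of what "N frames in, one fused frame out" means here. The sketch assumes the frames are already aligned; real implementations add registration and motion rejection.

```python
import numpy as np

def multi_frame_noise_reduction(frames):
    """Fuse N aligned frames into one by averaging.

    For zero-mean sensor noise, averaging N frames reduces the noise
    standard deviation by roughly a factor of sqrt(N) while preserving
    the static scene content.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])  # (N, H, W[, C])
    fused = stack.mean(axis=0)
    return np.clip(fused, 0, 255).astype(frames[0].dtype)
```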
It is to be understood that the first processor may include a buffer area comprising a plurality of buffer units, each buffer unit holding one frame of original image, so that the buffer area can hold M frames. The N frames of original images may then be the frame in the buffer unit currently being written together with the frames in the N-1 buffer units preceding it, where M is a positive integer and M ≥ N + 2.
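A minimal sketch of such a buffer area, assuming M slots and the M ≥ N + 2 margin stated above; slot indices and method names are hypothetical.

```python
class FrameRingBuffer:
    """M buffer units, one frame each; the fusion input is the most recently
    written unit plus the N-1 units before it. M >= N + 2 leaves headroom so
    those N frames are not overwritten while fusion reads them."""

    def __init__(self, m_slots, n_fusion):
        assert m_slots >= n_fusion + 2
        self.slots = [None] * m_slots
        self.n = n_fusion
        self.write_idx = 0  # next unit to be written

    def write(self, frame):
        self.slots[self.write_idx] = frame
        self.write_idx = (self.write_idx + 1) % len(self.slots)

    def frames_for_fusion(self):
        # The last N written frames, oldest first.
        m = len(self.slots)
        start = self.write_idx - self.n  # may be negative; % m wraps it
        return [self.slots[(start + i) % m] for i in range(self.n)]
```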
Step S205: and sending the K frame fusion image to a second processor through the image frame sequence.
In particular, the first processor may send the K frame fused image over the image frame sequence to the second processor over a signal link.
In this embodiment of the present invention, step S205 may further include: and sending frame identification information to a second processor through the image frame sequence, wherein the frame identification information is used for identifying the fusion image.
Specifically, the frame identification information may be carried in the Data Type (DT) field of a fused-image data packet that contains the fused image and the frame identification information. The values 0x30 to 0x37 may be used to define the frame identification types; for example, the frame identification information of the HDR image, the super-resolution image, the multi-frame noise reduction image, and the multi-focal-length fused image may be user-defined as 0x30, 0x31, 0x32, 0x33, and 0x36, respectively, and the second processor determines the image type of a received image according to the data in the DT field. It is understood that when the DT field in the data packet of an image received by the second processor through the image frame sequence carries no data or a preset value such as 0x37, the received image is treated as an original image.
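A sketch of this tagging convention follows. The one-byte-header packet layout is an assumption made purely for illustration (the patent only says the identifier travels in the DT field), while the hex values follow the example above.

```python
# Example DT values from the description; 0x37 (or an absent DT) marks an
# ordinary original frame.
DT_HDR         = 0x30  # high dynamic range fused frame
DT_SUPER_RES   = 0x31  # super-resolution fused frame
DT_MFNR        = 0x32  # multi-frame noise reduction fused frame
DT_MULTI_FOCUS = 0x33  # multi-focal-length fused frame
DT_ORIGINAL    = 0x37  # plain original frame

def pack_frame(dt: int, payload: bytes) -> bytes:
    # Assumed minimal packet: one DT byte followed by the image payload.
    return bytes([dt]) + payload

def classify_frame(packet: bytes):
    dt, payload = packet[0], packet[1:]
    kind = "fused" if 0x30 <= dt <= 0x36 else "original"
    return kind, dt, payload
```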
In this embodiment of the present invention, step S205 may be implemented by overlaying the K frames of fused images on K frames of original images in the image frame sequence and sequentially sending the images in the resulting sequence to the second processor. That is, after generating the K fused frames, the first processor may overwrite K original frames in the image frame sequence with them and then send the remaining original images and the K fused images to the second processor in order.
Typically, images in a sequence of image frames are transmitted at a preset frame rate (F), which may be the frame rate at which the image sensor acquires raw images, to transmit the acquired raw images in real time. The sending frame rate is the number of the images sent by the first processor in unit time; the frame rate of the image sensor for acquiring the original images is the number of the original images acquired by the image sensor in unit time.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating an implementation of transmitting a fused image through an image frame sequence according to an embodiment of the present invention, taking K = 1 as an example. The image frame sequence comprises multiple frames ordered by acquisition time, and the image at the rightmost end of the sequence is the one currently being sent. When the first processor has not generated a fused image, its output image frame sequence is the same as its input image frame sequence, which is the sequence output by the image sensor. After the first processor generates the K frames of fused images, they are overlaid on K original frames in the input sequence to form the output sequence shown in fig. 3, and the first processor sends the original images and the K fused images in the overlaid sequence to the second processor in order.
It is understood that the K fused frames can cover any K original frames in the image frame sequence without changing the transmission frame rate of the original images. Preferably, the K fused frames cover the K original frames immediately following the original image currently being transmitted.
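A sketch of the overlay scheme of fig. 3, assuming for illustration that the sequence is held as a Python list and `current_idx` points at the frame currently being sent:

```python
def overlay_fused(frame_seq, fused_frames, current_idx):
    """Replace the K original frames immediately after the frame currently
    being sent with the K fused frames. The sequence length is unchanged,
    so the sending frame rate F is unchanged too."""
    seq = list(frame_seq)
    k = len(fused_frames)
    seq[current_idx + 1 : current_idx + 1 + k] = fused_frames
    return seq
```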
In this embodiment of the present invention, step S205 may instead be implemented by inserting the K frames of fused images into the image frame sequence and sequentially sending the images in the resulting sequence to the second processor. It can be understood that the K fused frames may be inserted between two original images or distributed among multiple original images, and the sending frame rate of the original images need not change.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating an implementation of transmitting a fused image through an image frame sequence according to another embodiment of the present invention, taking K = 1 as an example. The image frame sequence comprises multiple frames ordered by acquisition time, and the image at the rightmost end of the sequence is the one currently being sent. When the first processor has not generated a fused image, its output image frame sequence is the same as its input image frame sequence, which is the sequence output by the image sensor. After the first processor generates the K frames of fused images, they are inserted into the input sequence to form the output sequence shown in fig. 4, and the first processor sends the original images and the K fused images in the inserted sequence to the second processor in order.
In this embodiment of the present invention, after the inserting the K-frame fusion image into the image frame sequence, and before the sequentially sending the images in the inserted image frame sequence to the second processor, the method may further include: adjusting the sending frame rate of Q frame images in the inserted image frame sequence; the Q frame image comprises the K frame fusion image; q is a positive integer, and Q is more than or equal to K.
Specifically, the first processor may adjust the transmission frame rate of Q frames in the inserted image frame sequence, where the Q frames include the K fused frames, while keeping the transmission frame rate of the other original images unchanged. Alternatively, the adjusted transmission frame rate of the Q frames may be F × (Q + 2)/(Q + 2 - K). Preferably, the Q frames are the Q frames following the original image currently being transmitted in the inserted sequence, and they include the K fused frames.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating an implementation of transmitting a fused image through an image frame sequence according to yet another embodiment of the present invention, taking K = 1 and Q = 3 as an example. The image frame sequence comprises multiple frames ordered by acquisition time, and the image at the rightmost end is the one currently being sent. When the first processor has not generated a fused image, its output image frame sequence is the same as its input image frame sequence, which is the sequence output by the image sensor. After the first processor generates the K fused frames, they are inserted into the input sequence and the sending frame rate of Q frames in the inserted sequence is adjusted, forming the output sequence shown in fig. 5; the first processor then sends the original images and the K fused images to the second processor in order. If the transmission frame rate of the input image frame sequence is F, the adjusted transmission frame rate of the Q frames in fig. 5 is 1.25F.
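The insertion scheme and the rate adjustment can be sketched together; the check at the end reproduces the fig. 5 numbers (K = 1, Q = 3 gives 1.25F). List-based sequence handling is again an assumption for illustration.

```python
def insert_fused(frame_seq, fused_frames, current_idx):
    """Insert the K fused frames right after the frame currently being sent,
    keeping every original frame (figs. 4 and 5)."""
    seq = list(frame_seq)
    seq[current_idx + 1 : current_idx + 1] = fused_frames
    return seq

def adjusted_frame_rate(f, q, k):
    # Per the description, the Q frames containing the K fused frames are
    # sent at F * (Q + 2) / (Q + 2 - K); the other originals stay at rate F.
    return f * (q + 2) / (q + 2 - k)

assert adjusted_frame_rate(30.0, q=3, k=1) == 37.5  # = 1.25 * 30 fps, as in fig. 5
```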
Step S206: and the second processor receives the image sent by the first processor through the image frame sequence, and performs image processing on the image received by the second processor to generate a processed image. Wherein the image received by the second processor comprises the original image and the K frame fused image.
It is understood that the raw image collected by the image sensor may be a Bayer image, and the K frames of fused images generated by the first processor may also be Bayer images; the second processor may process the raw or fused image by passing it through an ISP (Image Signal Processing) pipeline. The ISP pipeline may include one or a combination of CFA (Color Filter Array) interpolation, white balance, edge enhancement, color correction, gamma correction, gray-scale conversion, and the like.
Step S207: the second processor displays the processed image through a display, or the second processor saves the processed image through a memory.
Specifically, the terminal may include a display and a memory, and the terminal may also be externally connected with the display or the memory. The second processor can display the processed image through a display, so that real-time preview of an original image currently acquired by the image sensor is realized; the processed image can also be saved by a memory, so that the saving of the original image currently acquired by the image sensor is realized.
Optionally, after the second processor receives an image sent by the first processor through the image frame sequence and before step S207, the second processor may choose to display or store the processed image according to the image type. Specifically, the received image may carry frame identification information identifying its type: when the received image is an original image, the second processor may choose to display the processed original image; when the received image is a fused image, for example when the frame identification information in its data packet is 0x30, 0x31, 0x32, 0x33, or 0x36, the second processor may choose to store the processed fused image.
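Putting the pieces together, the second processor's routing reduces to one branch on the DT value. In the sketch below, `display`, `storage`, and `process` are hypothetical stand-ins for the display driver, the memory, and the ISP pipeline, and the one-byte-DT packet layout follows the earlier assumed example.

```python
def handle_packet(packet: bytes, display, storage, process):
    """Route one received frame: preview originals, store fused photographs."""
    dt, payload = packet[0], packet[1:]
    image = process(payload)          # ISP processing applies in both paths
    if 0x30 <= dt <= 0x36:            # fused frame -> save to memory
        storage.save(image)
    else:                             # original (e.g. DT 0x37) -> live preview
        display.show(image)
```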
According to the embodiment of the invention, the first processor acquires original images collected by the image sensor, performs multi-frame fusion processing on the acquired N frames of original images according to a received multi-frame fusion instruction to generate K frames of fused images, and sends the original images and the K fused images to the second processor through the image frame sequence; the second processor receives the images sent through the image frame sequence, performs image processing on them to generate processed images, and displays the processed images through a display or stores them through a memory. In this way, the multi-frame fusion processing and the real-time preview of the original images do not interfere with each other, which improves the working efficiency of the terminal and the user experience.
Moreover, the fused images and the original images can be interleaved over the second processor's existing signal link, without changing the hardware structure of the second processor.
Referring to fig. 6, fig. 6 is a schematic block diagram of a terminal according to an embodiment of the present invention. The terminal comprises a first processor 61 and a second processor 62 connected by a signal link 63. The first processor 61 may include: an acquisition unit 601 and a first sending unit 602, wherein,
an acquisition unit 601 configured to acquire an original image acquired by an image sensor;
a first sending unit 602, configured to send the original image to the second processor through an image frame sequence;
the second processor 62 may include: a first receiving unit 603, a processing unit 604 and an execution unit 605, wherein,
a first receiving unit 603, configured to receive an image sent by the first processor through the image frame sequence;
a processing unit 604, configured to perform image processing on the image received by the second processor, and generate a processed image; the image received by the second processor comprises the original image;
an executing unit 605, configured to display the processed image through a display, or the second processor stores the processed image through a memory.
It should be noted that, in each embodiment of the present invention, the functions of the obtaining unit 601, the first sending unit 602, the first receiving unit 603, the processing unit 604, and the executing unit 605 in the terminal may be implemented according to the method in the foregoing method embodiment; for the specific implementation process, reference may be made to the related description of the first method embodiment, which is not repeated here.
Referring to fig. 7, fig. 7 is a schematic block diagram of a terminal according to another embodiment of the present invention. The terminal comprises a first processor 71 and a second processor 72 connected through a signal link 73. In addition to the units of the first processor 61 shown in fig. 6, the first processor 71 may further comprise a second receiving unit 606, a fusion processing unit 607, a second sending unit 608, and an adjusting unit 609, wherein,
a second receiving unit 606, configured to receive a multi-frame fusion instruction;
the fusion processing unit 607 is configured to perform multi-frame fusion processing on the acquired N original images according to the multi-frame fusion instruction to generate K fusion images, where N, K is a positive integer, and N is greater than or equal to K;
a second sending unit 608, configured to send the K-frame fused image to a second processor through the image frame sequence; the images received by the second processor further include the K frame fused image.
In this embodiment of the present invention, the second sending unit 608 is specifically configured to:
covering the K frames of fused images on K frames of original images in the image frame sequence;
and sequentially sending the images in the covered image frame sequence to the second processor.
In this embodiment of the present invention, the second sending unit 608 is specifically configured to:
inserting the K-frame fused image into the sequence of image frames;
and sequentially sending the images in the inserted image frame sequence to the second processor.
In this embodiment of the present invention, the first processor 71 may further include:
an adjusting unit 609, configured to adjust a transmission frame rate of Q frames of images in the inserted image frame sequence; the Q frame image comprises the K frame fusion image; q is a positive integer, and Q is more than or equal to K.
It should be noted that, in each embodiment of the present invention, functions of the obtaining unit 601, the first sending unit 602, the first receiving unit 603, the processing unit 604, the executing unit 605, the second receiving unit 606, the fusion processing unit 607, the second sending unit 608, and the adjusting unit 609 may be specifically implemented according to the method in the foregoing method embodiment, and a specific implementation process thereof may refer to relevant descriptions of implementation manners described in the foregoing first embodiment and second embodiment of the method, and details are not described here again.
Referring to fig. 8, fig. 8 is a schematic block diagram of a terminal according to another embodiment of the present invention. The terminal may include a first processor 801, a second processor 802, one or more input devices 803, one or more output devices 804, and a memory 805. The second processor 802, the input device 803, the output device 804, and the memory 805 are connected by a bus 806; the first processor 801 and the second processor 802 are connected by a signal link 807; and the input device 803, which may comprise an image sensor, may be connected to the first processor 801 by a data line 808. The memory 805 is used to store instructions, and the first processor 801 and the second processor 802 execute the instructions stored in the memory 805. Wherein,
the first processor 801 may be configured to acquire a raw image captured by an image sensor in the input device 803 via a data line 808 and send the raw image to the second processor 802 via a sequence of image frames via a signal link 807;
the second processor 802 is configured to receive an image sent by the first processor 801 through the image frame sequence, and perform image processing on the image received by the second processor 802 to generate a processed image; the image received by the second processor 802 comprises the original image;
the second processor 802 displays the processed image via a display in an output device 804 or the second processor saves the processed image via a memory 805.
In this embodiment of the present invention, the first processor 801 is further configured for:
receiving a multi-frame fusion instruction through the input device 803;
performing multi-frame fusion processing on the obtained N original images according to the multi-frame fusion instruction to generate K fusion images, wherein N, K is a positive integer, and N is more than or equal to K;
sending the K frames of fused image over the signal link 807 to a second processor 802 over the sequence of image frames; the images received by the second processor 802 also include the K frame fused image.
In this embodiment of the present invention, the first processor 801 sending the K frames of fused images to the second processor 802 through the image frame sequence over the signal link 807 may include:
covering the K frames of fused images on K frames of original images in the image frame sequence;
the images in the covered image frame sequence are sequentially sent to the second processor 802.
In this embodiment of the present invention, the first processor 801 sending the K frames of fused images to the second processor 802 through the image frame sequence over the signal link 807 may alternatively include:
inserting the K-frame fused image into the sequence of image frames;
the images in the inserted image frame sequence are sequentially sent to the second processor 802.
In this embodiment of the present invention, after the first processor 801 inserts the K frames of fused images into the image frame sequence, and before the images in the inserted image frame sequence are sequentially sent to the second processor 802 through the signal link 807, the first processor 801 is further configured to:
adjusting the sending frame rate of Q frame images in the inserted image frame sequence; the Q frame image comprises the K frame fusion image; q is a positive integer, and Q is more than or equal to K.
It should be understood that in the embodiments of the present invention, the first processor 801 or the second processor 802 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. Preferably, the first processor 801 is a DSP (Digital Signal Processing) processor and the second processor 802 may be an ARM (Acorn RISC Machine) processor.
In embodiments of the present invention, the input device 803 may include an image sensor, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 804 may include a display (LCD, etc.), a speaker, etc.
The memory 805 may include a read-only memory and a random access memory, and provides instructions and data to the first processor 801 and the second processor 802. A portion of the memory 805 may also include non-volatile random access memory. For example, the memory 805 may also store information of device types.
In a specific implementation, the first processor 801, the second processor 802, the input device 803, the output device 804, the memory 805, the bus 806, the signal link 807, and the data line 808, which are described in the embodiment of the present invention, may perform the implementation manners described in the first embodiment and the second embodiment of the method for processing an image, and may also perform the implementation manners of the terminal described in the embodiment of the present invention, which is not described herein again.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; to illustrate the interchangeability of hardware and software clearly, the components and steps of the examples have been described above generally in terms of their functions. Whether such functions are implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs.
The units in the terminal of the embodiment of the invention can be merged, divided and deleted according to actual needs.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, comprising:
the method comprises the steps that a first processor acquires an original image acquired through an image sensor and sends the original image to a second processor through an image frame sequence;
the second processor receives the images sent by the first processor through the image frame sequence, and performs image processing on the images received by the second processor to generate processed images; the image received by the second processor comprises the original image;
the second processor displays the processed image through a display, or the second processor saves the processed image through a memory.
2. The method of claim 1, further comprising:
the first processor receives a multi-frame fusion instruction;
the first processor performs multi-frame fusion processing on the acquired N original images according to the multi-frame fusion instruction to generate K fusion images, wherein N, K is a positive integer, and N is more than or equal to K;
sending the K frame fusion image to a second processor through the image frame sequence; the images received by the second processor further include the K frame fused image.
3. The method of claim 2, wherein sending the K-frame fused image to a second processor over the sequence of image frames comprises:
covering the K frames of fused images on K frames of original images in the image frame sequence;
and sequentially sending the images in the covered image frame sequence to the second processor.
4. The method of claim 2, wherein sending the K-frame fused image to a second processor over the sequence of image frames comprises:
inserting the K-frame fused image into the sequence of image frames;
and sequentially sending the images in the inserted image frame sequence to the second processor.
5. The method of claim 4, wherein after the inserting the K-frame fused images into the sequence of image frames and before the sequentially sending the images in the sequence of inserted image frames to the second processor, the method further comprises:
adjusting the sending frame rate of Q frame images in the inserted image frame sequence; the Q frame image comprises the K frame fusion image; q is a positive integer, and Q is more than or equal to K.
6. A terminal, comprising a first processor, a second processor, wherein the first processor comprises:
an acquisition unit for acquiring an original image acquired by an image sensor;
the first sending unit is used for sending the original image to the second processor through an image frame sequence;
the second processor comprises:
a first receiving unit, configured to receive an image sent by the first processor through the image frame sequence;
the processing unit is used for carrying out image processing on the image received by the second processor to generate a processed image; the image received by the second processor comprises the original image;
and the execution unit is used for displaying the processed image through a display, or the second processor saves the processed image through a memory.
7. The terminal of claim 6, wherein the first processor further comprises:
the second receiving unit is used for receiving the multi-frame fusion instruction;
the fusion processing unit is used for carrying out multi-frame fusion processing on the acquired N original images according to the multi-frame fusion instruction to generate K fusion images, wherein N, K is a positive integer, and N is more than or equal to K;
the second sending unit is used for sending the K frame fusion image to a second processor through the image frame sequence; the images received by the second processor further include the K frame fused image.
8. The terminal of claim 7, wherein the second sending unit is specifically configured to:
covering the K frames of fused images on K frames of original images in the image frame sequence;
and sequentially sending the images in the covered image frame sequence to the second processor.
9. The terminal of claim 7, wherein the second sending unit is specifically configured to:
inserting the K-frame fused image into the sequence of image frames;
and sequentially sending the images in the inserted image frame sequence to the second processor.
10. The terminal of claim 9, wherein the first processor further comprises:
an adjusting unit, configured to adjust a transmission frame rate of Q-frame images in the inserted image frame sequence; the Q frame image comprises the K frame fusion image; q is a positive integer, and Q is more than or equal to K.
CN201610685209.5A 2016-08-18 2016-08-18 A kind of image processing method and terminal Withdrawn CN106251279A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610685209.5A CN106251279A (en) 2016-08-18 2016-08-18 A kind of image processing method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610685209.5A CN106251279A (en) 2016-08-18 2016-08-18 A kind of image processing method and terminal

Publications (1)

Publication Number Publication Date
CN106251279A true CN106251279A (en) 2016-12-21

Family

ID=57593233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610685209.5A Withdrawn CN106251279A (en) 2016-08-18 2016-08-18 A kind of image processing method and terminal

Country Status (1)

Country Link
CN (1) CN106251279A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107277351A (en) * 2017-06-30 2017-10-20 维沃移动通信有限公司 The processing method and mobile terminal of a kind of view data
CN108229516A (en) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 For interpreting convolutional neural networks training method, device and the equipment of remote sensing images
CN108881946A (en) * 2017-05-10 2018-11-23 北京猎户星空科技有限公司 Generation, transmission, processing method, device and its system of sensing data
CN110730311A (en) * 2019-10-08 2020-01-24 西安万像电子科技有限公司 Image processing method, host, single board and system
CN111698414A (en) * 2019-03-14 2020-09-22 北京小米移动软件有限公司 Image signal processing method and device, electronic device and readable storage medium
WO2023236115A1 (en) * 2022-06-08 2023-12-14 北京小米移动软件有限公司 Image processing method and apparatus, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103595924A (en) * 2013-06-18 2014-02-19 南京理工大学 Image fusion system based on Cameralink and image fusion method based on Cameralink
US20150228048A1 (en) * 2014-02-07 2015-08-13 Samsung Electronics Co., Ltd. Method for displaying image information and electronic device thereof
CN205486305U (en) * 2016-03-07 2016-08-17 四川九洲北斗导航与位置服务有限公司 Multinuclear heart image processor system based on compactPCI bus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103595924A (en) * 2013-06-18 2014-02-19 南京理工大学 Image fusion system based on Cameralink and image fusion method based on Cameralink
US20150228048A1 (en) * 2014-02-07 2015-08-13 Samsung Electronics Co., Ltd. Method for displaying image information and electronic device thereof
CN205486305U (en) * 2016-03-07 2016-08-17 四川九洲北斗导航与位置服务有限公司 Multinuclear heart image processor system based on compactPCI bus

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229516A (en) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 For interpreting convolutional neural networks training method, device and the equipment of remote sensing images
CN108881946A (en) * 2017-05-10 2018-11-23 北京猎户星空科技有限公司 Generation, transmission, processing method, device and its system of sensing data
CN107277351A (en) * 2017-06-30 2017-10-20 维沃移动通信有限公司 The processing method and mobile terminal of a kind of view data
CN111698414A (en) * 2019-03-14 2020-09-22 北京小米移动软件有限公司 Image signal processing method and device, electronic device and readable storage medium
CN111698414B (en) * 2019-03-14 2021-11-16 北京小米移动软件有限公司 Image signal processing method and device, electronic device and readable storage medium
CN110730311A (en) * 2019-10-08 2020-01-24 西安万像电子科技有限公司 Image processing method, host, single board and system
CN110730311B (en) * 2019-10-08 2024-02-20 西安万像电子科技有限公司 Image processing method, host, single board and system
WO2023236115A1 (en) * 2022-06-08 2023-12-14 北京小米移动软件有限公司 Image processing method and apparatus, and storage medium

Similar Documents

Publication Publication Date Title
CN106251279A (en) A kind of image processing method and terminal
CN106341571A (en) Image processing method and terminal
CN111726533B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107172345B (en) Image processing method and terminal
EP3547218B1 (en) File processing device and method, and graphical user interface
US20140310654A1 (en) Method and system for interworking plurality of applications
CN104869305B (en) Method and apparatus for processing image data
CN108737739B (en) Preview picture acquisition method, preview picture acquisition device and electronic equipment
CN112770059B (en) Photographing method and device and electronic equipment
CN107302666A (en) Photographic method, mobile terminal and computer-readable recording medium
CN108388671B (en) Information sharing method and device, mobile terminal and computer readable medium
CN110457963B (en) Display control method, display control device, mobile terminal and computer-readable storage medium
TWI615807B (en) Method, apparatus and system for recording the results of visibility tests at the input geometry object granularity
CN108924440A (en) Paster display methods, device, terminal and computer readable storage medium
CN110390641B (en) Image desensitizing method, electronic device and storage medium
CN111464864B (en) Reverse order video acquisition method and device, electronic equipment and storage medium
CN105573585A (en) Information display method and terminal
CN110166696B (en) Photographing method, photographing device, terminal equipment and computer-readable storage medium
WO2015094182A1 (en) Camera array analysis mechanism
CN103823610B (en) A kind of electronic equipment and its information processing method
CN110969587A (en) Image acquisition method and device and electronic equipment
CN109271543B (en) Thumbnail display method and device, terminal and computer-readable storage medium
CN115631109A (en) Image processing method, image processing device and electronic equipment
CN112203020B (en) Method, device and system for configuring camera configuration parameters of terminal equipment
CN111447439B (en) Image coding method, image coding device and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20161221