WO2021227693A1 - Photographic method and apparatus, and mobile terminal and chip system - Google Patents

Photographic method and apparatus, and mobile terminal and chip system

Info

Publication number
WO2021227693A1
WO2021227693A1 · PCT/CN2021/084589 · CN2021084589W
Authority
WO
WIPO (PCT)
Prior art keywords
image
area
tracking
preview
original image
Prior art date
Application number
PCT/CN2021/084589
Other languages
French (fr)
Chinese (zh)
Inventor
刘宏马
张雅琪
张超
陈艳花
吴文海
贾志平
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021227693A1 publication Critical patent/WO2021227693A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory

Definitions

  • This application relates to the field of video shooting, in particular to a shooting method, device, mobile terminal and chip system.
  • When the user follows a subject through the camera of a mobile terminal, the picture often shakes; when following the subject in a high-magnification scene the shaking is more severe, and it may even be difficult to keep the subject of interest in the frame.
  • The embodiments of the present application provide a shooting method, apparatus, mobile terminal, and chip system, which solve the problem that the picture shakes in a high-magnification scene and the subject of interest cannot be captured.
  • In a first aspect, an embodiment of the present application provides a shooting method, including: acquiring an original image collected by a camera of a mobile terminal; performing jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area, in the processed image, containing the tracking target; and displaying the image in the preview area as a preview picture.
  • the performing jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area of the processed image containing the tracking target includes:
  • Performing shake correction processing on the original image to obtain a processed image and a correction area in the processed image; finding a tracking target from the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image; and jointly cropping the processed image based on the correction area and the tracking area to obtain a preview area in the processed image.
  • the performing jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area of the processed image containing the tracking target includes:
  • Performing shake correction processing on the original image to obtain a processed image and a correction area in the processed image; and
  • Finding a tracking target from the correction area, determining a tracking area in the correction area based on the position of the tracking target in the correction area, and using the tracking area as a preview area.
  • the performing jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area of the processed image containing a tracking target includes:
  • Finding a tracking target from the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image; and
  • Performing shake correction processing on the image in the tracking area to obtain a processed image and a correction area in the processed image, and using the correction area as a preview area.
  • the jitter correction processing includes: acquiring jitter information of the mobile terminal, and performing a first cropping process on the image to be corrected based on the jitter information to determine a correction area in the image to be corrected;
  • the target tracking processing includes: finding a tracking target in the image to be tracked, and performing a second cropping process based on the position of the tracking target to determine a tracking area in the image to be tracked.
  • In one implementation, before the displaying the image in the preview area as a preview picture, the method further includes: performing smoothing processing on the preview area to obtain a smoothed preview area;
  • the displaying the image in the preview area as a preview picture includes: displaying the image in the smoothed preview area as the preview picture.
  • the performing smoothing processing on the preview area includes: performing smoothing processing on the preview area through at least two filters respectively to obtain a smooth sub-region corresponding to each filter; determining the weight of each filter through a decision maker; and fusing the smooth sub-regions corresponding to the filters based on the weights of the filters to obtain the smoothed preview area.
  • In one implementation, after the obtaining a preview area containing the tracking target in the processed image, the method further includes: performing zoom processing on the camera based on the size of the tracking target in the preview area in the original image, where the zoomed camera is used to collect the next frame of original image.
  • In one implementation, the number of cameras provided on the mobile terminal is at least two; after the performing zoom processing on the camera, the method further includes: selecting one of the zoomed cameras as the main camera, where the main camera is the camera that collects the next frame of original image.
  • the selecting one of the zoomed cameras as the main camera includes: selecting, according to a composition model, one of the cameras after the zoom processing as the main camera.
  • an embodiment of the present application provides a photographing device, including:
  • the image acquisition unit is used to acquire the original image collected by the camera of the mobile terminal;
  • An image processing unit configured to perform jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image;
  • the image display unit is used to display the image in the preview area as a preview picture.
  • a mobile terminal, including: a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of any one of the methods described in the first aspect of the present application.
  • a chip system, including: a processor coupled with a memory, where the processor executes a computer program stored in the memory to implement the steps of any one of the methods described in the first aspect of the present application.
  • a computer-readable storage medium storing a computer program that, when executed by one or more processors, implements the steps of the method described in any one of the first aspects of the present application.
  • embodiments of the present application provide a computer program product, which when the computer program product runs on a mobile terminal, causes the mobile terminal to execute the method described in any one of the above-mentioned first aspects.
  • In the shooting method provided by this application, the camera collects an original image, which is a low-magnification image; the shake correction processing prevents the picture from shaking, and the target tracking processing prevents the tracking target from shaking within the picture.
  • After the shake correction processing, a processed image is obtained, and at the same time a certain area in the processed image is obtained; the size of this area is smaller than that of the original image, and the image in this area is finally displayed as the preview picture, so that a stable high-magnification shooting effect can be presented.
  • In other words, the shake correction processing and target tracking processing in the embodiments of the present application avoid picture shake through cropping compensation, avoid shaking of the tracking target within the picture through cropping, and at the same time achieve high-magnification scene shooting.
  • FIG. 1 is a schematic diagram of an application scenario of a shooting method provided by an embodiment of this application
  • FIG. 2 is a schematic flowchart of a shooting method provided by an embodiment of the application
  • FIG. 3 is a schematic diagram of a correction area after shake correction processing and a tracking area after target tracking processing provided by an embodiment of the application;
  • FIG. 4 is a schematic flowchart of another shooting method provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of a correction area, a tracking area, and a preview area corresponding to the shooting method provided by the embodiment shown in FIG. 4;
  • FIG. 6 is a schematic flowchart of another shooting method provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of a correction area, a tracking area, and a preview area corresponding to the shooting method provided by the embodiment shown in FIG. 6;
  • FIG. 8 is a schematic flowchart of another shooting method provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of a correction area, a tracking area, and a preview area corresponding to the shooting method provided by an embodiment of the application;
  • FIG. 10 is a schematic diagram of a smoothing process in a shooting method provided by an embodiment of the application.
  • FIG. 11 is a schematic flowchart of another shooting method provided by an embodiment of the application.
  • FIG. 12 is a schematic block diagram of a photographing device provided by an embodiment of the application.
  • FIG. 13 is a schematic block diagram of a mobile terminal provided by an embodiment of this application.
  • FIG. 14 is a schematic diagram of the software structure of a mobile terminal provided by an embodiment of the application.
  • one or more refers to one, two or more than two; "and/or” describes the association relationship of the associated objects, indicating that there may be three relationships; for example, A and/or B can mean the situation where A exists alone, A and B exist at the same time, and B exists alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an "or” relationship.
  • the term "if" can be construed as "when", "once", "in response to determining", or "in response to detecting".
  • Similarly, depending on the context, the phrase "if determined" or "if [the described condition or event] is detected" can be interpreted as "once determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
  • Figure 1 shows a schematic diagram of an application scenario of the shooting method provided by an embodiment of the present application.
  • In this scenario, a user (person 1) holds a mobile phone to shoot a subject (person 2), and person 2 may be walking or running.
  • To follow person 2, person 1 may also need to walk, run, and so on.
  • The phone will shake as the user walks or runs.
  • When shooting at high magnification, this shake is magnified.
  • This kind of shake blurs person 2 on the phone screen; in addition, when person 2 is running or jumping, the user may need to run to keep following person 2, and the picture shakes as person 2 jumps or as person 1 moves while shooting.
  • FIG. 2 shows a schematic flowchart of a photographing method provided by an embodiment of the present application. As an example and not a limitation, the method may be applied to a mobile terminal.
  • Step S201 Obtain the original image collected by the camera of the mobile terminal.
  • The mobile terminal may be a movable device with a camera function, for example, a mobile phone, a camera, a video camera, a tablet computer, a surveillance camera, a notebook computer, etc.; the specific type of mobile terminal is not limited here.
  • the original image represents an image before subsequent shake correction processing and target tracking processing.
  • The original image may be the original large image collected by the camera, or an image obtained by preprocessing the original large image collected by the camera.
  • the original large image may be an image generated by information collected by the sensor when all pixels in the sensor of the camera are working.
  • Step S202 Perform jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image.
  • When following a moving subject, the user will also move the body, causing the mobile terminal held by the user to shake.
  • Even if the user holds the mobile terminal with some training or in a relatively stable posture, it is difficult to avoid the phenomenon of "hand shaking".
  • the methods of shake correction processing include: optical image stabilization, electronic image stabilization, and body sensor image stabilization.
  • Optical image stabilization corrects the "optical axis shift" through a floating lens element in the lens.
  • Its principle is to detect slight movement through a gyroscope in the lens and transmit the signal to a microprocessor.
  • The microprocessor immediately calculates the displacement that needs to be compensated, and the compensation lens group then compensates according to the direction and amount of lens shake, thereby effectively overcoming image blur caused by camera vibration.
  • the principle of sensor anti-shake technology is similar to that of lens anti-shake. It mainly installs the sensor on a free-floating bracket, and also needs a gyroscope to sense the direction and amplitude of the camera's shaking, and then control the sensor to perform corresponding displacement compensation. The main reason why so many correction directions are involved is to reduce the irregular hand shaking when taking pictures.
  • Both the optical anti-shake and the sensor anti-shake require the participation of hardware, while the electronic anti-shake can be implemented without hardware assistance, and the image on the sensor is mainly analyzed and processed by software.
  • the edge image is used to compensate for the blurry part in the middle, so as to achieve "anti-shake".
  • Its anti-shake principle is more like "post-processing" of the photo.
  • After electronic image stabilization is turned on, the viewfinder picture is cropped very noticeably; the cropped part does not mean that the edge of the sensor has stopped working, but rather that the electronic image stabilization system uses this part of the data for shake compensation. It can also be understood that the image is cut into two parts, and the outermost part (the edge) is used to compensate the inner part (the blurred middle). This can still bring a certain anti-shake effect without hardware anti-shake.
  • The embodiments of this application can use electronic anti-shake, so that "anti-shake" processing is realized without adding external hardware; after electronic anti-shake is used for the shake correction processing, the compensated middle area (the intermediate image) of the obtained processed image becomes clearer. This area becomes the correction area.
  • After the shake correction processing, a correction detection frame can be generated, and what is obtained may also be the coordinates of the correction detection frame.
  • Figure 3(a) shows a schematic diagram of the correction area after the shake correction processing and the tracking area after the target tracking processing. Assuming A is the original image, the original image undergoes the shake correction processing to obtain the processed image A', and the processed image A' has the same size as the original image A.
  • The intermediate image of the processed image A' is clearer than that of the original image A, and B is the correction area.
  • After the shake correction processing, what is obtained may be the coordinates of the four corners of the correction area, or the coordinates of the center point of the correction area together with the distances from the center point to the sides (the side lengths).
  • Figure 3 does not show the change in image clarity after the shake correction process, but only shows the positional relationship between the correction area and the processed image (original image).
  • Although the correction area is mainly described in the embodiments of the application, it should be noted that the correction area is generated along with the shake correction processing: the shake correction processing first compensates the blurred middle part of the original image, and the processed image is obtained after the compensation.
  • In the embodiment of the application, the original image is shake-corrected, and the processed image obtained has the same size as the original image; therefore, the processed image contains the edge image and the compensated middle image. After compensation, the correction area is naturally obtained, and the image in the correction area is the clearer, compensated image.
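  • For reference, the following minimal sketch (in Python; all names are illustrative, not from the patent) shows one way to represent a correction, tracking, or preview area either by its four corner coordinates or by its center point and side lengths, and to convert between the two representations mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A rectangular area (correction, tracking, or preview area) inside an image,
    stored as a center point plus side lengths. Purely illustrative."""
    cx: float  # center x
    cy: float  # center y
    w: float   # side length along x
    h: float   # side length along y

    def corners(self):
        """Return the four corner coordinates (top-left, top-right, bottom-right, bottom-left)."""
        x0, y0 = self.cx - self.w / 2, self.cy - self.h / 2
        x1, y1 = self.cx + self.w / 2, self.cy + self.h / 2
        return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]

    @classmethod
    def from_corners(cls, x0, y0, x1, y1):
        """Build a Region from top-left and bottom-right corner coordinates."""
        return cls((x0 + x1) / 2, (y0 + y1) / 2, x1 - x0, y1 - y0)
```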
  • When the tracking target (i.e., the subject being tracked) moves, the user will also move the body to follow it.
  • If the tracking target and the user's pace are inconsistent, or the shooting angle or direction changes, the position of the tracking target in the lens often shakes, and in severe cases the tracking target even jumps out of the camera picture.
  • the embodiment of the present application also adds target tracking processing.
  • the tracking target needs to be determined.
  • The tracking target may be determined as follows: the user selects an area on the touch screen of the mobile terminal (for example, by clicking to select an area within a preset range around the selected position, or by drawing a circle to select an area), and the target in that area is set as the target to be tracked; the foreground image in the picture may also be detected and used as the tracking target; or an attention algorithm may be used to find the salient object in the current image and use it as the tracking target.
  • Similar to the shake correction processing, a tracking detection frame can be generated after the target tracking processing, and what is obtained may also be the coordinates of the tracking detection frame.
  • A schematic is shown in Figure 3(b), where A is the original image and C is the tracking area; after the target tracking processing, what is obtained may be the coordinates of the four corners of the tracking area, or the coordinates of the center point of the tracking area together with the distances from the center point to the sides (the side lengths).
  • For consecutive original image frames, the same logic can be used in the shake correction processing, for example the principle of minimum scene change between the obtained correction detection frames.
  • The target tracking processing can adopt similar logic, for example the principle of maximizing the matching degree of the position and size of the tracking target between the obtained tracking detection frames. In this way, the preview picture sequence corresponding to the original image sequence avoids large picture shake and large shifts of the tracking target position.
  • The principle of minimum scene change in the obtained correction detection frame is illustrated by an example: suppose the picture in the correction detection frame obtained for original image frame i is B_i. When frame i+1 is shake-corrected, the correction detection frame is chosen so that the difference between its picture B_(i+1) and the background image in B_i is the smallest; that is, the background image changes little between consecutive image frames (the pictures in the correction detection frames corresponding to consecutive original images), so as to achieve a smoother viewing effect.
  • Similarly, the target tracking processing is performed so that the position and size of the tracking target in the obtained tracking detection frame pictures C_(i+1) and C_i differ as little as possible; that is, the position and size of the tracking target change minimally between consecutive image frames (the pictures in the tracking detection frames corresponding to consecutive original images), achieving a smoother viewing effect.
  • For ease of description, the correction detection frame obtained after the shake correction processing is recorded as the correction area, and the tracking detection frame obtained after the target tracking processing is recorded as the tracking area.
  • After the jitter correction processing and target tracking processing are performed on the collected original image, the resulting area is recorded as the preview area. A sketch of the frame-to-frame selection principle described above follows.
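  • As an illustration of the "minimum scene change" and "maximum target matching" principles described above, the following sketch (NumPy assumed; helper names are hypothetical) scores a candidate correction-area picture for frame i+1 by how little its background differs from B_i, and a candidate tracking area by how little the target's position and size differ from C_i, then picks the candidate with the smallest score.

```python
import numpy as np

def background_change(prev_crop: np.ndarray, cand_crop: np.ndarray) -> float:
    """Mean absolute difference between the previous correction-area picture B_i and a
    candidate picture B_{i+1}; both crops are assumed to have the same shape.
    A smaller value means less scene change."""
    return float(np.mean(np.abs(prev_crop.astype(np.float32) - cand_crop.astype(np.float32))))

def target_mismatch(prev_box, cand_box) -> float:
    """Difference in the tracking target's position and size between consecutive tracking
    areas, with boxes given as (cx, cy, w, h); smaller means better matching."""
    return sum(abs(p - c) for p, c in zip(prev_box, cand_box))

def pick_best(candidates, prev, score_fn):
    """Select the candidate (crop or box) that minimizes the given score function."""
    return min(candidates, key=lambda cand: score_fn(prev, cand))
```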
  • Step S203 Display the image in the preview area as a preview picture.
  • the image in the preview area may be output to the display and displayed as a preview screen.
  • the preview picture is an image frame in the displayed video stream.
  • In the shooting method provided by this embodiment, the original image collected by the camera is a low-magnification image; the shake correction processing avoids shaking of the picture, and the target tracking processing avoids shaking of the tracking target within the picture.
  • After the shake correction processing and the target tracking processing, what is obtained is a certain area in the processed image corresponding to the original image, and the picture of this area in the processed image is finally displayed as the preview picture, so that a stable high-magnification effect can be presented. That is, the shake correction processing and target tracking processing in the embodiments of the present application not only avoid shaking of the picture and shaking of the tracking target within the picture through cropping, but also delimit a certain area of the original image through cropping; the corresponding picture in the processed image is displayed as the preview picture, and high-magnification scene shooting is realized at the same time.
  • FIG. 4 shows a schematic flowchart of a shooting method provided by an embodiment of the present application. As an example and not a limitation, the method may be applied to a mobile terminal.
  • Step S401 Obtain the original image collected by the camera of the mobile terminal.
  • The content of step S401 is the same as that of step S201; please refer to the description of step S201, which will not be repeated here.
  • Step S402 Perform a shake correction process on the original image to obtain a processed image and a correction area in the processed image.
  • the process of the image stabilization processing is to use an electronic image stabilization (EIS) algorithm to avoid blurring through image cropping compensation.
  • FIG. 5 is a schematic diagram of the shake correction processing and target tracking processing provided by an embodiment of the application. As shown in FIG. 5, the original image A is subjected to shake correction processing to obtain the processed image A' and the correction area B; the size of the processed image A' is the same as that of the original image A, and the correction area B is the area corresponding to the intermediate blurred image that is compensated through cropping during the correction processing.
  • Step S403 Find a tracking target from the original image, and determine a tracking area in the original image based on the position of the tracking target in the original image.
  • In this embodiment, the target tracking processing can be performed based on the original image: the tracking target is found from the original image, and a tracking area is determined in the original image based on the position of the tracking target in the original image.
  • The tracking area in the original image may also be determined according to the position and size of the tracking target in the original image and the composition model corresponding to the tracking target.
  • After the target tracking processing, the tracking area C shown in FIG. 5 is obtained.
  • Step S404 Jointly crop the processed image based on the correction area and the tracking area to obtain a preview area in the processed image.
  • The purpose of the joint cropping process is not to actually crop the processed image, but to finally determine an area, recorded as the preview area; the picture corresponding to the preview area in the processed image can be used as the preview picture.
  • The correction area, the tracking area, and the preview area can all be represented by coordinates.
  • The coordinates corresponding to the correction area can be mapped to the original image or to the processed image; similarly, the coordinates corresponding to the tracking area can be mapped to the original image or to the processed image; and the coordinates corresponding to the preview area can likewise be mapped to the original image or to the processed image.
  • The picture corresponding to the preview area mapped to the processed image is the picture within the range indicated by the coordinates of the preview area in the processed image.
  • During joint cropping, the correction area and the tracking area (each expressed by the coordinates of its four corners, or by its center point coordinates and side lengths) are merged to obtain the preview area in the processed image.
  • That is, both the correction area and the tracking area are frames with coordinates; after the two frames are fused, the coordinates of the fused frame are mapped to the processed image as the preview area. It can also be seen from the above description that, aside from the image-sharpness processing itself, what the correction area actually provides is a set of coordinates relative to the original image or relative to the processed image (which has the same size as the original image).
  • In order that the preview picture corresponding to the current frame of original image and the preview area corresponding to the previous frame of original image have a smoother visual effect, the preview area can be expressed as a function F_t = f(b_t, c_t, F_(t-1); α, β), where F_t is the preview area of the current frame, F_(t-1) is the preview area of the previous frame, α and β are constants, b_t is the center point of the correction area corresponding to the current frame of original image (a variable of the function), and c_t is the center point of the tracking area corresponding to the current frame of original image (a variable of the function); for example, the center of the preview area may be taken as the weighted combination α·b_t + β·c_t.
  • That is, during joint cropping the preview area is related to the positions of the correction area and tracking area corresponding to the current frame of original image, and is also related to the positions of the correction area and tracking area corresponding to the previous frame of original image (or to the preview area corresponding to the previous frame of original image).
  • Relating the preview area to the previous frame in this way enables the preview area obtained for the current frame and the preview area corresponding to the previous frame of original image to present a relatively stable preview effect.
  • Comparing the positions of the correction area B and the tracking area C in FIG. 5, D is the preview area obtained after the correction area B and the tracking area C are jointly cropped. A sketch of this joint cropping follows.
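  • The following minimal sketch of the joint cropping follows the description above: the preview-area center combines the correction-area and tracking-area centers with constants α and β and is pulled toward the previous frame's preview area. The weighted-combination form and all constants are assumptions made for illustration, not values from the patent.

```python
def joint_crop(corr, track, prev_preview, img_w, img_h,
               alpha=0.5, beta=0.5, smooth=0.8):
    """corr, track, prev_preview are (cx, cy, w, h) tuples for the correction area,
    the tracking area, and the previous frame's preview area. Returns the preview
    area of the current frame. Illustrative only."""
    # Weighted combination of the correction-area and tracking-area centers (assumed form).
    cx = alpha * corr[0] + beta * track[0]
    cy = alpha * corr[1] + beta * track[1]
    # Relate the result to the previous frame's preview area for a stable preview effect.
    cx = smooth * prev_preview[0] + (1 - smooth) * cx
    cy = smooth * prev_preview[1] + (1 - smooth) * cy
    # Keep the tracking area's size and clamp the frame inside the processed image.
    w, h = track[2], track[3]
    cx = min(max(cx, w / 2), img_w - w / 2)
    cy = min(max(cy, h / 2), img_h - h / 2)
    return (cx, cy, w, h)
```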
  • Step S405: Display the image in the preview area as a preview picture.
  • The content of step S405 is the same as that of step S203; the description of step S203 can be referred to, and details are not repeated here.
  • In this embodiment, the processed image and the correction area are obtained by performing jitter correction processing on the original image, target tracking processing is performed on the original image to obtain the tracking area, and finally the processed image is jointly cropped based on the correction area and the tracking area, so that a more stable and smooth video picture can be obtained in high-magnification scenes.
  • FIG. 6 shows a schematic flowchart of a shooting method provided by an embodiment of the present application.
  • the method can be applied to the above-mentioned mobile terminal.
  • Step S601 Obtain the original image collected by the camera of the mobile terminal.
  • The content of step S601 is the same as that of step S201; please refer to the description of step S201, which will not be repeated here.
  • Step S602 Perform a shake correction process on the original image to obtain a processed image and a correction area in the processed image.
  • The content of step S602 is the same as that of step S402; please refer to the description of step S402, which will not be repeated here.
  • Step S603 Find a tracking target from the correction area, determine a tracking area in the correction area based on the position of the tracking target in the correction area, and use the tracking area as a preview area.
  • The target tracking processing performed in this step differs from step S403 of the embodiment shown in FIG. 4 in that, in the embodiment shown in FIG. 4, the tracking area is determined from the original image, while in this embodiment the tracking area is determined from the image in the correction area obtained in step S602, for example, from the corresponding picture when the correction area is mapped into the processed image, or from the corresponding picture when the correction area is mapped into the original image.
  • When determining the tracking area, the position of the tracking target in the image within the correction area, or the composition model, also needs to be considered.
  • FIG. 7 is a schematic diagram of the correction area, the tracking area, and the preview area corresponding to the shooting method provided by this embodiment. As shown in FIG. 7(a), the original image A is first subjected to shake correction processing to obtain the processed image A' and the correction area B, and target tracking processing is then performed on the image in the correction area B (the corresponding picture when the correction area is mapped into the processed image) to obtain the tracking area C, which is the preview area. Since the target tracking processing involves a cropping process, the tracking area C is located within the correction area B. Of course, when determining the tracking area, the target tracking processing may also be performed on the corresponding picture when the correction area B is mapped into the original image; this is not limited. A minimal sketch of this order of operations follows.
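  • The sketch below uses hypothetical stabilize and find_tracking_area helpers and NumPy-style image indexing; it only illustrates that the tracker runs on the correction-area crop and that the resulting tracking area is the preview area.

```python
def preview_area_correction_first(original, stabilize, find_tracking_area):
    """FIG. 6 style order of operations (sketch only). `stabilize` is assumed to return
    (processed_image, correction_area) with the correction area as integer corner
    coordinates, and `find_tracking_area` returns a tracking area inside the image
    it is given; both helpers are hypothetical."""
    processed, corr = stabilize(original)                # shake correction on the original image
    x0, y0, x1, y1 = corr                                # correction area corners
    corr_crop = processed[y0:y1, x0:x1]                  # picture of the correction area
    tx0, ty0, tx1, ty1 = find_tracking_area(corr_crop)   # target tracking inside the correction area
    # The tracking area, expressed in processed-image coordinates, is the preview area.
    return (x0 + tx0, y0 + ty0, x0 + tx1, y0 + ty1)
```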
  • Step S604 Display the image in the preview area as a preview picture.
  • step S604 and step S203 are the same, and the description of step S203 can be referred to, and the details are not repeated here.
  • In this embodiment, the original large image is shake-corrected to obtain the processed image and the correction area, and target tracking processing is then performed on the image within the correction area of the processed image to obtain the tracking area; the obtained tracking area is the preview area.
  • In this way, a more stable and smooth video picture can be obtained in high-magnification scenes.
  • Fig. 8 shows a schematic flowchart of a shooting method provided by an embodiment of the present application.
  • the method can be applied to the above-mentioned mobile terminal.
  • Step S801 Obtain the original image collected by the camera of the mobile terminal.
  • The content of step S801 is the same as that of step S201; please refer to the description of step S201, which will not be repeated here.
  • Step S802 Find a tracking target from the original image, and determine a tracking area in the original image based on the position of the tracking target in the original image;
  • step S802 and step S403 are the same, and the description of step S403 may be referred to and will not be repeated here.
  • Step S803 Perform a shake correction process on the image of the tracking area, determine a correction area in the tracking area, and use the correction area as a preview area.
  • The difference from the embodiments shown in FIG. 4 and FIG. 6 is that this embodiment first performs target tracking processing on the original image, and after the tracking area is obtained, shake correction processing is performed on the image in the tracking area to obtain the processed image and the correction area. It should be noted that the processed image obtained in this case is not the size of the original image but the size of the tracking area.
  • FIG. 9 is a schematic diagram of the correction area, the tracking area, and the preview area corresponding to the shooting method provided by the embodiment of the application; as shown in FIG. 9(a), the original image A is first subjected to target tracking processing to obtain the tracking area C, then perform a shake correction process on the image in the tracking area C to obtain a processed image C′ and a correction area B, and the correction area B is the preview area. Since the shake correction processing requires a cropping process, the correction area is located in the tracking area.
  • Step S804 Display the image in the preview area as a preview screen.
  • step S804 and step S203 are the same, and the description of step S203 can be referred to, and details are not described herein again.
  • In this embodiment, target tracking processing is performed on the original large image to obtain the tracking area, and shake correction processing is then performed on the image in the tracking area to obtain the processed image and the correction area in the processed image; the obtained correction area is the preview area.
  • In this way, a more stable and smooth video picture can be obtained in high-magnification scenes.
  • In the above embodiments, the jitter correction processing proceeds as follows.
  • The image to be corrected is the image on which shake correction processing is to be performed.
  • In the embodiments shown in FIG. 4 and FIG. 6, the original image is subjected to shake correction processing, that is, the original image is the image to be corrected.
  • In the embodiment shown in FIG. 8, the image in the tracking area is subjected to shake correction processing, that is, the image in the tracking area is the image to be corrected.
  • The jitter information of the mobile terminal can be collected by the gyroscope provided inside the mobile terminal, generating jitter information of the mobile terminal corresponding to each moment; the jitter information corresponding to the acquisition moment of the image to be corrected is then looked up from this jitter information using the acquisition time of the image to be corrected.
  • Based on this jitter information, the image to be corrected can be reversely compensated.
  • During compensation, the first cropping process is required: the edge image and the intermediate blurred image are separated, the compensation amount is calculated from the jitter information, and the intermediate blurred image is then reversely compensated using the edge image.
  • The intermediate blurred image becomes clearer after reverse compensation. Because the intermediate blurred image is cropped out of the image to be corrected, an area is obtained after the first cropping process; the image in this area is the compensated image, and this area can be recorded as the correction area.
  • The ratio (for example, area ratio, length ratio, width ratio, etc.) of the finally determined correction area to the image to be subjected to the shake correction processing can be set in advance. A sketch of this reverse-compensation cropping follows.
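  • The sketch below illustrates the reverse-compensation cropping under simplifying assumptions: the gyroscope jitter information is assumed to be already converted into a pixel shift (dx, dy), and the rotation and rolling-shutter handling of a real EIS pipeline are omitted.

```python
import numpy as np

def first_crop(image_to_correct: np.ndarray, dx: int, dy: int, ratio: float = 0.8):
    """First cropping process (sketch): cut a window whose size is a preset `ratio` of the
    image to be corrected and shift it opposite to the measured jitter (dx, dy), so that
    the middle picture is reversely compensated using the edge picture."""
    h, w = image_to_correct.shape[:2]
    cw, ch = int(w * ratio), int(h * ratio)          # preset size of the correction area
    margin_x, margin_y = (w - cw) // 2, (h - ch) // 2
    # Shift the crop window opposite to the jitter, limited by the available edge margin.
    x0 = int(np.clip(margin_x - dx, 0, w - cw))
    y0 = int(np.clip(margin_y - dy, 0, h - ch))
    correction_area = (x0, y0, x0 + cw, y0 + ch)     # coordinates of the correction area
    return image_to_correct[y0:y0 + ch, x0:x0 + cw], correction_area
```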
  • In the above embodiments, the target tracking processing proceeds as follows: a tracking target is found in the image to be tracked, and a second cropping process is performed to determine a tracking area in the image to be tracked.
  • When performing target tracking processing on the image to be tracked, the tracking target needs to be determined in advance.
  • The tracking target can be found from the image to be tracked based on an attention mechanism, or from the N frames before the image to be tracked, or based on both the image to be tracked and the N frames before it.
  • When the attention mechanism is used, a convolutional neural network model needs to be constructed, in which different parts of the input data or of the feature map correspond to different degrees of attention.
  • A classification network and an attention proposal network (APN) are used at each target scale of interest.
  • The APN can be composed of two fully connected layers and outputs 3 parameters indicating the position of a box.
  • The classification network at the next scale extracts features only from the image in this newly generated box for classification.
  • The loss function constrains the classification result at the latter scale to be better than that at the previous scale, so that the APN extracts target parts that are more conducive to fine classification; as training progresses, the APN focuses more and more on the subtle, distinguishing parts of the target.
  • In practice, the current image to be tracked can be input into the convolutional neural network model; the N frames of original images before the original image corresponding to the image to be tracked (or the preview pictures corresponding to those N frames) can be input; or the N preceding frames (or their preview pictures) together with the current image to be tracked can be input, and the tracking target is then output. The input image of the convolutional neural network model is not limited here. An illustrative sketch of such an APN head follows.
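  • The following PyTorch sketch shows an APN head of the kind described above: two fully connected layers mapping a pooled feature vector to 3 box parameters. The (center x, center y, half side length) parameterization and all dimensions are assumptions made for illustration, not taken from the patent.

```python
import torch
import torch.nn as nn

class AttentionProposalNetwork(nn.Module):
    """Illustrative APN head: two fully connected layers that output 3 parameters
    describing the attended box."""
    def __init__(self, feat_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, 3)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (batch, feat_dim) pooled feature of the image to be tracked.
        box = self.fc2(torch.relu(self.fc1(feat)))
        return torch.sigmoid(box)  # normalized (center_x, center_y, half_length) in [0, 1]
```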
  • The image to be tracked is the image on which target tracking processing is about to be performed.
  • In the embodiments shown in FIG. 4 and FIG. 8, the image to be tracked is the original image during target tracking processing.
  • In the embodiment shown in FIG. 6, the image to be tracked is the image within the correction area (the picture obtained when the correction area is mapped into the original image or into the processed image).
  • The location and size of the tracking area can be determined according to the location and size of the tracking target; for example, the center point of the smallest bounding rectangle of the tracking target can be set as the center point of the tracking area.
  • The length and width of the tracking area can be set in advance, as in the sketch below.
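  • A minimal sketch of this tracking-area construction (the names and the clamping policy are illustrative) is given below.

```python
def tracking_area_from_target(target_box, area_w, area_h, img_w, img_h):
    """Center the tracking area on the center of the target's smallest bounding rectangle,
    give it a preset width/height, and keep it inside the image bounds.
    target_box is (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = target_box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2              # center of the minimal bounding rectangle
    cx = min(max(cx, area_w / 2), img_w - area_w / 2)  # clamp so the area stays inside the image
    cy = min(max(cy, area_h / 2), img_h - area_h / 2)
    return (cx - area_w / 2, cy - area_h / 2, cx + area_w / 2, cy + area_h / 2)
```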
  • In some embodiments, before the displaying the image in the preview area as a preview picture, the method further includes: performing smoothing processing on the preview area to obtain a smoothed preview area.
  • Correspondingly, the displaying the image in the preview area as a preview picture includes: displaying the image in the smoothed preview area as the preview picture.
  • the image in the preview area may also be smoothed to obtain a smoothed preview area.
  • the image in the preview area after the smoothing process is displayed as a preview screen. Since the final image that needs to be obtained is the image after the shake correction process, the preview image obtained after the smoothing process is the image in which the preview area is mapped to the processed image.
  • The smoothing process may be such that the filter adjusts the position and size of the preview area with reference to the position and size of the tracking target in the previous frame's preview picture.
  • Specifically, the smoothing processing on the preview area may include: smoothing the preview area through at least two filters respectively to obtain a smooth sub-region corresponding to each filter; determining the weight of each filter through a decision maker; and fusing the smooth sub-regions corresponding to the filters based on the weights of the filters to obtain the smoothed preview area.
  • FIG. 10 is a schematic diagram of the smoothing process in the shooting method provided by the embodiment of the present application.
  • a filter bank can be designed, and there are multiple filters in the filter bank.
  • A different filtering algorithm is used in each filter, and the preview area is smoothed separately through each filter to obtain the smooth sub-area processed by that filter. Because the filters in the filter bank use different filtering algorithms, the position of the smooth sub-area obtained by each filter may be different.
  • The smooth sub-areas corresponding to the filters can then be fused, based on the weight of each filter, to obtain the smoothed preview area.
  • the position of the preview area after smoothing may have changed relative to the position of the preview area before smoothing.
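  • As an illustration of the filter-bank smoothing described above, the sketch below smooths the sequence of preview-area center points with two simple filters (a moving average and exponential smoothing) and fuses the results with per-filter weights; the filter choices, constants, and weights are assumptions, and a decision maker would normally supply the weights.

```python
import numpy as np

def smooth_preview_center(history, weights=(0.5, 0.5)):
    """`history` is a non-empty list of (cx, cy) preview-area centers, newest last.
    Returns the fused, smoothed center. Illustrative only."""
    pts = np.asarray(history, dtype=np.float32)

    # Filter 1: moving average over the last few frames.
    sub1 = pts[-5:].mean(axis=0)

    # Filter 2: exponential smoothing across the whole history.
    sub2 = pts[0]
    for p in pts[1:]:
        sub2 = 0.7 * sub2 + 0.3 * p

    # Fuse the smooth sub-results using the per-filter weights.
    w1, w2 = weights
    return tuple(w1 * sub1 + w2 * sub2)
```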
  • The image in the preview area after smoothing can also be processed by a super-resolution algorithm to improve the quality of the image.
  • the super-resolution algorithm is a low-level image processing task, which maps low-resolution images to high-resolution, and has achieved the effect of enhancing image details.
  • a deep learning method can be used for super-resolution algorithm processing.
  • A large number of high-resolution images are accumulated and used for model learning.
  • High-resolution images are degraded according to a degradation model to generate training data; the images are then divided into blocks according to the correspondence between the low-frequency and high-frequency parts of the high-resolution images, prior knowledge is obtained through learning, and a learning model is established. A low-resolution image is then input into the model, and for each block of the input low-resolution image, the best-matching high-frequency block is searched for in the established learning set to restore the low-resolution image. In this way, the high-frequency details of the image can be recovered and the quality of the image improved.
  • Since the image stabilization provided by the embodiments of this application is an electronic image stabilization method, it may reduce the quality of the image; therefore, before the preview is displayed, the image quality is improved by the super-resolution algorithm, which avoids the problem of reduced image quality in high-magnification shooting scenes.
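  • The patent describes a learning-based super-resolution method; the placeholder below only shows where such a step would sit in the pipeline, and uses OpenCV bicubic upscaling as a stand-in for the learned model (an assumption made purely for illustration).

```python
import cv2  # used here only as a stand-in for a learned super-resolution model

def upscale_preview(preview_crop, scale: int = 2):
    """Placeholder super-resolution step: upscale the smoothed preview-area picture
    before it is displayed. A real implementation would apply the learned model
    described above."""
    return cv2.resize(preview_crop, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)
```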
  • FIG. 11 shows a schematic flowchart of a shooting method provided by an embodiment of the present application.
  • the method includes:
  • Step S1101 Obtain the original image collected by the camera of the mobile terminal.
  • Step S1102 Perform shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image.
  • Step S1103 Display the image in the preview area as a preview screen.
  • The content of steps S1101 to S1103 is consistent with that of steps S201 to S203; please refer to the description of steps S201 to S203, which will not be repeated here.
  • Step S1104 Perform zoom processing on the camera based on the size of the tracking target in the preview area in the original image, where the zoomed camera is used to collect the next frame of the original image.
  • As described above, the first cropping process may be performed during the jitter correction processing, and the second cropping process may be performed during the target tracking processing based on the found tracking target; therefore, the size of the preview-area picture is smaller than the size of the original image.
  • The camera can thus be zoomed according to the position and proportion, within the original image, of the preview-area picture (or of the tracking target).
  • The purpose of zooming is that, after the next frame of original image captured by the camera undergoes the at least two cropping processes, a preview picture with a better composition can be obtained.
  • the process of zoom processing may include optical zoom and/or digital zoom.
  • It should be noted that the goal of the zoom processing is not to make the position and size of the tracking target in the next frame of original image captured by the zoomed camera match, to a high degree, those in the preview area corresponding to the current frame of original image; rather, the preview picture (the position and size of the tracking target) obtained after jitter correction processing and target tracking processing of the next frame of original image captured by the zoomed camera is required to match, to a high degree, the preview picture (the position and size of the tracking target) obtained from the current frame of original image. That is, when zooming, it is necessary to take into account that the next frame of original image will also be cropped; a sketch of this zoom calculation follows.
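  • A minimal sketch of such a zoom calculation is given below; the formula, the notion of a single expected crop fraction, and all parameter names are illustrative assumptions rather than the patent's method.

```python
def zoom_ratio(target_frac_in_original: float,
               desired_frac_in_preview: float,
               crop_frac: float) -> float:
    """target_frac_in_original: fraction of the original-image width the tracking target
    currently occupies; crop_frac: fraction of the frame expected to remain after the two
    cropping processes; desired_frac_in_preview: how large the target should appear in the
    preview picture. Returns the zoom factor to apply to the camera."""
    # Fraction of the preview the target would occupy if the zoom were left unchanged.
    current_frac_in_preview = target_frac_in_original / crop_frac
    # Scale the zoom so the target reaches the desired size in the cropped preview.
    return desired_frac_in_preview / current_frac_in_preview
```

  • For example, if the target occupies 5% of the original frame and roughly half of the frame survives the two croppings, it occupies about 10% of the preview; reaching a desired 30% would suggest a zoom factor of about 3.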
  • One of the zoomed cameras can also be selected as the main camera, and the main camera is used to collect the next frame of original image; refer to the description of step S1105.
  • Step S1105 Select one of the zoomed cameras as the main camera, where the main camera is a camera that collects the next frame of original image.
  • one of the cameras after the zoom processing may be selected as the main camera.
  • Among the zoom-processed cameras, one may also be selected as the main camera according to the composition model.
  • This handles the case where it is estimated that, even if zooming is applied to the camera that captured the current frame of original image, the position and size of the tracking target in the preview area corresponding to the next frame of original image it would capture cannot match, or only poorly matches, the position and size of the tracking target in the preview area corresponding to the current frame of original image.
  • The next frame of original image and the preview area corresponding to it are, in this case, obtained by calculation and estimation.
  • The composition model may be obtained by generating a composition model based on the previous frame's preview picture and the position and size of the tracking target in that preview picture, or by selecting, based on the previous frame's preview picture and the tracking area in it, the composition model with the highest matching degree from preset composition models.
  • In this way, the preview picture obtained after jitter correction processing and target tracking processing of the next captured frame of original image can have a good composition; an illustrative sketch of such composition-based camera selection follows.
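  • In the sketch below, `predict_preview_target` and `composition_model.match` are hypothetical callables standing in for the estimation and matching described above.

```python
def select_main_camera(cameras, predict_preview_target, composition_model):
    """For each zoom-processed camera, predict the tracking target's (position, size)
    in the preview obtained from that camera's next frame, score it against the
    composition model, and pick the best-matching camera as the main camera."""
    best_cam, best_score = None, float("-inf")
    for cam in cameras:
        predicted = predict_preview_target(cam)     # estimated (cx, cy, w, h) in the preview
        score = composition_model.match(predicted)  # higher means a better composition match
        if score > best_score:
            best_cam, best_score = cam, score
    return best_cam
```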
  • FIG. 12 shows a structural block diagram of a shooting device provided in an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown.
  • the device 12 includes:
  • the image acquisition unit 121 is configured to acquire the original image collected by the camera of the mobile terminal;
  • the image processing unit 122 is configured to perform jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing a tracking target in the processed image, wherein the preview area includes the tracking target;
  • the image display unit 123 is configured to display the image in the preview area as a preview screen.
  • the image processing unit 122 includes:
  • the shake correction processing module 1221 is configured to perform shake correction processing on the original image to obtain a processed image and a correction area in the processed image;
  • the target tracking processing module 1222 is configured to find a tracking target from the original image, and determine a tracking area in the original image based on the position of the tracking target in the original image;
  • the joint cropping module 1223 is configured to perform joint cropping on the processed image based on the correction area and the tracking area to obtain a preview area in the processed image.
  • the image processing unit 122 includes:
  • a jitter correction processing module configured to perform jitter correction processing on the original image to obtain a processed image and a correction area in the processed image;
  • the target tracking processing module is configured to find a tracking target from the correction area, determine a tracking area in the correction area based on the position of the tracking target in the correction area, and use the tracking area as a preview area .
  • the image processing unit 122 includes:
  • a target tracking processing module configured to find a tracking target from the original image, and determine a tracking area in the original image based on the position of the tracking target in the original image;
  • the shake correction processing module is configured to perform shake correction processing on the image of the tracking area to obtain a processed image and a correction area in the processed image, and use the correction area as a preview area.
  • the device 12 includes:
  • the smoothing processing unit 124 is configured to perform smoothing processing on the preview area before displaying the image in the preview area as a preview picture to obtain a smoothed preview area;
  • the image display unit 123 is also used for:
  • the image in the preview area after the smoothing process is displayed as a preview screen.
  • the smoothing processing unit 124 includes:
  • a smoothing processing module configured to respectively perform smoothing processing on the preview area through at least two filters to obtain a smooth sub-area corresponding to each filter;
  • the weight generation module is used to determine the weight of each filter through the decision maker
  • the fusion module is used to perform fusion processing on the smooth sub-region corresponding to each filter based on the weight of each filter to obtain a smoothed preview area.
  • the device 12 further includes:
  • the zoom unit 125 is configured to perform zoom processing on the camera based on the size of the tracking target in the preview area in the original image after obtaining the preview area containing the tracking target in the processed image, wherein the zoom processing The rear camera is used to capture the next frame of the original image.
  • the device 12 further includes:
  • the camera switching unit 126 is configured to select one of the zoomed cameras as the main camera after performing the zoom processing on the camera, where the main camera is a camera that collects the next frame of original image.
  • the camera switching unit 126 is further configured to:
  • one of the cameras after the zoom processing is selected as the main camera.
  • the shooting method provided in the embodiments of this application can be applied to mobile terminals such as mobile phones, cameras, tablet computers, augmented reality (AR)/virtual reality (VR) devices, and notebook computers. There are no restrictions on the specific types of terminals.
  • The mobile terminal provided by the embodiments of the present application includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor; when the processor executes the computer program, the steps of any shooting method provided by the embodiments of the present application are implemented.
  • FIG. 13 shows a block diagram of a part of the structure of a mobile phone provided by an embodiment of the present application.
  • the mobile phone includes: a radio frequency (RF) circuit 1310, a memory 1320, an input unit 1330, a display unit 1340, a sensor 1350, an audio circuit 1360, a wireless fidelity (WiFi) module 1370, and a processor 1380 , And power supply 1390 and other components.
  • the structure of the mobile phone shown in FIG. 13 does not constitute a limitation on the mobile phone, and may include more or fewer components than those shown in the figure, or a combination of some components, or different component arrangements.
  • the RF circuit 1310 can be used for receiving and sending signals during the process of sending and receiving information or talking. In particular, after receiving the downlink information of the base station, it is processed by the processor 1380; in addition, the designed uplink data is sent to the base station.
  • the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • the RF circuit 1310 can also communicate with the network and other devices through wireless communication.
  • The above-mentioned wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile Communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), etc.
  • the memory 1320 may be used to store software programs and modules.
  • the processor 1380 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1320, such as processing images.
  • the memory 1320 may mainly include a storage program area and a storage data area.
  • The storage program area may store an operating system, an application program required by at least one function (such as an image playback function), and the like; the storage data area may store data created according to the use of the mobile phone (such as the locations of the correction area, tracking area, or preview area), and the like.
  • the memory 1320 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the input unit 1330 can be used to receive input digital or character information, and generate key signal input related to user settings and function control of the mobile phone.
  • the input unit 1330 may include a touch panel 1331 and other input devices 1332.
  • The touch panel 1331, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1331 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
  • the touch panel 1331 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, and detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends it To the processor 1380, and can receive and execute the commands sent by the processor 1380.
  • the touch panel 1331 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 1330 may also include other input devices 1332.
  • the other input device 1332 may include, but is not limited to, a physical keyboard and function keys (such as volume control keys, switch keys, etc.).
  • the display unit 1340 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. For example, a preview screen is displayed.
  • the display unit 1340 may include a display panel 1341.
  • the display panel 1341 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
  • The touch panel 1331 can cover the display panel 1341. When the touch panel 1331 detects a touch operation on or near it, it transmits the operation to the processor 1380 to determine the type of the touch event, and the processor 1380 then provides a corresponding visual output on the display panel 1341 according to the type of the touch event.
  • In FIG. 13, the touch panel 1331 and the display panel 1341 are used as two independent components to implement the input and output functions of the mobile phone, but in some embodiments the touch panel 1331 and the display panel 1341 can be integrated to realize the input and output functions of the mobile phone.
  • the mobile phone may also include at least one sensor 1350, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1341 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 1341 and/or the backlight when the mobile phone is moved close to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three-axis), and can detect the magnitude and direction of gravity when it is stationary.
  • the accelerometer can further be used for applications that recognize the posture of the mobile phone (such as switching between landscape and portrait screens, related games, and magnetometer posture calibration) and for vibration-recognition related functions (such as a pedometer and tapping); as for other sensors that may also be configured on the mobile phone, such as a gyroscope (to obtain jitter information of the mobile phone), a barometer, a hygrometer, a thermometer, and an infrared sensor, details are not repeated here.
  • the audio circuit 1360, the speaker 1361, and the microphone 1362 can provide an audio interface between the user and the mobile phone.
  • the audio circuit 1360 can transmit the electrical signal converted from the received audio data to the speaker 1361, and the speaker 1361 converts it into a sound signal for output; on the other hand, the microphone 1362 converts the collected sound signal into an electrical signal, which is received by the audio circuit 1360 and converted into audio data; after being processed by the processor 1380, the audio data is sent, for example, to another mobile phone via the RF circuit 1310, or output to the memory 1320 for further processing.
  • WiFi is a short-distance wireless transmission technology.
  • through the WiFi module 1370, the mobile phone can help users send and receive e-mails, browse web pages, and access streaming media, providing users with wireless broadband Internet access.
  • although FIG. 13 shows the WiFi module 1370, it is understandable that it is not a necessary component of the mobile phone and can be omitted as needed without changing the essence of the invention.
  • the processor 1380 is the control center of the mobile phone; it uses various interfaces and lines to connect the various parts of the entire mobile phone, executes the various functions of the mobile phone and processes data, and in this way monitors the mobile phone as a whole.
  • the processor 1380 may include one or more processing units; preferably, the processor 1380 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor 1380.
  • the mobile phone also includes a power supply 1390 (such as a battery) for supplying power to various components.
  • the power supply can be logically connected to the processor 1380 through a power management system, so that functions such as charging, discharging, and power consumption management can be managed through the power management system.
  • the mobile phone may also include a camera.
  • the position of the camera on the mobile phone may be front-mounted or rear-mounted, which is not limited in the embodiment of the present application.
  • the mobile phone may include a single camera, a dual camera, or a triple camera, etc., which is not limited in the embodiment of the present application.
  • a mobile phone may include three cameras, of which one is a main camera, one is a wide-angle camera, and one is a telephoto camera.
  • the positions of the multiple cameras may be set according to actual conditions, which is not limited in the embodiment of the present application.
  • the mobile phone may also include a Bluetooth module, etc., which will not be repeated here.
  • FIG. 14 is a schematic diagram of the software structure of a mobile terminal (mobile phone) according to an embodiment of the present application.
  • the Android system is divided into four layers, namely the application layer, the application framework layer (framework, FWK), the system layer, and the hardware abstraction layer, and the layers communicate with each other through software interfaces.
  • the application layer can be a series of application packages, and the application packages can include applications such as short message, calendar, camera, video, navigation, gallery, and call.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer may include some predefined functions, such as functions for receiving events sent by the application framework layer.
  • the application framework layer can include a window manager, a content provider, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • the notification manager can also present notifications that appear in the status bar at the top of the system in the form of a graph or scroll-bar text (such as a notification of an application running in the background), or notifications that appear on the screen in the form of a dialog window; for example, text information is prompted in the status bar, a prompt sound is played, the electronic device vibrates, or an indicator light flashes.
  • the application framework layer can also include:
  • a view system which includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication functions of the mobile phone, for example, management of the call status (including connected, hung up, and the like).
  • the system layer can include multiple functional modules. For example: sensor service module, physical state recognition module, 3D graphics processing library (for example: OpenGL ES), etc.
  • the sensor service module is used to monitor the sensor data uploaded by various sensors at the hardware layer to determine the physical state of the mobile phone;
  • the physical state recognition module is used to analyze and recognize user gestures, faces, and the like;
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the system layer can also include:
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the hardware abstraction layer is the layer between hardware and software.
  • the hardware abstraction layer can include display drivers, camera drivers, sensor drivers, etc., used to drive related hardware at the hardware layer, such as display screens, cameras, sensors, and so on.
  • the above embodiments of the shooting method can be implemented on a mobile phone having the above hardware structure/software structure.
  • the embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
  • the embodiments of the present application provide a computer program product.
  • when the computer program product runs on the mobile terminal, the steps in the foregoing method embodiments can be implemented.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the computer program can be stored in a computer-readable storage medium. When executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may at least include: any entity or apparatus capable of carrying the computer program code to the photographing device/mobile terminal, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, a USB flash drive, a removable hard disk, a floppy disk, an optical disc, and the like.
  • in some jurisdictions, according to legislation and patent practice, computer-readable media cannot include electric carrier signals and telecommunication signals.
  • An embodiment of the present application also provides a chip system, wherein the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the steps of any one of the foregoing shooting methods.
  • the chip system may be a single chip or a chip module composed of multiple chips.

Abstract

A photographic method and apparatus, and a mobile terminal and a chip system, which are applied to the technical field of video photographing. The photographic method comprises: acquiring an original image captured by a camera of a mobile terminal; performing jitter correction processing and target tracking processing on the original image, and obtaining a processed image and a preview area, which comprises a tracking target, in the processed image; and taking an image in the preview area as a preview picture and displaying same. The jitter correction processing is performed on a low-magnification original image captured by a camera to avoid the jitter of a picture, and the target tracking processing is performed thereon to avoid the shaking of a tracking target in the picture. After the jitter correction processing and the target tracking processing, an obtained preview area is a certain area in a processed image corresponding to the low-magnification original image, and a stable high-magnification photographic effect can be presented by taking an image in the area as a preview picture and displaying same.

Description

拍摄方法、装置、移动终端及芯片系统Shooting method, device, mobile terminal and chip system
本申请要求于2020年05月15日提交国家知识产权局、申请号为202010417818.9、申请名称为“拍摄方法、装置、移动终端及芯片系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of a Chinese patent application filed with the State Intellectual Property Office on May 15, 2020, the application number is 202010417818.9, and the application name is "Photographing method, device, mobile terminal and chip system", the entire content of which is incorporated by reference In this application.
技术领域Technical field
本申请涉及视频拍摄领域,尤其涉及一种拍摄方法、装置、移动终端及芯片系统。This application relates to the field of video shooting, in particular to a shooting method, device, mobile terminal and chip system.
背景技术Background technique
随着摄像技术的发展,越来越多的移动终端上集成了摄像头,用户携带移动终端可以随时随地的进行拍摄。With the development of camera technology, more and more mobile terminals are integrated with cameras, and users can take pictures anytime and anywhere with the mobile terminal.
当用户通过移动终端上的摄像头跟拍拍摄对象时,经常会出现画面抖动的现象,尤其是在高倍场景下跟拍摄对象时,画面抖动现象更严重,甚至出现很难抓拍到感兴趣的拍摄对象。When the user follows the subject through the camera on the mobile terminal, the picture shake often occurs, especially when following the subject in a high magnification scene, the picture shake is more serious, and it is even difficult to capture the subject of interest. .
发明内容Summary of the invention
本申请实施例提供一种拍摄方法、装置、移动终端及芯片系统,解决了高倍场景下画面抖动,无法抓拍感兴趣的拍摄对象的问题。The embodiments of the present application provide a shooting method, device, mobile terminal, and chip system, which solves the problem that the picture shakes in a high-magnification scene and the shooting object of interest cannot be captured.
为达到上述目的,本申请采用如下技术方案:In order to achieve the above objectives, this application adopts the following technical solutions:
第一方面,本申请实施例提供一种拍摄方法,包括:In the first aspect, an embodiment of the present application provides a shooting method, including:
获取移动终端的摄像头采集的原始图像;Obtain the original image collected by the camera of the mobile terminal;
对所述原始图像进行抖动校正处理和目标跟踪处理,获得处理图像以及所述处理图像中包含跟踪目标的预览区域;Performing jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image;
将所述预览区域中的图像作为预览画面进行显示。The image in the preview area is displayed as a preview screen.
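To make the three claimed steps concrete, the following is a minimal, self-contained Python sketch of the flow (capture, joint shake correction and target tracking, preview display). The shake_correct, track_target, and joint_crop helpers are toy stand-ins invented for illustration (a rolled image, a brightest-pixel "saliency" target, and a simple box intersection); they are not the algorithms claimed in this application.

```python
import numpy as np

def shake_correct(original, shift):
    # Toy stand-in for shake correction: roll the content against the measured
    # jitter and mark the compensated middle region as the correction area.
    h, w = original.shape[:2]
    corrected = np.roll(original, (-shift[1], -shift[0]), axis=(0, 1))
    mx, my = w // 10, h // 10                      # edge band reserved for compensation
    return corrected, (mx, my, w - mx, h - my)

def track_target(original):
    # Toy stand-in for target tracking: treat the brightest pixel as the salient
    # target and place a fixed-size tracking box around it.
    h, w = original.shape[:2]
    ty, tx = np.unravel_index(np.argmax(original.mean(axis=2)), (h, w))
    half = min(h, w) // 6
    return (max(tx - half, 0), max(ty - half, 0), min(tx + half, w), min(ty + half, h))

def joint_crop(correction_box, tracking_box):
    # Intersect the two boxes so the preview area lies inside the corrected region
    # (non-overlapping boxes are not handled in this sketch).
    cx0, cy0, cx1, cy1 = correction_box
    tx0, ty0, tx1, ty1 = tracking_box
    return (max(tx0, cx0), max(ty0, cy0), min(tx1, cx1), min(ty1, cy1))

# One frame of the pipeline: capture -> correct + track -> display the preview crop.
original = (np.random.rand(720, 1280, 3) * 255).astype(np.uint8)    # stand-in capture
processed, correction_box = shake_correct(original, shift=(5, -3))  # gyro-style jitter
tracking_box = track_target(original)
x0, y0, x1, y1 = joint_crop(correction_box, tracking_box)
preview = processed[y0:y1, x0:x1]   # this sub-image is what would be shown as the preview
```

Because the preview box is smaller than the low-magnification original image, displaying only its contents is what produces the stable high-magnification effect described in this application.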
在第一方面的一种可能的实现方式中,所述对所述原始图像进行抖动校正处理和目标跟踪处理,获得处理图像以及所述处理图像中包含跟踪目标的预览区域,包括:In a possible implementation manner of the first aspect, the performing jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area of the processed image containing the tracking target includes:
对所述原始图像进行抖动校正处理,获得处理图像以及所述处理图像中的校正区域;Performing jitter correction processing on the original image to obtain a processed image and a correction area in the processed image;
从所述原始图像中找出跟踪目标,并基于所述跟踪目标在所述原始图像中的位置,在所述原始图像中确定跟踪区域;Finding a tracking target from the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image;
基于所述校正区域和所述跟踪区域对所述处理图像进行联合裁切,获得所述处理图像中的预览区域。Jointly crop the processed image based on the correction area and the tracking area to obtain a preview area in the processed image.
在第一方面的一种可能的实现方式中,所述对所述原始图像进行抖动校正处理和目标跟踪处理,获得处理图像以及所述处理图像中包含跟踪目标的预览区域,包括:In a possible implementation manner of the first aspect, the performing jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area of the processed image containing the tracking target includes:
对所述原始图像进行抖动校正处理,获得处理图像以及所述处理图像中的校正区域;Performing jitter correction processing on the original image to obtain a processed image and a correction area in the processed image;
从所述校正区域中找出跟踪目标,基于所述跟踪目标在所述校正区域的位置,在所述校正区域中确定跟踪区域,并将所述跟踪区域作为预览区域。Find a tracking target from the correction area, determine a tracking area in the correction area based on the position of the tracking target in the correction area, and use the tracking area as a preview area.
在第一方面的一种可能的实现方式中,所述对所述原始图像进行抖动校正处理和 目标跟踪处理,获得处理图像以及所述处理图像中包含跟踪目标的预览区域,包括:In a possible implementation manner of the first aspect, the performing jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area of the processed image containing a tracking target includes:
从所述原始图像中找出跟踪目标,并基于所述跟踪目标在所述原始图像的位置,在所述原始图像中确定跟踪区域;Finding a tracking target from the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image;
对所述跟踪区域内的图像进行抖动校正处理,获得处理图像以及所述处理图像中的校正区域,并将所述校正区域作为预览区域。Performing a shake correction process on the image in the tracking area to obtain a processed image and a correction area in the processed image, and use the correction area as a preview area.
在第一方面的一种可能的实现方式中,所述抖动校正处理包括:In a possible implementation manner of the first aspect, the jitter correction processing includes:
获取待校正处理的图像,并获取所述待校正处理的图像采集时刻所述移动终端的抖动信息;Acquiring an image to be corrected, and acquiring jitter information of the mobile terminal at the time when the image to be corrected is collected;
对所述待校正处理的图像进行第一裁切处理,获得边缘图像和中间图像;Performing a first cropping process on the image to be corrected to obtain an edge image and an intermediate image;
基于所述抖动信息和所述边缘图像对所述中间图像进行补偿,获得处理图像,其中,所述中间图像对应的区域为校正区域。Compensating the intermediate image based on the shaking information and the edge image to obtain a processed image, wherein the area corresponding to the intermediate image is a correction area.
在第一方面的一种可能的实现方式中,所述目标跟踪处理包括:In a possible implementation manner of the first aspect, the target tracking processing includes:
基于注意力机制从待跟踪处理的图像和/或待跟踪处理的图像之前的N帧图像中,找到跟踪目标,其中,N≥1;Find the tracking target from the image to be tracked and/or N frames before the image to be tracked based on the attention mechanism, where N≥1;
基于所述跟踪目标在所述待跟踪处理的图像中的位置,进行第二裁切处理,以在待跟踪处理的图像中确定出跟踪区域。Based on the position of the tracking target in the image to be tracked, a second cropping process is performed to determine a tracking area in the image to be tracked.
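A minimal sketch of the second cropping step is given below, assuming an attention/saliency detector (not shown) has already reported the target centre for the current frame and the previous N frames. Averaging the recent detections and centring the crop on the averaged position are illustrative choices, not requirements of this application.

```python
import numpy as np

def second_crop(detections, frame_size, crop_size):
    """detections: (cx, cy) target centres for the previous N frames and the
    current frame, most recent last; returns the tracking area as a box."""
    w, h = frame_size
    cw, ch = crop_size
    # Average the recent detections so one noisy detection does not yank the
    # tracking area around between consecutive frames.
    cx, cy = np.mean(np.asarray(detections, dtype=float), axis=0)
    # Centre the crop on the smoothed target position, clamped inside the frame.
    x0 = int(np.clip(cx - cw / 2, 0, w - cw))
    y0 = int(np.clip(cy - ch / 2, 0, h - ch))
    return (x0, y0, x0 + cw, y0 + ch)

# Example: target centres reported over the last three frames (N = 2).
tracking_area = second_crop(
    detections=[(660, 420), (672, 418), (680, 425)],
    frame_size=(1920, 1080),
    crop_size=(960, 540),
)
```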
在第一方面的一种可能的实现方式中,在将所述预览区域中的图像作为预览画面进行显示之前,还包括:In a possible implementation of the first aspect, before displaying the image in the preview area as a preview screen, the method further includes:
对所述处理图像中的预览区域进行平滑处理,获得平滑处理后的预览区域;Smoothing the preview area in the processed image to obtain a smoothed preview area;
相应的,所述将所述预览区域中的图像作为预览画面进行显示包括:Correspondingly, the displaying the image in the preview area as a preview screen includes:
将所述平滑处理后的预览区域中的图像作为预览画面进行显示。The image in the preview area after the smoothing process is displayed as a preview screen.
在第一方面的一种可能的实现方式中,所述对所述预览区域进行平滑处理包括:In a possible implementation manner of the first aspect, the performing smoothing processing on the preview area includes:
通过至少两个滤波器分别对所述处理图像中的预览区域进行平滑处理,获得每个滤波器对应的平滑子区域;Smoothing the preview area in the processed image by using at least two filters, respectively, to obtain a smooth sub-area corresponding to each filter;
通过决策器判定每个滤波器的权重;Determine the weight of each filter through the decision maker;
基于每个滤波器的权重,对每个滤波器对应的平滑子区域进行融合处理,获得平滑处理后的预览区域。Based on the weight of each filter, the smooth subregion corresponding to each filter is fused to obtain a smoothed preview region.
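The smoothing step could be read, for example, as filtering the preview-area trajectory over recent frames. The sketch below follows that reading with two illustrative filters (a moving average and an exponential filter) and a fixed weight pair standing in for the decision module; the actual filters, decider, and weights are left open by this application.

```python
import numpy as np

def smooth_preview_box(box_history, weights=(0.5, 0.5)):
    """box_history: preview boxes (x0, y0, x1, y1) over recent frames, most
    recent last; returns the fused, smoothed preview box."""
    boxes = np.asarray(box_history, dtype=float)

    # Filter 1: moving average over the whole history (strong smoothing).
    smoothed_a = boxes.mean(axis=0)

    # Filter 2: exponential smoothing biased toward the latest box (fast response).
    alpha, smoothed_b = 0.6, boxes[0]
    for b in boxes[1:]:
        smoothed_b = alpha * b + (1 - alpha) * smoothed_b

    # Fusion: weighted sum of the two filtered sub-results; a real decider would
    # choose the weights per frame (e.g. according to how fast the target moves).
    w_a, w_b = weights
    return tuple((w_a * smoothed_a + w_b * smoothed_b).round().astype(int))

history = [(400, 200, 1360, 740), (410, 205, 1370, 745), (430, 215, 1390, 755)]
smoothed_preview = smooth_preview_box(history, weights=(0.3, 0.7))
```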
在第一方面的一种可能的实现方式中,在获得处理图像以及所述处理图像中包含跟踪目标的预览区域之后,还包括:In a possible implementation of the first aspect, after obtaining the processed image and the processed image including the preview area of the tracking target, the method further includes:
基于所述预览区域中的跟踪目标在所述原始图像中的大小,对所述摄像头进行变焦处理,其中,变焦处理后的摄像头用于采集下一帧原始图像。Perform zoom processing on the camera based on the size of the tracking target in the preview area in the original image, where the zoomed camera is used to collect the next frame of the original image.
在第一方面的一种可能的实现方式中,所述移动终端上设置的摄像头的个数为至少两个;In a possible implementation manner of the first aspect, the number of cameras provided on the mobile terminal is at least two;
在对所述摄像头进行变焦处理之后,还包括:After performing zoom processing on the camera, the method further includes:
从变焦处理后的摄像头中选取一个作为主摄像头,其中,所述主摄像头为采集下一帧原始图像的摄像头。One of the cameras after zoom processing is selected as the main camera, where the main camera is a camera that collects the next frame of original image.
在第一方面的一种可能的实现方式中,所述从变焦处理后的摄像头中选取一个作为主摄像头包括:In a possible implementation manner of the first aspect, the selecting one of the zoomed cameras as the main camera includes:
根据所述预览区域中的跟踪目标在所述原始图像中的位置,从所述变焦处理后的 摄像头中选取一个作为主摄像头。According to the position of the tracking target in the preview area in the original image, one of the cameras after the zoom processing is selected as the main camera.
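A minimal sketch of the zoom and main-camera selection is shown below, assuming three cameras (wide, main, telephoto). The quarter-of-frame-height zoom heuristic and the centre-distance rule are illustrative stand-ins for the size- and position-based selection described here; they are not prescribed by this application.

```python
def choose_zoom_and_main_camera(target_box, frame_size):
    """target_box: (x0, y0, x1, y1) of the tracking target in the original image;
    returns an illustrative zoom factor and the camera that should capture the
    next original frame."""
    x0, y0, x1, y1 = target_box
    frame_w, frame_h = frame_size

    # Zoom so that the target occupies roughly a quarter of the frame height.
    zoom = max(0.25 * frame_h / max(y1 - y0, 1), 1.0)

    # Normalised distance of the target centre from the frame centre.
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    off = max(abs(cx - frame_w / 2) / (frame_w / 2),
              abs(cy - frame_h / 2) / (frame_h / 2))

    # A target near the centre can be handed to the narrow-FOV telephoto camera;
    # a target drifting toward the edge is safer on the main or wide camera.
    if off < 0.3 and zoom >= 3.0:
        return zoom, "tele"
    if off < 0.7:
        return zoom, "main"
    return zoom, "wide"

zoom, main_camera = choose_zoom_and_main_camera((900, 480, 1020, 600), (1920, 1080))
```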
第二方面,本申请实施例提供一种拍摄装置,包括:In a second aspect, an embodiment of the present application provides a photographing device, including:
图像获取单元,用于获取移动终端的摄像头采集的原始图像;The image acquisition unit is used to acquire the original image collected by the camera of the mobile terminal;
图像处理单元,用于对所述原始图像进行抖动校正处理和目标跟踪处理,获得处理图像以及所述处理图像中包含跟踪目标的预览区域;An image processing unit, configured to perform jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image;
图像显示单元,用于将所述预览区域中的图像作为预览画面进行显示。The image display unit is used to display the image in the preview area as a preview picture.
第三方面,提供一种移动终端,包括:存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现本申请第一方面任一项所述方法的步骤。In a third aspect, a mobile terminal is provided, including: a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor executes the computer program when the computer program is executed. The steps of any one of the methods described in the first aspect of the present application are implemented.
第四方面,提供一种芯片系统,包括:处理器,所述处理器与存储器耦合,所述处理器执行存储器中存储的计算机程序,以实现本申请第一方面任一项所述方法的步骤。In a fourth aspect, a chip system is provided, including: a processor coupled with a memory, and the processor executes a computer program stored in the memory to implement the steps of any one of the methods described in the first aspect of the present application .
第五方面,提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被一个或多个处理器执行时实现本申请第一方面任一项所述方法的步骤。In a fifth aspect, a computer-readable storage medium is provided, and the computer-readable storage medium stores a computer program that, when executed by one or more processors, implements the method described in any one of the first aspects of the present application A step of.
第六方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在移动终端上运行时,使得移动终端执行上述第一方面中任一项所述方法。In a sixth aspect, embodiments of the present application provide a computer program product, which when the computer program product runs on a mobile terminal, causes the mobile terminal to execute the method described in any one of the above-mentioned first aspects.
本申请提供的拍摄方法通过摄像头采集原始图像,原始图像为低倍图像,进行抖动校正处理的过程避免了画面的抖动,进行目标跟踪处理的过程避免了画面中跟踪目标的晃动,在经过抖动校正处理和目标跟踪处理后,可以获得抖动校正处理后的处理图像,同时能够获得处理图像中的某个区域,该区域大小是小于原始图像大小的,最后将该区域中的图像作为预览画面进行显示,这样就能够呈现稳定的高倍拍摄效果;即本申请实施例的抖动校正处理和目标跟踪处理通过裁剪补偿的方式避免了画面抖动,通过裁剪的方式又避免了画面中跟踪目标的晃动,通过裁剪的方式还实现高倍场景拍摄。The shooting method provided by this application uses a camera to collect the original image, the original image is a low-magnification image, the shake correction process avoids the shaking of the picture, and the target tracking process avoids the shaking of the tracking target in the picture. After shaking correction After processing and target tracking processing, the processed image after the shake correction processing can be obtained, and at the same time, a certain area in the processed image can be obtained. The size of the area is smaller than the original image size, and finally the image in the area is displayed as a preview screen In this way, a stable high-magnification shooting effect can be presented; that is, the shake correction processing and target tracking processing in the embodiments of the present application avoid image shake through cropping compensation, and avoid the shaking of the tracking target in the image through cropping. The way also achieves high magnification scene shooting.
可以理解的是,上述第二方面至第六方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。It is understandable that the beneficial effects of the second aspect to the sixth aspect can be referred to the related description in the first aspect, and will not be repeated here.
附图说明Description of the drawings
图1为本申请实施例提供的一种拍摄方法的应用场景的示意图;FIG. 1 is a schematic diagram of an application scenario of a shooting method provided by an embodiment of this application;
图2为本申请实施例提供的一种拍摄方法的流程示意图;FIG. 2 is a schematic flowchart of a shooting method provided by an embodiment of the application;
图3为本申请实施例提供的一种抖动校正处理后校正区域和目标跟踪处理后的跟踪区域的示意图;3 is a schematic diagram of a correction area after shake correction processing and a tracking area after target tracking processing provided by an embodiment of the application;
图4为本申请实施例提供的另一种拍摄方法的流程示意图;FIG. 4 is a schematic flowchart of another shooting method provided by an embodiment of the application;
图5为图4所示实施例提供的拍摄方法对应的校正区域、跟踪区域和预览区域的示意图;5 is a schematic diagram of a correction area, a tracking area, and a preview area corresponding to the shooting method provided by the embodiment shown in FIG. 4;
图6为本申请实施例提供的另一种拍摄方法的流程示意图;FIG. 6 is a schematic flowchart of another shooting method provided by an embodiment of the application;
图7为图6所示实施例提供的拍摄方法对应的校正区域、跟踪区域和预览区域的示意图;FIG. 7 is a schematic diagram of a correction area, a tracking area, and a preview area corresponding to the shooting method provided by the embodiment shown in FIG. 6; FIG.
图8为本申请实施例提供的另一种拍摄方法的流程示意图;FIG. 8 is a schematic flowchart of another shooting method provided by an embodiment of the application;
图9为本申请实施例提供的拍摄方法对应的校正区域、跟踪区域和预览区域的示意图;FIG. 9 is a schematic diagram of a correction area, a tracking area, and a preview area corresponding to the shooting method provided by an embodiment of the application;
图10为本申请实施例提供的拍摄方法中平滑处理过程的示意图;FIG. 10 is a schematic diagram of a smoothing process in a shooting method provided by an embodiment of the application;
图11为本申请实施例提供的另一种拍摄方法的流程示意图;FIG. 11 is a schematic flowchart of another shooting method provided by an embodiment of the application;
图12为本申请实施例提供的一种拍摄装置的示意性框图;FIG. 12 is a schematic block diagram of a photographing device provided by an embodiment of the application;
图13为本申请实施例提供的一种移动终端的示意性框图;FIG. 13 is a schematic block diagram of a mobile terminal provided by an embodiment of this application;
图14为本申请实施例提供的一种移动终端的软件结构示意图。FIG. 14 is a schematic diagram of the software structure of a mobile terminal provided by an embodiment of the application.
具体实施方式Detailed ways
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。In the following description, for the purpose of illustration rather than limitation, specific details such as a specific system structure and technology are proposed for a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted to avoid unnecessary details from obstructing the description of this application.
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。It should be understood that when used in the specification and appended claims of this application, the term "comprising" indicates the existence of the described features, wholes, steps, operations, elements and/or components, but does not exclude one or more other The existence or addition of features, wholes, steps, operations, elements, components, and/or collections thereof.
还应当理解,在本申请实施例中,“一个或多个”是指一个、两个或两个以上;“和/或”,描述关联对象的关联关系,表示可以存在三种关系;例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。It should also be understood that in the embodiments of the present application, "one or more" refers to one, two or more than two; "and/or" describes the association relationship of the associated objects, indicating that there may be three relationships; for example, A and/or B can mean the situation where A exists alone, A and B exist at the same time, and B exists alone, where A and B can be singular or plural. The character "/" generally indicates that the associated objects before and after are in an "or" relationship.
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。As used in the description of this application and the appended claims, the term "if" can be construed as "when" or "once" or "in response to determination" or "in response to detecting ". Similarly, the phrase "if determined" or "if detected [described condition or event]" can be interpreted as meaning "once determined" or "in response to determination" or "once detected [described condition or event]" depending on the context ]" or "in response to detection of [condition or event described]".
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。In addition, in the description of the specification of this application and the appended claims, the terms "first", "second", "third", etc. are only used to distinguish the description, and cannot be understood as indicating or implying relative importance.
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。The reference to "one embodiment" or "some embodiments" described in the specification of this application means that one or more embodiments of this application include a specific feature, structure, or characteristic described in combination with the embodiment. Therefore, the sentences "in one embodiment", "in some embodiments", "in some other embodiments", "in some other embodiments", etc. appearing in different places in this specification are not necessarily All refer to the same embodiment, but mean "one or more but not all embodiments" unless it is specifically emphasized otherwise. The terms "including", "including", "having" and their variations all mean "including but not limited to", unless otherwise specifically emphasized.
为了说明本申请所述的技术方案,下面通过具体实施例来进行说明。In order to illustrate the technical solutions described in the present application, specific embodiments are used for description below.
图1示出了本申请实施例提供的拍摄方法的一种应用场景示意图,如图所示,用户(人1)手持手机对拍摄对象(人2)进行拍摄,人2可能在进行走路、奔跑等,人1为了对人2进行跟踪拍摄,可能同样需要走路、奔跑等,用户在走路或奔跑的过程中,手机会随着用户走路或奔跑的出现抖动,当高倍拍摄场景下,会放大这种抖动,导致手机内的人2出现模糊现象;另外,当人2在进行奔跑或进行跳动时,用户可能 需要奔跑以能够跟随上人2,随着人2跳动或者人1拍摄时进行奔跑,高倍拍摄场景下,也会放大画面中人2的晃动,导致手机内的人2一会出现手机画面中的左侧,例如图1中(a);一会出现在手机画面中的右侧,甚至手机画面中仅出现人2的部分画面,例如图1中(b)。为了解决高倍场景下的上述问题,提供了一种拍摄方法,具体可参见下述对拍摄方法的描述。Figure 1 shows a schematic diagram of an application scenario of the shooting method provided by an embodiment of the present application. As shown in the figure, a user (person 1) holds a mobile phone to shoot a subject (person 2), and person 2 may be walking or running. In order to track and shoot person 2, person 1 may also need to walk, run, etc. When the user is walking or running, the phone will jitter as the user walks or runs. When shooting at high magnification, it will zoom in. This kind of jitter causes the blur phenomenon of person 2 in the phone; in addition, when person 2 is running or beating, the user may need to run to be able to follow person 2, and run as person 2 is beating or person 1 is shooting. In high-magnification shooting scenes, the shaking of Person 2 in the zoomed screen will also be enlarged, causing Person 2 in the mobile phone to appear on the left side of the phone screen, such as (a) in Figure 1, and once on the right side of the phone screen. Even only part of the screen of Person 2 appears in the screen of the mobile phone, such as (b) in Figure 1. In order to solve the above problems in high magnification scenes, a shooting method is provided. For details, please refer to the description of the shooting method below.
图2示出了本申请实施例提供的一种拍摄方法的示意性流程图,作为示例而非限定,该方法可以应用于移动终端中。FIG. 2 shows a schematic flowchart of a photographing method provided by an embodiment of the present application. As an example and not a limitation, the method may be applied to a mobile terminal.
步骤S201,获取移动终端的摄像头采集的原始图像。Step S201: Obtain the original image collected by the camera of the mobile terminal.
在本申请实施例中,所述移动终端可以是带有摄像功能的可移动的设备,例如,手机、相机、摄影机、平板电脑、监控摄像头、笔记本等,可以通过所述移动终端上自带的摄像头或者另外接入的摄像头进行拍摄。所述原始图像表示进行后续抖动校正处理和目标跟踪处理前的图像,所述原始图像可以是所述摄像头采集到的原始大图,也可以是所述摄像头采集到的原始大图进行预处理后的图像。所述原始大图可以是摄像头的传感器中的全部像素点均工作时,传感器采集的信息生成的图像。In the embodiment of the present application, the mobile terminal may be a movable device with a camera function, for example, a mobile phone, a camera, a video camera, a tablet computer, a surveillance camera, a notebook, etc., which can be used on the mobile terminal. Camera or another connected camera for shooting. The original image represents an image before subsequent shake correction processing and target tracking processing. The original image may be the original large image collected by the camera, or it may be the original large image collected by the camera after preprocessing. Image. The original large image may be an image generated by information collected by the sensor when all pixels in the sensor of the camera are working.
步骤S202,对所述原始图像进行抖动校正处理和目标跟踪处理,获得处理图像以及所述处理图像中包含跟踪目标的预览区域。Step S202: Perform jitter correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image.
在本申请实施例中,由于在跟踪拍摄的过程中,随着跟拍对象移动,用户也会移动身体,导致用户手持的移动终端出现抖动,另外,即使用户手持移动终端时通过一定的锻炼或者一些稳定性较好的姿势,也很难避免出现“手抖”现象。In the embodiments of the present application, as the subject moves during the tracking and shooting process, the user will also move the body, causing the mobile terminal held by the user to shake. In addition, even if the user holds the mobile terminal through a certain exercise or In some postures with better stability, it is difficult to avoid the phenomenon of "hand shaking".
在低倍场景下进行拍摄时,由于移动终端采集图像时的视野范围较大,移动终端的抖动现象对采集的图像造成的影响比较小。然而,在高倍场景下进行拍摄时,移动终端采集图像时的视野范围较小,移动终端的抖动现象会对采集的图像造成比较大的影响,例如,视频中的画面抖动等。因此,在高倍场景下,就需要进行抖动校正处理。When shooting in a low-magnification scene, because the mobile terminal has a larger field of view when collecting images, the jitter phenomenon of the mobile terminal has a relatively small impact on the collected images. However, when shooting in a high-magnification scene, the field of view of the mobile terminal when capturing images is small, and the jitter phenomenon of the mobile terminal will have a relatively large impact on the captured images, such as image jitter in the video. Therefore, in high-magnification scenes, shake correction processing is required.
目前,抖动校正处理的方式包括:光学防抖、电子防抖和机身传感器防抖等。Currently, the methods of shake correction processing include: optical image stabilization, electronic image stabilization, and body sensor image stabilization.
光学防抖是通过镜头的浮动透镜来纠正“光轴偏移”。其原理是通过镜头内的陀螺仪侦测到微小的移动,然后将信号传至微处理器,微处理器立即计算需要补偿的位移量,然后通过补偿镜片组,根据镜头的抖动方向及位移量加以补偿;从而有效地克服因相机的振动产生的影像模糊。Optical image stabilization is to correct the "optical axis shift" through the floating lens of the lens. The principle is to detect the slight movement through the gyroscope in the lens, and then transmit the signal to the microprocessor. The microprocessor immediately calculates the amount of displacement that needs to be compensated, and then through the compensation lens group, according to the lens shake direction and displacement amount To be compensated; thus effectively overcome the image blur caused by camera vibration.
传感器防抖技术原理和镜头防抖差不多,其主要将传感器安装在一个可自由浮动的支架上,同样配合需要陀螺仪感应相机的抖动方向和幅度,进而控制传感器进行对应的位移补偿。为何要涉及如此多的修正方向,主要原因是减少拍照时手的不规则抖动。The principle of sensor anti-shake technology is similar to that of lens anti-shake. It mainly installs the sensor on a free-floating bracket, and also needs a gyroscope to sense the direction and amplitude of the camera's shaking, and then control the sensor to perform corresponding displacement compensation. The main reason why so many correction directions are involved is to reduce the irregular hand shaking when taking pictures.
所述光学防抖和所述传感器防抖均需要硬件上的参与,而电子防抖无需硬件辅助也可实现,主要通过软件对传感器上的图像进行分析和处理。当图像被拍糊时,利用边缘图像对中间模糊部分进行补偿,从而实现“防抖”,其防抖原理更像是对照片进行“后期处理”。Both the optical anti-shake and the sensor anti-shake require the participation of hardware, while the electronic anti-shake can be implemented without hardware assistance, and the image on the sensor is mainly analyzed and processed by software. When the image is blurred, the edge image is used to compensate for the blurry part in the middle, so as to achieve "anti-shake". The anti-shake principle is more like the "post-processing" of the photo.
开启电子防抖后,取景画面会有非常明显的裁切,然而被裁切的部分并非是边缘的传感器停止了工作,而是电子防抖系统会将这一部分的数据用于抖动补偿。也可以理解为图像被裁切成两部分,最外面的部分(边缘部分)用于补偿里面的一部分(中 间模糊部分)。在没有硬件防抖的前提下依然能带来一定的防抖效果。After turning on the electronic image stabilization, the viewfinder screen will have a very obvious cropping, but the cropped part is not that the edge sensor has stopped working, but the electronic image stabilization system will use this part of the data for shake compensation. It can also be understood that the image is cut into two parts, and the outermost part (edge part) is used to compensate the inner part (middle blurred part). It can still bring a certain anti-shake effect without hardware anti-shake.
本申请实施例可以采用电子防抖,在不增加外部硬件的情况下,同样实现“防抖”处理,且采用的电子防抖进行抖动校正处理后,得到的处理图像中中间被补偿的区域(中间图像)能够变得更加清晰,这个区域可以成为校正区域,同时可以生成校正检测框,得到的也可以是校正检测框的坐标,参见图3,图3是本申请实施例提供的抖动校正处理后校正区域和目标跟踪处理后的跟踪区域的示意图,如图3中(a)所示,假设A为原始图像,那么原始图像经过抖动校正处理后得到处理图像A’,处理图像A’和原始图像A的大小一致,区别在于处理图像A’的中间图像相对于原始图像A更清晰,B为校正区域,抖动校正处理后,得到的可以是校正区域四个角的坐标,还可以是校正区域的中心点坐标以及校正区域的中心点坐标与边长的距离。The embodiments of this application can use electronic anti-shake, and without adding external hardware, the "anti-shake" processing is also realized, and after the electronic anti-shake is used for the shake correction processing, the middle compensated area in the processed image obtained ( The intermediate image) can become clearer. This area can become a correction area. At the same time, a correction detection frame can be generated, and the obtained coordinates can also be the coordinates of the correction detection frame. A schematic diagram of the post-correction area and the tracking area after the target tracking process is shown in Figure 3 (a). Assuming A is the original image, then the original image undergoes the shake correction process to obtain the processed image A', and the processed image A'and the original The size of the image A is the same. The difference is that the intermediate image of the processed image A'is clearer than the original image A, and B is the correction area. After the shake correction processing, the obtained can be the coordinates of the four corners of the correction area, or the correction area The coordinates of the center point and the distance between the center point coordinates of the correction area and the side length.
图3中未示出抖动校正处理后图像清晰度的变化,仅示出了校正区域与处理图像(原始图像)的位置关系,本申请实施例中虽然重点描述了校正区域,但不可否认,校正区域的生成过程是随着抖动校正处理过程进行的,即抖动校正处理的过程首先对原始图像的中间模糊部分进行补偿,补偿后获得的是处理图像,本申请实施例的对原始图像进行抖动校正处理后获得的处理图像和原始图像大小一致,因此,处理图像中包含了边缘图像和中间被补偿后的图像,在补偿后自然就获得了校正区域,校正区域内的图像就是补偿后的较清晰的中间部分的图像。Figure 3 does not show the change in image clarity after the shake correction process, but only shows the positional relationship between the correction area and the processed image (original image). Although the correction area is mainly described in the embodiment of the application, it is undeniable that the correction is The region generation process is carried out with the shake correction process, that is, the process of the shake correction process first compensates the middle blurred part of the original image, and the processed image is obtained after the compensation. The original image is shaken corrected in the embodiment of the application. The processed image obtained after processing has the same size as the original image. Therefore, the processed image contains the edge image and the compensated image in the middle. After compensation, the corrected area is naturally obtained. The image in the corrected area is the clearer after compensation. The middle part of the image.
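As a toy illustration of the crop-and-compensate idea described above, the sketch below reserves an outer edge band of the frame and shifts the middle crop window against the measured jitter, so that consecutive middle crops stay aligned. The margin ratio and the pixel-level jitter values are invented for the example, and a real EIS pipeline, as described here, keeps a full-size processed image rather than only the centre crop.

```python
import numpy as np

def eis_compensate(frame, jitter_dx, jitter_dy, margin_ratio=0.1):
    """Return the compensated centre crop and the correction area, using the
    outer band of the frame as room for compensation."""
    h, w = frame.shape[:2]
    mx, my = int(w * margin_ratio), int(h * margin_ratio)

    # Shift the crop window opposite to the jitter, but keep it inside the
    # frame so only edge pixels are consumed by the compensation.
    dx = int(np.clip(-jitter_dx, -mx, mx))
    dy = int(np.clip(-jitter_dy, -my, my))
    x0, y0 = mx + dx, my + dy
    correction_area = (x0, y0, x0 + w - 2 * mx, y0 + h - 2 * my)

    cx0, cy0, cx1, cy1 = correction_area
    return frame[cy0:cy1, cx0:cx1], correction_area

frame = (np.random.rand(1080, 1920, 3) * 255).astype(np.uint8)      # stand-in frame
stabilized_centre, correction_area = eis_compensate(frame, jitter_dx=12, jitter_dy=-7)
```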
在跟踪拍摄的过程中,随着跟拍对象(即跟踪目标)移动,用户也会移动身体,当跟踪目标和用户的步伐不一致,或者拍摄角度、方向改变时,会导致跟踪目标在镜头画面中的位置经常晃动,严重的情况下,跟踪目标甚至跳出镜头画面之外。In the process of tracking and shooting, as the tracking target (ie tracking target) moves, the user will also move the body. When the tracking target and the user's pace are inconsistent, or the shooting angle or direction changes, the tracking target will be in the lens. The position of the camera often shakes, and in severe cases, the tracking target even jumps out of the camera screen.
作为举例,当用户手持手机拍摄奔跑的小狗时,用户需要紧跟小狗进行奔跑,然而,小狗奔跑的路线并不固定,是随机的,用户甚至无法预判小狗奔跑的方向,这就导致用户在跟拍的过程中,镜头画面中小狗的位置一直在晃动,可能几秒钟之内,小狗一会出现在镜头画面的中间,一会出现在镜头画面的左侧,一会出现在镜头画面的右下角。这样,在进行回看时,观看的效果非常差。As an example, when a user holds a mobile phone to photograph a running puppy, the user needs to follow the puppy to run. However, the running route of the puppy is not fixed, it is random, and the user cannot even predict the direction the puppy is running. As a result, the position of the puppy in the camera image has been trembling during the user's follow-up shot. Within a few seconds, the puppy will appear in the middle of the camera image, then on the left side of the camera image, and then for a while. Appears in the lower right corner of the lens screen. In this way, when looking back, the viewing effect is very poor.
鉴于上述情况,本申请实施例还增加了目标跟踪处理。在进行目标跟踪处理时,需要确定出跟踪目标。例如,用户跟拍的过程中,通过在所述移动终端的触控屏选定某个区域(例如,通过点选的方式选定点选的位置周围预设范围内的区域,或者,通过画圈的方式选定某个区域),将该区域内的目标作为要跟踪的目标;也可以检测画面中的前景图像,将前景图像作为跟踪目标;还可以采用Attention算法在当前图像中找到显著性物体作为跟踪目标。In view of the foregoing, the embodiment of the present application also adds target tracking processing. In the target tracking process, the tracking target needs to be determined. For example, in the process of following a photo, the user selects an area on the touch screen of the mobile terminal (for example, selects an area within a preset range around the selected position by clicking, or by drawing Select an area in a circle), and set the target in the area as the target to be tracked; you can also detect the foreground image in the picture, and use the foreground image as the tracking target; you can also use the Attention algorithm to find the saliency in the current image The object serves as the tracking target.
本申请实施例在确定了跟踪目标后,得到的也可以是一个区域,例如,目标跟踪处理后得到的跟踪检测框的坐标,如图3中(b)所示,A为原始图像,C为跟踪区域,目标跟踪处理后,得到的可以是跟踪区域四个角的坐标,还可以是跟踪区域的中心点坐标以及跟踪区域的中心点坐标与边长的距离。After the tracking target is determined in the embodiment of the application, the obtained area may also be a region. For example, the coordinates of the tracking detection frame obtained after the target tracking processing are shown in Figure 3 (b), where A is the original image, and C is Tracking area, after target tracking processing, the obtained can be the coordinates of the four corners of the tracking area, or the center point coordinates of the tracking area and the distance between the center point coordinates of the tracking area and the side length.
在本申请实施例中,抖动校正处理过程可以采用相同的逻辑,例如,获得的校正检测框中场景的变化最小原则。目标跟踪处理过程也可以采用相同的逻辑,例如,获得的跟踪检测框中跟踪目标的位置和大小匹配度最大原则。这样,避免原始图像序列对应的预览区域中的预览画面序列出现较大的画面抖动以及较大的跟踪目标位置抖动。In the embodiment of the present application, the same logic can be used in the shake correction processing process, for example, the principle of minimum scene change in the obtained correction detection frame. The target tracking process can also adopt the same logic, for example, the principle of maximizing the matching degree of the position and size of the tracking target in the obtained tracking detection frame. In this way, it is avoided that the preview picture sequence in the preview area corresponding to the original image sequence has larger picture shake and larger tracking target position shake.
便于对方案更清晰的理解,通过举例说明获得的校正检测框中场景的变化最小原则,第i帧图像进行抖动校正处理后,获得的校正检测框内的画面为B i,第i+1帧图像进行抖动校正处理过程是保证获得的校正检测框内的画面B i+1与B i中背景图像的差异最小,即连续的图像帧(连续的原始图像对应的校正检测框内的画面)之间背景图像变化的较小,从而实现比较流畅的观看效果。 To facilitate a clearer understanding of the solution, the principle of minimal changes in the scene in the correction detection frame obtained is illustrated by examples. After the image of the i-th frame is subjected to the shake correction processing, the picture in the correction detection frame obtained is B i , and the frame i+1 is The image shake correction process is to ensure that the difference between the obtained picture B i+1 in the correction detection frame and the background image in B i is the smallest, that is, the continuous image frame (the picture in the correction detection frame corresponding to the continuous original image) The background image changes little between the time, so as to achieve a smoother viewing effect.
通过举例说明获得的跟踪检测框中跟踪目标的位置和大小匹配度最大原则,第i帧图像进行目标跟踪处理后,获得的跟踪检测框内的画面为C i,第i+1帧图像进行目标跟踪处理过程是保证获得的跟踪检测框内的画面C i+1与C i中跟踪目标的位置和大小的差异最小,即连续的图像帧(连续的原始图像对应的跟踪检测框内的画面)之间跟踪目标的位置和大小变化最小,实现比较流畅的观看效果。 Through an example to illustrate the principle of maximizing the matching degree between the position and size of the tracking target in the tracking detection frame, after the target tracking processing is performed on the i-th frame image, the image in the tracking detection frame obtained is C i , and the image in the i+1th frame is the target the tracking process is to ensure that the tracking and detection of the screen frame obtained C i + 1 and C i in the target track position and size of the smallest difference, i.e., successive image frames (consecutive track detection frame picture corresponding to the original image) The position and size of the tracking target have minimal changes between them, achieving a smoother viewing effect.
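The two consistency principles above can be expressed as simple per-frame costs; the sketch below uses a mean absolute pixel difference for the background-change term and a centre/size difference for the tracking term. The particular metrics and the weighting are illustrative assumptions, not definitions taken from this application.

```python
import numpy as np

def background_change(prev_crop, cur_crop):
    # Shake correction prefers the candidate correction box whose picture
    # differs least from the previous frame's correction-box picture.
    return float(np.mean(np.abs(prev_crop.astype(float) - cur_crop.astype(float))))

def target_mismatch(prev_box, cur_box):
    # Target tracking prefers the candidate tracking box whose target position
    # and size differ least from the previous frame's tracking box.
    px0, py0, px1, py1 = prev_box
    cx0, cy0, cx1, cy1 = cur_box
    d_centre = np.hypot((px0 + px1 - cx0 - cx1) / 2, (py0 + py1 - cy0 - cy1) / 2)
    d_size = abs((px1 - px0) * (py1 - py0) - (cx1 - cx0) * (cy1 - cy0))
    return d_centre + 1e-4 * d_size          # illustrative weighting of the two terms

prev = (np.random.rand(540, 960, 3) * 255).astype(np.uint8)
cur = np.clip(prev.astype(int) + np.random.randint(-5, 6, prev.shape), 0, 255).astype(np.uint8)
cost = background_change(prev, cur) + target_mismatch((100, 80, 400, 380), (110, 85, 405, 390))
```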
为了方便描述,可以将进行抖动校正处理后获得的校正检测框记为校正区域,将对进行目标跟踪处理后获得的跟踪检测框记为跟踪区域,对采集的原始图像进行抖动校正处理和目标跟踪处理后的区域记为预览区域。For the convenience of description, the corrected detection frame obtained after the shake correction processing can be marked as the correction area, the tracking detection frame obtained after the target tracking processing is marked as the tracking area, and the jitter correction processing and target tracking are performed on the collected original image. The processed area is recorded as the preview area.
步骤S203,将所述预览区域中的图像作为预览画面进行显示。Step S203: Display the image in the preview area as a preview picture.
在本申请实施例中,可以将所述预览区域中的图像输给显示器,作为预览画面进行显示。在进行视频拍摄时,所述预览画面为显示的视频流中的图像帧。In the embodiment of the present application, the image in the preview area may be output to the display and displayed as a preview screen. During video shooting, the preview picture is an image frame in the displayed video stream.
本申请实施例通过摄像头采集原始图像,原始图像为低倍图像,然后通过抖动校正处理避免了画面的抖动,通过目标跟踪处理避免了画面中跟踪目标的晃动,在经过抖动校正处理和目标跟踪处理后,获得的是原始图像对应的处理图像中某个区域,最后将该区域中的处理图像作为预览画面进行显示,这样就能够呈现稳定的高倍效果。即本申请实施例的抖动校正处理和目标跟踪处理通过裁剪的方式既避免了画面抖动,又避免了画面中跟踪目标的晃动,还通过裁剪的方式划定原始图像中的某个区域,该区域对应的处理图像作为预览画面显示,同时实现高倍场景拍摄。In the embodiment of the application, the original image is collected by the camera, the original image is a low-magnification image, and then the shake correction process is used to avoid the shaking of the picture, and the shaking of the tracking target in the picture is avoided through the target tracking process. After the shake correction process and the target tracking process, After that, what is obtained is a certain area in the processed image corresponding to the original image, and finally the processed image in this area is displayed as a preview screen, so that a stable high-magnification effect can be presented. That is, the shake correction processing and target tracking processing in the embodiments of the present application not only avoid the shaking of the picture, but also avoid the shaking of the tracking target in the picture by cropping, and also delimit a certain area in the original image by cropping. The corresponding processed image is displayed as a preview screen, and high-magnification scene shooting is realized at the same time.
图4示出了本申请实施例提供的一种拍摄方法的示意性流程图,作为示例而非限定,该方法可以应用于移动终端中。FIG. 4 shows a schematic flowchart of a shooting method provided by an embodiment of the present application. As an example and not a limitation, the method may be applied to a mobile terminal.
步骤S401,获取移动终端的摄像头采集的原始图像。Step S401: Obtain the original image collected by the camera of the mobile terminal.
在本申请实施例中,步骤S401和步骤S201内容一致,具体可参照步骤S201的描述,在此不再赘述。In the embodiment of the present application, the content of step S401 and step S201 are the same. For details, please refer to the description of step S201, which will not be repeated here.
步骤S402,对所述原始图像进行抖动校正处理,获得处理图像以及所述处理图像中的校正区域。Step S402: Perform a shake correction process on the original image to obtain a processed image and a correction area in the processed image.
在本申请实施例中,所述抖动校正处理的过程是是采用电子防抖(Electric Image Stabilization,EIS)演算法运算,通过画面裁剪补偿方式来避免模糊。In the embodiment of the present application, the process of the image stabilization processing is to use an electronic image stabilization (EIS) algorithm to avoid blurring through image cropping compensation.
参见图5,图5为本申请实施例提供的抖动校正处理和目标跟踪处理的流程示意图;如图5中(a)所示,原始图像A经过抖动校正处理后,获得处理图像A’,处理图像A’与原始图像A的大小一致,校正区域B为校正处理过程中裁切后的中间模糊图像对应的区域。Referring to FIG. 5, FIG. 5 is a schematic flow diagram of the shake correction processing and target tracking processing provided by an embodiment of the application; as shown in FIG. The size of the image A′ is the same as that of the original image A, and the correction area B is the area corresponding to the intermediate blurred image after cropping during the correction process.
步骤S403,从所述原始图像中找出跟踪目标,并基于所述跟踪目标在所述原始图像中的位置,在所述原始图像中确定跟踪区域。Step S403: Find a tracking target from the original image, and determine a tracking area in the original image based on the position of the tracking target in the original image.
在本申请实施例中,进行目标跟踪处理时,可以基于原始图像进行目标跟踪处理, 例如,从原始图像中找出跟踪目标,根据跟踪目标在原始图像中的位置,可以在原始图像中确定出跟踪区域。In the embodiment of the present application, when the target tracking processing is performed, the target tracking processing can be performed based on the original image. For example, the tracking target can be found from the original image, and the position of the tracking target in the original image can be determined in the original image. Tracking area.
在原始图像中确定出跟踪区域的原则还可以是根据跟踪目标在原始图像中的位置和大小以及与所述跟踪目标对应的构图模型确定。The principle of determining the tracking area in the original image may also be determined according to the position and size of the tracking target in the original image and the composition model corresponding to the tracking target.
如图5中(b)所示,原始图像A经过目标跟踪处理后,获得跟踪区域C。As shown in Fig. 5(b), after the original image A is subject to target tracking processing, the tracking area C is obtained.
步骤S404,基于所述校正区域和所述跟踪区域对所述处理图像进行联合裁切,获得所述处理图像中的预览区域。Step S404: Jointly crop the processed image based on the correction area and the tracking area to obtain a preview area in the processed image.
在本申请实施例中,所述联合裁切的过程目的不是对处理图像进行裁剪,而是最终会确定出一个区域,该区域记为预览区域,处理图像中预览区域对应的画面可以作为预览画面。同时,校正区域、跟踪区域和预览区域可以是坐标的表示。校正区域对应的坐标可以映射到原始图像中,也可以映射到处理图像中;同样,跟踪区域对应的坐标也可以映射到原始图像中,也可以映射到处理图像中;预览区域对应的坐标也可以映射到原始图像中,也可以映射到处理图像中。预览区域映射到处理图像中对应的画面就是处理图像中预览区域对应的坐标表示的范围内的画面。In the embodiment of this application, the purpose of the joint cropping process is not to crop the processed image, but to finally determine an area, which is recorded as the preview area, and the picture corresponding to the preview area in the processed image can be used as the preview picture . At the same time, the correction area, the tracking area, and the preview area can be representations of coordinates. The coordinates corresponding to the correction area can be mapped to the original image or the processed image; similarly, the coordinates corresponding to the tracking area can also be mapped to the original image or the processed image; the coordinates corresponding to the preview area can also be mapped Mapping to the original image can also be mapped to the processed image. The mapping of the preview area to the corresponding picture in the processed image is the picture within the range indicated by the coordinates corresponding to the preview area in the processed image.
在进行联合裁切时,需要根据处理图像(当前帧原始图像对应的处理图像)对应的校正区域的中心点(也可以包括校正区域的边长)和跟踪区域的中心点(也可以包括跟踪区域的边长)进行融合,获得处理图像中的预览区域。也可以理解为校正区域和跟踪区域均为带有坐标的框,将两个框融合后,获得的融合后的框的坐标映射到处理图像中就是预览区域。通过上述描述也可得知,抖动校正处理除了本身对图像的清晰度的处理之外,实际上获得的校正区域就是相对于原始图像的坐标或者相对于处理图像(与原始图像大小一致)的坐标。When performing joint cropping, the center point of the correction area (or the side length of the correction area) corresponding to the processed image (the processed image corresponding to the original image of the current frame) and the center point of the tracking area (may also include the tracking area) The side length of) is merged to obtain the preview area in the processed image. It can also be understood that both the correction area and the tracking area are frames with coordinates. After the two frames are fused, the coordinates of the fused frame obtained are mapped to the processed image to be the preview area. It can also be known from the above description that, in addition to its own processing of image sharpness, the correction area actually obtained is the coordinates relative to the original image or the coordinates relative to the processed image (the same size as the original image). .
为了使得当前帧原始图像对应的预览区域与上一帧原始图像对应的预览区域具有较流畅的视觉效果，还可以参照上一帧原始图像对应的预览画面，例如上一帧原始图像对应的预览画面的中心点，或上一帧原始图像对应的校正区域的中心点，上一帧原始图像对应的跟踪区域的中心点，当然，除了上述中心点可以作为参数之外，还可以将区域(预览区域、校正区域或跟踪区域)边长作为一个参数。In order to make the preview area corresponding to the original image of the current frame and the preview area corresponding to the original image of the previous frame have a smoother visual effect, reference may also be made to the preview picture corresponding to the original image of the previous frame, for example, the center point of the preview picture corresponding to the original image of the previous frame, the center point of the correction area corresponding to the original image of the previous frame, or the center point of the tracking area corresponding to the original image of the previous frame; of course, in addition to the above center points, the side length of an area (the preview area, the correction area, or the tracking area) may also be used as a parameter.
作为举例，在以当前帧原始图像对应的校正区域的中心点、上一帧原始图像对应的校正区域的中心点、和当前图像对应的跟踪区域的中心点和上一帧原始图像对应的跟踪区域的中心点联合获得预览区域为参数时，可以通过以下公式：As an example, when the preview area is jointly obtained with the center point of the correction area corresponding to the current frame of the original image, the center point of the correction area corresponding to the previous frame of the original image, the center point of the tracking area corresponding to the current frame, and the center point of the tracking area corresponding to the previous frame of the original image as parameters, the following formula can be used:
F = α·f(P_B(i-1), P_B(i)) + β·g(P_C(i-1), P_C(i))

其中，F为预览区域，α和β为常数，f(·)为以上一帧原始图像对应的校正区域的中心点P_B(i-1)和当前帧原始图像对应的校正区域的中心点P_B(i)为变量的函数，g(·)为以上一帧原始图像对应的跟踪区域的中心点P_C(i-1)和当前帧原始图像对应的跟踪区域的中心点P_C(i)为变量的函数。Here, F is the preview area, α and β are constants, f(·) is a function whose variables are the center point P_B(i-1) of the correction area corresponding to the previous frame of the original image and the center point P_B(i) of the correction area corresponding to the current frame of the original image, and g(·) is a function whose variables are the center point P_C(i-1) of the tracking area corresponding to the previous frame of the original image and the center point P_C(i) of the tracking area corresponding to the current frame of the original image.
通过上述公式也可以看出,联合裁切时所述预览区域和当前帧原始图像对应的校正区域和跟踪区域的位置相关,同时也和上一帧原始图像对应的校正区域和跟踪区域的位置(或上一帧原始图像对应的预览区域)相关。It can also be seen from the above formula that the preview area is related to the position of the correction area and tracking area corresponding to the original image of the current frame during joint cropping, and is also related to the position of the correction area and tracking area corresponding to the original image of the previous frame ( Or the preview area corresponding to the previous frame of the original image) is related.
与上一帧原始图像对应的校正区域和跟踪区域的位置相关是为了使得获得的当前帧的预览区域可以和上一帧原始图像对应的预览区域能够具有较稳定的预览效果。The position correlation between the correction area and the tracking area corresponding to the original image of the previous frame is to enable the obtained preview area of the current frame and the preview area corresponding to the original image of the previous frame to have a relatively stable preview effect.
如图5中(c)所示,为校正区域B和跟踪区域C的位置对比,如图5中(d)所示,是将校正区域B和跟踪区域C联合裁切后,获得的预览区域D。As shown in Figure 5 (c), the position comparison between the correction area B and the tracking area C, as shown in Figure 5 (d), is the preview area obtained after the correction area B and the tracking area C are jointly cropped D.
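A minimal sketch of one concrete reading of the fusion formula above: f and g are taken to be simple averages of the previous-frame and current-frame centre points, and α and β are fixed weights. The actual functions f, g and the constants are design choices that this application leaves open.

```python
import numpy as np

def fuse_preview_centre(prev_corr, cur_corr, prev_track, cur_track, alpha=0.4, beta=0.6):
    """Centre points are (x, y) pairs; returns the fused centre of the preview area."""
    f = (np.asarray(prev_corr, dtype=float) + np.asarray(cur_corr, dtype=float)) / 2
    g = (np.asarray(prev_track, dtype=float) + np.asarray(cur_track, dtype=float)) / 2
    return tuple(alpha * f + beta * g)

# Centre points of the correction/tracking areas for frames i-1 and i.
preview_centre = fuse_preview_centre(
    prev_corr=(960, 540), cur_corr=(948, 552),
    prev_track=(1010, 500), cur_track=(1022, 515),
)
```

Blending in the previous frame's centres in this way is what gives consecutive preview areas the more stable visual effect mentioned above.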
Step S404: Display the image in the preview area as a preview picture.
In this embodiment of the present application, the content of step S404 is the same as that of step S203; reference may be made to the description of step S203, and details are not repeated here.
In this embodiment of the present application, shake correction processing is performed on the original image to obtain a processed image and a correction area, target tracking processing is performed on the original image to obtain a tracking area, and finally the processed image is jointly cropped based on the correction area and the tracking area, so that a relatively stable and smooth video picture can be obtained in a high-magnification scene.
FIG. 6 shows a schematic flowchart of a shooting method provided by an embodiment of the present application. As an example and not a limitation, the method may be applied to the above-mentioned mobile terminal.
Step S601: Obtain the original image collected by the camera of the mobile terminal.
In this embodiment of the present application, the content of step S601 is the same as that of step S201; for details, refer to the description of step S201, which is not repeated here.
Step S602: Perform shake correction processing on the original image to obtain a processed image and a correction area in the processed image.
In this embodiment of the present application, the content of step S602 is the same as that of step S402; for details, refer to the description of step S402, which is not repeated here.
Step S603: Find a tracking target in the correction area, determine a tracking area in the correction area based on the position of the tracking target in the correction area, and use the tracking area as the preview area.
In this embodiment of the present application, the target tracking processing performed in this step differs from step S403 in the embodiment shown in FIG. 4 in the following respect: in the embodiment shown in FIG. 4, the tracking area is determined from the original image, whereas in the embodiment shown in FIG. 6, the tracking area is determined from the image within the correction area obtained in step S602, for example from the picture to which the correction area is mapped in the processed image, or from the picture to which the correction area is mapped in the original image. Of course, when the tracking area is determined, the position of the tracking target in the image within the correction area, a composition model, and the like also need to be considered.
Referring to FIG. 7, FIG. 7 is a schematic diagram of the correction area, the tracking area, and the preview area corresponding to the shooting method provided by this embodiment of the application. As shown in (a) of FIG. 7, shake correction processing is first performed on the original image A to obtain the processed image A' and the correction area B; target tracking processing is then performed on the image within the correction area B (the picture to which the correction area is mapped in the processed image) to obtain the tracking area C, and the tracking area C is the preview area. Since the target tracking processing involves a cropping process, the tracking area C is located within the correction area B. Of course, when the tracking area is determined, target tracking processing may also be performed on the picture to which the correction area B is mapped in the original image; this is not limited.
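A minimal sketch of this processing order (shake correction first, then tracking confined to the correction area) is shown below. It is not the patent's implementation: `stabilize` and `track` are placeholder functions standing in for the real shake correction and target tracking steps, and only the cropping order reflects the text above.

```python
import numpy as np

def stabilize(frame):
    """Placeholder shake correction: return the processed frame and a
    correction rectangle (x, y, w, h) centered inside it."""
    h, w = frame.shape[:2]
    return frame, (w // 10, h // 10, w * 8 // 10, h * 8 // 10)

def track(region_image):
    """Placeholder target tracking: return a tracking rectangle (x, y, w, h)
    expressed in the coordinates of the image it was given."""
    h, w = region_image.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)

def preview_from_frame(frame):
    processed, (cx, cy, cw, ch) = stabilize(frame)        # correction area B
    correction_crop = processed[cy:cy + ch, cx:cx + cw]   # image inside B
    tx, ty, tw, th = track(correction_crop)               # tracking area C
    # C is expressed inside B, so it necessarily lies within B; C is the preview.
    return correction_crop[ty:ty + th, tx:tx + tw]

preview = preview_from_frame(np.zeros((1080, 1920, 3), dtype=np.uint8))
```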
Step S604: Display the image in the preview area as a preview picture.
In this embodiment of the present application, the content of step S604 is the same as that of step S203; reference may be made to the description of step S203, and details are not repeated here.
In this embodiment of the present application, shake correction processing is first performed on the original large image to obtain a processed image and a correction area, and target tracking processing is then performed on the image within the correction area of the processed image to obtain a tracking area; the obtained tracking area is the preview area, so that a relatively stable and smooth video picture can be obtained in a high-magnification scene.
FIG. 8 shows a schematic flowchart of a shooting method provided by an embodiment of the present application. As an example and not a limitation, the method may be applied to the above-mentioned mobile terminal.
Step S801: Obtain the original image collected by the camera of the mobile terminal.
In this embodiment of the present application, the content of step S801 is the same as that of step S201; for details, refer to the description of step S201, which is not repeated here.
Step S802: Find a tracking target in the original image, and determine a tracking area in the original image based on the position of the tracking target in the original image.
In this embodiment of the present application, the content of step S802 is the same as that of step S403; reference may be made to the description of step S403, and details are not repeated here.
Step S803: Perform shake correction processing on the image of the tracking area, determine a correction area in the tracking area, and use the correction area as the preview area.
This embodiment of the present application differs from the embodiments shown in FIG. 4 and FIG. 6 in that target tracking processing is first performed on the original image to obtain a tracking area, and shake correction processing is then performed on the image within the tracking area to obtain a processed image and a correction area. It should be noted that the processed image obtained in this case does not have the size of the original image but the size of the tracking area.
Referring to FIG. 9, FIG. 9 is a schematic diagram of the correction area, the tracking area, and the preview area corresponding to the shooting method provided by this embodiment of the application. As shown in (a) of FIG. 9, target tracking processing is first performed on the original image A to obtain the tracking area C; shake correction processing is then performed on the image within the tracking area C to obtain the processed image C' and the correction area B, and the correction area B is the preview area. Since the shake correction processing involves a cropping process, the correction area is located within the tracking area.
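For comparison with the previous embodiment, the sketch below illustrates the reversed order described here: tracking runs on the full original image, and shake correction is applied only to the tracking-area crop, so the processed image has the size of the tracking area. The helper functions are placeholders, not the patent's algorithms.

```python
import numpy as np

def track_full_frame(frame):
    """Placeholder target tracking on the full original image: returns a
    tracking rectangle (x, y, w, h) in original-image coordinates."""
    h, w = frame.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)

def stabilize_crop(crop):
    """Placeholder shake correction applied only to the tracking-area crop;
    the 'processed image' here has the size of the tracking area."""
    h, w = crop.shape[:2]
    correction = (w // 10, h // 10, w * 8 // 10, h * 8 // 10)
    return crop, correction

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
tx, ty, tw, th = track_full_frame(frame)                  # tracking area C
tracking_crop = frame[ty:ty + th, tx:tx + tw]
processed, (cx, cy, cw, ch) = stabilize_crop(tracking_crop)
preview = processed[cy:cy + ch, cx:cx + cw]               # correction area B is the preview
```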
Step S804: Display the image in the preview area as a preview picture.
In this embodiment of the present application, the content of step S804 is the same as that of step S203; reference may be made to the description of step S203, and details are not repeated here.
In this embodiment of the present application, target tracking processing is first performed on the original large image to obtain a tracking area, and shake correction processing is then performed on the image within the tracking area to obtain a processed image and a correction area in the processed image; the obtained correction area is the preview area, so that a relatively stable and smooth video picture can be obtained in a high-magnification scene.
As another embodiment of the present application, the shake correction processing includes:
acquiring an image to be corrected, and acquiring shake information of the mobile terminal at the moment when the image to be corrected is collected;
performing first cropping processing on the image to be corrected to obtain an edge image and an intermediate image; and
compensating the intermediate image based on the shake information and the edge image to obtain a processed image, where the area corresponding to the intermediate image is the correction area.
In this embodiment of the present application, the image to be corrected is the image on which shake correction processing is to be performed. For example, in the embodiments shown in FIG. 4 and FIG. 6, shake correction processing is performed on the original image, that is, the original image is the image to be corrected; in the embodiment shown in FIG. 8, shake correction processing is performed on the image of the tracking area, that is, the image within the tracking area is the image to be corrected.
In the shake correction processing, the shake information of the mobile terminal at the collection moment of the image to be corrected is required (if the image to be corrected is the image within the tracking area, the collection moment of the original image corresponding to the image within the tracking area is used). A gyroscope provided inside the mobile terminal may be used to collect the shake information of the mobile terminal and to generate the shake information of the mobile terminal corresponding to each moment, and the shake information corresponding to the collection moment of the image to be corrected is then obtained from the shake information of the mobile terminal.
Based on the shake information, reverse compensation may be performed on the image to be corrected. When reverse compensation is performed, first cropping processing is required, that is, an edge image and an intermediate blurred image are cropped out; a compensation amount is calculated from the shake information, and the intermediate blurred image is then reversely compensated by means of the edge image. After the shake correction processing, the intermediate blurred image becomes clearer as a result of the reverse compensation. Since the intermediate blurred image is cropped out of the original image, an area is obtained after the first cropping processing; the image within this area is the compensated image, and this area may be recorded as the correction area. When the shake correction processing is performed, the ratio (for example, the area ratio, length ratio, or width ratio) of the finally determined correction area to the image on which shake correction processing is to be performed may be set in advance.
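As a rough illustration of the first cropping plus reverse compensation, the sketch below models the shake as a pure pixel translation derived from gyroscope data and consumes the reserved edge margin by shifting the crop window in the opposite direction. Real electronic stabilization also handles rotation and rolling shutter; those are outside this simplified example.

```python
import numpy as np

def shake_correct(frame, shake_dx, shake_dy, margin_ratio=0.1):
    """Compensate a frame for shake modeled as a translation of
    (shake_dx, shake_dy) pixels, using an edge margin reserved by the
    first cropping step. Returns the compensated crop (the correction area)."""
    h, w = frame.shape[:2]
    mx, my = int(w * margin_ratio), int(h * margin_ratio)   # edge-image width
    # Reverse compensation: shift the crop window against the measured shake,
    # clamped so the window stays inside the frame.
    x0 = int(np.clip(mx - shake_dx, 0, 2 * mx))
    y0 = int(np.clip(my - shake_dy, 0, 2 * my))
    cw, ch = w - 2 * mx, h - 2 * my                          # preset crop ratio
    return frame[y0:y0 + ch, x0:x0 + cw]

corrected = shake_correct(np.zeros((1080, 1920, 3), np.uint8), shake_dx=12, shake_dy=-7)
```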
As another embodiment of the present application, the target tracking processing includes:
finding, based on an attention mechanism, a tracking target from the image to be tracked and/or the N frames of images preceding the image to be tracked, where N ≥ 1; and
performing second cropping processing based on the position of the tracking target in the image to be tracked, so as to determine the tracking area in the image to be tracked.
In this embodiment of the present application, when target tracking processing is performed on the image to be tracked, the tracking target needs to be determined in advance. Based on the attention mechanism, the tracking target may be found from the image to be tracked, may be found from the N frames of images preceding the image to be tracked, or may be found from both the image to be tracked and the N frames of images preceding it.
When target tracking processing is performed with an algorithm based on the attention mechanism, a convolutional neural network model needs to be constructed, in which different parts of the input data or of the feature maps receive different degrees of attention. For example, at each target scale of interest, a classification network and an attention proposal network (APN) are used. The APN may consist of two fully connected layers and output three parameters indicating the position of a box; the classification network at the next scale extracts features only from this newly generated box image for classification. During training, a loss function is used so that the classification result obtained at the later scale is better than that at the previous scale, which drives the APN to extract local parts of the target that are more conducive to fine-grained classification. As training proceeds, the APN focuses more and more on the subtle, discriminative parts of the target.
In the specific process of determining the tracking target, the current image to be tracked may be input into the convolutional neural network model; the N frames of original images preceding the original image corresponding to the image to be tracked (or the preview pictures corresponding to those N frames of original images) may be input into the convolutional neural network model; or the N frames of images preceding the original image corresponding to the image to be tracked (or the preview pictures corresponding to those N frames of original images) together with the current image to be tracked may be input into the convolutional neural network model, so that the tracking target is output. The input image of the convolutional neural network model is not limited here.
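As one illustrative possibility (the layer sizes and the box parameterization are assumptions for the example, not details from the patent), a toy attention proposal network with two fully connected layers that output three box parameters could look like this:

```python
import torch
import torch.nn as nn

class AttentionProposalNetwork(nn.Module):
    """Toy APN: two fully connected layers mapping pooled features to three
    box parameters (tx, ty, tl) that locate a square attention region."""
    def __init__(self, feature_dim=256, hidden_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(feature_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, 3)

    def forward(self, features):
        x = torch.relu(self.fc1(features))
        # Sigmoid keeps the box parameters in [0, 1] (relative coordinates).
        return torch.sigmoid(self.fc2(x))

apn = AttentionProposalNetwork()
pooled = torch.randn(1, 256)      # stand-in for pooled CNN features
tx, ty, tl = apn(pooled)[0]       # relative center and half side length of the box
```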
If the tracking target is determined from the N frames of images preceding the image to be tracked, then after the tracking target is found, the position of the tracking target in the image to be tracked needs to be determined. The image to be tracked is the image on which target tracking processing is about to be performed: in the embodiments shown in FIG. 4 and FIG. 8, the image to be tracked in the target tracking processing is the original image; in the embodiment shown in FIG. 6, the image to be tracked is the image within the correction area (the picture to which the correction area is mapped in the original image or in the processed image). When the tracking area is output according to the position of the tracking target, the position and size of the tracking area may be determined according to the position and size of the tracking target; for example, the center point of the minimum bounding rectangle of the tracking target may be set as the center point of the tracking area, and the length and width of the tracking area may be set in advance.
It should be noted that the process of obtaining the tracking area through the second cropping processing described above is only an example and does not limit the process of obtaining the tracking area. In practical applications, other ways of obtaining the position and size of the tracking area may also be chosen; this is not limited here.
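The example described above (tracking-area center placed at the center of the target's minimum bounding rectangle, with a preset tracking-area size) can be sketched as follows; the clamping to the image borders is an added assumption for the sketch.

```python
def tracking_area_from_target(bbox, crop_w, crop_h, image_w, image_h):
    """Second-cropping sketch: center a tracking area of preset size
    (crop_w, crop_h) on the center of the target's bounding box
    bbox = (x, y, w, h), clamped to the image borders."""
    cx = bbox[0] + bbox[2] / 2.0
    cy = bbox[1] + bbox[3] / 2.0
    x0 = int(min(max(cx - crop_w / 2.0, 0), image_w - crop_w))
    y0 = int(min(max(cy - crop_h / 2.0, 0), image_h - crop_h))
    return (x0, y0, crop_w, crop_h)

# Example: a 640x360 tracking area around a target box inside a 1080p frame.
area = tracking_area_from_target((1200, 500, 80, 160), 640, 360, 1920, 1080)
```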
It should be noted here that the above processes of shake correction processing and target tracking processing are only examples. In practical applications, the shake correction processing and target tracking processing in the embodiments of FIG. 4, FIG. 6, and FIG. 8 may also differ from the processes described above. The specific shake correction and target tracking methods used are not limited here.
As another embodiment of the present application, before the image in the preview area is displayed as a preview picture, the method further includes:
performing smoothing processing on the preview area in the processed image to obtain a smoothed preview area.
Correspondingly, displaying the image in the preview area as a preview picture includes:
displaying the image in the smoothed preview area as a preview picture.
In this embodiment of the present application, in order to obtain a smoother preview effect between multiple frames of preview pictures, smoothing processing may also be performed on the image in the preview area to obtain a smoothed preview area, and the image in the smoothed preview area is displayed as a preview picture. Since what finally needs to be obtained is the image after shake correction processing, the preview picture obtained after the smoothing processing is the picture to which the preview area is mapped in the processed image.
The smoothing processing may be performed by a filter that moves the position and size of the preview area with reference to the position and size of the tracking target in the preview picture of the previous frame.
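A single-filter version of this smoothing could be sketched as below; the gains and the choice of rectangle parameters are assumptions for illustration, not values from the patent.

```python
def smooth_preview(prev_rect, cur_rect, pos_gain=0.3, size_gain=0.2):
    """One-filter smoothing sketch: move the current preview rectangle
    (x, y, w, h) only part of the way from the previous frame's rectangle,
    so the preview window does not jump between frames."""
    px, py, pw, ph = prev_rect
    cx, cy, cw, ch = cur_rect
    return (px + pos_gain * (cx - px),
            py + pos_gain * (cy - py),
            pw + size_gain * (cw - pw),
            ph + size_gain * (ch - ph))

smoothed = smooth_preview((400, 200, 640, 360), (460, 230, 660, 370))
```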
As another embodiment of the present application, performing smoothing processing on the preview area includes:
performing smoothing processing on the preview area separately through at least two filters to obtain a smoothed sub-area corresponding to each filter;
determining the weight of each filter through a decision maker; and
performing fusion processing on the smoothed sub-areas corresponding to the filters based on the weight of each filter, to obtain the smoothed preview area.
In this embodiment of the present application, referring to FIG. 10, FIG. 10 is a schematic diagram of the smoothing process in the shooting method provided by this embodiment of the application. As shown in the figure, a filter bank may be designed, the filter bank containing multiple filters, each of which uses a different filtering algorithm. By smoothing the preview area separately with each filter, the smoothed sub-area processed by each filter can be obtained. Since the filters in the filter bank use different filtering algorithms, the positions of the smoothed sub-areas obtained by the filters may differ.
In order to fuse the smoothed sub-areas corresponding to the multiple filters, a weight may also be set for each filter. For example, the decision maker may determine the weight of each filter based on the position and size of the tracking target in M consecutive frames of preview pictures (M being greater than or equal to 1) and the position and size of the tracking target in the current preview area. After the weight of each filter is obtained, the smoothed sub-areas corresponding to the filters can be fused based on the weights to obtain the smoothed preview area. The position of the preview area after smoothing may have changed relative to that of the preview area before smoothing.
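The filter-bank fusion can be sketched as follows. The two toy filters and the fixed decision weights are assumptions for the example; a real decision maker would derive the weights from the tracking target's position and size over the recent M frames, as described above.

```python
import numpy as np

def fuse_filter_bank(preview_rect, prev_rect, filters, decide_weights):
    """Filter-bank smoothing sketch: each filter proposes a smoothed
    sub-area, a decision function assigns a weight to each filter, and
    the weighted average of the proposals is the smoothed preview area."""
    proposals = np.array([f(prev_rect, preview_rect) for f in filters], dtype=float)
    weights = np.asarray(decide_weights(proposals), dtype=float)
    weights = weights / weights.sum()
    return tuple(weights @ proposals)     # weighted fusion of (x, y, w, h)

# Two toy filters: one trusts the current frame, one averages with the previous frame.
filters = [lambda p, c: c,
           lambda p, c: tuple(0.5 * (pi + ci) for pi, ci in zip(p, c))]
# Toy decision maker with fixed weights; a real one would inspect recent target motion.
fused = fuse_filter_bank((460, 230, 660, 370), (400, 200, 640, 360),
                         filters, lambda props: [0.6, 0.4])
```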
Of course, after the smoothing processing is performed on the preview area, other processing may also be performed on the image of the preview area, and the processed image of the preview area is then sent for display. For example, the image of the smoothed preview area may be processed with a super-resolution algorithm to improve the image quality. Super-resolution is a low-level image processing task that maps a low-resolution image to a high resolution so as to enhance image details.
As an example, a deep learning method may be used for the super-resolution processing. A large number of high-resolution images are first accumulated for model learning; for example, the high-resolution images are degraded according to a quality-reduction model to generate training data, the images are divided into blocks according to the correspondence between the low-frequency and high-frequency parts of the high-resolution images, prior knowledge is obtained through learning, and a learning model is established. A low-resolution image is then input into the model; based on the input low-resolution blocks, the best-matching high-frequency blocks are searched for in the established learning set so as to restore the low-resolution image. Finally, the high-frequency details of the image are recovered and the image quality is improved.
Since the shake correction processing provided in the embodiments of this application is an electronic image stabilization method, it may reduce the image quality. Therefore, before the preview is displayed, the image quality is improved by the super-resolution algorithm, so as to avoid the problem of reduced image quality in high-magnification shooting scenes.
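As a hedged illustration of a learned super-resolution step (not the patent's model), the sketch below defines a tiny, untrained three-layer convolutional network that upscales the preview crop and then refines it; a product implementation would use a properly trained model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    """Toy super-resolution network: upscale, then refine with three convolutions."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.conv1 = nn.Conv2d(3, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=5, padding=2)
        self.conv3 = nn.Conv2d(32, 3, kernel_size=5, padding=2)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.conv3(x)

sr = TinySR()
enhanced = sr(torch.rand(1, 3, 360, 640))   # preview crop upscaled to 720x1280
```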
FIG. 11 shows a schematic flowchart of a shooting method provided by an embodiment of the present application. As an example and not a limitation, the method includes the following steps.
Step S1101: Obtain the original image collected by the camera of the mobile terminal.
Step S1102: Perform shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area, in the processed image, containing the tracking target.
Step S1103: Display the image in the preview area as a preview picture.
In this embodiment of the present application, the content of steps S1101 to S1103 is the same as that of steps S201 to S203; for details, refer to the description of steps S201 to S203.
Step S1104: Perform zoom processing on the camera based on the size, in the original image, of the tracking target in the preview area, where the camera after the zoom processing is used to collect the next frame of the original image.
In this embodiment of the present application, cropping occurs in the process of obtaining the preview area; for example, first cropping processing may be performed during the shake correction processing, and second cropping processing may also be performed during the target tracking processing according to the found tracking target. Therefore, the picture size of the preview area is smaller than the picture size of the original image. The camera may be zoomed according to the position and proportion, in the original image, of the picture in the preview area (or of the tracking target). The purpose of the zoom processing is to enable the next frame of the original image collected by the camera to yield, after being cropped at least twice, a preview picture with a better composition effect. Of course, the zoom processing may include optical zoom and/or digital zoom.
It should be noted here that the purpose of the zoom processing is not to make the position and size of the tracking target in the next frame of the original image collected by the zoomed camera closely match the position and size of the tracking target in the preview area corresponding to the current frame of the original image. Rather, the preview picture (the position and size of the tracking target) obtained after shake correction processing and target tracking processing are performed on the next frame of the original image collected by the zoomed camera needs to closely match the preview picture (the position and size of the tracking target) obtained from the current frame of the original image. That is, during the zoom processing it must be taken into account that the next frame of the original image also needs to undergo cropping processing.
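A simple way to reason about the zoom factor under these constraints is sketched below. The crop ratio, desired target fraction, and zoom limits are assumed example values, and the calculation considers only the target's height; it is meant to show that the later cropping must be budgeted for, not to reproduce the patent's computation.

```python
def estimate_zoom(target_h, frame_h, crop_ratio=0.8 * 0.8, desired_fraction=0.4,
                  max_zoom=10.0):
    """Zoom-processing sketch: choose a zoom factor so that, after the frame is
    cropped twice (shake correction and tracking, combined here as crop_ratio),
    the target is expected to fill `desired_fraction` of the preview height.

    target_h: target height in the current original image (pixels).
    frame_h:  original image height (pixels).
    """
    preview_h = frame_h * crop_ratio              # expected preview height
    current_fraction = target_h / preview_h       # how large the target looks now
    zoom = desired_fraction / max(current_fraction, 1e-6)
    return min(max(zoom, 1.0 / max_zoom), max_zoom)

zoom_factor = estimate_zoom(target_h=120, frame_h=1080)
```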
When at least two cameras are provided on the mobile terminal, one of the zoomed cameras may also be selected as the main camera, and the main camera is used to collect the next frame of the original image; reference may be made to the description of step S1105.
Step S1105: Select one of the zoomed cameras as the main camera, where the main camera is the camera that collects the next frame of the original image.
In this embodiment of the present application, one of the zoomed cameras may be selected as the main camera according to the position, in the original image, of the tracking target in the preview area.
Of course, one of the zoomed cameras may also be selected as the main camera according to a composition model. As an example, because the adjustable focal length ranges of the cameras differ, even after the camera that collects the current frame of the original image is zoomed, the estimated position and size of the tracking target in the preview area corresponding to the next frame of the original image it would collect may fail to match, or may only poorly match, the position and size of the tracking target in the preview area corresponding to the current frame of the original image. In that case it is necessary to switch to another camera for which the estimated position and size of the tracking target in the preview area corresponding to the next frame of the original image match the position and size of the tracking target in the preview area corresponding to the current frame of the original image to a higher degree. In another case, because multiple cameras may be located at somewhat different positions on the mobile terminal, the position of the tracking target in the original image collected by each camera may differ. In order to make the picture in the preview area corresponding to the next frame of the original image better match the composition model, it is possible to switch to another camera for which the picture in the preview area corresponding to the next frame of the original image it collects better matches the composition model.
It should be noted here that, during the zoom processing and the camera switching processing, the next frame of the original image, the preview area corresponding to the next frame of the original image, and the picture in the preview area corresponding to the next frame of the original image are all obtained by calculation and estimation.
The composition model may be obtained by generating a composition model based on the preview picture of the previous frame and the position and size of the tracking target in that preview picture, or by selecting, based on the preview picture of the previous frame and the tracking area in that preview picture, the composition model with the highest matching degree from preset composition models.
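Camera selection by predicted matching can be sketched as a simple scoring loop; the prediction and scoring functions here are placeholders for the estimation described above (focal-length range, camera placement, composition model), and the camera names are purely illustrative.

```python
def select_main_camera(cameras, predict_preview, reference_preview, score):
    """Camera-selection sketch: for each zoomed camera, predict the preview
    (tracking-target position and size) it would produce for the next frame,
    score it against the current frame's preview, and pick the best camera.

    cameras: list of camera descriptors.
    predict_preview(camera) -> predicted (x, y, w, h) of the target.
    reference_preview: (x, y, w, h) of the target in the current preview.
    score(predicted, reference) -> higher is better.
    """
    return max(cameras, key=lambda cam: score(predict_preview(cam), reference_preview))

# Example with a trivial score: smaller difference in target height is better.
best = select_main_camera(
    ["main", "wide", "tele"],
    predict_preview=lambda cam: {"main": (300, 200, 80, 160),
                                 "wide": (310, 190, 40, 80),
                                 "tele": (290, 210, 120, 240)}[cam],
    reference_preview=(305, 205, 90, 180),
    score=lambda p, r: -abs(p[3] - r[3]),
)
```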
In this embodiment of the present application, by performing zoom processing on the camera and switching cameras, the preview picture obtained after shake correction processing and target tracking processing are performed on the collected next frame of the original image can have a good composition.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
Corresponding to the shooting method described in the foregoing embodiments, FIG. 12 shows a structural block diagram of the shooting apparatus provided in an embodiment of the present application. For ease of description, only the parts related to the embodiment of the present application are shown.
Referring to FIG. 12, the apparatus 12 includes:
an image acquisition unit 121, configured to acquire the original image collected by the camera of the mobile terminal;
an image processing unit 122, configured to perform shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area, in the processed image, containing the tracking target, where the preview area includes the tracking target; and
an image display unit 123, configured to display the image in the preview area as a preview picture.
As another embodiment of the present application, the image processing unit 122 includes:
a shake correction processing module 1221, configured to perform shake correction processing on the original image to obtain a processed image and a correction area in the processed image;
a target tracking processing module 1222, configured to find a tracking target in the original image and determine a tracking area in the original image based on the position of the tracking target in the original image; and
a joint cropping module 1223, configured to perform joint cropping on the processed image based on the correction area and the tracking area to obtain the preview area in the processed image.
As another embodiment of the present application, the image processing unit 122 includes:
a shake correction processing module, configured to perform shake correction processing on the original image to obtain a processed image and a correction area in the processed image; and
a target tracking processing module, configured to find a tracking target in the correction area, determine a tracking area in the correction area based on the position of the tracking target in the correction area, and use the tracking area as the preview area.
As another embodiment of the present application, the image processing unit 122 includes:
a target tracking processing module, configured to find a tracking target in the original image and determine a tracking area in the original image based on the position of the tracking target in the original image; and
a shake correction processing module, configured to perform shake correction processing on the image of the tracking area to obtain a processed image and a correction area in the processed image, and use the correction area as the preview area.
As another embodiment of the present application, the apparatus 12 includes:
a smoothing processing unit 124, configured to perform smoothing processing on the preview area before the image in the preview area is displayed as a preview picture, to obtain a smoothed preview area.
Correspondingly, the image display unit 123 is further configured to:
display the image in the smoothed preview area as a preview picture.
As another embodiment of the present application, the smoothing processing unit 124 includes:
a smoothing processing module, configured to perform smoothing processing on the preview area separately through at least two filters to obtain a smoothed sub-area corresponding to each filter;
a weight generation module, configured to determine the weight of each filter through a decision maker; and
a fusion module, configured to perform fusion processing on the smoothed sub-areas corresponding to the filters based on the weight of each filter, to obtain the smoothed preview area.
As another embodiment of the present application, the apparatus 12 further includes:
a zoom unit 125, configured to: after the preview area containing the tracking target in the processed image is obtained, perform zoom processing on the camera based on the size, in the original image, of the tracking target in the preview area, where the camera after the zoom processing is used to collect the next frame of the original image.
As another embodiment of the present application, the apparatus 12 further includes:
a camera switching unit 126, configured to: after the zoom processing is performed on the camera, select one of the zoomed cameras as the main camera, where the main camera is the camera that collects the next frame of the original image.
As another embodiment of the present application, the camera switching unit 126 is further configured to:
select one of the zoomed cameras as the main camera according to the position, in the original image, of the tracking target in the preview area.
It should be noted that, because the information exchange between and the execution processes of the foregoing apparatuses/units are based on the same concept as the method embodiments of this application, reference may be made to the method embodiments for their specific functions and technical effects, and details are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the foregoing functional units and modules is merely used as an example. In practical applications, the foregoing functions may be allocated to different functional units and modules as required, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or some of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are merely for ease of distinguishing them from one another and are not intended to limit the protection scope of this application. For the specific working processes of the units and modules in the foregoing system, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
The shooting method provided in the embodiments of this application may be applied to mobile terminals such as mobile phones, cameras, tablet computers, augmented reality (AR)/virtual reality (VR) devices, and notebook computers; the embodiments of this application impose no limitation on the specific type of the mobile terminal.
The mobile terminal provided in the embodiments of this application includes a memory, a processor, and a computer program that is stored in the memory and can run on the processor, where the processor, when executing the computer program, implements the steps of any shooting method provided in the embodiments of this application.
Take the mobile terminal being a mobile phone as an example. FIG. 13 is a block diagram of part of the structure of a mobile phone provided by an embodiment of the present application. Referring to FIG. 13, the mobile phone includes components such as a radio frequency (RF) circuit 1310, a memory 1320, an input unit 1330, a display unit 1340, a sensor 1350, an audio circuit 1360, a wireless fidelity (WiFi) module 1370, a processor 1380, and a power supply 1390. Those skilled in the art can understand that the mobile phone structure shown in FIG. 13 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine some components, or have a different arrangement of components.
The components of the mobile phone are described in detail below with reference to FIG. 13:
The RF circuit 1310 may be used to receive and send signals during information transmission and reception or during a call. In particular, after receiving downlink information from a base station, it passes the information to the processor 1380 for processing; in addition, it sends designed uplink data to the base station. Generally, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1310 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, and Short Messaging Service (SMS).
The memory 1320 may be used to store software programs and modules. The processor 1380 executes various functional applications and data processing of the mobile phone, such as image processing, by running the software programs and modules stored in the memory 1320. The memory 1320 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as an image playback function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as the positions of the correction area, the tracking area, or the preview area), and the like. In addition, the memory 1320 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The input unit 1330 may be used to receive input digital or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1330 may include a touch panel 1331 and another input device 1332. The touch panel 1331, also called a touchscreen, can collect touch operations performed by the user on or near it (for example, operations performed by the user on or near the touch panel 1331 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch panel 1331 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends the coordinates to the processor 1380, and can receive and execute commands sent by the processor 1380. In addition, the touch panel 1331 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1331, the input unit 1330 may further include another input device 1332. Specifically, the other input device 1332 may include, but is not limited to, a physical keyboard and function keys (such as volume control keys and a power key).
The display unit 1340 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone, for example to display the preview picture. The display unit 1340 may include a display panel 1341. Optionally, the display panel 1341 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 1331 may cover the display panel 1341. When the touch panel 1331 detects a touch operation on or near it, the operation is transmitted to the processor 1380 to determine the type of the touch event, and the processor 1380 then provides a corresponding visual output on the display panel 1341 according to the type of the touch event. Although in FIG. 13 the touch panel 1331 and the display panel 1341 are shown as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1331 and the display panel 1341 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may further include at least one sensor 1350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 1341 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1341 and/or the backlight when the mobile phone is moved close to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally along three axes), can detect the magnitude and direction of gravity when at rest, and can be used in applications that recognize the posture of the mobile phone (such as switching between landscape and portrait modes, related games, and magnetometer posture calibration) and in functions related to vibration recognition (such as a pedometer and tapping). As for other sensors that may also be configured on the mobile phone, such as a gyroscope (which obtains the shake information of the mobile phone), a barometer, a hygrometer, a thermometer, and an infrared sensor, details are not described here.
The audio circuit 1360, a loudspeaker 1361, and a microphone 1362 may provide an audio interface between the user and the mobile phone. The audio circuit 1360 may transmit an electrical signal, converted from received audio data, to the loudspeaker 1361, and the loudspeaker 1361 converts it into a sound signal for output; on the other hand, the microphone 1362 converts a collected sound signal into an electrical signal, which is received by the audio circuit 1360 and converted into audio data. The audio data is then processed by the processor 1380 and sent, for example, to another mobile phone through the RF circuit 1310, or output to the memory 1320 for further processing.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 1370, the mobile phone can help the user send and receive emails, browse web pages, access streaming media, and the like; it provides the user with wireless broadband Internet access. Although FIG. 13 shows the WiFi module 1370, it can be understood that it is not an essential component of the mobile phone and may be omitted as required without changing the essence of the invention.
The processor 1380 is the control center of the mobile phone. It connects all parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 1320 and calling the data stored in the memory 1320, thereby monitoring the mobile phone as a whole. Optionally, the processor 1380 may include one or more processing units; preferably, the processor 1380 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1380.
The mobile phone further includes a power supply 1390 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 1380 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include a camera. Optionally, the camera may be located at the front or at the rear of the mobile phone, which is not limited in the embodiments of this application.
Optionally, the mobile phone may include a single camera, dual cameras, or triple cameras, which is not limited in the embodiments of this application.
For example, the mobile phone may include three cameras: a main camera, a wide-angle camera, and a telephoto camera.
Optionally, when the mobile phone includes multiple cameras, the positions of the multiple cameras may be set according to the actual situation, which is not limited in the embodiments of this application.
In addition, although not shown, the mobile phone may further include a Bluetooth module and the like, which are not described here.
FIG. 14 is a schematic diagram of the software structure of the mobile terminal (mobile phone) according to an embodiment of the present application. Taking an Android system as the mobile phone operating system as an example, in some embodiments the Android system is divided into four layers: an application layer, an application framework layer (framework, FWK), a system layer, and a hardware abstraction layer, and the layers communicate with each other through software interfaces.
As shown in FIG. 14, the application layer may include a series of application packages, and the application packages may include applications such as Messages, Calendar, Camera, Video, Navigation, Gallery, and Phone.
The application framework layer provides an application programming interface (API) and a programming framework for the applications at the application layer. The application framework layer may include some predefined functions, for example functions for receiving events sent by the application framework layer.
As shown in FIG. 14, the application framework layer may include a window manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on. The content provider is used to store and obtain data and to make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, a phone book, and so on.
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables an application to display notification information in the status bar, and can be used to convey a notification-type message that automatically disappears after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message reminders, and so on. The notification manager may also present a notification that appears in the status bar at the top of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is played, the electronic device vibrates, or an indicator light blinks.
应用程序框架层还可以包括:The application framework layer can also include:
视图系统,所述视图系统包括可视控件,例如显示文字的控件,显示图片的控件 等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。A view system, which includes visual controls, such as controls that display text, controls that display pictures, and so on. The view system can be used to build applications. The display interface can be composed of one or more views. For example, a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
电话管理器用于提供手机的通信功能。例如通话状态的管理(包括接通,挂断等)。The phone manager is used to provide the communication function of the mobile phone. For example, the management of the call status (including connecting, hanging up, etc.).
The system layer may include multiple functional modules, for example a sensor service module, a physical state recognition module, and a three-dimensional graphics processing library (for example, OpenGL ES).
The sensor service module is used to monitor the sensor data uploaded by the various sensors at the hardware layer and to determine the physical state of the mobile phone.
The physical state recognition module is used to analyze and recognize user gestures, faces, and the like.
The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like.
The system layer may further include:
A surface manager, which is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications.
A media library, which supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The hardware abstraction layer is a layer between the hardware and the software. The hardware abstraction layer may include a display driver, a camera driver, a sensor driver, and the like, which are used to drive the related hardware at the hardware layer, such as the display screen, the camera, and the sensors.
The above embodiments of the shooting method can be implemented on a mobile phone having the above hardware structure/software structure.
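For orientation only, the following Kotlin sketch outlines one way the per-frame flow of the above shooting method embodiments could be organized on such a phone: shake correction and target tracking drive a crop window that is then displayed as the preview picture. Every name in it (Frame, Shake, stabilize, findTrackingArea, jointCrop, smoothWindow, render) is a hypothetical placeholder rather than an actual Android framework, camera HAL, or other platform API, and the fixed numbers are illustrative. The later sketches expand the individual steps.

```kotlin
// Minimal per-frame pipeline sketch: shake correction and target tracking drive
// a crop window that is displayed as the preview picture. All names are placeholders.
data class Rect(val x: Int, val y: Int, val w: Int, val h: Int)
data class Frame(val width: Int, val height: Int)   // stands in for the raw image data
data class Shake(val dx: Int, val dy: Int)          // per-frame motion estimate (e.g. from the gyroscope)

fun stabilize(frame: Frame, shake: Shake): Rect =   // correction area (details in a later sketch)
    Rect(120 - shake.dx, 90 - shake.dy, frame.width - 240, frame.height - 180)

fun findTrackingArea(frame: Frame): Rect =          // tracking area (details in a later sketch)
    Rect(frame.width / 4, frame.height / 4, frame.width / 2, frame.height / 2)

fun jointCrop(correction: Rect, tracking: Rect): Rect = tracking   // see the joint-crop sketch
fun smoothWindow(window: Rect): Rect = window                      // see the smoothing sketch
fun render(previewArea: Rect) = println("preview area: $previewArea")

fun main() {
    val frame = Frame(width = 4000, height = 3000)
    val shake = Shake(dx = 12, dy = -8)              // measured at capture time
    val correction = stabilize(frame, shake)         // shake correction -> correction area
    val tracking = findTrackingArea(frame)           // target tracking  -> tracking area
    val preview = smoothWindow(jointCrop(correction, tracking))
    render(preview)                                  // shown as the preview picture
}
```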
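The shake correction processing reserves an edge margin with a first crop and then compensates the intermediate image using the measured shake information. The Kotlin sketch below shows only this cropping geometry, under the assumption that the shake can be summarized as a single per-frame pixel offset; real electronic stabilization would also handle rotation and rolling-shutter warping, which are omitted here, and the margin values are arbitrary.

```kotlin
data class Rect(val x: Int, val y: Int, val w: Int, val h: Int)
data class Shake(val dx: Int, val dy: Int)   // estimated image-space displacement at capture time

// Geometric sketch of electronic stabilization: reserve an edge margin by a first
// crop, then slide the intermediate window against the measured shake, never
// leaving the full sensor frame. Warping and rotation are deliberately omitted.
fun correctionArea(frameW: Int, frameH: Int, marginX: Int, marginY: Int, shake: Shake): Rect {
    val w = frameW - 2 * marginX                            // intermediate image size after the first crop
    val h = frameH - 2 * marginY
    val x = (marginX - shake.dx).coerceIn(0, frameW - w)    // compensate by borrowing from the edge image
    val y = (marginY - shake.dy).coerceIn(0, frameH - h)
    return Rect(x, y, w, h)
}

fun main() {
    val shake = Shake(dx = 18, dy = -7)                     // e.g. derived from gyroscope samples
    println(correctionArea(frameW = 4000, frameH = 3000, marginX = 120, marginY = 90, shake = shake))
}
```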
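The target tracking processing determines a tracking area around the detected tracking target by a second cropping. How the target itself is found (for example with an attention-based detector over the current frame and the preceding N frames) is not sketched here; the code below only shows one plausible cropping rule that keeps the target centered with some margin and clamps the window to the image bounds. The margin factor is an arbitrary illustrative choice, not a value from the application.

```kotlin
data class Rect(val x: Int, val y: Int, val w: Int, val h: Int)

// Given the detected target box, derive a tracking area that keeps the target
// centered with some margin, clamped to the image bounds.
fun trackingAreaFor(target: Rect, imageW: Int, imageH: Int, margin: Double = 1.8): Rect {
    val w = (target.w * margin).toInt().coerceAtMost(imageW)
    val h = (target.h * margin).toInt().coerceAtMost(imageH)
    val cx = target.x + target.w / 2
    val cy = target.y + target.h / 2
    val x = (cx - w / 2).coerceIn(0, imageW - w)
    val y = (cy - h / 2).coerceIn(0, imageH - h)
    return Rect(x, y, w, h)
}

fun main() {
    val target = Rect(x = 1800, y = 900, w = 300, h = 500)   // detected subject box
    println(trackingAreaFor(target, imageW = 4000, imageH = 3000))
}
```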
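When the correction area and the tracking area are obtained separately, they are combined by a joint crop to produce the preview area. The exact geometry is not spelled out in this part of the description, so the rule below is an assumption: keep the tracking window, but shift it (and shrink it if necessary) so that it lies entirely inside the shake-corrected region of the processed image.

```kotlin
data class Rect(val x: Int, val y: Int, val w: Int, val h: Int)

// Assumed joint-crop rule: constrain the tracking window to the valid
// (shake-corrected) correction area of the processed image.
fun jointCrop(correction: Rect, tracking: Rect): Rect {
    val w = minOf(tracking.w, correction.w)
    val h = minOf(tracking.h, correction.h)
    val x = tracking.x.coerceIn(correction.x, correction.x + correction.w - w)
    val y = tracking.y.coerceIn(correction.y, correction.y + correction.h - h)
    return Rect(x, y, w, h)
}

fun main() {
    val correction = Rect(x = 120, y = 90, w = 3760, h = 2820)   // valid area after stabilization
    val tracking = Rect(x = 3500, y = 2500, w = 600, h = 600)    // window around the tracked subject
    println(jointCrop(correction, tracking))                     // preview area inside both constraints
}
```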
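Before display, the preview area can be smoothed by at least two filters whose outputs are fused with weights chosen by a decision maker. The sketch below applies two exponential filters of different strengths to the crop-window position and weights them by how far the window has jumped between frames; the specific filters, the jump-based decision rule, and the threshold are all illustrative assumptions, not the filters or decision maker of the application.

```kotlin
import kotlin.math.abs

data class Rect(val x: Int, val y: Int, val w: Int, val h: Int)

// Exponential smoothing of the crop-window position: one responsive filter, one heavily damped.
class ExpFilter(private val alpha: Double) {
    private var state: Pair<Double, Double>? = null
    fun smooth(r: Rect): Rect {
        val (px, py) = state ?: (r.x.toDouble() to r.y.toDouble())
        val nx = alpha * r.x + (1 - alpha) * px
        val ny = alpha * r.y + (1 - alpha) * py
        state = nx to ny
        return Rect(nx.toInt(), ny.toInt(), r.w, r.h)
    }
}

// Hypothetical decision rule: trust the responsive filter when the window jumps a lot
// (the subject really moved) and the damped one when it barely moves (likely hand shake).
fun weights(prev: Rect, current: Rect): Pair<Double, Double> {
    val jump = abs(current.x - prev.x) + abs(current.y - prev.y)
    val wFast = (jump / 80.0).coerceIn(0.0, 1.0)
    return wFast to (1 - wFast)
}

// Weighted fusion of the two smoothed sub-areas into the final preview window.
fun fuse(a: Rect, b: Rect, wa: Double, wb: Double): Rect =
    Rect((wa * a.x + wb * b.x).toInt(), (wa * a.y + wb * b.y).toInt(), a.w, a.h)

fun main() {
    val fast = ExpFilter(alpha = 0.6)
    val slow = ExpFilter(alpha = 0.1)
    var prev = Rect(1000, 800, 1200, 900)
    listOf(Rect(1010, 795, 1200, 900), Rect(1300, 820, 1200, 900)).forEach { win ->
        val (wFast, wSlow) = weights(prev, win)
        val smoothed = fuse(fast.smooth(win), slow.smooth(win), wFast, wSlow)
        println("preview window after fusion: $smoothed")
        prev = win
    }
}
```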
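Finally, the size of the tracking target in the original image can drive a zoom adjustment for the next frame, and on a terminal with at least two cameras the target's position can drive the choice of main camera. The sketch below uses an invented rule of thumb: zoom so that the subject occupies a desired fraction of the frame, and prefer a telephoto camera at high zoom unless the subject sits near the frame edge, where the narrower field of view could lose it. The desired fraction, the zoom clamp, and the thresholds are illustrative only.

```kotlin
import kotlin.math.sqrt

data class Rect(val x: Int, val y: Int, val w: Int, val h: Int)

enum class Camera { WIDE, TELE }

// Pick a zoom ratio so the tracked subject fills roughly a target fraction of the frame.
fun zoomRatioFor(target: Rect, frameW: Int, frameH: Int, desiredFraction: Double = 0.25): Double {
    val fraction = (target.w.toDouble() * target.h) / (frameW.toDouble() * frameH)
    if (fraction <= 0.0) return 1.0
    return sqrt(desiredFraction / fraction).coerceIn(1.0, 10.0)
}

// Choose the camera that captures the next frame: telephoto at high zoom, wide
// camera when the subject is near the frame edge. Thresholds are invented.
fun mainCameraFor(target: Rect, frameW: Int, frameH: Int, zoom: Double): Camera {
    val cx = target.x + target.w / 2
    val cy = target.y + target.h / 2
    val nearEdge = cx < frameW / 5 || cx > frameW * 4 / 5 || cy < frameH / 5 || cy > frameH * 4 / 5
    return if (zoom >= 3.0 && !nearEdge) Camera.TELE else Camera.WIDE
}

fun main() {
    val target = Rect(x = 2600, y = 1200, w = 240, h = 420)
    val zoom = zoomRatioFor(target, frameW = 4000, frameH = 3000)
    println("zoom = %.1fx, main camera = %s".format(zoom, mainCameraFor(target, 4000, 3000, zoom)))
}
```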
An embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be implemented.
An embodiment of the present application further provides a computer program product. When the computer program product runs on a mobile terminal, the mobile terminal is caused to implement the steps in each of the foregoing method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the procedures in the methods of the foregoing embodiments of the present application may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented. The computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the photographing apparatus/mobile terminal, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
An embodiment of the present application further provides a chip system. The chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the steps of the shooting method in any of the embodiments of the present application. The chip system may be a single chip or a chip module composed of multiple chips.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or recorded in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
A person of ordinary skill in the art may be aware that the units and method steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed by hardware or by software depends on the specific application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
The foregoing embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the scope of protection of the present application.

Claims (14)

  1. A shooting method, characterized in that it comprises:
    acquiring an original image captured by a camera of a mobile terminal;
    performing shake correction processing and target tracking processing on the original image, to obtain a processed image and a preview area containing the tracking target in the processed image; and
    displaying the image in the preview area as a preview picture.
  2. The shooting method according to claim 1, wherein the performing shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image comprises:
    performing shake correction processing on the original image to obtain a processed image and a correction area in the processed image;
    finding a tracking target in the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image; and
    jointly cropping the processed image based on the correction area and the tracking area, to obtain the preview area in the processed image.
  3. The shooting method according to claim 1, wherein the performing shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image comprises:
    performing shake correction processing on the original image to obtain a processed image and a correction area in the processed image; and
    finding a tracking target in the correction area, determining a tracking area in the correction area based on the position of the tracking target in the correction area, and using the tracking area as the preview area.
  4. The shooting method according to claim 1, wherein the performing shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image comprises:
    finding a tracking target in the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image; and
    performing shake correction processing on the image in the tracking area to obtain a processed image and a correction area in the processed image, and using the correction area as the preview area.
  5. The shooting method according to any one of claims 1 to 4, wherein the shake correction processing comprises:
    acquiring an image to be corrected, and acquiring shake information of the mobile terminal at the moment when the image to be corrected was captured;
    performing a first cropping process on the image to be corrected to obtain an edge image and an intermediate image; and
    compensating the intermediate image based on the shake information and the edge image to obtain the processed image, wherein the area corresponding to the intermediate image is the correction area.
  6. The shooting method according to any one of claims 1 to 4, wherein the target tracking processing comprises:
    finding the tracking target, based on an attention mechanism, in the image to be tracked and/or in the N frames of images preceding the image to be tracked, where N ≥ 1; and
    performing a second cropping process based on the position of the tracking target in the image to be tracked, so as to determine the tracking area in the image to be tracked.
  7. The shooting method according to any one of claims 1 to 4, wherein before the displaying the image in the preview area as a preview picture, the method further comprises:
    smoothing the preview area in the processed image to obtain a smoothed preview area;
    correspondingly, the displaying the image in the preview area as a preview picture comprises:
    displaying the image in the smoothed preview area as the preview picture.
  8. The shooting method according to claim 7, wherein the smoothing the preview area comprises:
    smoothing the preview area in the processed image with at least two filters respectively, to obtain a smoothed sub-area corresponding to each filter;
    determining the weight of each filter by means of a decision maker; and
    fusing, based on the weight of each filter, the smoothed sub-areas corresponding to the filters, to obtain the smoothed preview area.
  9. The shooting method according to any one of claims 1 to 4, wherein after obtaining the processed image and the preview area containing the tracking target in the processed image, the method further comprises:
    performing zoom processing on the camera based on the size, in the original image, of the tracking target in the preview area, wherein the camera after the zoom processing is used to capture the next frame of original image.
  10. The shooting method according to claim 9, wherein the number of cameras provided on the mobile terminal is at least two; and
    after the zoom processing is performed on the cameras, the method further comprises:
    selecting one of the cameras after the zoom processing as a main camera, wherein the main camera is the camera that captures the next frame of original image.
  11. The shooting method according to claim 10, wherein the selecting one of the cameras after the zoom processing as the main camera comprises:
    selecting one of the cameras after the zoom processing as the main camera according to the position, in the original image, of the tracking target in the preview area.
  12. A photographing apparatus, characterized in that it comprises:
    an image acquisition unit, configured to acquire an original image captured by a camera of a mobile terminal;
    an image processing unit, configured to perform shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing the tracking target in the processed image; and
    an image display unit, configured to display the image in the preview area as a preview picture.
  13. A mobile terminal, characterized in that it comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 11.
  14. A chip system, characterized in that the chip system comprises a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the steps of the method according to any one of claims 1 to 11.
PCT/CN2021/084589 2020-05-15 2021-03-31 Photographic method and apparatus, and mobile terminal and chip system WO2021227693A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010417818.9A CN113676655B (en) 2020-05-15 2020-05-15 Shooting method and device, mobile terminal and chip system
CN202010417818.9 2020-05-15

Publications (1)

Publication Number Publication Date
WO2021227693A1 true WO2021227693A1 (en) 2021-11-18

Family

ID=78526385

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/084589 WO2021227693A1 (en) 2020-05-15 2021-03-31 Photographic method and apparatus, and mobile terminal and chip system

Country Status (2)

Country Link
CN (1) CN113676655B (en)
WO (1) WO2021227693A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286001A (en) * 2021-12-28 2022-04-05 维沃移动通信有限公司 Image processing circuit, device and method, electronic equipment, image processing chip and main control chip


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09102903A (en) * 1995-10-05 1997-04-15 Hitachi Ltd Image pickup device
CN104065876A (en) * 2013-03-22 2014-09-24 卡西欧计算机株式会社 Image processing device and image processing method
CN105959567A (en) * 2016-06-21 2016-09-21 维沃移动通信有限公司 Photographing control method and mobile terminal
CN107786812A (en) * 2017-10-31 2018-03-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN107809590A (en) * 2017-11-08 2018-03-16 青岛海信移动通信技术股份有限公司 A kind of photographic method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114928694A (en) * 2022-04-25 2022-08-19 深圳市慧鲤科技有限公司 Image acquisition method and apparatus, device, and medium
CN116074620A (en) * 2022-05-27 2023-05-05 荣耀终端有限公司 Shooting method and electronic equipment
CN116074620B (en) * 2022-05-27 2023-11-07 荣耀终端有限公司 Shooting method and electronic equipment
CN117177066A (en) * 2022-05-30 2023-12-05 荣耀终端有限公司 Shooting method and related equipment

Also Published As

Publication number Publication date
CN113676655A (en) 2021-11-19
CN113676655B (en) 2022-12-27

Similar Documents

Publication Publication Date Title
WO2021227693A1 (en) Photographic method and apparatus, and mobile terminal and chip system
US11831977B2 (en) Photographing and processing method and electronic device
CN111083380B (en) Video processing method, electronic equipment and storage medium
CN108377342B (en) Double-camera shooting method and device, storage medium and terminal
CN106937039B (en) Imaging method based on double cameras, mobile terminal and storage medium
CN114205522B (en) Method for long-focus shooting and electronic equipment
WO2019104705A1 (en) Image processing method and device
US20170272659A1 (en) Apparatus and method for positioning image area using image sensor location
KR20140104806A (en) Method for synthesizing valid images in mobile terminal having multi camera and the mobile terminal therefor
WO2021013147A1 (en) Video processing method, device, terminal, and storage medium
CN111669507A (en) Photographing method and device and electronic equipment
CN111064895B (en) Virtual shooting method and electronic equipment
JP7371264B2 (en) Image processing method, electronic equipment and computer readable storage medium
CN110196673B (en) Picture interaction method, device, terminal and storage medium
CN113747085A (en) Method and device for shooting video
CN111083371A (en) Shooting method and electronic equipment
CN113542600B (en) Image generation method, device, chip, terminal and storage medium
CN113747199A (en) Video editing method, video editing apparatus, electronic device, storage medium, and program product
US10911677B1 (en) Multi-camera video stabilization techniques
WO2022156672A1 (en) Photographing method and apparatus, electronic device and readable storage medium
WO2022166371A1 (en) Multi-scene video recording method and apparatus, and electronic device
WO2024051556A1 (en) Wallpaper display method, electronic device and storage medium
CN111275607B (en) Interface display method and device, computer equipment and storage medium
CN110992268A (en) Background setting method, device, terminal and storage medium
CN115134527A (en) Processing method, intelligent terminal and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21804253

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21804253

Country of ref document: EP

Kind code of ref document: A1