CN113676655A - Shooting method and device, mobile terminal and chip system - Google Patents


Info

Publication number
CN113676655A
CN113676655A
Authority
CN
China
Prior art keywords
image
area
tracking
preview
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010417818.9A
Other languages
Chinese (zh)
Other versions
CN113676655B (en)
Inventor
刘宏马
张雅琪
张超
陈艳花
吴文海
贾志平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010417818.9A
Priority to PCT/CN2021/084589 (WO2021227693A1)
Publication of CN113676655A
Application granted
Publication of CN113676655B
Active legal status
Anticipated expiration

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 — Control of cameras or camera modules
    • H04N23/68 — Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681 — Motion detection
    • H04N23/682 — Vibration or motion blur correction
    • H04N23/683 — Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application belongs to the technical field of video shooting and provides a shooting method, a shooting apparatus, a mobile terminal, and a chip system. The shooting method includes the following steps: acquiring an original image captured by a camera of the mobile terminal; performing shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area, within the processed image, containing the tracking target; and displaying the image in the preview area as a preview picture. Applying shake correction to the low-magnification original image captured by the camera avoids picture shake, and applying target tracking avoids shake of the tracking target within the picture. The resulting preview area is a region of the processed image, corresponding to the low-magnification original image, obtained after both the shake correction processing and the target tracking processing; displaying the image in that region as the preview picture presents a stable high-magnification shooting effect.

Description

Shooting method and device, mobile terminal and chip system
Technical Field
The present application relates to the field of video shooting, and in particular, to a shooting method and apparatus, a mobile terminal, and a chip system.
Background
With the development of camera technology, more and more mobile terminals integrate cameras, allowing users to take pictures anytime and anywhere with a device they already carry.
When a user photographs a subject with the camera of a mobile terminal, picture shake often occurs. The problem is especially severe in high-magnification scenes, where it may even become difficult to capture the subject of interest at all.
Disclosure of Invention
The embodiments of the present application provide a shooting method and apparatus, a mobile terminal, and a chip system, to solve the problem that a subject of interest cannot be captured because the picture shakes in a high-magnification scene.
To this end, the technical solution is as follows:
in a first aspect, an embodiment of the present application provides a shooting method, including:
acquiring an original image acquired by a camera of a mobile terminal;
carrying out shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing a tracking target in the processed image;
and displaying the image in the preview area as a preview picture.
In a possible implementation manner of the first aspect, the performing a shake correction process and a target tracking process on the original image to obtain a processed image and a preview area containing a tracking target in the processed image includes:
carrying out shake correction processing on the original image to obtain a processed image and a correction area in the processed image;
finding out a tracking target from the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image;
and carrying out joint cropping on the processed image based on the correction area and the tracking area to obtain a preview area in the processed image.
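The joint cropping above can be sketched as an axis-aligned rectangle intersection: the preview area is the part of the tracking area that also lies inside the correction area. This is only an illustrative sketch; representing areas as (x, y, width, height) tuples and all function names are assumptions, not the patent's implementation.

```python
def intersect(rect_a, rect_b):
    """Return the overlap of two (x, y, w, h) rectangles, or None if disjoint."""
    ax, ay, aw, ah = rect_a
    bx, by, bw, bh = rect_b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)

def joint_crop(correction_area, tracking_area):
    """Preview area = part of the tracking area lying inside the correction area."""
    return intersect(correction_area, tracking_area)
```

In practice the result would still have to be expanded or re-centred to the preview aspect ratio; that step is omitted here.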
In a possible implementation manner of the first aspect, the performing a shake correction process and a target tracking process on the original image to obtain a processed image and a preview area containing a tracking target in the processed image includes:
carrying out shake correction processing on the original image to obtain a processed image and a correction area in the processed image;
finding out a tracking target from the correction area, determining a tracking area in the correction area based on the position of the tracking target in the correction area, and taking the tracking area as a preview area.
In a possible implementation manner of the first aspect, the performing a shake correction process and a target tracking process on the original image to obtain a processed image and a preview area containing a tracking target in the processed image includes:
finding out a tracking target from the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image;
and carrying out shake correction processing on the image in the tracking area to obtain a processed image and a correction area in the processed image, and taking the correction area as a preview area.
In one possible implementation manner of the first aspect, the shake correction processing includes:
acquiring an image to be corrected, and acquiring shake information of the mobile terminal at the moment the image to be corrected was captured;
performing first cutting processing on the image to be corrected to obtain an edge image and a middle image;
and compensating the intermediate image based on the jitter information and the edge image to obtain a processed image, wherein the area corresponding to the intermediate image is a correction area.
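A minimal sketch of this crop-and-compensate idea: assuming the shake information reduces to a pixel offset (dx, dy) and the edge image provides a fixed margin of pixels, the middle-image window is shifted against the measured shake and clamped so it never leaves the frame. The reduction of the shake information to a 2-D offset, the fixed margin, and every name below are assumptions.

```python
def correction_window(img_w, img_h, margin, shake_dx, shake_dy):
    """Place the middle-image window opposite the measured shake, clamped so it
    stays inside the frame; the strip outside the window is the edge image."""
    max_shift = margin  # edge pixels available for compensation
    dx = max(-max_shift, min(max_shift, -shake_dx))
    dy = max(-max_shift, min(max_shift, -shake_dy))
    x = margin + dx
    y = margin + dy
    return (x, y, img_w - 2 * margin, img_h - 2 * margin)
```

The returned (x, y, w, h) window plays the role of the correction area: its contents stay aligned across frames because the window moves against the shake.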
In a possible implementation manner of the first aspect, the target tracking processing includes:
finding a tracking target, based on an attention mechanism, from the image to be tracked and/or the N frames of images preceding the image to be tracked, where N ≥ 1;
and performing second cropping processing on the basis of the position of the tracking target in the image to be tracked so as to determine a tracking area in the image to be tracked.
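The second cropping can be sketched as centring a fixed-size tracking area on the tracking target and clamping it to the image bounds — an illustrative sketch under assumed (x, y, w, h) conventions, not the patent's actual cropping rule.

```python
def tracking_area(target_cx, target_cy, crop_w, crop_h, img_w, img_h):
    """Centre a crop_w x crop_h tracking area on the target centre,
    clamped so the area stays inside the image."""
    x = min(max(target_cx - crop_w // 2, 0), img_w - crop_w)
    y = min(max(target_cy - crop_h // 2, 0), img_h - crop_h)
    return (x, y, crop_w, crop_h)
```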
In one possible implementation manner of the first aspect, before displaying the image in the preview area as a preview screen, the method further includes:
performing smoothing processing on the preview area in the processed image to obtain the smoothed preview area;
correspondingly, the displaying the image in the preview area as a preview screen includes:
and displaying the image in the preview area after the smoothing processing as a preview picture.
In a possible implementation manner of the first aspect, the smoothing of the preview area includes:
respectively smoothing the preview area in the processed image through at least two filters to obtain a smoothing subarea corresponding to each filter;
determining, by a decision maker, a weight for each filter;
and based on the weight of each filter, performing fusion processing on the smoothing subareas corresponding to each filter to obtain a preview area after smoothing processing.
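The patent does not fix the filter types, so the following sketch illustrates the fusion step with two simple 1-D smoothers applied to a sequence of preview-area centre coordinates — a moving average and an exponential filter — combined with fixed weights standing in for the decision maker's output. All of these choices are assumptions.

```python
def moving_average(xs, k=3):
    """Trailing moving average over up to k samples."""
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - k + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def exponential(xs, alpha=0.5):
    """First-order exponential smoothing."""
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def fuse(xs, w_ma=0.6, w_exp=0.4):
    """Weighted fusion of the two smoothed sequences (weights sum to 1)."""
    ma, ex = moving_average(xs), exponential(xs)
    return [w_ma * a + w_exp * b for a, b in zip(ma, ex)]
```

In the patent's scheme the weights would come from the decision maker per frame rather than being constants.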
In a possible implementation manner of the first aspect, after obtaining the processed image and the preview area containing the tracking target in the processed image, the method further includes:
and zooming the camera based on the size of the tracking target in the preview area in the original image, wherein the zoomed camera is used for collecting the next frame of original image.
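A hedged sketch of size-based zooming: pick the next zoom ratio so the tracking target occupies a desired fraction of the frame. The desired fraction, the clamping bounds, and the square-root conversion from area fraction to linear scale are illustrative assumptions, not values from the patent.

```python
def next_zoom(current_zoom, target_frac, desired_frac=0.3,
              min_zoom=1.0, max_zoom=10.0):
    """Scale the zoom ratio so the target's area fraction approaches desired_frac."""
    if target_frac <= 0:
        return current_zoom  # target lost: keep the current zoom
    z = current_zoom * (desired_frac / target_frac) ** 0.5  # area -> linear scale
    return max(min_zoom, min(max_zoom, z))
```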
In a possible implementation manner of the first aspect, the number of the cameras arranged on the mobile terminal is at least two;
after zooming the camera, the method further comprises:
and selecting one of the cameras after zooming as a main camera, wherein the main camera is a camera for collecting the next frame of original image.
In a possible implementation manner of the first aspect, the selecting one of the cameras after the zoom processing as a main camera includes:
and selecting one of the cameras after the zooming processing as a main camera according to the position of the tracking target in the preview area in the original image.
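Position-based camera selection can be sketched as an edge test: if the tracked target sits near the frame border, a wider camera keeps it in view; otherwise the telephoto camera is kept for the next frame. The camera labels and the edge threshold are hypothetical, not taken from the patent.

```python
def pick_main_camera(cx, cy, img_w, img_h, edge_frac=0.15):
    """Return 'wide' when the target centre is within edge_frac of any border,
    otherwise keep the 'tele' camera for capturing the next frame."""
    mx, my = img_w * edge_frac, img_h * edge_frac
    near_edge = cx < mx or cx > img_w - mx or cy < my or cy > img_h - my
    return "wide" if near_edge else "tele"
```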
In a second aspect, an embodiment of the present application provides a shooting device, including:
the image acquisition unit is used for acquiring an original image acquired by a camera of the mobile terminal;
the image processing unit is used for carrying out shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing a tracking target in the processed image;
and an image display unit configured to display the image in the preview area as a preview screen.
In a third aspect, a mobile terminal is provided, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of the first aspect of the present application when executing the computer program.
In a fourth aspect, a chip system is provided, including: a processor coupled to a memory, the processor executing a computer program stored in the memory to perform the steps of the method according to any one of the first aspect of the present application.
In a fifth aspect, there is provided a computer readable storage medium storing a computer program which, when executed by one or more processors, performs the steps of the method of any one of the first aspects of the present application.
In a sixth aspect, the present application provides a computer program product, which when run on a mobile terminal, causes the mobile terminal to execute the method of any one of the first aspect.
According to the shooting method, the camera captures an original image at low magnification. The shake correction processing prevents the picture from shaking, and the target tracking processing prevents the tracking target from shaking within the picture. After both processes, a processed image is obtained together with a region of that image whose size is smaller than the original image; finally, the image in that region is displayed as the preview picture, so a stable high-magnification shooting effect can be presented. In other words, in the embodiments of the present application, the shake correction processing avoids picture shake through crop-based compensation, the target tracking processing avoids shake of the tracking target through cropping, and the cropping itself simultaneously realizes high-magnification scene shooting.
It is understood that the beneficial effects of the second to sixth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
Fig. 1 is a schematic view of an application scenario of a shooting method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a shooting method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a correction area after a shake correction process and a tracking area after a target tracking process according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another shooting method provided in the embodiment of the present application;
FIG. 5 is a schematic diagram of a correction area, a tracking area and a preview area corresponding to the shooting method provided by the embodiment shown in FIG. 4;
fig. 6 is a schematic flowchart of another shooting method provided in the embodiment of the present application;
fig. 7 is a schematic diagram of a correction area, a tracking area and a preview area corresponding to the shooting method provided by the embodiment shown in fig. 6;
fig. 8 is a schematic flowchart of another shooting method provided in the embodiment of the present application;
fig. 9 is a schematic diagram of a correction area, a tracking area, and a preview area corresponding to a shooting method provided in an embodiment of the present application;
fig. 10 is a schematic diagram of a smoothing process in the shooting method according to the embodiment of the present application;
fig. 11 is a schematic flowchart of another shooting method provided in the embodiment of the present application;
fig. 12 is a schematic block diagram of a shooting device according to an embodiment of the present application;
fig. 13 is a schematic block diagram of a mobile terminal according to an embodiment of the present application;
fig. 14 is a schematic diagram of a software structure of a mobile terminal according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes an association relationship between the associated objects and indicates that three relationships may exist. For example, "A and/or B" may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on context, as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic view of an application scenario of the shooting method provided by the embodiments of the present application. As shown in the figure, a user (person 1) holds a mobile phone to shoot a subject (person 2). Person 2 may be walking, running, and so on, and person 1 may likewise need to walk or run to keep person 2 in frame. While the user walks or runs, the mobile phone shakes with the movement; in a high-magnification shooting scene this shake is amplified, causing person 2 to appear blurred on the phone's screen. Moreover, when person 2 runs or jumps, the user may have to run to keep up, and as person 2 jumps or person 1 runs while shooting, the shaking of person 2 within the picture is also amplified at high magnification: person 2 may appear on the left side of the picture one moment, as in Fig. 1(a), and on the right side the next, or only part of person 2 may remain visible, as in Fig. 1(b). To solve these problems in high-magnification scenes, a shooting method is provided; see the following description.
Fig. 2 shows a schematic flowchart of a shooting method provided in an embodiment of the present application, which may be applied to a mobile terminal by way of example and not limitation.
Step S201, an original image collected by a camera of the mobile terminal is obtained.
In this embodiment, the mobile terminal may be any mobile device with a camera function, such as a mobile phone, a camera, a video camera, a tablet computer, a surveillance camera, or a notebook computer; shooting may be performed through a camera built into the mobile terminal or an externally connected camera. The original image is the image before the subsequent shake correction processing and target tracking processing; it may be the original full-size image captured by the camera, or an image obtained by preprocessing that full-size image. The original full-size image is the image generated from the information collected when all pixels of the camera's sensor are active.
Step S202, carrying out shake correction processing and target tracking processing on the original image, and obtaining a processed image and a preview area containing a tracking target in the processed image.
In this embodiment of the present application, during tracking shooting the user moves along with the subject being followed, so the handheld mobile terminal inevitably shakes. Moreover, even when the user holds the mobile terminal with practiced, stable postures, the "hand shake" phenomenon is difficult to avoid entirely.
When shooting in a low-magnification scene, the field of view during capture is large, so the shake of the mobile terminal has little influence on the captured image. In a high-magnification scene, however, the field of view is small, and the shake can noticeably affect the captured image, for example causing picture shake in a video. Therefore, shake correction processing is needed in high-magnification scenes.
At present, shake correction methods include optical anti-shake, electronic anti-shake, in-body sensor anti-shake, and the like.
Optical anti-shake corrects "optical axis shift" with a floating lens element. A gyroscope in the lens detects tiny movements and passes the signal to a microprocessor, which immediately computes the displacement to be compensated; a compensating lens group then compensates according to the direction and displacement of the lens shake, effectively overcoming image blur caused by camera vibration.
Sensor anti-shake works on nearly the same principle as lens-based anti-shake: the sensor is mounted on a freely floating support, a gyroscope senses the direction and amplitude of the camera's shake, and the sensor is driven to make the corresponding compensating displacement. So many correction directions are involved mainly to counteract the irregular hand shake that occurs while photographing.
Both optical anti-shake and sensor anti-shake require hardware. Electronic anti-shake needs no hardware assistance: the image from the sensor is analyzed and processed in software. When the image is blurred, the edge image is used to compensate the blurred middle portion, achieving the anti-shake effect; the principle is more like post-processing the picture.
After electronic anti-shake is turned on, the viewfinder frame is visibly cropped. The cropped-away part does not correspond to edge sensor pixels that stop working; rather, its data is used by the electronic anti-shake system for shake compensation. Equivalently, the image is cut into two parts, with the outermost (edge) part used to compensate the inner (blurred middle) part. This still brings a certain anti-shake effect without hardware anti-shake.
The embodiments of the present application can adopt electronic anti-shake. After shake correction processing with electronic anti-shake, the compensated middle region (the middle image) of the obtained processed image becomes clearer; this region becomes the correction area. At the same time, a correction detection frame is generated, and its coordinates can be obtained. See Fig. 3, a schematic diagram of the correction area after shake correction processing and the tracking area after target tracking processing. As shown in Fig. 3(a), suppose A is an original image. Shake correction processing yields a processed image A' of the same size as A, except that the middle image of A' is clearer than that of A; B is the correction area. After the shake correction processing, the coordinates of the four corners of the correction area can be obtained, or equivalently the coordinates of its center point together with the distances from the center point to its sides.
Fig. 3 does not show the change in image sharpness after the shake correction processing; it only shows the positional relationship between the correction area and the processed image (original image). Although this embodiment focuses on the correction area, the correction area is produced in the course of the shake correction processing itself: the processing first compensates the blurred middle portion of the original image, and the processed image is obtained after this compensation.
During tracking shooting, the user moves along with the followed subject (i.e., the tracking target). When the target's movement and the user's steps fall out of sync, or the shooting angle or direction changes, the position of the tracking target in the captured picture shakes frequently; in severe cases the tracking target even jumps out of the picture.
For example, when a user films a running dog with a mobile phone, the user must run after the dog. The dog's route is random and unpredictable, so during shooting the dog's position in the picture shakes constantly: within a few seconds it may appear in the middle of the picture, then on the left, then in the lower-right corner. The viewing experience on playback is therefore very poor.
In view of the above, the embodiments of the present application further add target tracking processing, for which a tracking target must be determined. For example, during follow shooting the user may select an area on the touch screen of the mobile terminal (for example, by tapping, which selects a preset range around the tapped position, or by circling an area), and the target in that area is taken as the target to be tracked. Alternatively, a foreground image in the picture can be detected and used as the tracking target, or an attention algorithm can be adopted to find a salient object in the current image as the tracking target.
After the tracking target is determined, an area for it can be obtained, for example the coordinates of the tracking detection frame produced by the target tracking processing. As shown in Fig. 3(b), A is the original image and C is the tracking area; after target tracking processing, the coordinates of the four corners of the tracking area can be obtained, or the coordinates of its center point together with the distances from the center point to its sides.
In the embodiments of the present application, the shake correction processing can follow a consistent rule across frames, for example the principle that the scene in the obtained correction detection frame changes minimally. The target tracking processing can likewise follow a consistent rule, for example the principle that the position and size of the tracking target in the obtained tracking detection frame match across frames as closely as possible. This prevents large picture shake and large tracking-target position shake in the preview picture sequence of the preview areas corresponding to the original image sequence.
To make the scheme easier to understand, consider the principle of minimum scene change in the obtained correction detection frame. After the i-th frame image undergoes shake correction processing, the picture in the obtained correction detection frame is B_i; the shake correction processing of the (i+1)-th frame image ensures that the difference between the background of the obtained correction-detection-frame picture B_(i+1) and that of B_i is minimal. That is, the background changes little between consecutive image frames (the pictures in the correction detection frames corresponding to consecutive original images), which yields a smooth viewing effect.
Likewise, consider the principle that the position and size of the tracking target in the obtained tracking detection frame match maximally. After the i-th frame image undergoes target tracking processing, the picture in the obtained tracking detection frame is C_i; the target tracking processing of the (i+1)-th frame image ensures that the difference in position and size of the tracking target between the obtained tracking-detection-frame picture C_(i+1) and C_i is minimal. That is, the position and size of the tracking target change minimally between consecutive image frames (the pictures in the tracking detection frames corresponding to consecutive original images), realizing a smooth viewing effect.
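Both consistency principles amount to penalising frame-to-frame change. A toy sketch, assuming detection boxes as (x, y, w, h) tuples: candidate boxes for the current frame are scored by their centre displacement from the previous frame's box, and the least-moving candidate is kept. The candidate-set formulation and all names are illustrative assumptions.

```python
def displacement(box_a, box_b):
    """Centre-to-centre distance between two (x, y, w, h) boxes."""
    ax, ay = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx, by = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def pick_next_box(prev_box, candidates):
    """Choose the candidate detection box that moves least relative to prev_box."""
    return min(candidates, key=lambda b: displacement(prev_box, b))
```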
For convenience of description, a correction detection frame obtained after the shake correction processing is performed may be referred to as a correction area, a tracking detection frame obtained after the target tracking processing is performed may be referred to as a tracking area, and an area obtained after the shake correction processing and the target tracking processing are performed on the acquired original image may be referred to as a preview area.
In step S203, the image in the preview area is displayed as a preview screen.
In the embodiments of the present application, the image in the preview area may be input to a display and shown as the preview picture. During video shooting, the preview picture is an image frame of the displayed video stream.
According to the method and the device, an original image is collected through the camera; the original image is a low-magnification image. The shake correction processing then prevents the image from shaking, and the target tracking processing prevents the tracking target from shaking within the image. After the shake correction processing and the target tracking processing, a certain area in the processed image corresponding to the original image is obtained, and finally the processed image within that area is displayed as the preview picture, so that a stable high-magnification effect can be presented. In other words, the shake correction processing and the target tracking processing in the embodiment of the present application avoid image shake and shake of the tracking target in the image by means of cropping, and define a certain region in the original image by means of cropping; the processed image corresponding to that region is displayed as a preview picture, so that high-magnification scene shooting is achieved at the same time.
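As a minimal sketch of the crop-based idea described above — the region names follow the text, but the rectangle arithmetic and the intersection rule used here are illustrative assumptions, not the patent's actual fusion method:

```python
# Hedged sketch: represent each region as (x, y, w, h) coordinates and
# display only the sub-image of the processed frame covered by the
# preview region. The intersection rule is an assumption for illustration.

def crop(image, region):
    """Return the sub-image of `image` covered by `region` (x, y, w, h)."""
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

def preview_frame(raw, correction_region, tracking_region):
    """Intersect the two crop regions and return the preview picture."""
    cx, cy, cw, ch = correction_region
    tx, ty, tw, th = tracking_region
    x1, y1 = max(cx, tx), max(cy, ty)
    x2, y2 = min(cx + cw, tx + tw), min(cy + ch, ty + th)
    preview_region = (x1, y1, max(0, x2 - x1), max(0, y2 - y1))
    return crop(raw, preview_region), preview_region

# 8x8 synthetic "image" of pixel values
raw = [[10 * r + c for c in range(8)] for r in range(8)]
pic, region = preview_frame(raw, (1, 1, 6, 6), (2, 2, 5, 5))
```

The preview region here is never materialized until display time, matching the text's point that the regions are coordinate representations rather than eager crops.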
Fig. 4 shows a schematic flowchart of a shooting method provided in an embodiment of the present application, which may be applied to a mobile terminal by way of example and not limitation.
Step S401, acquiring an original image acquired by a camera of the mobile terminal.
In the embodiment of the present application, the contents of step S401 and step S201 are the same, and the description of step S201 may be specifically referred to, and is not repeated herein.
Step S402, performing a shake correction process on the original image to obtain a processed image and a correction area in the processed image.
In the embodiment of the present application, the shake correction processing may be performed by using an Electronic Image Stabilization (EIS) algorithm, which avoids blur by means of frame-cropping compensation.
Referring to fig. 5, fig. 5 is a schematic flow chart of the shake correction processing and the target tracking processing provided in the embodiment of the present application. As shown in (a) of fig. 5, after the original image a is subjected to the shake correction processing, a processed image a' is obtained; the processed image a' is the same size as the original image a, and the correction area B is the area corresponding to the intermediate image cropped out during the correction processing.
Step S403, finding out a tracking target from the original image, and determining a tracking area in the original image based on a position of the tracking target in the original image.
In the embodiment of the present application, when performing the target tracking processing, the target tracking processing may be performed based on the original image, for example, the tracking target is found from the original image, and the tracking area may be determined in the original image according to the position of the tracking target in the original image.
The tracking area in the original image may also be determined according to the position and size of the tracking target in the original image together with a composition model corresponding to the tracking target.
As shown in fig. 5 (b), the original image a is subjected to target tracking processing, and a tracking area C is obtained.
Step S404, carrying out joint cropping on the processed image based on the correction area and the tracking area to obtain a preview area in the processed image.
In the embodiment of the present application, the joint cropping processing does not actually crop the processed image; rather, it finally determines an area, denoted as the preview area, and the picture corresponding to the preview area in the processed image may be used as the preview picture. The correction area, the tracking area, and the preview area may all be represented as coordinates. The coordinates corresponding to the correction area can be mapped into the original image or the processed image; similarly, the coordinates corresponding to the tracking area may be mapped into the original image or the processed image, and the coordinates corresponding to the preview area may be mapped into the original image or the processed image. The picture to which the preview area is mapped in the processed image is the picture within the range represented by the coordinates corresponding to the preview area in the processed image.
When performing the joint cropping, it is necessary to fuse the center point (optionally together with the side lengths) of the correction area corresponding to the processed image (i.e., the processed image corresponding to the current frame original image) with the center point (optionally together with the side lengths) of the tracking area, so as to obtain the preview area in the processed image. It can also be understood that the correction area and the tracking area are both frames with coordinates; after the two frames are fused, the coordinates of the fused frame are mapped into the processed image, which yields the preview area. It also follows from the above description that the shake correction processing, in addition to processing the sharpness of the image itself, actually obtains the correction area as coordinates relative to the original image or relative to the processed image (which is the same size as the original image).
In order that the preview area corresponding to the current frame original image and the preview area corresponding to the previous frame original image present a smoother visual effect, reference may be made to the preview picture corresponding to the previous frame original image, for example to its center point, or to the center point of the correction area and the center point of the tracking area corresponding to the previous frame original image.
For example, when the center point of the correction area corresponding to the current frame original image, the center point of the correction area corresponding to the previous frame original image, the center point of the tracking area corresponding to the current frame original image, and the center point of the tracking area corresponding to the previous frame original image are taken as parameters, the preview area can be obtained by the following formula:

F = α · f(P_corr(i-1), P_corr(i)) + β · g(P_trk(i-1), P_trk(i))

where F is the preview area; α and β are constants; f is a function of the variables P_corr(i-1) and P_corr(i), which are the center points of the correction areas corresponding to the previous frame original image and the current frame original image, respectively; and g is a function of the variables P_trk(i-1) and P_trk(i), which are the center points of the tracking areas corresponding to the previous frame original image and the current frame original image, respectively.
It can also be seen from the above formula that, during the joint cropping, the preview area is related to the positions of the correction area and the tracking area corresponding to the current frame original image, and at the same time to the positions of the correction area and the tracking area corresponding to the previous frame original image (or to the preview area corresponding to the previous frame original image).
Relating the preview area to the positions of the correction area and the tracking area corresponding to the previous frame original image enables the obtained preview area of the current frame to present a more stable preview effect relative to the preview area corresponding to the previous frame original image.
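A hedged sketch of how such a fusion could be realized, assuming f and g are simple blends of the previous- and current-frame center points and that the constants weight the two blended results; the concrete functions, the blend factor, and the constants are all illustrative assumptions, not the patent's actual choices:

```python
# Assumed forms: f and g blend previous and current center points;
# alpha and beta then combine the correction-side and tracking-side results.

def smooth(prev_center, cur_center, k=0.5):
    """One possible choice of f/g: blend previous and current centers."""
    return tuple(k * p + (1 - k) * c for p, c in zip(prev_center, cur_center))

def preview_center(prev_corr, cur_corr, prev_trk, cur_trk,
                   alpha=0.6, beta=0.4):
    """F = alpha * f(correction centers) + beta * g(tracking centers)."""
    fc = smooth(prev_corr, cur_corr)
    gc = smooth(prev_trk, cur_trk)
    return tuple(alpha * a + beta * b for a, b in zip(fc, gc))

c = preview_center((100, 100), (104, 100), (200, 150), (204, 150))
```

With these assumed weights, the preview center moves only a fraction of the way toward the current frame's measurements, which is what gives the frame-to-frame stability the text describes.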
As shown in fig. 5 (c), the preview area D is obtained by jointly cropping based on the correction area B and the tracking area C; fig. 5 (d) compares the positions of the correction area B and the tracking area C.
In step S405, the image in the preview area is displayed as a preview screen.
In the embodiment of the present application, the contents of step S405 and step S203 are the same, and reference may be made to the description of step S203, which is not described herein again.
According to the method and the device, the processed image and the correction area are obtained by performing shake correction processing on the original image, the tracking area is obtained by performing target tracking processing on the original image, and finally the processed image is jointly cropped based on the correction area and the tracking area, so that a stable and smooth video picture can be obtained in a high-magnification scene.
Fig. 6 shows a schematic flowchart of a shooting method provided in an embodiment of the present application, which may be applied to the mobile terminal described above by way of example and not limitation.
Step S601, acquiring an original image acquired by a camera of the mobile terminal.
In the embodiment of the present application, the contents of step S601 and step S201 are the same, and specific reference may be made to the description of step S201, which is not described herein again.
Step S602, performing a shake correction process on the original image to obtain a processed image and a correction area in the processed image.
In the embodiment of the present application, the contents of step S602 are the same as those of step S402, and reference may be specifically made to the description of step S402, which is not described herein again.
Step S603, finding out a tracking target from the correction area, determining a tracking area in the correction area based on the position of the tracking target in the correction area, and taking the tracking area as a preview area.
In the embodiment of the present application, the target tracking processing performed in this step differs from step S403 in the embodiment shown in fig. 4 as follows: in the embodiment shown in fig. 4, the tracking area is determined from the original image, whereas in the embodiment shown in fig. 6 the tracking area is determined from the image within the correction area obtained in step S602, for example from the picture to which the correction area is mapped in the processed image, or from the picture to which the correction area is mapped in the original image. Of course, when determining the tracking area, the position of the tracking target in the image within the correction area, a composition model, and the like still need to be taken into consideration.
Referring to fig. 7, fig. 7 is a schematic diagram of the correction area, the tracking area, and the preview area corresponding to the shooting method provided in the embodiment of the present application. As shown in fig. 7 (a), the original image a is subjected to the shake correction processing to obtain a processed image a' and a correction area B; then the image in the correction area B (the picture to which the correction area is mapped in the processed image) is subjected to the target tracking processing to obtain a tracking area C, and this tracking area is the preview area. Since the target tracking processing requires a cropping process, the tracking area C is located within the correction area B. Of course, when determining the tracking area, the target tracking processing may also be performed on the picture to which the correction area B is mapped in the original image, without limitation.
In step S604, the image in the preview area is displayed as a preview screen.
In the embodiment of the present application, the contents of step S604 and step S203 are the same, and reference may be made to the description of step S203, which is not described herein again.
According to the embodiment of the application, the original large image is subjected to the shake correction processing to obtain the processed image and the correction area, then the image in the correction area in the processed image is subjected to the target tracking processing to obtain the tracking area, the obtained tracking area is the preview area, and a stable and smooth video picture can be obtained under a high-magnification scene.
Fig. 8 shows a schematic flowchart of a shooting method provided in an embodiment of the present application, which may be applied to the mobile terminal described above by way of example and not limitation.
Step S801, acquiring an original image acquired by a camera of the mobile terminal.
In the embodiment of the present application, the contents of step S801 and step S201 are the same, and the description of step S201 may be specifically referred to, and will not be repeated herein.
Step S802, finding out a tracking target from the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image.
In the embodiment of the present application, the contents of step S802 and step S403 are the same, and reference may be made to the description of step S403, which is not repeated herein.
Step S803, a shake correction process is performed on the image of the tracking area, a correction area is determined in the tracking area, and the correction area is taken as a preview area.
In the embodiment of the present application, the difference from the embodiments shown in fig. 4 and 6 is that: in the embodiment of the present application, target tracking processing is performed on an original image to obtain a tracking area, and then shake correction processing is performed on an image in the tracking area to obtain a processed image and a correction area.
Referring to fig. 9, fig. 9 is a schematic diagram of the correction area, the tracking area, and the preview area corresponding to the shooting method provided in the embodiment of the present application. As shown in fig. 9 (a), the original image a is subjected to the target tracking processing to obtain the tracking area C, and then the image in the tracking area C is subjected to the shake correction processing to obtain the processed image C' and the correction area B, which is the preview area. Since the shake correction processing requires a cropping process, the correction area is located within the tracking area.
Step S804 is to display the image in the preview area as a preview screen.
In the embodiment of the present application, the contents of step S804 and step S203 are the same, and reference may be made to the description of step S203, which is not described herein again.
According to the embodiment of the application, target tracking processing is carried out on an original large image to obtain a tracking area, then image in the tracking area is subjected to shake correction processing to obtain a processed image and a correction area in the processed image, the obtained correction area is a preview area, and a stable and smooth video picture can be obtained under a high-magnification scene.
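The two orderings (fig. 6: correct then track; fig. 8: track then correct) both produce one crop region nested inside the other. A hedged sketch, assuming the tracking crop centers on the target (clamped to the outer region) and the EIS crop is centered with a fixed ratio — the ratios, clamping rule, and all function names are illustrative assumptions:

```python
# Sketch of the fig. 8 ordering: a target-centered tracking crop of the
# full frame, followed by a centered shake-correction crop inside it.

def centered_crop(region, ratio):
    """Centered crop keeping `ratio` of each side (EIS-style crop)."""
    x, y, w, h = region
    nw, nh = int(w * ratio), int(h * ratio)
    return (x + (w - nw) // 2, y + (h - nh) // 2, nw, nh)

def tracking_crop(region, target_cx, target_cy, ratio):
    """Crop around the tracked target's center, clamped inside `region`."""
    x, y, w, h = region
    nw, nh = int(w * ratio), int(h * ratio)
    nx = min(max(target_cx - nw // 2, x), x + w - nw)
    ny = min(max(target_cy - nh // 2, y), y + h - nh)
    return (nx, ny, nw, nh)

def contains(outer, inner):
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

full = (0, 0, 1000, 800)
trk = tracking_crop(full, 700, 300, 0.8)   # tracking area C
corr = centered_crop(trk, 0.9)             # correction area B (= preview)
```

Because each step crops within its input, the final region is guaranteed to lie inside the earlier one, matching the text's observation that the correction area lies within the tracking area in this ordering.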
As another embodiment of the present application, the process of the shake correction process includes:
acquiring an image to be corrected, and acquiring jitter information of the mobile terminal at the image acquisition moment to be corrected;
performing first cutting processing on the image to be corrected to obtain an edge image and a middle image;
and compensating the intermediate image based on the jitter information and the edge image to obtain a processed image, wherein the area corresponding to the intermediate image is a correction area.
In the embodiment of the present application, the image to be subjected to the correction processing is an image to be subjected to the shake correction processing, for example, in the embodiments shown in fig. 4 and 6, the shake correction processing is performed on the original image, that is, the original image is the image to be subjected to the correction processing. In the embodiment shown in fig. 8, the image of the tracking area is subjected to the shake correction processing, i.e., the image in the tracking area is the image to be subjected to the correction processing.
In the process of performing the shake correction processing, the shake information of the mobile terminal at the acquisition time of the image to be corrected needs to be acquired (if the image to be corrected is an image in the tracking area, this is the acquisition time of the original image corresponding to the image in the tracking area). The shake information of the mobile terminal can be acquired by a gyroscope arranged inside the mobile terminal: shake information corresponding to each moment is generated, and the shake information corresponding to the acquisition time of the image to be corrected is obtained from it.
Based on the shake information, the image to be corrected can be reversely compensated. When the reverse compensation is performed, the first cropping processing is required first, that is, an edge image and an intermediate blurred image are cropped out, and a compensation amount is calculated from the shake information; the intermediate blurred image is then reversely compensated using the edge image, so that after the shake correction processing the intermediate blurred image becomes clearer. Since the intermediate blurred image is cropped from the original image, an area is obtained after the first cropping processing is performed; the image in that area is the compensated image, and the area can be recorded as the correction area. For the shake correction processing, the ratio (for example, area ratio, length ratio, width ratio, or the like) of the finally determined correction area to the image to be corrected may be set in advance.
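A minimal sketch of the frame-crop compensation idea, assuming the shake information has already been reduced to a per-frame pixel offset (dx, dy) and the first cropping reserves a fixed margin on each side. Real EIS operates on gyroscope angular data with warping, so this shows only the reverse-shifted crop window:

```python
# Assumption: positive shake_dx/shake_dy mean the camera moved right/down,
# so the crop window shifts the opposite way, clamped to the reserved margin.

def eis_correction_region(img_w, img_h, shake_dx, shake_dy, margin_ratio=0.1):
    """Shift a centered crop window against the measured shake, staying
    within the margin left by the first cropping step."""
    mx, my = int(img_w * margin_ratio), int(img_h * margin_ratio)
    w, h = img_w - 2 * mx, img_h - 2 * my
    x = min(max(mx - shake_dx, 0), 2 * mx)   # reverse compensation in x
    y = min(max(my - shake_dy, 0), 2 * my)   # reverse compensation in y
    return (x, y, w, h)

region = eis_correction_region(1000, 800, shake_dx=30, shake_dy=-20)
```

The `margin_ratio` plays the role of the preset ratio between the correction area and the image to be corrected that the text mentions.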
As another embodiment of the present application, the target tracking process includes:
finding a tracking target from an image to be tracked and/or an N frame image before the image to be tracked based on an attention mechanism, wherein N is more than or equal to 1;
and performing second cropping processing based on the position of the tracking target in the image to be tracked, so as to determine the tracking area in the image to be tracked.
In the embodiment of the application, when the target tracking processing is performed on the image to be tracked, the tracking target needs to be determined in advance, the tracking target can be found out from the image to be tracked based on an attention mechanism, the tracking target can also be found out from N frames of images before the image to be tracked, and the tracking target can also be found out from the image to be tracked and the N frames of images before the image to be tracked.
When the target tracking processing is performed by an attention-based algorithm, a convolutional neural network model needs to be constructed; the constructed model attends to different parts of the input data or feature maps to different degrees. For example, at each target scale of interest, a classification network and an attention proposal network (APN) are used. The APN can be composed of two fully connected layers and outputs 3 parameters representing the position of a box; the classification network at the next scale extracts features from the newly generated box image for classification. During training, the loss function constrains the classification result at the next scale to be better than that at the previous scale, so that the APN extracts target parts that are more conducive to fine-grained classification; as training proceeds, the APN focuses more and more on the finely distinguishable parts of the target.
In the process of specifically determining the tracking target, the current image to be tracked may be input into the convolutional neural network model, or N frames of original images (or preview images corresponding to the N frames of original images) before the original image corresponding to the image to be tracked may be input into the convolutional neural network model, or N frames of images (or preview images corresponding to the N frames of original images) before the original image corresponding to the image to be tracked and the current image to be tracked may be input into the convolutional neural network model, so as to output the tracking target, where the input image to be input into the convolutional neural network model is not limited.
If the tracking target is determined from the N frames of images before the image to be tracked, then after the tracking target is found, the position of the tracking target in the image to be tracked still needs to be determined. In the embodiments shown in fig. 4 and 8, when the target tracking processing is performed, the image to be tracked is the original image; in the embodiment shown in fig. 6, the image to be tracked is the image in the correction area (the image to which the correction area is mapped in the original image or in the processed image). When the tracking area is output according to the position of the tracking target, the position and size of the tracking area may be determined according to the position and size of the tracking target; for example, the center point of the minimum circumscribed rectangle of the tracking target may be set as the center point of the tracking area, and the length and width of the tracking area may be set in advance.
It should be noted that the process of obtaining the tracking area by the second cropping process described above is only an example, and does not limit the process of obtaining the tracking area, and in practical applications, other ways of obtaining the position and size of the tracking area may also be selected, and are not limited herein.
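As an illustration of the second cropping described above — the preset output size, the clamping-to-image rule, and all names here are assumptions for the sketch:

```python
# Center the tracking area on the target's minimum bounding rectangle;
# the tracking area's width/height are preset and the window is clamped
# so it stays inside the image.

def tracking_region(target_box, out_w, out_h, img_w, img_h):
    tx, ty, tw, th = target_box          # min. circumscribed rect of target
    cx, cy = tx + tw // 2, ty + th // 2  # its center point
    x = min(max(cx - out_w // 2, 0), img_w - out_w)
    y = min(max(cy - out_h // 2, 0), img_h - out_h)
    return (x, y, out_w, out_h)

region = tracking_region((480, 350, 60, 80), out_w=400, out_h=300,
                         img_w=1280, img_h=720)
```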
It should be noted here that the above-mentioned procedures of the shake correction processing and the target tracking processing are only examples, and in practical applications, the procedures of the shake correction processing and the target tracking processing in the embodiments of fig. 4, 6, and 8 may also be procedures distinguished from the above-mentioned shake correction processing and target tracking processing. The specific method of the shake correction processing and the target tracking processing is not limited herein.
As another embodiment of the present application, before displaying the image in the preview area as a preview screen, the method further includes:
performing smoothing processing on the preview area in the processed image to obtain the smoothed preview area;
correspondingly, the displaying the image in the preview area as a preview screen includes:
and displaying the image in the preview area after the smoothing processing as a preview picture.
In the embodiment of the present application, in order to enable a smoother preview effect between the multi-frame preview pictures, the image in the preview area may also be subjected to smoothing processing, so as to obtain a preview area after the smoothing processing. And displaying the image in the preview area after the smoothing processing as a preview picture. Since it is ultimately necessary to obtain an image after the shake correction processing, the preview screen obtained after the smoothing processing is a screen in which the preview area is mapped into the processed image.
The smoothing may be performed by a process in which the filter moves the position and size of the preview area with reference to the position and size of the tracking target in the preview screen of the previous frame.
As another embodiment of the present application, the smoothing processing on the preview area includes:
smoothing the preview area through at least two filters to obtain a smooth subarea corresponding to each filter;
determining, by a decision maker, a weight for each filter;
and based on the weight of each filter, performing fusion processing on the smoothing subareas corresponding to each filter to obtain a preview area after smoothing processing.
In the embodiment of the present application, referring to fig. 10, fig. 10 is a schematic diagram of a smoothing process in the shooting method provided in the embodiment of the present application, as shown in the drawing, a filter bank may be designed, a plurality of filters are in the filter bank, each filter employs a different filtering algorithm, and smoothing processing is performed on the preview area through each filter, so that a smooth sub-area processed by each filter may be obtained.
In order to fuse the smooth subregions respectively corresponding to the plurality of filters, a weight may be set for each filter, and the weight of each filter may be determined by a decider based on the position and size of the tracking target within the preview screen of consecutive M (M is greater than or equal to 1) frames and the position and size of the tracking target within the current preview region, for example. After the weight of each filter is obtained, the smoothing subregions corresponding to each filter may be subjected to fusion processing based on the weight of each filter, so as to obtain the preview region after smoothing processing. The position of the preview area after the smoothing process may be changed from the position of the preview area before the smoothing process.
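A toy sketch of the filter-bank fusion, assuming two "filters" that are simple blends of differing strength and a fixed weight vector; in practice each filter would be a real filtering algorithm and the weights would come from the decision maker based on the M previous frames:

```python
# Each filter proposes a smoothed region; the decision maker's weights
# blend the proposals into the final smoothed preview region.

def filter_bank_smooth(prev_region, cur_region, weights):
    def blend(k):  # toy filter: k near 1 = heavy smoothing
        return tuple(k * p + (1 - k) * c
                     for p, c in zip(prev_region, cur_region))
    proposals = [blend(0.8), blend(0.2)]   # heavy and light smoothing
    assert abs(sum(weights) - 1.0) < 1e-9
    return tuple(sum(w * p[i] for w, p in zip(weights, proposals))
                 for i in range(4))

smoothed = filter_bank_smooth((100, 100, 400, 300), (120, 110, 400, 300),
                              weights=[0.5, 0.5])
```

As the text notes, the fused region's position can differ from the pre-smoothing preview region: here it lands between the heavily and lightly smoothed proposals.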
Of course, after the smoothing processing is performed on the preview area, other processing may be performed on the image of the preview area, and the image of the preview area is displayed after that processing. For example, a super-resolution algorithm may be applied to the image of the smoothed preview area to improve image quality. Super-resolution is a low-level image processing task that maps a low-resolution image to a high resolution so as to enhance image details.
For example, a deep learning method may be used for the super-resolution processing: a large number of high-resolution images are accumulated for model learning. First, the quality of the high-resolution images is degraded according to a degradation model to generate training data; the images are then divided into blocks according to the correspondence between the low-frequency and high-frequency parts of the high-resolution images, and prior knowledge is obtained through learning to establish a learning model. A low-resolution image is then input into the model, and for each input low-resolution block the best-matching high-frequency block is searched for in the established learning set so as to restore the low-resolution image, finally recovering the high-frequency details of the image and improving its quality.
Since the shake correction processing provided by the embodiment of the present application is an electronic anti-shake method, it may reduce image quality. Therefore, before preview display, the image quality is improved by the super-resolution algorithm, which avoids the problem of image quality degradation in a high-magnification shooting scene.
Fig. 11 shows a schematic flowchart of a shooting method provided in an embodiment of the present application, which includes, by way of example and not limitation:
step S1101, an original image collected by a camera of the mobile terminal is obtained.
Step S1102, performing a shake correction process and a target tracking process on the original image, to obtain a processed image and a preview area containing a tracking target in the processed image.
In step S1103, the image in the preview area is displayed as a preview screen.
In the embodiment of the present application, the contents of step S1101 to step S1103 are the same as the contents of step S201 to step S203, and the descriptions of step S201 to step S203 may be specifically referred to.
And step S1104, performing zoom processing on the camera based on the size of the tracking target in the preview area in the original image, where the camera after zoom processing is used to collect the next frame of original image.
In the embodiment of the present application, the preview area is obtained through cropping: for example, the first cropping processing may be performed during the shake correction processing, and the second cropping processing may be performed according to the found tracking target during the target tracking processing. Therefore, the picture size of the preview area is smaller than that of the original image. Zoom processing can be performed on the camera according to the position and proportion of the preview area picture (or the tracking target) in the original image; the purpose of the zoom processing is that, after the next frame of original image collected by the camera is cropped at least twice, a preview picture with a good composition effect can still be obtained. Of course, the zoom processing may include optical zooming and/or digital zooming.
It should be noted that the goal of the zoom processing is not a high matching degree between the position and size of the tracking target in the next frame original image acquired by the camera after the zoom processing and the position and size of the tracking target in the preview area corresponding to the current frame original image. Rather, the goal is a high matching degree between the preview picture (the position and size of the tracking target) obtained after the shake correction processing and the target tracking processing are performed on that next frame original image and the preview picture obtained from the current frame original image. That is, the zoom processing also needs to take into account that the next frame original image will be subjected to the cropping processing.
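A sketch of one way the zoom factor could be chosen, assuming the aim is simply to let the cropped preview region fill more of the sensor while reserving headroom for the two cropping passes; the headroom factor and the min-ratio rule are assumptions, not the patent's formula:

```python
# Zoom toward the preview region's size, but divide by a headroom factor
# so the stabilization and tracking crops still have margin to work with.

def zoom_ratio(preview_w, preview_h, img_w, img_h, headroom=1.2):
    ratio = min(img_w / preview_w, img_h / preview_h) / headroom
    return max(1.0, ratio)   # never zoom out below 1x

z = zoom_ratio(400, 300, 1600, 1200)
```

Using `min` of the two axis ratios keeps the whole preview region inside the zoomed field of view rather than overflowing one axis.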
When at least two cameras are arranged on the mobile terminal, one of the cameras after the zoom processing can be selected as a main camera, and the main camera is used to collect the next frame of original image; reference may be made to the description in step S1105.
Step S1105, selecting one of the cameras after the zoom processing as a main camera, where the main camera is a camera that collects the original image of the next frame.
In this embodiment of the present application, one of the cameras after the zoom processing may be selected as a main camera according to a position of a tracking target in the preview area in the original image.
Certainly, one of the cameras after the zoom processing may also be selected as the main camera according to the composition model. For example, because the adjustable focal length ranges of the cameras differ, even though the camera acquiring the current frame original image is subjected to the zoom processing, the estimated position and size of the tracking target in the preview area corresponding to the next frame original image may not match, or may only poorly match, the position and size of the tracking target in the preview area corresponding to the current frame original image; in that case, another camera needs to be switched to, and the position and size of the tracking target in the preview area corresponding to the next frame original image acquired by that camera are estimated against those of the current frame. In another case, since the positions of the multiple cameras on the mobile terminal may differ somewhat, the position of the tracking target in the original image acquired by each camera may also differ; in order to make the picture in the preview area corresponding to the next frame original image better match the composition model, the camera may be switched to another camera such that the picture in the preview area corresponding to the next frame original image acquired by the switched camera matches the composition model better.
It should be noted that, when performing zoom processing and camera switching processing, the frames in the next frame original image, the preview area corresponding to the next frame original image, and the preview area corresponding to the next frame original image are all obtained by calculation and estimation.
The composition model may be obtained as follows: a composition model is generated based on the previous frame of the preview picture and the position and size of the tracking target in it, or the preset composition model with the highest matching degree is selected based on the previous frame of the preview picture and the tracking area in it.
In the embodiment of the present application, through the zoom and camera-switching operations on the cameras, the preview picture obtained after the next captured frame of the original image undergoes shake correction processing and target tracking processing has a good composition.
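As an illustration of how the matching degree between a preview picture and a composition model might be scored, the following sketch rates a candidate preview area by how close the tracking target's centre lies to a rule-of-thirds intersection. The function name and the rule-of-thirds criterion are illustrative assumptions and are not part of the patent's disclosure:

```python
def composition_score(target_cx: float, target_cy: float,
                      frame_w: float, frame_h: float) -> float:
    """Score how well a target centre matches a rule-of-thirds
    composition: 1.0 exactly on a thirds intersection, decreasing
    linearly with the distance to the nearest intersection."""
    thirds = [(frame_w * i / 3, frame_h * j / 3)
              for i in (1, 2) for j in (1, 2)]
    d = min(((target_cx - x) ** 2 + (target_cy - y) ** 2) ** 0.5
            for x, y in thirds)
    diag = (frame_w ** 2 + frame_h ** 2) ** 0.5  # normalising length
    return 1.0 - d / diag
```

A preview area (or candidate camera) whose predicted target position yields the highest score would then be treated as the best match to the composition model.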
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 12 shows a structural block diagram of a shooting apparatus according to an embodiment of the present application, which corresponds to the shooting method described in the above embodiments; for convenience of description, only the parts related to the embodiment of the present application are shown.
Referring to fig. 12, the apparatus 12 includes:
an image obtaining unit 121, configured to obtain an original image collected by a camera of the mobile terminal;
an image processing unit 122, configured to perform shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area in the processed image, where the preview area contains the tracking target;
and an image display unit 123 configured to display the image in the preview area as a preview screen.
As another embodiment of the present application, the image processing unit 122 includes:
a shake correction processing module 1221, configured to perform shake correction processing on the original image, so as to obtain a processed image and a correction area in the processed image;
a target tracking processing module 1222, configured to find a tracking target from the original image, and determine a tracking area in the original image based on a position of the tracking target in the original image;
a joint cropping module 1223, configured to perform joint cropping on the processed image based on the correction area and the tracking area, so as to obtain a preview area in the processed image.
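One plausible reading of the joint cropping above is that the preview area is the overlap of the correction area and the tracking area: the region that is both shake-compensated and contains the tracking target. The following sketch illustrates this under that assumption; the `Rect` type and function names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    x: int  # left
    y: int  # top
    w: int  # width
    h: int  # height

def joint_crop(correction: Rect, tracking: Rect) -> Optional[Rect]:
    """Intersect the correction area with the tracking area to obtain
    the preview area; returns None when the areas do not overlap."""
    left = max(correction.x, tracking.x)
    top = max(correction.y, tracking.y)
    right = min(correction.x + correction.w, tracking.x + tracking.w)
    bottom = min(correction.y + correction.h, tracking.y + tracking.h)
    if right <= left or bottom <= top:
        return None  # no overlap: fall back to one of the two areas
    return Rect(left, top, right - left, bottom - top)
```

In practice the non-overlap case would need a fallback policy (for example, preferring the tracking area), which the patent leaves to the implementation.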
As another embodiment of the present application, the image processing unit 122 includes:
the shake correction processing module is used for carrying out shake correction processing on the original image to obtain a processed image and a correction area in the processed image;
and the target tracking processing module is used for finding out a tracking target from the correction area, determining a tracking area in the correction area based on the position of the tracking target in the correction area, and taking the tracking area as a preview area.
As another embodiment of the present application, the image processing unit 122 includes:
the target tracking processing module is used for finding out a tracking target from the original image and determining a tracking area in the original image based on the position of the tracking target in the original image;
and the shake correction processing module is used for carrying out shake correction processing on the image of the tracking area to obtain a processed image and a correction area in the processed image, and taking the correction area as a preview area.
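A minimal sketch of the shake-correction idea, assuming the common crop-and-shift approach: a margin is reserved around a central crop window (the correction area), and the window is shifted opposite to the gyroscope-measured shake, clamped so it never leaves the reserved margin. All names and the clamping rule are illustrative assumptions:

```python
def compensate_crop(frame_w: int, frame_h: int, margin: int,
                    shake_dx: int, shake_dy: int):
    """Return (x, y, w, h) of the central crop window after shifting it
    opposite to the measured shake, clamped to the edge margin."""
    dx = max(-margin, min(margin, -shake_dx))  # oppose horizontal shake
    dy = max(-margin, min(margin, -shake_dy))  # oppose vertical shake
    x = margin + dx
    y = margin + dy
    return x, y, frame_w - 2 * margin, frame_h - 2 * margin
```

For example, a rightward shake of 30 px moves the window 30 px left, so the scene content stays put in the preview.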
As another embodiment of the present application, the apparatus 12 includes:
a smoothing unit 124, configured to perform smoothing on the preview area before displaying the image in the preview area as a preview screen, so as to obtain a smoothed preview area;
correspondingly, the image display unit 123 is further configured to:
and displaying the image in the preview area after the smoothing processing as a preview picture.
As another embodiment of the present application, the smoothing unit 124 includes:
the smoothing module is used for smoothing the preview area through at least two filters respectively to obtain a smoothed sub-region corresponding to each filter;
the weight generation module is used for determining the weight of each filter through a decision maker;
and the fusion module is used for fusing the smoothed sub-regions corresponding to the filters based on the weight of each filter to obtain the smoothed preview area.
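To illustrate the multi-filter smoothing, the sketch below smooths a one-dimensional trajectory of preview-area centres with two filters (a moving average and an exponential low-pass), lets a toy decision maker weight them by how jittery the trajectory is, and fuses the two results. The concrete filters and the weighting rule are assumptions for illustration only:

```python
import numpy as np

def moving_average(track: np.ndarray, window: int) -> np.ndarray:
    """Light smoothing: mean over a sliding window of region centres."""
    kernel = np.ones(window) / window
    return np.convolve(track, kernel, mode="same")

def exponential(track: np.ndarray, alpha: float) -> np.ndarray:
    """Heavier smoothing: first-order low-pass (exponential) filter."""
    out = np.empty(len(track), dtype=float)
    out[0] = track[0]
    for i in range(1, len(track)):
        out[i] = alpha * track[i] + (1 - alpha) * out[i - 1]
    return out

def decide_weights(track: np.ndarray):
    """Toy decision maker: the shakier the trajectory, the more weight
    the heavier (exponential) filter receives."""
    jitter = float(np.std(np.diff(track)))
    w_heavy = min(1.0, jitter / (jitter + 1.0) * 2.0)
    return 1.0 - w_heavy, w_heavy

def smooth_preview_track(track: np.ndarray) -> np.ndarray:
    """Fuse the two smoothed trajectories with the decided weights."""
    w_ma, w_exp = decide_weights(track)
    return w_ma * moving_average(track, 3) + w_exp * exponential(track, 0.3)
```

A real implementation would smooth the four region coordinates (or the centre and scale) per frame; the one-dimensional case is kept here for clarity.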
As another embodiment of the present application, the apparatus 12 further includes:
and a zooming unit 125, configured to, after a preview area including a tracking target in the processed image is obtained, perform zooming processing on the camera based on a size of the tracking target in the preview area in the original image, where the camera after zooming processing is used to acquire a next frame of original image.
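As an illustration of zooming based on the tracking target's size, the sketch below picks a zoom factor that would make the target occupy a desired fraction of the preview height, clamped to the camera's adjustable zoom range. The 0.5 target ratio, the clamping rule, and the function name are illustrative assumptions:

```python
def choose_zoom(target_h: float, preview_h: float, current_zoom: float,
                desired_ratio: float = 0.5,
                zoom_range: tuple = (1.0, 10.0)) -> float:
    """Pick a zoom so the tracking target fills roughly `desired_ratio`
    of the preview height, clamped to the camera's zoom range."""
    ratio = target_h / preview_h            # current fill ratio
    new_zoom = current_zoom * desired_ratio / ratio
    lo, hi = zoom_range
    return max(lo, min(hi, new_zoom))
```

For instance, a target filling a quarter of the preview at 1x zoom would call for roughly 2x zoom; a requested zoom beyond the range is clamped, which is one situation in which switching to another camera becomes attractive.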
As another embodiment of the present application, the apparatus 12 further includes:
and a camera switching unit 126, configured to, after the zoom processing is performed on the cameras, select one of the zoom-processed cameras as the main camera, where the main camera is the camera that captures the next frame of the original image.
As another embodiment of the present application, the camera switching unit 126 is further configured to:
and selecting one of the cameras after the zooming processing as a main camera according to the position of the tracking target in the preview area in the original image.
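One way to realize selection by target position, sketched under assumptions: each candidate camera is modelled by a parallax offset (how the target shifts in that camera's sensor frame) and an image size, and the camera that keeps the predicted target closest to its image centre is chosen as the main camera. The camera model and all names are hypothetical:

```python
def select_main_camera(target_xy, cameras):
    """Pick the camera whose predicted target position lies closest to
    its image centre. `cameras` maps a name to a dict with an `offset`
    (parallax shift) and an `image_size` (width, height)."""
    def distance_to_centre(cam):
        ox, oy = cam["offset"]
        w, h = cam["image_size"]
        tx, ty = target_xy[0] + ox, target_xy[1] + oy
        return ((tx - w / 2) ** 2 + (ty - h / 2) ** 2) ** 0.5
    return min(cameras, key=lambda name: distance_to_centre(cameras[name]))
```

A production implementation would also check each camera's adjustable focal-length range before accepting it, as discussed above.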
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The shooting method provided by the embodiment of the application can be applied to mobile terminals such as mobile phones, video cameras, tablet computers, Augmented Reality (AR)/Virtual Reality (VR) devices, notebook computers and the like, and the embodiment of the application does not limit the specific types of the mobile terminals at all.
The mobile terminal provided by the embodiment of the application comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor executes the computer program to realize the steps of any shooting method provided by the embodiment of the application.
Take a mobile phone as an example of the mobile terminal. Fig. 13 is a block diagram illustrating a partial structure of a mobile phone according to an embodiment of the present application. Referring to fig. 13, the handset includes: radio Frequency (RF) circuitry 1310, memory 1320, input unit 1330, display unit 1340, sensor 1350, audio circuitry 1360, wireless fidelity (WiFi) module 1370, processor 1380, and power supply 1390. Those skilled in the art will appreciate that the handset configuration shown in fig. 13 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The following describes each component of the mobile phone in detail with reference to fig. 13:
RF circuit 1310 may be used for receiving and transmitting signals during a message transmission or a call; in particular, it delivers received downlink information from a base station to processor 1380 for processing, and transmits uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuit 1310 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1320 may be used to store software programs and modules, and the processor 1380 executes various functional applications and data processing of the mobile phone, such as processing images, by operating the software programs and modules stored in the memory 1320. The memory 1320 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as the position of a correction area, a tracking area, or a preview area), and the like. Further, the memory 1320 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 1330 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1330 may include a touch panel 1331 and other input devices 1332. Touch panel 1331, also referred to as a touch screen, can collect touch operations by a user (e.g., operations by a user on or near touch panel 1331 using any suitable object or accessory such as a finger or a stylus) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 1331 may include two portions: a touch detection device and a touch controller. The touch detection device detects the orientation of the user's touch, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends the coordinates to the processor 1380; the touch controller can also receive and execute commands sent by the processor 1380. In addition, the touch panel 1331 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1330 may include other input devices 1332 in addition to the touch panel 1331. In particular, other input devices 1332 may include, but are not limited to, a physical keyboard and function keys (e.g., volume control keys, switch keys, etc.).
The display unit 1340 may be used to display information input by a user or information provided to the user and various menus of the mobile phone, for example, a preview screen. The display unit 1340 may include a display panel 1341; optionally, the display panel 1341 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, touch panel 1331 can overlay display panel 1341; when touch panel 1331 detects a touch operation on or near it, the operation is transmitted to processor 1380 to determine the type of the touch event, and processor 1380 then provides a corresponding visual output on display panel 1341 according to the type of the touch event. Although in fig. 13 the touch panel 1331 and the display panel 1341 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1331 and the display panel 1341 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1350, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 1341 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 1341 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for the mobile phone, other sensors such as a gyroscope (for acquiring shaking information of the mobile phone), a barometer, a hygrometer, a thermometer, and an infrared sensor may be further configured, which are not described herein again.
The audio circuit 1360, speaker 1361, and microphone 1362 may provide an audio interface between the user and the handset. The audio circuit 1360 may transmit the electrical signal converted from the received audio data to the speaker 1361, where it is converted into a sound signal and output; on the other hand, the microphone 1362 converts the collected sound signal into an electrical signal, which is received by the audio circuit 1360 and converted into audio data; the audio data is then output to processor 1380 for processing and sent via RF circuit 1310 to, for example, another mobile phone, or output to memory 1320 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1370, the mobile phone can help the user receive and send e-mails, browse webpages, access streaming media, and the like; it provides wireless broadband internet access for the user. Although fig. 13 shows the WiFi module 1370, it is understood that it is not an essential component of the handset and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 1380 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1320 and calling data stored in the memory 1320, thereby integrally monitoring the mobile phone. Optionally, processor 1380 may include one or more processing units; preferably, the processor 1380 may integrate an application processor, which handles primarily operating systems, user interfaces, application programs, etc., and a modem processor, which handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated within processor 1380.
The handset also includes a power supply 1390 (e.g., a battery) to supply power to the various components, which may preferably be logically coupled to the processor 1380 via a power management system to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown, the handset may also include a camera. Optionally, the position of the camera on the mobile phone may be front-located or rear-located, which is not limited in this embodiment of the present application.
Optionally, the mobile phone may include a single camera, a dual camera, or a triple camera, which is not limited in this embodiment.
For example, a cell phone may include three cameras, one being a main camera, one being a wide camera, and one being a tele camera.
Optionally, when the mobile phone includes a plurality of cameras, positions of the plurality of cameras may be set according to an actual situation, which is not limited in this embodiment of the present application.
In addition, although not shown, the mobile phone may further include a bluetooth module, etc., which will not be described herein.
Fig. 14 is a schematic diagram of a software structure of a mobile terminal (mobile phone) according to an embodiment of the present application. Taking a mobile phone operating system as an Android system as an example, in some embodiments, the Android system is divided into four layers, which are an application layer, an application Framework (FWK) layer, a system layer and a hardware abstraction layer, and the layers communicate with each other through a software interface.
As shown in fig. 14, the application layer may be a series of application packages, which may include short message, calendar, camera, video, navigation, gallery, call, and other applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer may include some predefined functions, such as functions for receiving events sent by the application framework layer.
As shown in fig. 14, the application framework layer may include a window manager, a resource manager, and a notification manager, among others.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction; for example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The application framework layer may further include:
a view system that includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a short-message notification icon may include a view for displaying text and a view for displaying a picture.
The telephone manager is used for providing the communication function of the mobile phone. Such as management of call status (including on, off, etc.).
The system layer may include a plurality of functional modules. For example: a sensor service module, a physical state identification module, a three-dimensional graphics processing library (such as OpenGL ES), and the like.
The sensor service module is used for monitoring sensor data uploaded by various sensors in a hardware layer and determining the physical state of the mobile phone;
the physical state recognition module is used for analyzing and recognizing user gestures, human faces and the like;
the three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The system layer may further include:
the surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The hardware abstraction layer is a layer between hardware and software. The hardware abstraction layer may include a display driver, a camera driver, a sensor driver, etc. for driving the relevant hardware of the hardware layer, such as a display screen, a camera, a sensor, etc.
The above embodiment of the shooting method can be implemented on a mobile phone having the above hardware structure/software structure.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing device/mobile terminal, a recording medium, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, and software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
The embodiment of the present application further provides a chip system, where the chip system includes a processor, the processor is coupled with a memory, and the processor executes a computer program stored in the memory to implement the steps of any shooting method provided in the embodiments of the present application. The chip system may be a single chip or a chip module composed of multiple chips.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (14)

1. A photographing method, characterized by comprising:
acquiring an original image acquired by a camera of a mobile terminal;
carrying out shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing a tracking target in the processed image;
and displaying the image in the preview area as a preview picture.
2. The photographing method according to claim 1, wherein the performing of the shake correction process and the target tracking process on the original image to obtain a processed image and a preview area containing a tracking target in the processed image comprises:
carrying out shake correction processing on the original image to obtain a processed image and a correction area in the processed image;
finding out a tracking target from the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image;
and carrying out joint cropping on the processed image based on the correction area and the tracking area to obtain a preview area in the processed image.
3. The photographing method according to claim 1, wherein the performing of the shake correction process and the target tracking process on the original image to obtain a processed image and a preview area containing a tracking target in the processed image comprises:
carrying out shake correction processing on the original image to obtain a processed image and a correction area in the processed image;
finding out a tracking target from the correction area, determining a tracking area in the correction area based on the position of the tracking target in the correction area, and taking the tracking area as a preview area.
4. The photographing method according to claim 1, wherein the performing of the shake correction process and the target tracking process on the original image to obtain a processed image and a preview area containing a tracking target in the processed image comprises:
finding out a tracking target from the original image, and determining a tracking area in the original image based on the position of the tracking target in the original image;
and carrying out shake correction processing on the image in the tracking area to obtain a processed image and a correction area in the processed image, and taking the correction area as a preview area.
5. The photographing method according to any one of claims 1 to 4, wherein the shake correction process includes:
acquiring an image to be corrected, and acquiring jitter information of the mobile terminal at the image acquisition moment to be corrected;
performing first cutting processing on the image to be corrected to obtain an edge image and a middle image;
and compensating the intermediate image based on the jitter information and the edge image to obtain a processed image, wherein the area corresponding to the intermediate image is a correction area.
6. The photographing method according to any one of claims 1 to 4, wherein the target tracking process includes:
finding a tracking target from an image to be tracked and/or an N frame image before the image to be tracked based on an attention mechanism, wherein N is more than or equal to 1;
and performing second cropping processing on the basis of the position of the tracking target in the image to be tracked so as to determine a tracking area in the image to be tracked.
7. The photographing method according to any one of claims 1 to 4, further comprising, before displaying the image in the preview area as a preview screen:
performing smoothing processing on the preview area in the processed image to obtain the smoothed preview area;
correspondingly, the displaying the image in the preview area as a preview screen includes:
and displaying the image in the preview area after the smoothing processing as a preview picture.
8. The photographing method of claim 7, wherein the smoothing of the preview area comprises:
respectively smoothing the preview area in the processed image through at least two filters to obtain a smoothing subarea corresponding to each filter;
determining, by a decision maker, a weight for each filter;
and based on the weight of each filter, performing fusion processing on the smoothing subareas corresponding to each filter to obtain a preview area after smoothing processing.
9. The photographing method according to any one of claims 1 to 4, further comprising, after obtaining a processed image and a preview area containing a tracking target in the processed image:
and zooming the camera based on the size of the tracking target in the preview area in the original image, wherein the zoomed camera is used for collecting the next frame of original image.
10. The photographing method of claim 9, wherein the number of cameras provided on the mobile terminal is at least two;
after zooming the camera, the method further comprises:
and selecting one of the cameras after zooming as a main camera, wherein the main camera is a camera for collecting the next frame of original image.
11. The shooting method according to claim 10, wherein the selecting one of the zoom-processed cameras as a main camera includes:
and selecting one of the cameras after the zooming processing as a main camera according to the position of the tracking target in the preview area in the original image.
12. A photographing apparatus, comprising:
an image acquisition unit configured to acquire an original image captured by a camera of a mobile terminal;
an image processing unit configured to perform shake correction processing and target tracking processing on the original image to obtain a processed image and a preview area containing a tracking target in the processed image; and
an image display unit configured to display the image in the preview area as a preview picture.
13. A mobile terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 11.
14. A chip system, characterized in that the chip system comprises a processor coupled with a memory, the processor executing a computer program stored in the memory to implement the steps of the method according to any one of claims 1 to 11.
CN202010417818.9A 2020-05-15 2020-05-15 Shooting method and device, mobile terminal and chip system Active CN113676655B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010417818.9A CN113676655B (en) 2020-05-15 2020-05-15 Shooting method and device, mobile terminal and chip system
PCT/CN2021/084589 WO2021227693A1 (en) 2020-05-15 2021-03-31 Photographic method and apparatus, and mobile terminal and chip system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010417818.9A CN113676655B (en) 2020-05-15 2020-05-15 Shooting method and device, mobile terminal and chip system

Publications (2)

Publication Number Publication Date
CN113676655A true CN113676655A (en) 2021-11-19
CN113676655B CN113676655B (en) 2022-12-27

Family

ID=78526385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010417818.9A Active CN113676655B (en) 2020-05-15 2020-05-15 Shooting method and device, mobile terminal and chip system

Country Status (2)

Country Link
CN (1) CN113676655B (en)
WO (1) WO2021227693A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286001A (en) * 2021-12-28 2022-04-05 维沃移动通信有限公司 Image processing circuit, device and method, electronic equipment, image processing chip and main control chip

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114928694A (en) * 2022-04-25 2022-08-19 深圳市慧鲤科技有限公司 Image acquisition method and apparatus, device, and medium
CN116074620B (en) * 2022-05-27 2023-11-07 荣耀终端有限公司 Shooting method and electronic equipment
CN117177066B (en) * 2022-05-30 2024-09-20 荣耀终端有限公司 Shooting method and related equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09102903A (en) * 1995-10-05 1997-04-15 Hitachi Ltd Image pickup device
CN104065876A (en) * 2013-03-22 2014-09-24 卡西欧计算机株式会社 Image processing device and image processing method
CN105959567A (en) * 2016-06-21 2016-09-21 维沃移动通信有限公司 Photographing control method and mobile terminal
CN107786812A (en) * 2017-10-31 2018-03-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN107809590A (en) * 2017-11-08 2018-03-16 青岛海信移动通信技术股份有限公司 A kind of photographic method and device


Also Published As

Publication number Publication date
WO2021227693A1 (en) 2021-11-18
CN113676655B (en) 2022-12-27

Similar Documents

Publication Publication Date Title
CN113676655B (en) Shooting method and device, mobile terminal and chip system
US11831977B2 (en) Photographing and processing method and electronic device
EP3531689B1 (en) Optical imaging method and apparatus
US11451706B2 (en) Photographing method and mobile terminal
CN108377342B (en) Double-camera shooting method and device, storage medium and terminal
EP2770724B1 (en) Apparatus and method for positioning image area using image sensor location
US8416277B2 (en) Face detection as a metric to stabilize video during video chat session
WO2019091486A1 (en) Photographing processing method and device, terminal, and storage medium
CN111669507A (en) Photographing method and device and electronic equipment
CN108196755B (en) Background picture display method and device
WO2021013147A1 (en) Video processing method, device, terminal, and storage medium
CN110266957B (en) Image shooting method and mobile terminal
CN110196673B (en) Picture interaction method, device, terminal and storage medium
CN111031248A (en) Shooting method and electronic equipment
CN110769156A (en) Picture display method and electronic equipment
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN113891018A (en) Shooting method and device and electronic equipment
CN116188343B (en) Image fusion method and device, electronic equipment, chip and medium
CN110992268B (en) Background setting method, device, terminal and storage medium
CN114915745A (en) Multi-scene video recording method and device and electronic equipment
CN115134527A (en) Processing method, intelligent terminal and storage medium
CN112561798B (en) Picture processing method, mobile terminal and storage medium
KR101361691B1 (en) Method for obtaining partial image of portable terminal having touch screen
CN113613053A (en) Video recommendation method and device, electronic equipment and storage medium
CN113518171A (en) Image processing method, device, terminal equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant