CN116055871A - Video processing method and related equipment thereof

Publication number: CN116055871A
Authority: CN (China)
Prior art keywords: image, target, video, electronic device, distance
Legal status: Granted
Application number: CN202211056015.0A
Other languages: Chinese (zh)
Other versions: CN116055871B
Inventors: 邵涛, 徐荣跃, 王宁, 林梦然, 陈代挺
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to CN202211056015.0A
Publication of CN116055871A
Application granted
Publication of CN116055871B
Status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application provides a video processing method and a related device, relating to the field of image processing. The video processing method is applied to an electronic device and includes the following steps: displaying a first image; acquiring a second image when the target photographic subject moves to a second position, the second position being different from the first position; when the electronic device includes a depth device, determining, by means of the depth device, the object distance corresponding to the target photographic subject after it moves to the second position; determining a corresponding target zoom ratio from a preset zoom table according to the object distance; determining a crop box according to the target zoom ratio, and cropping the second image according to the crop box; and scaling the cropped second image according to the target zoom ratio to obtain a third image including the target photographic subject. With this scheme, the accuracy with which the size of the photographed subject is kept fixed during video processing can be improved in a simple and fast manner.

Description

Video processing method and related equipment thereof
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a video processing method and related devices.
Background
With the rapid development of imaging technology, users' demands on video recording functions are increasing. For example, video may be recorded through a camera application, during a video call, in a surveillance scenario, and so on. Taking video recording as an example, if the photographed subject is required to remain at the center of the picture during shooting, the subject must be tracked in the video; if the displayed size of the subject is also required to stay substantially unchanged, digital zooming is performed according to the size of the subject and the frame is then cropped so that the size of the subject is fixed.
However, when the position of the photographed subject changes, the size of the subject in the frame changes with it, and there is a risk that the digital zoom ratio cannot be adjusted accurately to follow this change, so the quality of the captured video is difficult to guarantee. Therefore, how to keep the size of the photographed subject fixed in the captured video in a more reliable manner is a problem to be solved.
Disclosure of Invention
The present application provides a video processing method and a related device. By means of the video processing method, the accuracy with which the size of a photographed subject is kept fixed during video processing can be improved in a simple and fast manner.
In a first aspect, a video processing method is provided, where the video processing method is applied to an electronic device, and includes:
displaying a first image, wherein the first image is an image frame of a target shooting object at a first position;
acquiring a second image when the target shooting object moves to a second position, wherein the second position and the first position are different, and the second image is an image frame acquired by the electronic equipment when the target shooting object moves to the second position;
When the electronic equipment comprises a depth device, determining a corresponding object distance after the target shooting object moves to the second position by utilizing the depth device;
determining a corresponding target zoom ratio from a preset zoom table according to the object distance, wherein the preset zoom table comprises a plurality of groups of object distances and target zoom ratios with one-to-one mapping relation;
determining a crop box according to the target zoom ratio, and cropping the second image according to the crop box;
and scaling the cropped second image according to the target zoom ratio to obtain a third image including the target photographic subject, wherein the size of the target photographic subject in the first image is consistent with its size in the third image.
It should be understood that the second image may refer to an image frame acquired in real time by the camera after the target photographing object moves; the target photographic subject may refer to part or all of the photographic subjects. For example, in the case of receiving an instruction of one user, the target photographic subject may be the current user, or may be one or more of the other users.
It should also be understood that the size of the target subject in the first image being consistent with its size in the third image means that the length, width, and height are respectively the same; in practice, each of these dimensions may be exactly the same or may differ by a slight error.
The embodiment of the present application provides a video processing method in which the final target zoom ratio is determined by looking up a preset zoom table according to the depth of the photographed subject. Because depth information reflects the change of the photographed subject more accurately, this method improves the reliability of the determined target zoom ratio compared with prior-art schemes that determine the zoom ratio by detecting the size of the photographed subject, and thus better ensures that a subject whose size is to be fixed indeed stays at a fixed size in the displayed video.
In addition, looking up the corresponding target zoom ratio in the preset zoom table for subsequent processing is simpler and faster than the prior art: it saves the computation the electronic device would otherwise spend calculating the zoom ratio, improves processing efficiency, saves time, and reduces the power consumption of the electronic device.
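As a purely illustrative sketch (not part of the claimed method), the table lookup described above could be realized roughly as follows; the table values are rounded figures taken from the FIG. 6 discussion later in this document, and the nearest-entry policy and function names are assumptions.

# Illustrative sketch: look up a target zoom ratio in a preset zoom table by object distance.
# The entries below (object distance in mm -> target zoom ratio) are illustrative values only.
PRESET_ZOOM_TABLE = {
    300: 0.98,
    600: 0.50,
    2000: 0.15,
    5000: 0.06,
}

def query_target_zoom_ratio(object_distance_mm, table=PRESET_ZOOM_TABLE):
    # Return the zoom ratio of the table entry whose object distance is closest
    # to the measured object distance (a real table would be much denser).
    nearest = min(table, key=lambda d: abs(d - object_distance_mm))
    return table[nearest]

print(query_target_zoom_ratio(1900))  # -> 0.15 (nearest entry is 2000 mm)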
With reference to the first aspect, in certain implementations of the first aspect, when the first image and the third image are displayed, a location where the electronic device is located is the same.
In the embodiment of the present application, the electronic device may keep its position unchanged; after the target photographic subject moves, it can still be displayed at a fixed size in the video display frame, thereby realizing tracking and fixed-size display of the target subject.
With reference to the first aspect, in certain implementation manners of the first aspect, when the second position is the same as the first position, the position where the electronic device is located is different when the first image and the third image are displayed.
In the embodiment of the present application, the target photographic subject may keep its position unchanged; after the electronic device moves, the photographed subject can still be displayed at a fixed size in the video display frame, thereby realizing tracking and fixed-size display of the target subject.
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes:
when the electronic device does not include a depth device, acquiring an AF motor Hall parameter, wherein the AF motor Hall parameter is used for indicating the distance of the AF motor in the coil magnetic field generated by the Hall device;
determining, according to the AF motor Hall parameter, the distance Δf by which the AF motor pushes the lens to move;
and determining a corresponding target zoom ratio from the preset zoom table according to the distance Δf by which the AF motor pushes the lens to move, wherein the preset zoom table further includes multiple groups of distances Δf and target zoom ratios in a one-to-one mapping relationship.
The electronic equipment comprises an AF motor and a Hall device.
For example, by determining the distance of the AF motor in the coil magnetic field corresponding to the currently acquired image and the distance corresponding to the previously acquired image frame, the change in the AF motor's distance can be determined from the two; this change is the distance Δf by which the AF motor pushes the lens to move.
In the embodiment of the present application, when the electronic device does not include a depth device, it can quickly and conveniently determine the distance Δf by which the AF motor pushes the lens to move by reading the internal AF motor Hall parameter.
With reference to the first aspect, in certain implementation manners of the first aspect, determining, by using the depth device, a corresponding object distance after the target shooting object moves to the second position includes:
acquiring a depth image by using the depth device;
and determining the object distance of the target shooting object according to the depth image.
In the embodiment of the present application, the object distance of the photographed subject is obtained by means of the depth device included in the electronic device. On the one hand, the data of the depth device can be reused, reducing the amount of computation; on the other hand, because the depth device determines depth with high accuracy, the object distance information obtained is accurate, which in turn improves the accuracy of the subsequent zooming.
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes:
detecting an operation indicating to run a camera application; or,
detecting an operation indicating to run a video call application.
In the embodiment of the application, the video processing method can be applied to the process of shooting the video by the camera application program; alternatively, the video processing method may be applied to a video call application.
In a second aspect, there is provided an electronic device comprising: one or more processors, memory, and a display screen; the memory is coupled with the one or more processors, the memory is for storing computer program code, the computer program code comprising computer instructions that the one or more processors call to cause the electronic device to perform:
displaying a first image, wherein the first image is an image frame of a target shooting object at a first position;
acquiring a second image when the target shooting object moves to a second position, wherein the second position and the first position are different, and the second image is an image frame acquired by the electronic equipment when the target shooting object moves to the second position;
When the electronic equipment comprises a depth device, determining a corresponding object distance after the target shooting object moves to the second position by utilizing the depth device;
determining a corresponding target zoom ratio from a preset zoom table according to the object distance, wherein the preset zoom table comprises a plurality of groups of object distances and target zoom ratios with one-to-one mapping relation;
determining a crop box according to the target zoom ratio, and cropping the second image according to the crop box;
and scaling the cropped second image according to the target zoom ratio to obtain a third image including the target photographic subject, wherein the size of the target photographic subject in the first image is consistent with its size in the third image.
With reference to the second aspect, in certain implementations of the second aspect, the electronic device is located in a same position when the first image and the third image are displayed.
With reference to the second aspect, in some implementations of the second aspect, when the second position is the same as the first position, the electronic device is located at a different position when the first image and the third image are displayed.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to further perform:
when the electronic device does not include a depth device, acquiring an AF motor Hall parameter, wherein the AF motor Hall parameter is used for indicating the distance of the AF motor in the coil magnetic field generated by the Hall device;
determining, according to the AF motor Hall parameter, the distance Δf by which the AF motor pushes the lens to move;
and determining a corresponding target zoom ratio from the preset zoom table according to the distance Δf by which the AF motor pushes the lens to move, wherein the preset zoom table further includes multiple groups of distances Δf and target zoom ratios in a one-to-one mapping relationship.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to further perform:
acquiring a depth image by using the depth device;
and determining the object distance of the target shooting object according to the depth image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to further perform:
detecting an operation indicating to run a camera application; or,
detecting an operation indicating to run a video call application.
It should be appreciated that the extensions, definitions, explanations and illustrations of the relevant content in the first aspect described above also apply to the same content in the second aspect.
In a third aspect, a video processing apparatus is provided, comprising means for performing any one of the video processing methods of the first aspect.
In one possible implementation, when the video processing apparatus is an electronic device, the processing unit may be a processor and the input unit may be a communication interface; the electronic device may further comprise a memory for storing computer program code which, when executed by the processor, causes the electronic device to perform any of the methods of the first aspect.
In a fourth aspect, there is provided a chip for application to an electronic device, the chip comprising one or more processors for invoking computer instructions to cause the electronic device to perform any of the video processing methods of the first aspect.
In a fifth aspect, there is provided a computer readable storage medium storing computer program code which, when executed by an electronic device, causes the electronic device to perform any one of the video processing methods of the first aspect.
In a sixth aspect, there is provided a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform any of the video processing methods of the first aspect.
Drawings
FIG. 1 is a schematic illustration of an application scenario suitable for use in the present application;
FIG. 2 is a schematic illustration of another application scenario suitable for use in the present application;
FIG. 3 is a schematic illustration of one optical imaging provided by an embodiment of the present application;
FIG. 4 is a schematic illustration of another optical imaging provided by an embodiment of the present application;
fig. 5 is a schematic diagram of optical imaging in which an AF motor drives a lens to move according to an embodiment of the present application;
FIG. 6 is a preset zoom table provided in an embodiment of the present application;
FIG. 7 is a schematic flow chart of a video processing method provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a display interface for video processing according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a hardware system suitable for use with the electronic device of the present application;
FIG. 10 is a schematic diagram of a software system suitable for use with the electronic device of the present application;
fig. 11 is a schematic structural diagram of a video processing apparatus provided in the present application;
Fig. 12 is a schematic structural diagram of a chip provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
First, some terms in the embodiments of the present application are explained for easy understanding by those skilled in the art.
1. Digital zoom refers to enlarging the photographed subject by means of the processor inside the camera, which enlarges the pixels of the image on the imaging component using an interpolation algorithm. Digital zoom does not actually change the focal length of the lens.
2. Optical zoom refers to zooming by moving the lens in the camera, thereby enlarging or reducing the photographed subject. The larger the optical zoom factor, the farther away the scene that can be photographed.
3. Autofocus (AF) refers to the electronic device obtaining the highest image-frequency components, and thus higher image contrast, by adjusting the position of the focusing lens. Autofocus is a continuously accumulating process: the electronic device compares the contrast of images captured with the lens at different positions to find the lens position at which the image contrast is maximal, and thereby determines the focus.
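As a minimal sketch of the contrast-comparison idea described in item 3 (not taken from the present application), autofocus can be outlined as follows; the image source, the contrast metric, and the candidate lens positions are all assumptions.

import numpy as np

def image_contrast(gray_image):
    # Use the variance of pixel intensities as a simple proxy for contrast/sharpness.
    return float(np.var(gray_image))

def autofocus(capture_at, lens_positions):
    # capture_at(pos) is assumed to return a grayscale frame (2-D numpy array)
    # captured with the focusing lens at position pos; the position giving the
    # highest contrast is chosen as the in-focus position.
    return max(lens_positions, key=lambda pos: image_contrast(capture_at(pos)))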
The foregoing is a simplified description of the terminology involved in the embodiments of the present application, and is not described in detail below.
The video processing method provided by the embodiment of the application can be applied to various electronic devices.
In the embodiment of the present application, the electronic device 100 may be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, a vehicle-mounted electronic device, an augmented reality (augmented reality, AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), a projector, or the like, and the embodiment of the present application does not limit the specific type of the electronic device 100.
Fig. 1 is a schematic diagram of an application scenario suitable for the present application. The schematic diagram shown in fig. 1 may also be referred to as a "video tracking" scene.
In one example, illustrated with the electronic device being a tablet device, a video mode display interface as shown in fig. 1 (a); the display interface may include a photographing interface 10; the shooting interface 10 can comprise a view-finding frame 11 and a control 12 for indicating video; the preview image may be displayed in the viewfinder 11 before the user's click on the control 12 is detected.
After detecting the user's operation of clicking the control 12, in response to the operation, the tablet device can start video shooting; when the first photographic subject (for example, a child) is at the first position, the viewfinder 11 displays the first image shown in (a) of FIG. 1. During video shooting, the first subject moves; for example, the first subject moves from the first position to a second position farther from the tablet device, and after the subject moves, the second image shown in (b) of FIG. 1 is displayed.
After the first subject moves, it can be kept at the middle position in the viewfinder 11 at all times, and its size can be kept substantially unchanged; this is the "video tracking" function. In other words, after the tablet device turns on the "video tracking" function, the position of the tablet device may remain unchanged, and after the subject moves, the subject can still be displayed in the middle of the video display frame with its size unchanged.
Fig. 2 is a schematic diagram of another application scenario suitable for the present application. The schematic diagram shown in FIG. 2 may also be referred to as a "sliding zoom" scene.
In one example, the electronic device is a mobile phone, such as the video mode display interface shown in fig. 2 (a); the display interface may include a photographic interface 20; the shooting interface 20 may include a viewfinder 21 and a control 22 for indicating video recording; the preview image may be displayed in the viewfinder 21 before the user's click on the control 22 is detected.
After detecting the operation of clicking the control 22 by the user, responding to the operation of the user, and shooting the video by the mobile phone; when the cellular phone is in the third position, the viewfinder 21 displays a third image as shown in fig. 2 (a); in the video shooting process, the position of a shooting object is unchanged, and a user holds a mobile phone to move; for example, the user moves the mobile phone from the third position to the fourth position closer to the subject, and the mobile phone moves the position to display the fourth image as shown in fig. 2 (b).
In this scenario, the sphere among the photographed objects is the second photographic subject, and the cones are the third photographic subject, the fourth photographic subject, and so on. After the mobile phone moves, the size of the sphere, as the second subject, can be kept constant in the viewfinder 21 and its position substantially unchanged, while the sizes of the other subjects, such as the third and fourth subjects, change. This shooting function is the "sliding zoom" function. In other words, after the mobile phone turns on the "sliding zoom" function, the user may hold the phone and push it away from the photographed subject, or pull it closer to the subject, and the photographed subject can still be displayed in the middle of the video display frame with its size unchanged.
The above-mentioned scenes shown in fig. 1 and fig. 2 are described by way of example, and the video processing method provided in the embodiment of the present application may be applied, but is not limited to, in the following scenes:
video call, video conference application, long and short video application, video live broadcast application, video net class application, portrait intelligent fortune mirror application scene, shooting scene such as system camera video recording function video, video monitoring, intelligent cat eye, etc.
In the related art, taking the first scenario as an example, during video shooting the camera of the electronic device recognizes the photographed subject in the acquired video image frames and then tracks it to locate the subject in real time; when the subject's position moves, digital zooming can be performed according to the size of the subject so that the size of the subject to be kept fixed remains substantially fixed in the final video; the frame is then cropped to an adapted display specification, so that the displayed picture is adjusted in real time according to the subject's position, achieving the effect of video tracking.
With this related-art processing, after the position of the photographed subject changes, the size of the subject changes with it, and there is a certain risk that the ratio or amplitude of the digital zoom cannot follow the position change accurately, so the resulting shooting effect is difficult to guarantee.
In view of this, the embodiment of the present application provides a video processing method. During video shooting, the camera of the electronic device captures frames at large resolution with a fixed field of view, recognizes the photographed subject (for example, a face or a human body) in the acquired video image frames, tracks it, and locates its position in real time. When the subject's position moves, the corresponding target zoom ratio is determined from a preset zoom table according to the depth of the subject, and the large-resolution video image frame is zoomed accordingly. Because depth information reflects the change of the photographed subject more accurately, the determined target zoom ratio is more reliable, and zooming according to the depth information better ensures that the subject whose size is to be fixed stays at a fixed size in the displayed video. Finally, the frame is cropped to an adapted display specification and the displayed picture is adjusted in real time according to the subject's position, achieving the effect of video tracking.
In the video processing method provided by the application, since the target zoom ratio needs to be determined by using the preset zoom table, the imaging and focusing principle of the camera and the provided preset zoom table are described first with reference to the accompanying drawings, and then the video processing method provided by the application is described with reference to the preset zoom table.
Fig. 3 shows a schematic diagram of one path of optical imaging. With reference to fig. 3, the optical imaging method may be applied to a camera or a camera in an electronic device, the camera including at least a lens and an imaging part.
The lens may include one or more lens elements; this application is illustrated with one lens. The imaging component is, for example, a charge-coupled image sensor. The optical axis of the lens coincides with that of the imaging component.
Assume the lens is fixed at position q1 with focal length f; the imaging component is fixed at position r1, so the distance between the lens center and the imaging component, referred to as the image distance v, is fixed; the size L of the subject is fixed, but its position can move. Under these conditions, as shown in (a) of FIG. 3, the imaging component generates an image during optical imaging; accordingly, when the photographed subject moves to a certain position, the light emitted from the subject is refracted by the lens and a corresponding image of the subject is formed on the imaging component.
It should be understood that the imaging corresponding to the subject refers to the subject in the video. The imaging corresponding to the photographic subject is local content in the image generated on the imaging section.
For example, if the subject is at the p1 position and the distance between the subject and the lens is u1, the corresponding size of the image corresponding to the subject in the image generated on the imaging means is I1 as shown by the image 1 in (b) of fig. 3.
If the subject moves in a direction approaching the lens and moves to the p2 position, the distance between the subject and the lens is shortened to u2, and the corresponding size of the image corresponding to the subject in the image generated on the imaging section is I2 as shown by the image 2 in (b) of fig. 3. Since the photographic subject is closer to the lens than the imaging 1, the size of the generated imaging 2 is relatively larger than that of the imaging 1.
If the subject continues to move toward the lens and reaches position p3, the distance between the subject and the lens shortens to u3, and the corresponding size of the image generated on the imaging component is I3, as shown by image 3 in (b) of FIG. 3. Since the subject is closer to the lens than for image 1 and image 2, the generated image 3 is naturally larger than image 2 and much larger than image 1.
In combination with the above light path diagram, it can be seen from the similar triangle:
I / L = f / u, that is, I = (L × f) / u
Wherein L is the size of the photographic subject, or referred to as the object size; u is the object distance, namely the distance between the shooting object and the center of the lens; i is the imaging size corresponding to the shooting object; f is the focal length of the lens, which is the distance from the center of the lens to the focal point of light collection, and is a fixed value for a fixed lens.
According to the above formula, when the lens position is fixed and the size of the photographing object is unchanged, the size of the image I corresponding to the photographing object is inversely proportional to the object distance u. The smaller the object distance u, the larger the corresponding imaged dimension I; conversely, the larger the object distance u, the smaller the corresponding imaged dimension I.
If the size of the subject's image on the imaging component is to remain unchanged while the object distance changes, the corresponding image needs to be enlarged when the subject moves away from the lens and reduced when the subject moves closer to the lens. This scaling is known as the digital zoom ratio.
In combination with the above, the corresponding digital zoom ratio at different object distances may be determined based on a ratio of the preset minimum object distance u' to the object distance u.
The shortest distance between the shooting object and the lens, indicated by the preset minimum object distance u', may be preset according to needs, which is not limited in the embodiment of the present application.
It should be noted that although the closer the subject is to the lens, the larger the corresponding image formed on the imaging component, the size of the imaging component is limited; when the image formed by the subject exceeds the size of the imaging component, imaging cannot be completed, so the subject cannot approach the lens without limit. Therefore, the shortest distance between the subject and the lens needs to be limited, and the preset minimum object distance u' is set to ensure the integrity of the imaging content corresponding to the subject.
It should also be noted that the farther the subject is from the lens, the smaller the corresponding image formed on the imaging component. Although the image can be enlarged according to the determined digital zoom ratio, when the subject is beyond a certain distance from the lens, the image of the subject on the imaging component becomes so small that it occupies extremely few pixels in the whole image, for example fewer than 100 pixels. At this point the electronic device may not be able to recognize the imaging content of the subject and therefore cannot perform the enlargement, so the maximum distance of the subject from the lens is also limited. For example, a standard lens is typically limited to no more than 5 meters and a telephoto lens to no more than 20 meters.
By way of example, FIG. 6 shows a preset zoom table.
Taking a standard lens as an example, as shown in FIG. 6, the preset minimum object distance u' may be 300 mm. If the object distance u between the subject and the lens is also 300 mm, the ratio of the preset minimum object distance to the object distance is 1, and the corresponding digital zoom ratio is 1. If the object distance u between the subject and the lens is 600 mm, the ratio of the preset minimum object distance to the object distance is 0.5, and the corresponding digital zoom ratio is 0.5. The digital zoom ratios corresponding to other object distances are calculated similarly and are not repeated here. It should be understood that 5000 mm is the farthest object distance corresponding to this lens.
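The relationship just described can be summarized in a short sketch; this is only an illustration of the u'/u ratio, with the example distances taken from the paragraph above.

# Illustrative sketch: digital zoom ratio as the ratio of the preset minimum
# object distance u' to the object distance u (distances in mm).
U_MIN_MM = 300  # preset minimum object distance u'

def digital_zoom_ratio(u_mm, u_min_mm=U_MIN_MM):
    return u_min_mm / u_mm

for u in (300, 600, 2000, 5000):
    print(u, round(digital_zoom_ratio(u), 2))  # 1.0, 0.5, 0.15, 0.06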
Fig. 4 shows a schematic diagram of another optical imaging. With reference to fig. 4 and 5, the optical imaging method may be applied to a camera or a camera in an electronic device, the camera including at least a lens and an imaging part.
From the principle of optical imaging it is known that:
1/f = 1/u + 1/v
where u is the object distance, v is the image distance, and f is the focal length.
As can be seen from the above optical imaging formula, when the distance between the photographic subject and the center of the lens, i.e., the object distance, is infinity, the image distance v and the focal length f are equal, and at this time, the position of the lens can be regarded as an initial position where the lens is disposed in the camera, for example, a q0 position as indicated in fig. 4.
On the basis, fig. 5 shows an optical imaging schematic diagram in which the AF motor drives the lens to move, and at this time, the camera further includes a focusing component. The focusing part may include an AF processing part and an AF motor.
The AF processing part is used for acquiring images formed on the imaging part and controlling the AF motor to rotate according to the definition degree of the acquired images. The AF motor rotates to drive the lens to move, so that the focusing function of the camera is realized. For example, the AF motor may indicate an optical image anti-shake (optical image stabilization, OIS) motor.
As shown in fig. 5, during focusing, the initial position of the lens is the position q0 where the image distance is equal to the focal length, and the distance between the photographing object and the lens, i.e., the object distance u, can be regarded as a fixed value. It should be appreciated that the fixed value is less than infinity. According to the optical imaging formula, focusing is the process of changing the image distance v by moving the position of the lens on the premise that the object distance u and the focal length f are fixed values, so that the image formed on the imaging component is clearer. In the focusing process, as shown in fig. 5, the lens is moved from the initial position q0 along the direction away from the imaging component, so that the image distance v can be changed, and the purpose of focusing is achieved.
From the above optical imaging formula:
v = (u × f) / (u − f)
from this, it can be deduced that the difference between the image distance and the focal length is:
Δf = v − f = f² / (u − f)
the difference Δf is the distance that the AF motor needs to push the lens to move, and the unit is micrometers (um).
Meanwhile, pushing the lens changes the size of the image formed on the imaging component. This change may be referred to as the AF zoom ratio, whose value may be determined according to the following formula:
AF zoom ratio = (f + Δf) / f = v / f
for example, taking a standard lens as an example, when the object distance u is infinity, the difference Δf between the image distance v and the focal length f is 0, and at this time, the lens is not moved, and no change of imaging size is generated.
As shown in FIG. 6, when the object distance u is 5000 mm, it can be determined from the above difference formula that, in order to focus, the AF motor needs to push the lens a distance Δf of about 5.005 um (approximately 5.01 um) away from the imaging component. At this time, the corresponding AF zoom ratio can be determined to be about 1.001; that is, the imaging size of the photographed subject hardly changes before and after the lens movement.
When the object distance u is 2000mm, it can be determined according to the above difference formula that the AF motor needs to push the lens to move a distance Δf of about 12.53um in a direction away from the imaging part. At this time, it is possible to determine that the corresponding AF zoom ratio is about 1.003, and the imaging size of the subject in the image on the imaging section is slightly changed before and after the lens movement.
When focusing at other object distances, the process of determining the distance that the AF motor needs to push the lens to move in the direction away from the imaging component and the AF zoom ratio are similar to the process described above, and the description is omitted here.
The calculation principles of the two zoom ratios are detailed above. When the object distance between the subject and the lens is large, the imaging size is affected mainly by the digital zoom ratio, and the effect of the AF motor pushing the lens is negligible; when the object distance gradually decreases, the lens movement driven by the AF motor has a non-negligible effect on the imaging size. Therefore, to keep the imaging size constant, the final zoom ratio needs to combine the digital zoom ratio and the AF zoom ratio. Here, the ratio of the digital zoom ratio to the AF zoom ratio may be taken as the final target zoom ratio.
For example, as shown in FIG. 6, when the object distance u is 5000 mm, the determined digital zoom ratio is 0.06 and the AF zoom ratio is 1.001; at this time, the corresponding target zoom ratio can be determined to be 0.06, the same as the digital zoom ratio.
When the object distance u is 2000mm, the determined digital zoom ratio is 0.15, and the AF zoom ratio is 1.003, and at this time, the corresponding target zoom ratio can be determined to be 0.15, which is the same as the digital zoom ratio in size.
By analogy, when the object distance u is 300 mm, the determined digital zoom ratio is 1 and the AF zoom ratio is 1.017; at this time, the corresponding target zoom ratio can be determined to be 0.98. Compared with the digital zoom ratio, the final target zoom ratio is reduced by a certain amount so as to keep the imaging size fixed.
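The three worked examples above can be reproduced with a short sketch that combines the formulas derived earlier (digital zoom ratio u'/u, Δf = f²/(u − f), AF zoom ratio (f + Δf)/f, and target zoom ratio = digital zoom ratio / AF zoom ratio). The focal length of 5 mm assumed below is an inference made only so that the computed values line up with the quoted numbers; an actual preset zoom table would be generated from the real lens parameters.

# Illustrative sketch: generating preset zoom table entries (all distances in mm).
F_MM = 5.0        # assumed focal length of the standard lens (illustrative inference)
U_MIN_MM = 300.0  # preset minimum object distance u'

def zoom_table_entry(u_mm):
    delta_f_mm = F_MM ** 2 / (u_mm - F_MM)        # Δf = f^2 / (u - f)
    af_zoom = (F_MM + delta_f_mm) / F_MM          # AF zoom ratio = (f + Δf) / f
    digital_zoom = U_MIN_MM / u_mm                # digital zoom ratio = u' / u
    target_zoom = digital_zoom / af_zoom          # target zoom ratio
    return delta_f_mm * 1000.0, af_zoom, digital_zoom, target_zoom  # Δf in um

for u in (5000, 2000, 300):
    d_um, af, dz, tz = zoom_table_entry(u)
    print(f"u={u} mm  Δf≈{d_um:.2f} um  AF≈{af:.3f}  digital={dz:.2f}  target≈{tz:.2f}")
# Prints values close to those quoted above: about 5.01 um / 1.001 / 0.06 / 0.06,
# 12.53 um / 1.003 / 0.15 / 0.15, and 84.75 um / 1.017 / 1.00 / 0.98.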
It should be understood that fig. 6 is only a preset zoom table, and the focal length and object distance of the lens can be set and modified according to needs, which is not limited in any way in the embodiment of the present application.
It should also be appreciated that once the preset zoom table is determined, it may be stored for recall during the video processing provided subsequently.
The video processing method provided in the embodiment of the present application is described in detail below with reference to the preset zoom table determined above.
Fig. 7 is a schematic flowchart of a video processing method provided in an embodiment of the present application. The video processing method 10 as shown in fig. 7 may include the following S11 to S20, and these steps are described in detail below, respectively.
The video processing method provided by the embodiment of the application can be used for a video mode, wherein the video mode can refer to video shooting of electronic equipment; alternatively, the video mode may refer to the electronic device performing a video call.
In one possible implementation, a "video tracking" function may be provided in the settings interface of the electronic device; after an application for video calls in the electronic device is run, the "video tracking" function may be turned on automatically, so as to execute the video processing method of the embodiment of the present application.
In one possible implementation, a "video tracking" function may be provided in the camera of the electronic device; according to this setting, the "video tracking" function can be turned on when recording video, so as to execute the video processing method of the embodiment of the present application.
S11, requesting to turn on the camera.
For example, an application in the electronic device issues an instruction requesting to turn on the camera; applications may include, but are not limited to: weChat video call applications, video conferencing applications, video live applications, video recording applications, camera applications, etc.
In one example, a camera application of an electronic device may request that the camera be turned on while recording video.
For example, as shown in fig. 8, it may be that the user requests to turn on the camera when clicking an icon 81 of the camera application program to take a video.
In one example, a WeChat video call application in an electronic device may request to turn on a camera when a video invitation is initiated or received.
For example, as shown in FIG. 8, this may refer to the camera being requested to turn on when the user clicks the icon 82 of a video call application to make a video call.
S12, after the camera detects an instruction for requesting to open the camera, acquiring video image frames.
For example, the camera may refer to an image sensor in a camera module; the video image frames refer to the image frames acquired in real time by the camera's image sensor when the photographed subject moves relative to the camera and the distance between them increases or decreases (i.e., the z-position of the subject changes), or when the camera moves relative to the subject and the distance increases or decreases (i.e., the z-position of the camera changes).
Illustratively, the resolution size of the video image frames acquired by the camera may be full size (full size).
For example, if the maximum resolution supported by the lens in the camera module is 4096×2160, the resolution of the acquired full-size video image frame may be 4096×2160.
S13, detecting the video image frames, and determining and tracking the shooting object.
Currently, tracking in a video display frame is typically achieved by human-body detection of the photographed subject in the video image frames; human-body detection generally adopts a human-body detection and tracking algorithm that detects key points of the subject, which may include, but are not limited to: head, shoulders, arms, hands, legs, feet, eyes, nose, mouth, clothing, and so on. However, the amount of computation of such a human-body detection and tracking algorithm is large, which places high performance requirements on the electronic device.
For example, an existing face detection algorithm may be used to detect a face of a video image frame acquired by a camera, and determine that the detected face is a corresponding shooting object.
It should be appreciated that when multiple faces are identified, a primary subject may be recommended to the user through background computation, or the user may be prompted to select a primary subject.
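A trivial sketch of one such selection policy (picking the largest detected face as the primary subject) is given below; the detector output format is an assumption, and the present application does not mandate this heuristic.

def pick_primary_subject(face_boxes):
    # face_boxes is assumed to be a list of (x, y, w, h) bounding boxes from a face detector;
    # the face with the largest area is recommended as the primary photographed subject.
    if not face_boxes:
        return None
    return max(face_boxes, key=lambda box: box[2] * box[3])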
S14, determining whether the electronic equipment comprises a depth device, if yes, executing S15, otherwise, executing S16.
The depth device is used for acquiring a depth image, and the depth image is used for reflecting the depth of a shooting object. The depth devices may include, for example, laser sensors, time of flight (TOF) sensors, phase focus (phase detect auto focus, PDAF) sensors, dual cameras, etc., although other devices may be included, and the embodiments of the present application are not limited in this respect.
And S15, when the electronic equipment comprises a depth device, acquiring a depth image by using the depth device, and determining the depth, namely the object distance, of the shooting object according to the depth image.
When the electronic device includes a depth device, the depth device can acquire a depth image, and from the depth image the depth, i.e. the object distance, of different people or objects in the shooting scene relative to the lens can be determined, including the depth corresponding to the photographed subject. The manner in which the depth device obtains depth can be determined using the prior art, which is not limited by the embodiments of the present application.
It should be understood that obtaining the object distance of the photographed subject by means of the depth device included in the electronic device allows, on the one hand, the data of the depth device to be reused, reducing the amount of computation; on the other hand, because the depth device determines depth with high accuracy, the object distance information obtained is accurate, which in turn improves the accuracy of the subsequent zooming.
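As a hedged sketch of how the object distance might be read out of a depth image (the depth-image format, the bounding box, and the median statistic are all assumptions, not requirements of S15):

import numpy as np

def object_distance_from_depth(depth_image, subject_box):
    # depth_image: 2-D array of per-pixel distances (e.g., in mm) from the depth device.
    # subject_box: (x, y, w, h) of the tracked subject from the detection step in S13.
    x, y, w, h = subject_box
    region = depth_image[y:y + h, x:x + w]
    valid = region[region > 0]          # ignore pixels with no depth measurement
    return float(np.median(valid)) if valid.size else None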
S16, when the depth device is not included, acquiring an AF motor Hall parameter, and determining the distance Deltaf of the AF motor pushing the lens to move according to the AF motor Hall parameter.
Wherein the electronic device further comprises a hall device. The AF motor hall parameter is used to indicate the distance of the AF motor in the coil magnetic field generated by the hall device.
For example, by determining the distance of the AF motor in the coil magnetic field corresponding to the currently acquired image and the distance corresponding to the previously acquired image frame, the change in the AF motor's distance can be determined from the two; this change is the distance Δf by which the AF motor pushes the lens to move.
It should be appreciated that when the electronic device does not include a depth device, it can quickly and conveniently determine the distance Δf by which the AF motor pushes the lens to move by reading the internal AF motor Hall parameter.
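A minimal sketch of the frame-to-frame computation described in S16 is given below; it assumes the Hall reading has already been converted into a distance in micrometers, since the conversion from raw Hall codes is device-specific calibration data not described here.

def lens_shift_um(hall_distance_current_um, hall_distance_previous_um):
    # Δf is taken as the change in the AF motor's position in the coil magnetic field
    # between the previously acquired frame and the currently acquired frame.
    return abs(hall_distance_current_um - hall_distance_previous_um)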
S17, when the object distance of the photographed subject is determined according to S15, querying the corresponding target zoom ratio from the preset zoom table according to the object distance; when the distance Δf by which the AF motor pushes the lens to move is determined according to S16, querying the corresponding target zoom ratio from the preset zoom table according to Δf.
For example, the preset zoom table may be a zoom table as shown in fig. 6, and when determining the object distance of the shooting object, the corresponding target zoom ratio may be queried from fig. 6 according to the object distance. When the distance Δf by which the AF motor pushes the lens to move is determined, the corresponding target zoom ratio may be queried from fig. 6 according to Δf.
S18, determining a crop box according to the target zoom ratio, and cropping the video image frame according to the crop box.
The coordinate information of the corresponding crop box can be determined according to the coordinate information of the photographed subject in the video image frame and the target zoom ratio, and the video image frame is then cropped according to the coordinate information of the crop box to obtain the display content.
It should be appreciated that the larger the target zoom ratio, the smaller the size of the crop box; the smaller the target zoom ratio, the larger the size of the crop box.
On this basis, for example, based on the coordinate information of the crop box and the frame coordinate information of the video image frame, the crop-box coordinates and the target zoom ratio corresponding to the video image frame may be adjusted according to an adjustment strategy over N video image frames (for example, according to a smoothness requirement).
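A sketch of the cropping step is given below. The mapping from the target zoom ratio to the crop-box size depends on the convention of the preset zoom table and is therefore treated as a given fraction here; centering the crop box on the subject and clamping it to the frame boundary are illustrative assumptions, not requirements of S18.

def compute_crop_box(frame_w, frame_h, subject_cx, subject_cy, crop_scale):
    # crop_scale: fraction of the full frame to keep, derived from the target zoom ratio.
    crop_w, crop_h = int(frame_w * crop_scale), int(frame_h * crop_scale)
    x = min(max(subject_cx - crop_w // 2, 0), frame_w - crop_w)  # keep the box inside the frame
    y = min(max(subject_cy - crop_h // 2, 0), frame_h - crop_h)
    return x, y, crop_w, crop_h

def crop(frame, box):
    # frame is assumed to be an array indexed as [row, column], e.g. a numpy image.
    x, y, w, h = box
    return frame[y:y + h, x:x + w]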
And S19, scaling the cut video image frame to a fixed resolution according to the target zoom ratio to obtain a target video image frame.
For example, the scaling is performed centered on the center of the cropped video image frame.
The cropped video image frame is the display content; the display content can be scaled according to the target zoom ratio so that the processed video image frame meets the fixed display resolution.
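As an illustrative sketch of S19 (OpenCV is used here only for convenience; any resampling routine would do, and the 1920×1080 output size is the example resolution mentioned in S20):

import cv2

TARGET_RESOLUTION = (1920, 1080)  # (width, height) of the fixed display resolution

def scale_to_display(cropped_frame, target_size=TARGET_RESOLUTION):
    # Enlarge or shrink the cropped display content to the fixed output resolution.
    return cv2.resize(cropped_frame, target_size, interpolation=cv2.INTER_LINEAR)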
And S20, displaying the target video image frame in the application program.
For example, the video image frames after the cropping and scaling processing are transmitted to an application program, and the video image frames are displayed in the application program.
Illustratively, the resolution of the target video image frame after cropping and scaling matches the 1920×1080 resolution of the display screen of the electronic device; the processed target video image frame is then transmitted to the application, and the display screen of the electronic device can display a target video image frame that fits the display specification of the electronic device.
In an embodiment of the present application, a video image frame of the photographed subject is acquired after the subject moves, and/or after the electronic device moves; the video image frames are detected, and the photographed subject is determined and tracked. Then, when the electronic device includes a depth device, the depth of the subject is determined by means of the depth device; when the electronic device does not include a depth device, the distance Δf by which the AF motor pushes the lens to move is determined by means of the AF motor Hall parameter. The preset zoom table is queried according to the depth or Δf to determine the corresponding target zoom ratio; a crop box is then determined according to the target zoom ratio, the video image is cropped according to the crop box, and the result is scaled to the fixed resolution according to the target zoom ratio to obtain the display content. In the embodiment of the present application, the final target zoom ratio can be determined by querying the preset zoom table according to the depth of the photographed subject, and the depth information reflects the change of the photographed subject more accurately.
In addition, looking up the corresponding target zoom ratio in the preset zoom table for subsequent processing is simpler and faster than the prior art: it saves the computation the electronic device would otherwise spend calculating the zoom ratio, improves processing efficiency, saves time, and reduces the power consumption of the electronic device.
The video processing method, the optical principle and the applicable scenario according to the embodiments of the present application are described above in connection with fig. 1 to 8. The software system, hardware system, device and chip of the electronic apparatus to which the present application is applied will be described in detail below with reference to fig. 9 to 12. It should be understood that the software system, the hardware system, the device and the chip in the embodiments of the present application may perform the various methods in the embodiments of the present application, that is, the specific working processes of the various products below may refer to the corresponding processes in the embodiments of the methods described above.
Fig. 9 shows a hardware system suitable for use in the electronic device of the present application. The electronic device 100 may be used to implement the video processing method described in the method embodiments described above.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 9 does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than those shown in FIG. 9, or electronic device 100 may include a combination of some of the components shown in FIG. 9, or electronic device 100 may include sub-components of some of the components shown in FIG. 9. The components shown in fig. 9 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: application processors (application processor, AP), modem processors, graphics processors (graphics processing unit, GPU), image signal processors (image signal processor, ISP), controllers, video codecs, digital signal processors (digital signal processor, DSP), baseband processors, neural-Network Processors (NPU). The different processing units may be separate devices or integrated devices.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. For example, the processor 110 may include at least one of the following interfaces: inter-integrated circuit (inter-integrated circuit, I2C) interfaces, inter-integrated circuit audio (inter-integrated circuit sound, I2S) interfaces, pulse code modulation (pulse code modulation, PCM) interfaces, universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interfaces, mobile industry processor interfaces (mobile industry processor interface, MIPI), general-purpose input/output (GPIO) interfaces, SIM interfaces, USB interfaces.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). The I2S interface may be used for audio communication. The PCM interface may also be used for audio communication, to sample, quantize and encode analog signals. The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus, which converts the data to be transmitted between serial communication and parallel communication. The MIPI interface may be used to connect the processor 110 with peripheral devices such as the display 194 and camera 193. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like.
In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100. The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal interface as well as a data signal interface.
In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, and the sensor module 180. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, or a MIPI interface.
The USB interface 130 is an interface conforming to the USB standard specification, and may be, for example, a Mini (Mini) USB interface, a Micro (Micro) USB interface, or a C-type USB (USB Type C) interface. The USB interface 130 may be used to connect a charger to charge the electronic device 100, to transfer data between the electronic device 100 and a peripheral device, and to connect a headset to play audio through the headset. The USB interface 130 may also be used to connect other electronic devices 100, such as AR devices.
The connection relationship between the modules shown in fig. 9 is merely illustrative, and does not limit the connection relationship between the modules of the electronic device 100. Alternatively, the modules of the electronic device 100 may also use a combination of the various connection manners in the foregoing embodiments.
The charge management module 140 is used to receive power from a charger. The charge management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142. The power management module 141 is used to connect the battery 142, the charge management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery state of health (e.g., leakage, impedance). Alternatively, the power management module 141 may be provided in the processor 110, or the power management module 141 and the charge management module 140 may be provided in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution for wireless communication applied on the electronic device 100, such as at least one of the following: a second generation (2G) mobile communication solution, a third generation (3G) mobile communication solution, a fourth generation (4G) mobile communication solution, and a fifth generation (5G) mobile communication solution.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through audio devices (e.g., speaker 170A, receiver 170B) or displays images or video through display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
Similar to the mobile communication module 150, the wireless communication module 160 may also provide wireless communication solutions applied on the electronic device 100, such as at least one of the following: wireless local area networks (wireless local area networks, WLAN), bluetooth (BT), bluetooth low energy (bluetooth low energy, BLE), ultra Wide Band (UWB), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (IR) technologies.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 of electronic device 100 is coupled to wireless communication module 160 such that electronic device 100 may communicate with networks and other electronic devices via wireless communication techniques.
The electronic device 100 may implement display functions through a GPU, a display screen 194, and an application processor. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 may be used to display images or video. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini light-emitting diode (Mini LED), a Micro light-emitting diode (Micro LED), a Micro OLED (Micro OLED), or a quantum dot LED (quantum dot light emitting diodes, QLED). In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. The ISP can carry out algorithm optimization on noise, brightness and color of the image, and can optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into a standard Red Green Blue (RGB), YUV, etc. format image signal. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Illustratively, in embodiments of the present application, the camera 193 may acquire video image frames, which may refer to acquired full-size image frames; the camera 193 may transmit the acquired video image frames to the ISP, which processes the video image frames acquired by the camera 193; for example, the ISP may obtain the target zoom ratio and the cropping parameters from the processor 110; the ISP cuts the full-size video image frames according to the cropping parameters, and scales the cut video image frames according to the target zoom ratio to obtain target video image frames with a fixed resolution that matches the resolution of the display screen 194; the target video image frames are transmitted to the application program, and the display screen 194 displays the processed video image frames.
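The cut-then-scale step performed by the ISP can be sketched as below. The full-size frame shape, the display resolution, and the rule of centring the cutting frame on the tracked subject are assumptions made here for illustration only.

```python
# Sketch of cutting a full-size frame and scaling it to a fixed display
# resolution (frame/display sizes and the centring rule are assumptions).
import numpy as np

def compute_cutting_frame(frame_w, frame_h, subject_cx, subject_cy, zoom_ratio):
    """Cutting frame sized frame/zoom_ratio, centred on the tracked subject
    and clamped to the frame bounds."""
    crop_w, crop_h = int(frame_w / zoom_ratio), int(frame_h / zoom_ratio)
    left = min(max(subject_cx - crop_w // 2, 0), frame_w - crop_w)
    top = min(max(subject_cy - crop_h // 2, 0), frame_h - crop_h)
    return left, top, crop_w, crop_h

def cut_and_scale(frame, cutting_frame, display_size=(1920, 1080)):
    """Cut the frame and resize it to the fixed display resolution using
    simple nearest-neighbour indexing to keep the sketch dependency-light."""
    left, top, w, h = cutting_frame
    cut = frame[top:top + h, left:left + w]
    out_w, out_h = display_size
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return cut[rows][:, cols]
```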
Illustratively, in embodiments of the present application, the detection and tracking calculations, clipping and scaling parameter calculations may be performed in the processor 110. It should be appreciated that the relevant steps of determining parameters in the video processing method of the present application may be performed in the processor 110; the ISP is configured to obtain relevant parameters for processing the video image frames, and process the video image frames according to the relevant parameters to obtain target video image frames suitable for display specifications of a display screen 194 of the electronic device.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, and MPEG4.
The external memory interface 120 may be used to connect an external memory card, such as a Secure Digital (SD) card, to enable expanding the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area.
The electronic device 100 may implement audio functions, such as music playing and recording, through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal output, and may also be used to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals.
The speaker 170A, also referred to as a horn, is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music or hands-free conversation through the speaker 170A. A receiver 170B, also referred to as an earpiece, converts the audio electrical signal into a sound signal.
In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A may be of various types, such as a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor. The capacitive pressure sensor may be a device comprising at least two parallel plates with conductive material, and when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic apparatus 100 detects the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon; and executing the instruction of newly creating the short message when the touch operation with the touch operation intensity being larger than or equal to the first pressure threshold acts on the short message application icon.
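The pressure-threshold behaviour in the example above can be sketched as a small dispatch function; the threshold value and the action names are placeholders, not values from the patent.

```python
# Sketch of pressure-based dispatch on the messaging icon (values assumed).
FIRST_PRESSURE_THRESHOLD = 0.5  # normalised pressure, illustrative value

def on_message_icon_touch(pressure):
    """Light press views the message; a press at or above the threshold
    creates a new message, mirroring the example in the text."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_message"
    return "create_new_message"
```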
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x-axis, y-axis, and z-axis) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B can also be used for scenes such as navigation and motion sensing games.
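The compensation step can be approximated with a simple pinhole model, in which the required lens travel is roughly the focal length times the tangent of the shake angle; this model and the numbers below are illustrative assumptions, not the device's actual optical image stabilisation algorithm.

```python
# Rough sketch of shake compensation (pinhole approximation, assumed model).
import math

def lens_compensation_mm(shake_angle_deg, focal_length_mm):
    """Approximate lens displacement that cancels a shake of the given angle."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))

# Example: a 0.2 degree shake with an assumed 6 mm lens needs about 0.02 mm of travel.
offset_mm = lens_compensation_mm(0.2, 6.0)
```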
The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip cover using the magnetic sensor 180D.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically, x-axis, y-axis, and z-axis). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the gesture of the electronic device 100 as an input parameter for applications such as landscape switching and pedometer.
The distance sensor 180F is used to measure a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 100 may range using the distance sensor 180F to achieve fast focus.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, for example, a photodiode. The LED may be an infrared LED. The electronic device 100 emits infrared light outward through the LED. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When the reflected light is detected, the electronic device 100 may determine that an object is present nearby. When no reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect whether the user holds the electronic device 100 close to the ear for talking, so as to automatically extinguish the screen for power saving. The proximity light sensor 180G may also be used for automatic unlocking and automatic screen locking in holster mode or pocket mode.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to perform functions such as unlocking, accessing an application lock, taking a photograph, and receiving an incoming call.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 performs a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
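The three temperature branches described above read naturally as a small policy function; the threshold values below are placeholders, since the text does not specify them.

```python
# Sketch of the thermal policy (threshold values are illustrative placeholders).
HIGH_TEMP_C = 45.0       # above this, throttle the nearby processor
LOW_TEMP_C = 0.0         # below this, heat the battery
VERY_LOW_TEMP_C = -10.0  # below this, also boost the battery output voltage

def thermal_policy(temp_c):
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("reduce_processor_performance")
    if temp_c < LOW_TEMP_C:
        actions.append("heat_battery")
    if temp_c < VERY_LOW_TEMP_C:
        actions.append("boost_battery_output_voltage")
    return actions
```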
The touch sensor 180K is also referred to as a touch device. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also referred to as a touch-controlled screen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor 180K may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulsation signal.
The keys 190 include a power-on key and a volume key. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input signal and implement a function related to the key input signal.
The motor 191 may generate vibration. The motor 191 may be used for incoming call alerting as well as for touch feedback. The motor 191 may generate different vibration feedback effects for touch operations acting on different applications. The motor 191 may also produce different vibration feedback effects for touch operations acting on different areas of the display screen 194. Different application scenarios (e.g., time alert, receipt message, alarm clock, and game) may correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, which may be used to indicate a charging state and a change in battery level, or to indicate a message, a missed call, and a notification.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195 to make contact with the electronic device 100, or may be removed from the SIM card interface 195 to make separation from the electronic device 100.
The hardware system of the electronic device 100 is described in detail above, and the software system of the electronic device 100 is described below. The software system may employ a layered architecture, an event-driven architecture, a microkernel architecture, a micro-service architecture, or a cloud architecture; the embodiments of the present application take a layered architecture as an example to illustratively describe the software system of the electronic device 100.
As shown in fig. 10, the software system using the layered architecture is divided into several layers, each of which has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the software system may be divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include camera, gallery, calendar, conversation, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The video processing method of the embodiments of the present application can be applied to a camera application program or a video application program; for example, the "video tracking" function may be turned on in the settings of the electronic device, and after the electronic device detects an instruction that the video application requests to turn on the camera, the "video tracking" function may be enabled; alternatively, the "video tracking" function may be provided in the camera application program, and the electronic device may enable the "video tracking" function after detecting an instruction that the camera application program requests to turn on the camera; the "video tracking" function may be described with reference to fig. 7.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer may include some predefined functions.
For example, the application framework layer includes a window manager, a content provider, a view system, a telephony manager, a resource manager, and a notification manager.
The window manager is used for managing window programs. The window manager may obtain the display screen size, determine if there are status bars, lock screens, and intercept screens.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, and phonebooks.
The view system includes visual controls, such as controls to display text and controls to display pictures. The view system may be used to build applications. The display interface may be composed of one or more views, for example, a display interface including a text notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide communication functions of the electronic device 100, such as management of call status (on or off).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, and video files.
The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages, which may automatically disappear after a short stay without requiring user interaction.
The Android runtime (Android runtime) includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing functions such as management of object life cycle, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules, such as: a surface manager (surface manager), media libraries (Media Libraries), three-dimensional graphics processing libraries (e.g., the open graphics library for embedded systems (open graphics library for embedded systems, OpenGL ES)), and 2D graphics engines (e.g., the skia graphics library (skia graphics library, SGL)).
The surface manager is used to manage the display subsystem and provides a fusion of the 2D and 3D layers for the plurality of applications.
The media library supports playback and recording of multiple audio formats, playback and recording of multiple video formats, and still image files. The media library may support a variety of audio video coding formats such as MPEG4, h.264, moving picture experts group audio layer 3 (moving picture experts group audio layer III, MP 3), advanced audio coding (advanced audio coding, AAC), adaptive multi-rate (AMR), joint picture experts group (joint photographic experts group, JPG), and portable network graphics (portable network graphics, PNG).
Three-dimensional graphics processing libraries may be used to implement three-dimensional graphics drawing, image rendering, compositing, and layer processing.
The two-dimensional graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer may include a display driver, a camera driver, an audio driver, a sensor driver, and the like.
The workflow of the software system and hardware system of the electronic device 100 is illustrated below in connection with a photographing scene.
When a user performs a touch operation on the touch sensor 180K, a corresponding hardware interrupt is sent to the kernel layer, which processes the touch operation into a raw input event, for example, information including touch coordinates and a time stamp of the touch operation. The original input event is stored in the kernel layer, and the application framework layer acquires the original input event from the kernel layer, identifies a control corresponding to the original input event, and notifies an Application (APP) corresponding to the control. For example, the touch operation is a click operation, the APP corresponding to the control is a camera APP, and after the camera APP is awakened by the click operation, the camera APP may call the camera driver of the kernel layer through the API, and the camera driver controls the camera 193 to shoot.
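The event path from the touch sensor to the camera driver can be sketched roughly as below; the event fields, the control-to-application mapping, and the driver call are assumptions used only to make the flow concrete.

```python
# Rough sketch of the touch-event flow (all structures here are assumptions).
import time

def make_raw_input_event(x, y):
    """Kernel layer: wrap the touch into a raw input event carrying the
    touch coordinates and a timestamp, as described above."""
    return {"x": x, "y": y, "timestamp": time.time()}

CONTROL_TO_APP = {"camera_icon": "camera_app"}  # framework-layer mapping (assumed)

def dispatch(raw_event, hit_control):
    """Framework layer: identify the control under the touch and notify the
    corresponding application, which then calls into the camera driver."""
    if CONTROL_TO_APP.get(hit_control) == "camera_app":
        return "camera_driver.start_capture()"  # placeholder for the driver call
    return None

dispatch(make_raw_input_event(120, 480), "camera_icon")
```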
Fig. 11 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application. The video processing apparatus 200 includes a display unit 210 and a processing unit 220.
The display unit 210 is configured to display a first image, where the first image is an image frame of the target photographic subject at a first position.
The processing unit 220 is configured to obtain a second image when the target shooting object moves to a second position, where the second position is different from the first position, and the second image is an image frame acquired by the electronic device when the target shooting object moves to the second position; when the electronic equipment comprises a depth device, determining a corresponding object distance after the target shooting object moves to a second position by utilizing the depth device; determining a corresponding target zoom ratio from a preset zoom table according to the object distance, wherein the preset zoom table comprises a plurality of groups of object distances and target zoom ratios with a one-to-one mapping relation; determining a cutting frame according to the target zoom ratio, and cutting a second image according to the cutting frame; and scaling the cut second image according to the target zoom ratio to obtain a third image comprising the target shooting object, wherein the size of the target shooting object in the first image is consistent with the size of the target shooting object in the third image.
Optionally, as an embodiment, when the first image and the third image are displayed, the electronic device is located at the same position.
Alternatively, as an embodiment, when the second position and the first position are the same position, the position where the electronic device is located is different when the first image and the third image are displayed.
Optionally, as an embodiment, the processing unit 220 is further configured to obtain the AF motor hall parameter when the electronic device does not include the depth device; determining the distance delta f of the AF motor for pushing the lens to move according to the Hall parameter of the AF motor; and determining a corresponding target zoom ratio from a preset zoom table according to the distance delta f of the AF motor for pushing the lens to move, wherein the preset zoom table also comprises a plurality of groups of the distance delta f of the AF motor for pushing the lens to move and the target zoom ratio, wherein the plurality of groups of the distance delta f of the AF motor for pushing the lens to move and the target zoom ratio have a one-to-one mapping relation.
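The fallback branch without a depth device can be sketched as below; the linear Hall-code-to-displacement calibration and the table entries are assumptions for illustration.

```python
# Sketch of the no-depth-device branch: Hall parameter -> delta f -> zoom ratio
# (calibration constant and table entries are assumed values).
HALL_UM_PER_CODE = 0.05  # assumed micrometres of lens travel per Hall code

ZOOM_TABLE_BY_DELTA_F = {5.0: 1.0, 10.0: 1.4, 20.0: 2.0}  # delta f (um) -> ratio

def delta_f_from_hall(hall_code, reference_code=0):
    """Convert the AF motor Hall reading into the lens displacement delta f."""
    return (hall_code - reference_code) * HALL_UM_PER_CODE

def zoom_ratio_from_hall(hall_code):
    delta_f = delta_f_from_hall(hall_code)
    nearest = min(ZOOM_TABLE_BY_DELTA_F, key=lambda k: abs(k - delta_f))
    return ZOOM_TABLE_BY_DELTA_F[nearest]
```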
Optionally, as an embodiment, the processing unit 220 is further configured to acquire a depth image with the depth device; and determining the object distance of the target shooting object according to the depth image.
Optionally, as an embodiment, the processing unit 220 is further configured to detect an operation indicating to run the camera application; alternatively, an operation is detected that indicates that the video telephony application is running.
Optionally, as an embodiment, the target shooting object includes at least one user.
The video processing apparatus 200 is embodied as a functional unit. The term "unit" herein may be implemented in software and/or hardware, without specific limitation.
For example, a "unit" may be a software program, a hardware circuit or a combination of both that implements the functions described above. The hardware circuitry may include application specific integrated circuits (application specific integrated circuit, ASICs), electronic circuits, processors (e.g., shared, proprietary, or group processors, etc.) and memory for executing one or more software or firmware programs, merged logic circuits, and/or other suitable components that support the described functions.
Thus, the elements of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 12 shows a schematic structural diagram of an electronic device provided in the present application. The dashed lines in fig. 12 indicate that the unit or the module is optional, and the electronic device 300 may be used to implement the video processing method described in the above method embodiments.
The electronic device 300 includes one or more processors 301, and the one or more processors 301 may support the electronic device 300 to implement the methods in the method embodiments. The processor 301 may be a general purpose processor or a special purpose processor. For example, the processor 301 may be a central processing unit (central processing unit, CPU), a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA), or another programmable logic device such as discrete gates, transistor logic, or discrete hardware components.
The processor 301 may be configured to control the electronic device 300, execute a software program, and process data of the software program. The electronic device 300 may further comprise a communication unit 305 for enabling input (reception) and output (transmission) of signals.
For example, the electronic device 300 may be a chip, the communication unit 305 may be an input and/or output circuit of the chip, or the communication unit 305 may be a communication interface of the chip, which may be an integral part of a terminal device or other electronic device.
For another example, the electronic device 300 may be a terminal device, the communication unit 305 may be a transceiver of the terminal device, or the communication unit 305 may be a transceiver circuit of the terminal device.
The electronic device 300 may include one or more memories 302 having a program 304 stored thereon, the program 304 being executable by the processor 301 to generate instructions 303 such that the processor 301 performs the video processing method described in the above method embodiments according to the instructions 303.
Optionally, the memory 302 may also have data stored therein. Alternatively, the processor 301 may also read data stored in the memory 302, which may be stored at the same memory address as the program 304, or which may be stored at a different memory address than the program 304.
The processor 301 and the memory 302 may be separately provided or may be integrated together; for example, integrated on a System On Chip (SOC) of the terminal device.
Illustratively, the memory 302 may be used to store the related program 304 of the video processing method provided in the embodiments of the present application, and the processor 301 may be used to invoke the related program 304 of the video processing method stored in the memory 302 during video processing, to execute the video processing method of the embodiments of the present application; for example:
The display unit 210 is configured to display a first image, where the first image is an image frame of the target photographic subject at a first position.
The processing unit 220 is configured to obtain a second image when the target shooting object moves to a second position, where the second position is different from the first position, and the second image is an image frame acquired by the electronic device when the target shooting object moves to the second position; when the electronic equipment comprises a depth device, determining a corresponding object distance after the target shooting object moves to a second position by utilizing the depth device; determining a corresponding target zoom ratio from a preset zoom table according to the object distance, wherein the preset zoom table comprises a plurality of groups of object distances and target zoom ratios with a one-to-one mapping relation; determining a cutting frame according to the target zoom ratio, and cutting a second image according to the cutting frame; and scaling the cut second image according to the target zoom ratio to obtain a third image comprising the target shooting object, wherein the size of the target shooting object in the first image is consistent with the size of the target shooting object in the third image.
The present application also provides a computer program product which, when executed by the processor 301, implements the video processing method according to any of the method embodiments of the present application.
The computer program product may be stored in the memory 302, for example, the program 304, and the program 304 is finally converted into an executable object file capable of being executed by the processor 301 through preprocessing, compiling, assembling, and linking.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a computer, implements the video processing method according to any of the method embodiments of the present application. The computer program may be a high-level language program or an executable object program.
Optionally, the computer readable storage medium is, for example, memory 302. The memory 302 may be either volatile memory or nonvolatile memory, or the memory 302 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working processes and technical effects of the apparatus and device described above may refer to corresponding processes and technical effects in the foregoing method embodiments, which are not described in detail herein.
In several embodiments provided in the present application, the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, some features of the method embodiments described above may be omitted, or not performed. The above-described apparatus embodiments are merely illustrative, the division of units is merely a logical function division, and there may be additional divisions in actual implementation, and multiple units or components may be combined or integrated into another system. In addition, the coupling between the elements or the coupling between the elements may be direct or indirect, including electrical, mechanical, or other forms of connection.
It should be understood that, in various embodiments of the present application, the size of the sequence number of each process does not mean that the execution sequence of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In addition, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely one association relationship describing the associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
In summary, the foregoing description is only a preferred embodiment of the technical solution of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (10)

1. A video processing method, wherein the video processing method is applied to an electronic device, and comprises:
displaying a first image, wherein the first image is an image frame of a target shooting object at a first position;
acquiring a second image when the target shooting object moves to a second position, wherein the second position and the first position are different, and the second image is an image frame acquired by the electronic equipment when the target shooting object moves to the second position;
When the electronic equipment comprises a depth device, determining a corresponding object distance after the target shooting object moves to the second position by utilizing the depth device;
determining a corresponding target zoom ratio from a preset zoom table according to the object distance, wherein the preset zoom table comprises a plurality of groups of object distances and target zoom ratios with one-to-one mapping relation;
determining a cutting frame according to the target zoom ratio, and cutting the second image according to the cutting frame;
and scaling the second image after clipping according to the target zoom ratio to obtain a third image comprising the target shooting object, wherein the size of the target shooting object in the first image is consistent with the size of the target shooting object in the third image.
2. The video processing method of claim 1, wherein the electronic device is in the same location when the first image and the third image are displayed.
3. The video processing method according to claim 1 or 2, wherein when the second position and the first position are the same position, the position where the electronic device is located is different when the first image and the third image are displayed.
4. A video processing method according to any one of claims 1 to 3, further comprising:
when the electronic equipment does not comprise a depth device, acquiring an AF motor Hall parameter, wherein the AF motor Hall parameter is used for indicating the distance of an AF motor in a coil magnetic field generated by the Hall device;
determining the distance delta f of the AF motor for pushing the lens to move according to the Hall parameter of the AF motor;
and determining a corresponding target zoom ratio from the preset zoom table according to the distance delta f of the AF motor for pushing the lens to move, wherein the preset zoom table also comprises a plurality of groups of the distance delta f of the AF motor for pushing the lens to move and the target zoom ratio, wherein the groups of the distance delta f and the target zoom ratio have a one-to-one mapping relation.
5. The video processing method according to any one of claims 1 to 4, wherein determining, with the depth device, a corresponding object distance after the target photographic subject moves to the second position, includes:
acquiring a depth image by using the depth device;
and determining the object distance of the target shooting object according to the depth image.
6. The video processing method according to any one of claims 1 to 5, further comprising:
detecting an operation indicating to run a camera application; or alternatively, the process may be performed,
An operation is detected that indicates to run the video telephony application.
7. The video processing method according to any one of claims 1 to 6, wherein the target photographic subject includes at least one user.
8. An electronic device comprising a processor and a memory;
the memory is used for storing a computer program capable of running on the processor;
the processor is configured to perform the video processing method according to any one of claims 1 to 6.
9. A chip, comprising: a processor for calling and running a computer program from a memory, so that a device on which the chip is mounted performs the video processing method according to any one of claims 1 to 6.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the video processing method according to any one of claims 1 to 6.
CN202211056015.0A 2022-08-31 2022-08-31 Video processing method and related equipment thereof Active CN116055871B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211056015.0A CN116055871B (en) 2022-08-31 2022-08-31 Video processing method and related equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211056015.0A CN116055871B (en) 2022-08-31 2022-08-31 Video processing method and related equipment thereof

Publications (2)

Publication Number Publication Date
CN116055871A true CN116055871A (en) 2023-05-02
CN116055871B CN116055871B (en) 2023-10-20

Family

ID=86120624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211056015.0A Active CN116055871B (en) 2022-08-31 2022-08-31 Video processing method and related equipment thereof

Country Status (1)

Country Link
CN (1) CN116055871B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020018514A (en) * 2000-09-02 2002-03-08 김성헌 Apparatus for controling a zoom up/down function of camera system using laser distance measurement and method therefor
US20150350559A1 (en) * 2011-09-26 2015-12-03 Sony Corporation Image photography apparatus
CN113747050A (en) * 2020-05-30 2021-12-03 华为技术有限公司 Shooting method and equipment
WO2022151473A1 (en) * 2021-01-18 2022-07-21 深圳市大疆创新科技有限公司 Photographing control method, photographing control apparatus and gimbal assembly
CN114820296A (en) * 2021-01-27 2022-07-29 北京小米移动软件有限公司 Image processing method and device, electronic device and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020018514A (en) * 2000-09-02 2002-03-08 김성헌 Apparatus for controling a zoom up/down function of camera system using laser distance measurement and method therefor
US20150350559A1 (en) * 2011-09-26 2015-12-03 Sony Corporation Image photography apparatus
US20200374464A1 (en) * 2011-09-26 2020-11-26 Sony Corporation Image photography apparatus
CN113747050A (en) * 2020-05-30 2021-12-03 华为技术有限公司 Shooting method and equipment
WO2022151473A1 (en) * 2021-01-18 2022-07-21 深圳市大疆创新科技有限公司 Photographing control method, photographing control apparatus and gimbal assembly
CN114820296A (en) * 2021-01-27 2022-07-29 北京小米移动软件有限公司 Image processing method and device, electronic device and storage medium

Also Published As

Publication number Publication date
CN116055871B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
US11831977B2 (en) Photographing and processing method and electronic device
WO2021136050A1 (en) Image photographing method and related apparatus
WO2020168956A1 (en) Method for photographing the moon and electronic device
WO2020073959A1 (en) Image capturing method, and electronic device
CN111212235B (en) Long-focus shooting method and electronic equipment
US11949978B2 (en) Image content removal method and related apparatus
WO2021185250A1 (en) Image processing method and apparatus
CN113452898B (en) Photographing method and device
CN111103922B (en) Camera, electronic equipment and identity verification method
US20210409588A1 (en) Method for Shooting Long-Exposure Image and Electronic Device
US20240056683A1 (en) Focusing Method and Electronic Device
CN115272138B (en) Image processing method and related device
CN115967851A (en) Quick photographing method, electronic device and computer readable storage medium
CN116055871B (en) Video processing method and related equipment thereof
CN115633255B (en) Video processing method and electronic equipment
CN116723382B (en) Shooting method and related equipment
CN117479008B (en) Video processing method, electronic equipment and chip system
CN116055872B (en) Image acquisition method, electronic device, and computer-readable storage medium
CN113297875B (en) Video text tracking method and electronic equipment
CN115802144B (en) Video shooting method and related equipment
WO2024051684A1 (en) Voltage adjustment method and related apparatus
CN116664701A (en) Illumination estimation method and related equipment thereof
CN117714832A (en) Photographing method, electronic device and computer readable storage medium
CN117714860A (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant