CN112995496A - Video recording method and mobile terminal


Info

Publication number
CN112995496A
Authority
CN
China
Prior art keywords
camera
mobile terminal
target
object distance
cameras
Prior art date
2019-12-18
Legal status
Granted
Application number
CN201911312533.2A
Other languages
Chinese (zh)
Other versions
CN112995496B (en)
Inventor
冯坤 (Feng Kun)
刘兆磊 (Liu Zhaolei)
高仁福 (Gao Renfu)
Current Assignee
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date
2019-12-18
Filing date
2019-12-18
Publication date
2021-06-18
Application filed by Hisense Mobile Communications Technology Co Ltd
Priority to CN201911312533.2A
Publication of CN112995496A
Application granted
Publication of CN112995496B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a video recording method and a mobile terminal, belonging to the technical field of terminals. The method includes: controlling at least two cameras to focus at different object distances; during video recording, in response to a focusing instruction carrying a target object distance, acquiring the image captured by the target camera focused at the target object distance; and recording the video based on the images captured by the target camera. Because the mobile terminal controls each camera in advance to focus at a different object distance, when the focus point needs to change during recording, the terminal only has to switch the source of the images used for recording and does not have to control any camera to refocus. The time required to switch the focus point is therefore effectively shortened, and the display quality of the recorded video is ensured.

Description

Video recording method and mobile terminal
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a video recording method and a mobile terminal.
Background
A camera of a mobile terminal generally includes a lens, a photosensitive element, and a driving motor.
In the related art, when a camera focuses, the photographic subject to be focused on can be determined from a click operation performed by the user on the touch screen of the mobile terminal. The driving motor can then move the lens to adjust the distance between the lens and the imaging surface of the photosensitive element until the subject is imaged clearly.
While recording a video with a camera, if the focused subject needs to change, the camera must refocus. Because refocusing takes a relatively long time, the frames recorded during refocusing are blurred and the quality of the recorded video is poor.
Disclosure of Invention
The present application provides a video recording method and a mobile terminal, which can solve the related-art problem of blurred recorded frames caused by refocusing during video recording. The technical solutions are as follows:
in one aspect, a mobile terminal is provided, and the mobile terminal includes: at least two cameras, and a processor;
the at least two cameras are used for focusing according to different object distances;
the processor is configured to: in the video recording process, responding to a focusing instruction carrying a target object distance, and acquiring an image collected by a target camera focusing according to the target object distance; and recording the video based on the image collected by the target camera.
In another aspect, a video recording method is provided, which is applied to a mobile terminal, where the mobile terminal includes at least two cameras, and the method includes:
controlling the at least two cameras to focus according to different object distances;
in the video recording process, responding to a focusing instruction carrying a target object distance, and acquiring an image collected by a target camera focusing according to the target object distance;
and recording the video based on the image collected by the target camera.
In yet another aspect, a computer-readable storage medium having instructions stored thereon is provided; when the instructions are run on a computer, they cause the computer to perform the video recording method described in the above aspect.
The beneficial effects brought by the technical solutions provided in the present application include at least the following:
The embodiments of the present application provide a video recording method and a mobile terminal. The mobile terminal can control each camera in advance to focus at a different object distance, so that during video recording, when the photographic subject to be focused on (i.e., the focus point) needs to change, the mobile terminal can directly acquire the images captured by the target camera focused at the target object distance and record the video from them. That is, to adjust the focus point, the mobile terminal only switches the source of the images used for recording and does not control any camera to refocus, which effectively shortens the time required to switch the focus point and preserves the display quality of the recorded video.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a schematic structural diagram of a mobile terminal to which a video recording method according to an embodiment of the present application is applied;
fig. 2 is a flowchart of a video recording method according to an embodiment of the present application;
fig. 3 is a flowchart of another video recording method according to an embodiment of the present application;
fig. 4 is a schematic diagram of the image acquisition regions of two cameras according to an embodiment of the present application;
fig. 5 is a schematic diagram for determining the object distance of a photographic subject according to an embodiment of the present application;
fig. 6 is a schematic diagram of the object distances of a plurality of photographic subjects according to an embodiment of the present application;
fig. 7 is a schematic diagram of the object distances of a plurality of photographic subjects according to an embodiment of the present application;
fig. 8 is a schematic diagram of the object distances corresponding to respective regions in an overlapping image according to an embodiment of the present application;
fig. 9 is a schematic diagram of an image captured by a camera according to an embodiment of the present application;
fig. 10 is a schematic diagram of an image captured by another camera according to an embodiment of the present application;
fig. 11 is a schematic diagram of an image captured by yet another camera according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of another mobile terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a mobile terminal to which the video recording method provided in an embodiment of the present application can be applied. The mobile terminal may be a mobile phone, a tablet computer, or a notebook computer. As shown in fig. 1, at least two cameras may be disposed in the mobile terminal 10; for example, two cameras 111 and 112 are shown in fig. 1. Of course, three or more cameras may also be provided in the mobile terminal 10.
Fig. 2 is a flowchart of a video recording method according to an embodiment of the present application, where the method may be applied to a mobile terminal having at least two cameras, for example, the method may be applied to the mobile terminal shown in fig. 1.
Referring to fig. 2, the method may include:
Step 101, control at least two cameras to focus at different object distances.
Any two of the at least two cameras are focused at different object distances. The object distance refers to the distance between a camera's lens and the photographic subject. That is, the mobile terminal can control the at least two cameras to focus on photographic subjects at different object distances.
Step 102, during video recording, in response to a focusing instruction carrying a target object distance, acquire the image captured by the target camera focused at the target object distance.
In this embodiment of the application, if the user needs to change the focused subject during video recording, the user can perform a click operation in the display area of the subject to be focused on. Based on this click operation, the mobile terminal determines the target photographic subject, determines the target object distance of that subject, and triggers generation of a focusing instruction. The mobile terminal can then determine, from the at least two cameras, the target camera focused at the target object distance and acquire the images it captures.
Step 103, record the video based on the images captured by the target camera.
The mobile terminal can transmit the images captured by the target camera to a video encoder in the mobile terminal, which encodes them to produce the recorded video.
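By way of illustration only, the following sketch shows the shape of this pipeline in Python. The Camera and VideoEncoder types and the per-frame event format are hypothetical stand-ins for the terminal's components, not part of the disclosed method or any platform API; the point is that a focus change costs a dictionary lookup rather than a mechanical refocus.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Camera:
    """Hypothetical stand-in for one camera held focused at a fixed object distance."""
    ident: str
    object_distance: float

    def get_frame(self) -> bytes:
        return b"<frame>"  # placeholder for real sensor data

class VideoEncoder:
    """Hypothetical stand-in for the terminal's video encoder (step 103)."""
    def encode(self, frame: bytes) -> None:
        pass  # a real encoder would append the frame to the output stream

def record(cameras: list[Camera], events: Iterable[Optional[float]],
           encoder: VideoEncoder) -> None:
    """events yields, per frame, either None or the target object distance
    carried by a focusing instruction (step 102)."""
    by_distance = {cam.object_distance: cam for cam in cameras}
    current = cameras[0]  # e.g. the main camera
    for target_distance in events:
        if target_distance is not None:
            # Adjusting the focus point = switching the image source,
            # not driving any lens motor.
            current = by_distance[target_distance]
        encoder.encode(current.get_frame())  # step 103
```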
To sum up, the embodiment of the present application provides a video recording method in which the mobile terminal controls each camera in advance to focus at a different object distance. During video recording, when the photographic subject to be focused on (i.e., the focus point) needs to change, the mobile terminal can directly acquire the images captured by the target camera focused at the target object distance and record the video from them. That is, to adjust the focus point, the mobile terminal only switches the source of the images used for recording and does not control any camera to refocus, which effectively shortens the time required to switch the focus point and preserves the display quality of the recorded video.
Fig. 3 is a flowchart of another video recording method provided in this embodiment, which may be applied to a mobile terminal having at least two cameras, for example, the mobile terminal shown in fig. 1. Referring to fig. 3, the method may include:
Step 201, in response to a camera start instruction, display the images captured by the main camera of the at least two cameras on the display screen.
In this embodiment of the application, after detecting a camera start instruction, the mobile terminal can start the at least two cameras and display the images captured by the main camera on the display screen. The main camera may be designated before the mobile terminal leaves the factory or may be set by the user, and the mobile terminal may record the identifier of the main camera in advance.
Step 202, obtain a first image captured by a first camera of the at least two cameras and a second image captured by a second camera.
The first camera and the second camera may be any two of the at least two cameras, or they may be two preset fixed cameras.
Step 203, determine a plurality of photographic subjects in the overlapping image of the first image and the second image.
Referring to fig. 4, it can be seen that, since the first camera 111 and the second camera 112 are arranged at different positions, the image capturing areas of the two cameras are different, and the captured images are not completely overlapped. The mobile terminal may acquire an overlapping portion (i.e., an overlapping image) of the images captured by the two cameras and determine a plurality of photographic subjects in the overlapping image.
The mobile terminal may select the plurality of photographic subjects in the overlapping image arbitrarily, or it may identify every photographic subject contained in it; which subjects appear depends on the captured scene. For example, a photographic subject may be a person, an animal, a plant, food, or an object. Still alternatively, to improve the accuracy of the determined object distances, the mobile terminal may treat each pixel in the overlapping image as one photographic subject.
Step 204, calculate the object distance of each photographic subject.
Here the object distance refers to the perpendicular distance between a photographic subject and the plane in which the camera lenses lie. For each subject, the mobile terminal can determine the object distance from the physical parameters of the first and second cameras together with the subject's position in the first image and in the second image. The physical parameters may include the field angle of each camera, the distance between the two cameras, the resolution and focal length of each camera, and the installation angles. The position of a subject in an image may be the pixel coordinates of the subject's center point.
For example, with reference to fig. 4 and fig. 5, assume that the distance between the two cameras is L. For a photographic subject M in the overlapping image of the two cameras, the object distance H of M can be calculated by triangulation as follows:

Firstly, by the law of sines, the straight-line distance x between the subject M and the first camera 111 and the straight-line distance y between the subject M and the second camera 112 satisfy

$$\frac{L}{\sin(2\theta)} = \frac{x}{\sin\beta} = \frac{y}{\sin\alpha}$$

that is,

$$x = \frac{L\sin\beta}{\sin(2\theta)}, \qquad y = \frac{L\sin\alpha}{\sin(2\theta)}$$

where θ = 90° − (α + β)/2, so that 2θ = 180° − (α + β) is the angle subtended at M; α is the angle between the light reflected by the subject M to the first camera 111 and the plane in which the lenses of the two cameras lie; and β is the angle between the light reflected by the subject M to the second camera 112 and that plane. The angles α and β can be determined from the physical parameters of the two cameras (e.g., the field angle A of the first camera 111 and the field angle B of the second camera 112) and the positions of the subject M in the first image and in the second image.

Further, writing the area of the triangle formed by M and the two cameras in two ways gives

$$\frac{1}{2}\,L\,H = \frac{1}{2}\,x\,y\,\sin(2\theta)$$

so that

$$H = \frac{x\,y\,\sin(2\theta)}{L} = \frac{x\,y\,\sin(\alpha+\beta)}{L}$$
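As a minimal numeric sketch of this triangulation in Python (the ray_angle_deg helper assumes an ideal pinhole camera whose optical axis is perpendicular to the baseline, a simplifying assumption not taken from the patent):

```python
import math

def ray_angle_deg(fov_deg: float, width_px: int, u_px: float) -> float:
    """Angle (degrees) between the ray through pixel column u and the lens
    plane, for an ideal pinhole camera whose optical axis is perpendicular
    to the baseline -- a simplifying assumption for illustration."""
    off_axis = math.atan((2 * u_px / width_px - 1) * math.tan(math.radians(fov_deg / 2)))
    return 90.0 - math.degrees(off_axis)

def object_distance(baseline_m: float, alpha_deg: float, beta_deg: float) -> float:
    """Estimate the object distance H of a subject M seen by two cameras.

    baseline_m -- distance L between the two camera lenses, in meters
    alpha_deg  -- angle between the ray from M to camera 1 and the lens plane
    beta_deg   -- angle between the ray from M to camera 2 and the lens plane
    """
    alpha = math.radians(alpha_deg)
    beta = math.radians(beta_deg)
    apex = math.pi - alpha - beta  # angle at M, i.e. 2*theta
    # Law of sines: L / sin(apex) = x / sin(beta) = y / sin(alpha)
    x = baseline_m * math.sin(beta) / math.sin(apex)
    y = baseline_m * math.sin(alpha) / math.sin(apex)
    # Equating the two area formulas: (1/2)*L*H = (1/2)*x*y*sin(apex)
    return x * y * math.sin(apex) / baseline_m

# Example: lenses 10 cm apart, both rays at 80 degrees to the lens plane.
print(f"H = {object_distance(0.10, 80.0, 80.0):.3f} m")  # ~0.284 m
```

For a subject directly above the midpoint of the baseline this agrees with the elementary check H = (L/2)·tan α.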
Step 205, from the object distances of the plurality of photographic subjects, determine at least two different object distances corresponding one-to-one to the at least two cameras.
After determining the object distances of the plurality of subjects, the mobile terminal can screen out at least two different object distances from them. Optionally, the number of screened object distances may equal the number of cameras in the mobile terminal, with each object distance corresponding to one camera; it may also be greater than the number of cameras.
After screening out the at least two different object distances, the mobile terminal can pick the object distance for each camera at random, or assign each camera a suitable object distance based on its physical parameters (e.g., focal length).
For example, as shown in fig. 6, assuming that the mobile terminal has three cameras, i.e., the camera 111, the camera 112, and the camera 113, the mobile terminal may determine object distances of three photographic objects, i.e., M1, M2, and M3. The object distance of the photographic subject M1 is H1, the object distance of the photographic subject M2 is H2, and the object distance of the photographic subject M3 is H3. The object distances of the three shooting objects satisfy: h1 > H3 > H2. And, wherein object distance H1 corresponds to camera 111, object distance H2 corresponds to camera 112, and object distance H3 corresponds to camera 113.
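One possible screening-and-assignment strategy is sketched below. Assigning the sorted distinct distances to the cameras in a fixed order is an arbitrary choice for illustration; as noted above, the embodiment equally permits random assignment or matching by focal length, and the distance values shown are hypothetical.

```python
def assign_object_distances(camera_ids: list[str],
                            distances: list[float]) -> dict[str, float]:
    """Screen out as many distinct object distances as there are cameras
    and assign one distance to each camera (farthest first, arbitrarily)."""
    distinct = sorted(set(distances), reverse=True)
    if len(distinct) < len(camera_ids):
        raise ValueError("not enough distinct object distances for the cameras")
    return dict(zip(camera_ids, distinct))

# Distances of the detected subjects; duplicates collapse into one layer.
table = assign_object_distances(["C1", "C2", "C3"], [20.0, 1.0, 5.0, 5.0])
print(table)  # {'C1': 20.0, 'C2': 5.0, 'C3': 1.0}
```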
Step 206, control each camera to focus at its corresponding object distance.
In the embodiment of the application, the mobile terminal can respectively control each camera to focus according to a corresponding object distance based on the determined at least two different object distances, so that each camera can focus according to different object distances.
Optionally, after the mobile terminal respectively controls each camera to focus, the object distance corresponding to each camera may be recorded in the object distance table. For example, the mobile terminal may control the camera 111 to focus according to the object distance H1, control the camera 112 to focus according to the object distance H2, and control the camera 113 to focus according to the object distance H3, and the mobile terminal may record the object distance corresponding to each camera in an object distance table as shown in table 1. For example, in table 1, the object distance corresponding to camera 111 identified as C1 is H1, the object distance corresponding to camera 112 identified as C2 is H2, and the object distance corresponding to camera 113 identified as C3 is H3.
TABLE 1
Camera    Object distance    Coordinates of photographic subject
C1        H1                 (x1, y1)
C2        H2                 (x2, y2)
C3        H3                 (x3, y3)
Optionally, the mobile terminal may further record the photographic subject located at the object distance corresponding to each camera. For example, for each such object distance, the mobile terminal may record the coordinates, in the overlapping image, of the subject at that distance, obtaining a correspondence between object distances and subject coordinates. The recorded coordinates may be those of the subject's center point, or the coordinate range of the region the subject occupies.
For example, as shown in Table 1, the mobile terminal may record the coordinates of the subject M1 at object distance H1 as (x1, y1), the coordinates of the subject M2 at H2 as (x2, y2), and the coordinates of the subject M3 at H3 as (x3, y3).
It should be noted that, as shown in fig. 7, focusing a camera is actually a matter of adjusting the position of the lens inside the camera so that subjects on a particular object-distance plane are imaged clearly. Taking the camera 111 as an example, when its lens 1111 is at position a, the subject M1 at object distance H1 (for example, H1 = 1 m) is imaged clearly; when the lens is at position b, the subject R1 at object distance H4 (for example, H4 = 5 m) is imaged clearly; and when the lens is at position c, the subject B1 at object distance H5 (for example, H5 = 20 m) is imaged clearly.
Different subjects in the captured images may share the same object distance; for example, referring to fig. 8, the cars R1 and R2 may be at the same distance. The mobile terminal can therefore layer (i.e., classify) the photographic subjects by object distance. As shown in fig. 8, the subjects may be divided into three layers: the first layer is the person M1 at object distance H1, the second layer is the cars R1 and R2 at object distance H4, and the third layer is the building B1 at object distance H5.
Further, the mobile terminal can divide the overlapping image into a plurality of regions according to the positions and object distances of the photographic subjects, and determine the object distance corresponding to each region; that is, it can record a correspondence between regions and object distances. The object distance corresponding to a region is the object distance of the subjects it contains, and a region may be a connected area or may consist of several unconnected sub-areas.
For example, assuming that the different object distances determined by the mobile terminal are H1, H4, and H5, the mobile terminal may divide the overlapping image into three regions: region A, region B, and region C. Region A is the connected area where the person M1 is located, and its corresponding object distance is H1. Region B is the area where the cars R1 and R2 are located; it comprises three unconnected sub-areas, and its corresponding object distance is H4. Region C is the connected area where the building B1 is located, and its corresponding object distance is H5.
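One way such a region-to-object-distance correspondence could be stored and queried is sketched below; the rectangles and distance values are hypothetical examples mirroring fig. 8, and the embodiment does not prescribe a particular data structure.

```python
Rect = tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in pixels

# Regions ordered nearest-first so a nearer subject wins where layers overlap.
REGION_TABLE: list[tuple[list[Rect], float]] = [
    ([(400, 300, 520, 700)], 1.0),                # region A: person M1 at H1
    ([(80, 500, 260, 620), (300, 520, 380, 600),
      (700, 480, 900, 610)], 5.0),                # region B: three sub-areas at H4
    ([(0, 0, 1080, 460)], 20.0),                  # region C: building B1 at H5
]

def object_distance_at(x: int, y: int) -> float | None:
    """Return the object distance recorded for the region containing (x, y)."""
    for rects, distance in REGION_TABLE:
        if any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in rects):
            return distance
    return None

print(object_distance_at(450, 500))  # 1.0 -> the layer of person M1
```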
It should be further noted that, in this embodiment of the application, the mobile terminal may execute the method shown in steps 202 to 206 each time the at least two cameras are started in response to a camera start instruction; that is, it can focus the cameras before recording begins.
Alternatively, the mobile terminal may execute the method shown in steps 202 to 206 again during video recording whenever it detects that the variation in the images captured by any camera exceeds a variation threshold; that is, it can refocus the cameras when the captured scene changes significantly.
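A rough sketch of such a variation check follows; the mean-absolute-difference metric and the threshold value are placeholders, since the embodiment only requires that the variation of the captured image exceed a variation threshold.

```python
import numpy as np

def scene_changed(prev_frame: np.ndarray, frame: np.ndarray,
                  threshold: float = 12.0) -> bool:
    """Crude change detector: mean absolute pixel difference between
    consecutive frames, compared against a placeholder threshold."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > threshold
```

When scene_changed returns True, the terminal would re-run steps 202 to 206 to re-derive the object distances and refocus the cameras.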
Step 207, during video recording, receive a click operation acting on the touch screen of the mobile terminal.
In the embodiment of the application, after the mobile terminal detects the click operation of the video recording button on the touch screen, video recording can be started. In the process of video recording, the mobile terminal can display the image collected by the main camera on the display screen of the mobile terminal.
In the process of recording the video, if the user needs to adjust the focused shooting object in the image, the click operation can be executed in the display area of the shooting object to be focused on the touch screen. The mobile terminal may then receive a click operation acting on a touch screen of the mobile terminal.
For example, suppose the mobile terminal is currently recording video from the images captured by the camera identified as C2, which is focused at object distance H2. As shown in fig. 9, the display screen then shows the image captured by that camera, in which the subject M2 at object distance H2 is imaged clearly while the subjects M1 and M3 are blurred. If the user wishes to focus on the subject M3, the user can perform a click operation in the display area of M3 on the display screen.
Step 208, determine the target object distance of the target photographic subject at the action point of the click operation.
After receiving the click operation, the mobile terminal can determine the target subject displayed at the action point of the operation and determine its object distance. For example, from the coordinates of the action point, the mobile terminal may look up the target object distance directly in the pre-recorded correspondence between object distances and subject coordinates. Alternatively, it may determine the region containing the action point, look up the object distance corresponding to that region in the correspondence between regions and object distances, and take that as the target object distance.
For example, assuming that the target photographic object at the action point of the click operation received by the mobile terminal is the photographic object M3, that is, the coordinates of the action point are (x3, y3), the mobile terminal may determine that the target object distance of the target photographic object M3 at the action point is H3 based on the correspondence shown in table 1.
It should be noted that, besides determining the target object distance from the user's click operation, the mobile terminal may also, upon detecting that the photographic subject located in the target area of the display screen has changed, take the object distance of the changed subject as the target object distance. That is, during recording, the mobile terminal can automatically switch the camera supplying the images according to the object distance of the subject in the target area.
The position, shape, and size of the target area may be preconfigured in the mobile terminal or set by the user. For example, referring to fig. 9, the target area a may be a rectangular area located at the center of the display screen.
Step 209, generate a focusing instruction based on the target object distance.
For example, the mobile terminal may generate a focusing instruction based on the target object distance H3.
Step 210, in response to the focusing instruction, acquire the image captured by the target camera focused at the target object distance.
After detecting a focusing instruction carrying a target object distance, the mobile terminal can determine, from the at least two cameras, the target camera focused at that distance and acquire the images it captures.
For example, assuming that the target object distance carried in the focusing instruction is H3, the mobile terminal may determine from the correspondence shown in Table 1 that the target camera is the camera identified as C3, and may then acquire the images captured by that camera.
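A minimal sketch of this lookup, reusing the example camera identifiers and symbolic distances of Table 1 (the instruction format is an assumption made for illustration):

```python
# Inverse of the object distance table from step 206 (cf. Table 1).
CAMERA_BY_DISTANCE: dict[str, str] = {"H1": "C1", "H2": "C2", "H3": "C3"}

def target_camera_for(instruction: dict[str, str]) -> str:
    """Resolve a focusing instruction to the camera already focused at the
    target object distance it carries; that camera's frames then become
    the source for the recorded video."""
    return CAMERA_BY_DISTANCE[instruction["target_object_distance"]]

assert target_camera_for({"target_object_distance": "H3"}) == "C3"
```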
Step 211, record the video based on the images captured by the target camera.
The mobile terminal can transmit the images captured by the target camera to the video encoder in the mobile terminal, which encodes them to record the video.
For example, the mobile terminal may record a video based on an image captured by the camera identified as C3, i.e., the mobile terminal may switch the source of the image used to record the video from the camera identified as C2 to the camera identified as C3.
Step 212, display the images captured by the target camera on the display screen.
In the embodiment of the application, after the mobile terminal acquires the image acquired by the target camera, the image acquired by the target camera can be displayed on a display screen of the mobile terminal.
For example, as shown in fig. 10, the mobile terminal may display the image captured by the camera identified as C3 in the display screen. As can be seen from fig. 10, in the image captured by the camera denoted by C3, the target photographic subject M3 can be clearly imaged, while the photographic subjects M1 and M2 are blurred.
If the user then wishes to focus on the subject M1 during recording, the user can perform a click operation in the display area of M1 on the touch screen, as shown in fig. 10. After receiving that click operation, the mobile terminal can perform the methods shown in steps 207 to 211 again and continue recording with the images captured by the camera identified as C1, i.e., move the focus point of the recorded video to the subject M1. The image then displayed on the mobile terminal's display screen may be as shown in fig. 11.
With the video recording method provided in this embodiment of the application, adjusting the focus point only requires the mobile terminal to switch the camera supplying the images, without controlling any camera to refocus. This avoids the focus-breathing effect (the picture appearing to stretch nearer and farther) and the frame blurring that refocusing would otherwise cause in the recorded video.
It should be noted that, as can be seen from fig. 9 to 11, when the display screen displays an image acquired by a certain camera, the mobile terminal may also display an identifier of the camera used for acquiring the currently displayed image on the display screen. For example, referring to fig. 9, if the currently displayed image on the display screen is captured by the camera identified as C2, the identification C2 of the camera may also be displayed on the display screen. Alternatively, referring to fig. 10, if the currently displayed image on the display screen is captured by the camera identified as C3, the identifier C3 of the camera may also be displayed on the display screen.
It should be further noted that the order of the steps of the video recording method provided in the embodiment of the present application may be appropriately adjusted, and the steps may also be increased or decreased according to the situation. For example, step 212 may be performed in synchronization with step 211, or prior to step 211. Any method that can be easily conceived by a person skilled in the art within the technical scope disclosed in the present application is covered by the protection scope of the present application, and thus the detailed description thereof is omitted.
To sum up, the embodiment of the present application provides a video recording method in which the mobile terminal controls each camera in advance to focus at a different object distance. During video recording, when the photographic subject to be focused on (i.e., the focus point) needs to change, the mobile terminal can directly acquire the images captured by the target camera focused at the target object distance and record the video from them. That is, to adjust the focus point, the mobile terminal only switches the source of the images used for recording and does not control any camera to refocus, effectively shortening the time required to switch the focus point. The stretching and blurring of the picture during recording are therefore avoided, as are the water-ripple artifacts caused by readjusting the exposure during refocusing, effectively improving the display quality of the recorded video.
Fig. 12 is a schematic structural diagram of another mobile terminal provided in an embodiment of the present application, and as shown in fig. 12, the mobile terminal 100 may include: a processor 110, and at least two cameras. For example, two cameras 111 and 112 are shown in fig. 12.
The at least two cameras are used for focusing according to different object distances;
the processor 110 may be configured to: in the video recording process, responding to a focusing instruction carrying a target object distance, and acquiring an image collected by a target camera focusing according to the target object distance;
and recording the video based on the image collected by the target camera.
Optionally, the processor 110 may be configured to:
determining object distances of at least two shot objects according to a first image acquired by a first camera in the at least two cameras and a second image acquired by a second camera to obtain at least two different object distances corresponding to the at least two cameras one to one;
and respectively controlling each camera to focus according to the corresponding object distance.
Optionally, as shown in fig. 12, the mobile terminal further includes a display screen 122, and the processor 110 may be configured to:
in response to a camera start instruction, displaying images acquired by a main camera in the at least two cameras in the display screen 122;
and after acquiring the image captured by the target camera, displaying the image captured by the target camera in the display screen 122.
Optionally, the processor 110 may further be configured to: the identity of the camera used to capture the currently displayed image is displayed in the display screen 122.
As shown in fig. 12, the mobile terminal may further include: the touch screen 121, the processor 110 may further be configured to:
receiving a click operation acting on the touch screen 121; and determining the target object distance of the target shooting object at the action point of the clicking operation, and generating a focusing instruction based on the target object distance.
Optionally, the processor 110 may further be configured to: if it is detected that the photographic subject located in the target area of the display screen 122 changes, the object distance of the changed photographic subject is taken as the target object distance, and a focusing instruction is generated.
To sum up, the embodiment of the present application provides a mobile terminal that controls each camera in advance to focus at a different object distance. During video recording, when the photographic subject to be focused on (i.e., the focus point) needs to change, the mobile terminal can directly acquire the images captured by the target camera focused at the target object distance and record the video from them. That is, to adjust the focus point, the mobile terminal only switches the source of the images used for recording and does not control any camera to refocus, effectively shortening the time required to switch the focus point. The stretching and blurring of the picture during recording are therefore avoided, as are the water-ripple artifacts caused by readjusting the exposure during refocusing, effectively improving the display quality of the recorded video.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the mobile terminal and each device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
It should be understood that the mobile terminal 100 shown in fig. 12 is only one example, and that the mobile terminal 100 may have more or fewer components than shown in fig. 12, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of a hardware configuration of the mobile terminal 100 according to an exemplary embodiment is exemplarily shown in fig. 12. As shown in fig. 12, the mobile terminal 100 may further include: a Radio Frequency (RF) circuit 180, a memory 130, a display unit 120, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a bluetooth module 181, and a power supply 190.
The RF circuit 180 may be used to receive and transmit signals during information transmission and reception or during a call; it may receive downlink data from a base station and send it to the processor 110 for processing, and it may transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like.
Memory 130 may be used to store software programs and data. The processor 110 performs the various functions of the mobile terminal 100 and processes data by running the software programs or data stored in the memory 130. The memory 130 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 130 stores the operating system that enables the mobile terminal 100 to run, and may also store various application programs as well as the code for executing the video recording method provided in the embodiments of the present application.
The display unit 120 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the mobile terminal 100, and particularly, the display unit 120 may include a touch screen 121 disposed on the front surface of the mobile terminal 100 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The display unit 120 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the terminal 100. In particular, the display unit 120 may include a display screen 122 disposed on a front surface of the mobile terminal 100. The display screen 122 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 120 may be used to display various graphical user interfaces described herein.
The touch screen 121 may cover the display screen 122, or the touch screen 121 and the display screen 122 may be integrated to implement the input and output functions of the mobile terminal 100, and the integrated touch screen may be referred to as a touch display screen for short. The display unit 120 in the present application may display the application programs and the corresponding operation steps.
The cameras 111 and 112 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 110 for conversion into digital image signals.
The mobile terminal 100 may further include at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The mobile terminal 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.
The audio circuitry 160, speaker 161, and microphone 162 may provide an audio interface between a user and the mobile terminal 100. The audio circuit 160 may transmit the electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output. The mobile terminal 100 may also be provided with a volume button for adjusting the volume of the sound signal. Conversely, the microphone 162 converts collected sound signals into electrical signals, which the audio circuit 160 receives and converts into audio data; the audio data are then output to the RF circuit 180 for transmission to, for example, another terminal, or output to the memory 130 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 170, the mobile terminal 100 can help the user receive and send e-mail, browse webpages, access streaming media, and so on, providing the user with wireless broadband Internet access.
The processor 110 is the control center of the mobile terminal 100: it connects the various parts of the terminal using various interfaces and lines, and performs the functions of the mobile terminal 100 and processes data by running or executing software programs stored in the memory 130 and calling data stored in the memory 130. In some embodiments, the processor 110 may include one or more processing units; the processor 110 may also integrate an application processor, which mainly handles the operating system, user interface, and applications, and a baseband processor, which mainly handles wireless communication. It will be appreciated that the baseband processor need not be integrated into the processor 110. In the present application, the processor 110 may run the operating system, application programs, user interface display, and touch response, as well as the video recording method provided in the embodiments of the present application. In addition, the processor 110 is coupled with the display unit 120.
And the bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the mobile terminal 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) having a bluetooth module via the bluetooth module 181, so as to perform data interaction.
The mobile terminal 100 also includes a power supply 190 (e.g., a battery) that powers the various components. The power supply may be logically coupled to the processor 110 through a power management system, such that the power management system may manage charging, discharging, and power consumption. The mobile terminal 100 may also be configured with power buttons for powering the terminal on and off, and locking the screen.
The embodiment of the present application further provides a computer-readable storage medium having instructions stored therein; when the instructions are run on a computer, they cause the computer to execute the video recording method provided in the above method embodiments.
All other embodiments obtained by a person of ordinary skill in the art from the exemplary embodiments shown in the present application without inventive effort shall fall within the protection scope of the present application. Moreover, while the disclosure herein has been presented in terms of one or more exemplary examples, it is to be understood that each aspect of the disclosure can also be utilized independently of the others.
It should be understood that the terms "first" and "second" in the description, claims, and drawings of this application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. Data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can, for example, be implemented in orders other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A mobile terminal, characterized in that the mobile terminal comprises: at least two cameras, and a processor;
the at least two cameras are used for focusing according to different object distances;
the processor is used for responding to a focusing instruction carrying a target object distance in the video recording process and acquiring an image collected by a target camera focusing according to the target object distance;
the processor is further configured to record the video based on the image acquired by the target camera.
2. The mobile terminal of claim 1, wherein the processor is configured to:
determining object distances of at least two shot objects according to a first image acquired by a first camera in the at least two cameras and a second image acquired by a second camera to obtain at least two different object distances corresponding to the at least two cameras one to one;
and respectively controlling each camera to focus according to the corresponding object distance.
3. The mobile terminal of claim 1, wherein the mobile terminal further comprises: a display screen; the processor is configured to:
responding to a camera starting instruction, and displaying images collected by a main camera in the at least two cameras in the display screen;
and after the image collected by the target camera is obtained, displaying the image collected by the target camera in the display screen.
4. The mobile terminal of claim 3, wherein the processor is further configured to:
and displaying the mark of the camera for acquiring the currently displayed image in the display screen.
5. The mobile terminal of any of claims 1 to 4, wherein the mobile terminal further comprises a touch screen, and wherein the processor is further configured to:
receiving a click operation acting on the touch screen;
determining a target object distance of a target shooting object at an action point of the click operation;
and generating a focusing instruction based on the target object distance.
6. The mobile terminal of any of claims 1 to 4, wherein the processor is further configured to:
and if the shot object positioned in the target area of the display screen is detected to be changed, generating a focusing instruction by taking the object distance of the changed shot object as the target object distance.
7. A video recording method is applied to a mobile terminal, wherein the mobile terminal comprises at least two cameras, and the method comprises the following steps:
controlling the at least two cameras to focus according to different object distances;
in the video recording process, responding to a focusing instruction carrying a target object distance, and acquiring an image collected by a target camera focusing according to the target object distance;
and recording the video based on the image collected by the target camera.
8. The method of claim 7, wherein the controlling the at least two cameras to focus at different object distances comprises:
determining object distances of at least two shot objects according to a first image acquired by a first camera in the at least two cameras and a second image acquired by a second camera to obtain at least two different object distances corresponding to the at least two cameras one to one;
and respectively controlling each camera to focus according to the corresponding object distance.
9. The method of claim 7, further comprising:
responding to a camera starting instruction, and displaying images collected by a main camera in the at least two cameras in a display screen;
after the acquiring of the image collected by the target camera focused according to the target object distance, the method further includes:
and displaying the image collected by the target camera in the display screen.
10. The method according to any one of claims 7 to 9, wherein before the acquiring, in response to a focusing instruction carrying a target object distance, an image captured by a target camera focused at the target object distance, the method further comprises:
receiving a click operation acting on a touch screen of the mobile terminal;
determining a target object distance of a target shooting object at an action point of the click operation;
and generating a focusing instruction based on the target object distance.
CN201911312533.2A 2019-12-18 2019-12-18 Video recording method and mobile terminal Active CN112995496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911312533.2A CN112995496B (en) 2019-12-18 2019-12-18 Video recording method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911312533.2A CN112995496B (en) 2019-12-18 2019-12-18 Video recording method and mobile terminal

Publications (2)

Publication Number Publication Date
CN112995496A    2021-06-18
CN112995496B    2022-07-05

Family

ID=76344078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911312533.2A Active CN112995496B (en) 2019-12-18 2019-12-18 Video recording method and mobile terminal

Country Status (1)

Country Link
CN (1) CN112995496B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115442526A (en) * 2022-08-31 2022-12-06 维沃移动通信有限公司 Video shooting method, device and equipment


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016141627A1 (en) * 2015-03-11 2016-09-15 宇龙计算机通信科技(深圳)有限公司 Image acquisition method, image acquisition device and terminal
WO2017008353A1 (en) * 2015-07-10 2017-01-19 宇龙计算机通信科技(深圳)有限公司 Capturing method and user terminal
WO2018103299A1 (en) * 2016-12-09 2018-06-14 中兴通讯股份有限公司 Focusing method, and focusing device
CN107277355A (en) * 2017-07-10 2017-10-20 广东欧珀移动通信有限公司 camera switching method, device and terminal
CN109639974A (en) * 2018-12-20 2019-04-16 Oppo广东移动通信有限公司 Control method, control device, electronic device and medium
CN110099211A (en) * 2019-04-22 2019-08-06 联想(北京)有限公司 Video capture method and electronic equipment


Also Published As

Publication number Publication date
CN112995496B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
US10560624B2 (en) Imaging control device, imaging control method, camera, camera system, and program
CN106716989B (en) Imaging device, imaging method, and program
JP6267363B2 (en) Method and apparatus for taking images
KR102085766B1 (en) Method and Apparatus for controlling Auto Focus of an photographing device
KR20200019728A (en) Shooting mobile terminal
US9854151B2 (en) Imaging device and focusing control method
US20200167961A1 (en) Camera device, imaging system, control method, and program
CN104333701A (en) Method and device for displaying camera preview pictures as well as terminal
US9635235B2 (en) Communication apparatus and control method thereof
CN112840634B (en) Electronic device and method for obtaining image
CN104333702A (en) Method, device and terminal for automatic focusing
CN110913139A (en) Photographing method and electronic equipment
JP2017069618A (en) Electronic apparatus and imaging method
US20170045933A1 (en) Communication apparatus, communication method, and computer readable recording medium
CN112995496B (en) Video recording method and mobile terminal
CN104506770A (en) Method and device for photographing image
CN116055844B (en) Tracking focusing method, electronic equipment and computer readable storage medium
US20170328976A1 (en) Operation device, tracking system, operation method, and program
JP2014158158A (en) Electronic camera
KR101889702B1 (en) Method for correcting user’s hand tremor, machine-readable storage medium and imaging device
CN111610921A (en) Gesture recognition method and device
KR102458470B1 (en) Image processing method and apparatus, camera component, electronic device, storage medium
CN112149469A (en) Fingerprint identification method and device
CN111343375A (en) Image signal processing method and device, electronic device and storage medium
CN116709014B (en) Micro-distance mode identification method and electronic equipment

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CP01: Change in the name or title of a patent holder

Address after: No. 11 Jiangxi Road, Qingdao, Shandong 266071

Patentee after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.

Address before: No. 11 Jiangxi Road, Qingdao, Shandong 266071

Patentee before: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY Co.,Ltd.
