CN109639983B - Photographing method, photographing device, terminal and computer-readable storage medium - Google Patents
- Publication number
- CN109639983B (application CN201910013723.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- depth
- camera
- photographing
- mode
- Prior art date
- Legal status
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
- H04N23/80—Camera processing pipelines; Components thereof
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The application belongs to the technical field of photographing, and particularly relates to a photographing method, a photographing device, a terminal and a computer-readable storage medium. The photographing method is applied to a terminal provided with two cameras, the two cameras comprising a 3D camera and an RGB camera, and comprises the following steps: acquiring a current photographing mode of the terminal; if the photographing mode is a first preset mode, turning on the RGB camera to acquire a first photographed image; and if the photographing mode is a second preset mode, turning on the 3D camera to acquire a second photographed image and a first depth image. The assembly process of the cameras is thus simplified and assembly efficiency improved; at the same time, the problem that shooting images with the 3D camera alone yields a resolution that cannot meet the user's requirements is avoided.
Description
Technical Field
The present application belongs to the field of photographing technologies, and in particular, to a photographing method, apparatus, terminal, and computer-readable storage medium.
Background
In a computer vision system, three-dimensional scene information opens up possibilities for many computer vision applications such as image segmentation, target detection and object tracking, and the depth image (depth map) is widely used as a general representation of three-dimensional scene information. The gray value of each pixel of a depth image represents the distance between a point in the scene and the camera.
At present, a binocular stereo vision method is usually adopted to obtain a depth image. This method requires two cameras separated by a certain distance, fixed with a bracket, and calibrated, so assembling the cameras is complex.
Disclosure of Invention
The embodiment of the application provides a photographing method, a photographing device, a photographing terminal and a computer-readable storage medium, which can solve the technical problem that a camera for acquiring depth information is complex to assemble.
The first aspect of the embodiment of the application provides a photographing method, which is applied to a terminal, wherein the terminal is provided with two cameras, the two cameras comprise a 3D camera and an RGB camera, and the photographing method comprises the following steps:
acquiring a current photographing mode of a terminal;
if the photographing mode is a first preset mode, starting the RGB camera to acquire a first photographed image;
and if the photographing mode is a second preset mode, starting the 3D camera to acquire a second photographed image and a first depth image.
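For illustration only, the dispatch logic of steps 101 to 103 can be sketched as follows. The sketch is not part of the application: PhotoMode, rgb_camera.capture() and camera_3d.capture_with_depth() are hypothetical names standing in for the terminal's camera drivers, which the application does not specify.

```python
# Minimal sketch of steps 101-103; the camera objects and their methods
# are hypothetical stand-ins, not a real device API.
from dataclasses import dataclass
from enum import Enum, auto


class PhotoMode(Enum):
    FIRST_PRESET = auto()   # RGB camera only
    SECOND_PRESET = auto()  # 3D camera only


@dataclass
class CaptureResult:
    photographed_image: object = None  # first or second photographed image
    depth_image: object = None         # first depth image (second mode only)


def take_photo(mode: PhotoMode, rgb_camera, camera_3d) -> CaptureResult:
    # Step 101: the current photographing mode is passed in by the caller.
    if mode is PhotoMode.FIRST_PRESET:
        # Step 102: turn on only the RGB camera for a high-resolution shot.
        return CaptureResult(photographed_image=rgb_camera.capture())
    if mode is PhotoMode.SECOND_PRESET:
        # Step 103: turn on only the 3D camera, which returns an RGB frame
        # and a depth map from a single exposure.
        rgb, depth = camera_3d.capture_with_depth()
        return CaptureResult(photographed_image=rgb, depth_image=depth)
    raise ValueError(f"unsupported photographing mode: {mode}")
```

In either branch only one camera is turned on, matching the mutual independence of the two cameras described in the detailed description below.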
The second aspect of the embodiment of the present application provides a photographing device, which is configured at a terminal, the terminal is provided with two cameras, the two cameras include a 3D camera and an RGB camera, the photographing device includes:
an acquisition unit, configured to acquire the current photographing mode of the terminal;
the first shooting unit is used for starting the RGB camera to acquire a first shot image if the shooting mode is a first preset mode;
and the second shooting unit is used for starting the 3D camera to acquire a second shooting image and a first depth image if the shooting mode is a second preset mode.
A third aspect of the embodiments of the present application provides a terminal, including two cameras, a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the above method.
In the embodiment of the application, a 3D camera is arranged to directly acquire the depth image when an image is shot, so two cameras are no longer needed to acquire depth; the cameras therefore need neither a bracket for fixing nor calibration, which simplifies the camera assembly process and improves assembly efficiency. In addition, by providing an RGB camera alongside the 3D camera, the first photographed image is acquired through the RGB camera when the photographing mode is the first preset mode, which avoids the problem that shooting images with the 3D camera yields a resolution that cannot meet the user's requirements.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a first implementation of a photographing method provided in an embodiment of the present application;
fig. 2 is a first schematic diagram of a photo preview interface provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a second implementation of a photographing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a specific implementation of step 105 of the photographing method provided in the embodiment of the present application;
fig. 5 is a second schematic diagram of a photo preview interface provided in the embodiment of the present application;
fig. 6 is a schematic structural diagram of a photographing device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
At present, a depth image is generally obtained by a binocular stereo vision method: two cameras simultaneously capture two images of the same scene, corresponding pixels in the two images are found by a stereo matching algorithm, parallax information is calculated according to the triangulation principle, and the parallax is converted into a depth image that represents object distances in the scene.
In this method of obtaining a depth image, the cameras need to be assembled at a certain interval, fixed with a bracket, and calibrated, so camera assembly is complex. The choice of bracket is particularly delicate: fixing the cameras with a metal bracket can interfere with the transmission of radio-frequency signals, while fixing them with a non-metallic bracket may require a rather thick bracket to guarantee sufficient camera stability, which undoubtedly increases the structural design difficulty of a terminal meant to be thin and light.
In the embodiment of the application, the depth image is directly acquired by arranging the 3D camera when the image is shot, and the two cameras are not required to be used for acquiring the depth image, so that the camera can be fixed without a support and calibrated, the assembly flow of the camera is simplified, the assembly efficiency of the camera is improved, and the difficulty of the structural design of the terminal is not increased.
In addition, in practical applications, when the 3D camera outputs a two-dimensional RGB image and a depth image carrying depth information, the higher the depth accuracy of the depth image, the more obviously the resolution of the two-dimensional RGB image drops, so the user's requirement on image resolution cannot be met. For this reason, the embodiment of the application further provides an RGB camera alongside the 3D camera, so that the first photographed image is acquired through the RGB camera when the photographing mode is the first preset mode, avoiding the problem that shooting images with the 3D camera yields a resolution that cannot meet the user's requirements.
Fig. 1 shows a schematic flowchart of a first implementation of the photographing method provided by an embodiment of the present application. The method is applied to a terminal, can be executed by a photographing device configured on the terminal, is suitable for situations where camera assembly efficiency needs to be improved, and includes steps 101 to 103.
The terminal includes a terminal provided with a photographing device, such as a smart phone, a tablet computer or a learning machine. The terminal can install a photographing application and is provided with two cameras, the two cameras including a 3D camera and an RGB camera.
The 3D camera can simultaneously output a two-dimensional RGB image and a depth image carrying depth information when shooting; the RGB camera can output a high-resolution two-dimensional RGB image when shooting.
In step 101, the current photographing mode of the terminal is acquired. Optionally, in some embodiments of the present application, the current photographing mode may be obtained when a camera start instruction is received.
For example, the camera start instruction includes one triggered by the user clicking a shooting application icon on the system desktop, pressing a physical key, issuing a voice command, or in other ways.
Optionally, in some embodiments of the application, the current photographing mode of the terminal may also be obtained when the photographing instruction is received.
For example, as shown in fig. 2, the current photographing mode of the terminal is acquired when a photographing instruction is received, whether the instruction is triggered by the user clicking the photographing control 22 in the photographing preview interface 21, by pressing a physical key, or by voice; the embodiment of the present application does not limit the triggering manner of the photographing instruction.
Optionally, the photographing mode may include one or more of a first preset photographing mode, a second preset photographing mode, and a third preset photographing mode. The first preset shooting mode is a mode for starting an RGB camera to shoot; the second preset shooting mode is a mode for starting the 3D camera to shoot; the third preset shooting mode is a shooting mode for simultaneously starting the RGB camera and the 3D camera.
Optionally, the shooting mode of the terminal may be set according to a received setting instruction triggered by the user in the shooting preview interface.
For example, as shown in fig. 2, when the user clicks the setting control 23 in the shooting preview interface 21, a pop-up window 24 is displayed; selecting the selection control 25 triggers a setting instruction for the first preset shooting mode, selecting the selection control 26 triggers a setting instruction for the second preset shooting mode, and selecting the selection control 27 triggers a setting instruction for the third preset shooting mode.
Step 102: if the photographing mode is the first preset mode, the RGB camera is turned on to acquire the first photographed image.
In the embodiment of the application, after the current photographing mode of the terminal is obtained, if the photographing mode is the first preset mode, it indicates that the user only needs to photograph the two-dimensional RGB image, and therefore, the requirement of the user on photographing the image with higher resolution can be met by independently starting the RGB camera to photograph the two-dimensional RGB image.
For example, when the user needs to take a selfie or an ordinary picture, the photographing mode may be set to the first preset mode, so that the RGB camera is turned on to obtain the first photographed image when the terminal receives a photographing instruction; alternatively, the RGB camera may be turned on during image preview and then acquire the first photographed image when the photographing instruction is received.
Step 103: if the photographing mode is the second preset mode, the 3D camera is turned on to acquire the second photographed image and the first depth image. In the embodiment of the application, when the user needs to acquire a two-dimensional RGB image and a depth image carrying depth information at the same time, the photographing mode can be set to the second preset mode, so that the 3D camera acquires the second photographed image and the depth image after the terminal receives the photographing instruction; alternatively, the 3D camera may be turned on during image preview and then acquire the second photographed image and the depth image simultaneously when the photographing instruction is received.
In the embodiment of the application, providing two cameras comprising a 3D camera and an RGB camera gives the user two independent shooting modes, namely the first preset mode and the second preset mode. In these two shooting modes the two cameras work independently and do not interfere with each other: when the 3D camera is on, the RGB camera is not turned on, and when the RGB camera is on, the 3D camera is not turned on. Because the turning-on of the two cameras is controlled through the shooting mode selected by the user, the control is flexible and switching between the two cameras follows the user's needs.
In addition, because the two cameras work independently and do not affect each other, they can be fixed without a bracket, the distance between them is not restricted, and no calibration between the two cameras is needed, which simplifies the camera assembly process and improves assembly efficiency.
Optionally, in step 103, after acquiring the second captured image and the first depth image, the method may further include: outputting a three-dimensional image according to the second captured image and the first depth image.
For example, when 3D makeup is applied, in order to blend the virtual makeup more naturally with the captured user image, the second captured image and the depth image may be acquired simultaneously by turning on the 3D camera, and a corresponding three-dimensional image may be output.
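The application does not prescribe how the three-dimensional image is constructed from the second captured image and the first depth image. As a sketch under assumptions not stated in the application (a pinhole camera model with known intrinsics fx, fy, cx and cy), each valid depth pixel can be back-projected into a colored 3D point:

```python
import numpy as np


def rgbd_to_point_cloud(rgb, depth, fx, fy, cx, cy):
    """Back-project a depth map into colored 3D points (pinhole model).

    rgb:   (H, W, 3) uint8 image output by the 3D camera
    depth: (H, W) float array of distances; 0 marks invalid pixels
    fx, fy, cx, cy: intrinsics of the 3D camera (assumed known)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx  # standard pinhole back-projection
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)           # (N, 3) 3D positions
    colors = rgb[valid].astype(np.float32) / 255.0  # (N, 3) RGB in [0, 1]
    return points, colors
```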
As an embodiment of the present application, as shown in fig. 3, after the current photographing mode of the terminal is obtained, steps 104 to 105 may further be included.
Step 104: if the photographing mode is the third preset mode, the RGB camera and the 3D camera are turned on simultaneously to respectively acquire the first photographed image, the second photographed image and the first depth image.
Step 105: the depth information carried by the first depth image is mapped into the first shot image to obtain the first shot image carrying the depth information.
Because the 3D camera itself can output a two-dimensional RGB image and a depth image simultaneously, only the 3D camera needs to be turned on when a three-dimensional image is required. However, in practical applications, the higher the depth accuracy of the depth image output by the 3D camera, the lower the resolution of its two-dimensional RGB image; therefore, a three-dimensional image with both high resolution and high depth accuracy cannot be obtained by directly using the two-dimensional RGB image and depth image output by the 3D camera.
In the embodiment of the application, the respective advantages of the 3D camera and the RGB camera are combined: the RGB camera and the 3D camera are turned on simultaneously to obtain the first photographed image, the second photographed image and the first depth image, and the depth information carried by the first depth image is then mapped into the first photographed image to obtain a first photographed image carrying depth information. In this way a three-dimensional image with higher resolution and higher depth precision can be obtained, avoiding the problem that shooting images with the 3D camera alone yields a resolution that cannot meet the user's requirements.
In some embodiments of the present application, as shown in fig. 4, the step 105 of mapping the depth information carried by the first depth image to the first captured image to obtain the first captured image carrying the depth information may include: step 401 to step 403.
Step 401: performing alignment transformation and cropping processing on the first shot image and the second shot image to obtain a third shot image corresponding to the first shot image and a fourth shot image corresponding to the second shot image, where pixels in the third shot image correspond one-to-one to pixels in the fourth shot image.
For example, the first captured image and the second captured image are subjected to alignment transformation and cropping using an alignment algorithm, so that pixels in the resulting third captured image correspond one-to-one to pixels in the fourth captured image. The alignment algorithm may be a general image alignment algorithm, such as a forwards-additive or inverse-compositional alignment algorithm.
Step 402: performing, on the first depth image, the same alignment transformation and cropping processing as applied to the second shot image, to obtain a second depth image.
Step 403: mapping the depth information of the second depth image to the third shot image to obtain a third shot image carrying the depth information.
Since the first depth image and the second captured image are output by the 3D camera at the same moment, their captured content is the same. Therefore, after the fourth captured image is obtained by aligning and cropping the second captured image, applying the identical alignment transformation and cropping to the first depth image yields a second depth image whose pixels correspond one-to-one to those of the fourth captured image, and hence to those of the third captured image. The depth information of the second depth image can then be mapped into the third captured image to obtain a third captured image carrying depth information.
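For illustration, steps 401 to 403 can be sketched with standard image-processing tools. In the sketch below, ORB feature matching with a RANSAC homography stands in for the alignment algorithm (the application does not fix one), OpenCV's warpPerspective plays the role of the alignment transformation, and the cropping step is omitted for brevity; the function name map_depth_to_rgb is hypothetical.

```python
import cv2
import numpy as np


def map_depth_to_rgb(first_img, second_img, first_depth):
    """Sketch of steps 401-403 (cropping omitted; names hypothetical)."""
    # Step 401: estimate the transform aligning the 3D camera's RGB frame
    # (second image) onto the RGB camera's frame (first image). ORB
    # matching with a RANSAC homography stands in for the unspecified
    # general alignment algorithm.
    g1 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = first_img.shape[:2]
    fourth_img = cv2.warpPerspective(second_img, H, (w, h))

    # Step 402: reuse the *identical* warp on the depth map, so its pixels
    # stay in one-to-one correspondence with the warped RGB frame.
    # Nearest-neighbour interpolation avoids blending depth values across
    # object boundaries.
    second_depth = cv2.warpPerspective(first_depth.astype(np.float32),
                                       H, (w, h), flags=cv2.INTER_NEAREST)

    # Step 403: attach the warped depth to the RGB camera's image as an
    # extra channel, yielding the image "carrying depth information".
    third_img_with_depth = np.dstack([first_img.astype(np.float32),
                                      second_depth])
    return third_img_with_depth, fourth_img
```

The decisive point, as in step 402, is that the warp is estimated once on the RGB pair and then reused unchanged on the depth map.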
Optionally, before simultaneously turning on the RGB camera and the 3D camera, the method includes: adjusting the depth precision of the 3D camera to the maximum value, so as to obtain a three-dimensional image with depth precision as high as possible.
Optionally, in the embodiment described in fig. 1, step 103 of turning on the 3D camera to obtain the second photographed image and the first depth image if the photographing mode is the second preset mode may include: if the photographing mode is the second preset mode, adjusting the depth precision of the 3D camera to a preset precision value, and then turning on the 3D camera to obtain the second photographed image and the first depth image.
When the photographing mode is the second preset mode, it cannot be determined whether the depth precision of the 3D camera is currently set to the maximum value or to some other value. Therefore, when the photographing mode is determined to be the second preset mode, the depth precision of the 3D camera is first adjusted to the preset precision value, and the 3D camera is then turned on to acquire the second photographed image and the first depth image.
The preset precision can be the depth precision at which the depth precision and the image resolution reach a balanced state; for example, the preset depth precision may be determined experimentally.
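As a purely illustrative reading of "determined experimentally", the preset value could be chosen by sweeping the 3D camera's precision settings and picking the one at which normalized depth precision and normalized image resolution are best balanced; the measurement format and the balance criterion below are assumptions, not part of the application.

```python
def pick_balanced_precision(measurements):
    """Choose the setting where depth precision and resolution balance.

    measurements: experimentally gathered list of tuples
    (precision_setting, measured_depth_precision, measured_rgb_resolution).
    """
    max_p = max(m[1] for m in measurements)
    max_r = max(m[2] for m in measurements)
    # "Balanced state": maximize the worse of the two normalized qualities.
    best = max(measurements, key=lambda m: min(m[1] / max_p, m[2] / max_r))
    return best[0]
```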
Optionally, in some embodiments of the present application, in step 103, turning on the 3D camera to obtain the second captured image and the first depth image may further include: and receiving a depth precision setting instruction, and acquiring a second shot image and a first depth image according to the depth precision corresponding to the depth precision setting instruction.
For example, as shown in fig. 5, when the photographing mode is the second preset mode, a depth precision setting bar 51 may be displayed on the photographing preview interface 21. A depth precision setting instruction is generated according to the user's sliding operation on the slider 52 of the setting bar 51, and the second photographed image and the first depth image are acquired according to the depth precision corresponding to that instruction. This realizes dynamic adjustment between the resolution and the depth precision of the three-dimensional image output from the second photographed image and the first depth image, so as to meet different photographing requirements of the user.
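The application does not state how the slider position on the setting bar 51 maps to a depth-precision value; a minimal sketch, assuming a linear mapping over the camera's supported range, might look like this (precision_from_slider is a hypothetical helper):

```python
def precision_from_slider(slider_pos, min_precision, max_precision):
    """Map a slider position in [0.0, 1.0] to a depth-precision value."""
    slider_pos = min(max(slider_pos, 0.0), 1.0)  # clamp out-of-range input
    return min_precision + slider_pos * (max_precision - min_precision)
```

Raising the returned precision lowers the resolution of the accompanying RGB frame, which is exactly the trade-off the slider exposes to the user.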
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a schematic structural diagram of a photographing device 600 provided in an embodiment of the present application, the photographing device is configured in a terminal, the terminal is provided with two cameras, the two cameras include a 3D camera and an RGB camera, and the photographing device includes an obtaining unit 601, a first photographing unit 602, and a second photographing unit 603.
An obtaining unit 601, configured to obtain a current photographing mode of a terminal;
a first shooting unit 602, configured to, if the shooting mode is a first preset mode, start the RGB camera to obtain a first shot image;
the second shooting unit 603 is configured to, if the shooting mode is a second preset mode, start the 3D camera to obtain a second shot image and a first depth image.
In some embodiments of the present application, the photographing apparatus 600 may further include a third photographing unit, configured to, if the photographing mode is a third preset mode, simultaneously start the RGB camera and the 3D camera, and respectively obtain the first photographed image, the second photographed image, and the first depth image; and mapping the depth information carried by the first depth image to the first shot image to obtain the first shot image carrying the depth information.
Optionally, the third shooting unit is further configured to perform alignment transformation and cropping processing on the first shot image and the second shot image to obtain a third shot image corresponding to the first shot image and a fourth shot image corresponding to the second shot image, where pixels in the third shot image correspond one-to-one to pixels in the fourth shot image; to perform, on the first depth image, the same alignment transformation and cropping processing as applied to the second shot image, to obtain a second depth image; and to map the depth information of the second depth image to the third shot image to obtain the third shot image carrying the depth information.
Optionally, the second shooting unit is further configured to adjust the depth precision of the 3D camera to a maximum value before the RGB camera and the 3D camera are simultaneously turned on.
Optionally, the second shooting unit is further configured to, if the shooting mode is a second preset mode, adjust the depth precision of the 3D camera to a preset precision value, and then start the 3D camera to obtain a second shot image and a first depth image.
Optionally, the second shooting unit is further configured to output a three-dimensional image according to the second shot image and the first depth image after the second shot image and the first depth image are acquired.
It should be noted that, for convenience and brevity of description, the specific working process of the photographing apparatus 600 described above may refer to the corresponding process of the method described in fig. 1 to fig. 5, and is not described herein again.
As shown in fig. 7, the present application provides a terminal for implementing the above-mentioned photographing method, where the terminal may be a terminal such as a smart phone, a tablet computer, a Personal Computer (PC), a learning machine, and includes: a processor 71, a memory 72, one or more input devices 73 (only one shown in fig. 7), one or more output devices 74 (only one shown in fig. 7), and dual cameras 75. The processor 71, memory 72, input device 73, output device 74 and dual cameras 75 are connected by a bus 76. The double cameras comprise a 3D camera and an RGB camera and are used for generating a first shot image, a second shot image and a first depth image according to collected external light signals.
It should be understood that, in the embodiment of the present application, the processor 71 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The input device 73 may include a virtual keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 74 may include a display, a speaker, etc.
The memory 72 stores a computer program that is executable on the processor 71, for example, a program of a photographing method. The processor 71 implements the steps of the photographing method embodiment, such as the steps 101 to 103 shown in fig. 1, when executing the computer program. Alternatively, the processor 71, when executing the computer program, implements the functions of the modules/units in the device embodiments, for example, the functions of the units 601 to 603 shown in fig. 6.
The computer program may be divided into one or more modules/units, which are stored in the memory 72 and executed by the processor 71 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used for describing the execution process of the computer program in the photographing terminal. For example, the computer program may be divided into an acquisition unit, a first photographing unit and a second photographing unit, whose functions are as follows: the acquisition unit is used for acquiring the current photographing mode of the terminal; the first photographing unit is used for turning on the RGB camera to acquire a first photographed image if the photographing mode is a first preset mode; and the second photographing unit is used for turning on the 3D camera to acquire a second photographed image and a first depth image if the photographing mode is a second preset mode.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal are merely illustrative, and for example, the division of the above-described modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium and used by a processor to implement the steps of the embodiments of the methods described above. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the above-described computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier signal, telecommunications signal, software distribution medium, and the like. It should be noted that the computer readable medium described above may include content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media that does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (6)
1. A photographing method, applied to a terminal provided with two cameras, characterized in that the two cameras comprise a 3D camera and an RGB camera, and the photographing method comprises the following steps:
acquiring a current photographing mode of a terminal;
if the photographing mode is a first preset mode, starting the RGB camera to acquire a first photographed image;
if the photographing mode is a second preset mode, starting the 3D camera to acquire a second photographed image and a first depth image, which comprises the following steps:
if the photographing mode is a second preset mode, after the depth precision of the 3D camera is adjusted to a preset precision value, the 3D camera is started to obtain a second photographed image and a first depth image, wherein the preset precision is the depth precision when the depth precision and the resolution of the images reach a balanced state;
if the photographing mode is a third preset mode, adjusting the depth precision of the 3D camera to a maximum value so as to obtain a three-dimensional image with depth precision as high as possible, simultaneously starting the RGB camera and the 3D camera, and respectively acquiring the first photographed image, the second photographed image and the first depth image;
and mapping the depth information carried by the first depth image to the first shot image to obtain the first shot image carrying the depth information.
2. The photographing method of claim 1, wherein the mapping the depth information carried by the first depth image into the first photographed image to obtain the first photographed image carrying the depth information comprises:
carrying out alignment transformation and cropping processing on the first shot image and the second shot image to obtain a third shot image corresponding to the first shot image and a fourth shot image corresponding to the second shot image, wherein pixels in the third shot image correspond one-to-one to pixels in the fourth shot image;
carrying out, on the first depth image, the same alignment transformation and cropping processing as applied to the second shot image, to obtain a second depth image;
and mapping the depth information of the second depth image to the third shot image to obtain the third shot image carrying the depth information.
3. The photographing method according to claim 1, comprising, after acquiring the second photographed image and the first depth image:
and outputting a three-dimensional image according to the second shot image and the first depth image.
4. A photographing device, configured in a terminal provided with two cameras, characterized in that the two cameras comprise a 3D camera and an RGB camera, and the photographing device comprises:
an acquisition unit, configured to acquire the current photographing mode of the terminal;
the first shooting unit is used for starting the RGB camera to acquire a first shot image if the shooting mode is a first preset mode;
a second shooting unit, configured to, if the photographing mode is a second preset mode, turn on the 3D camera to acquire a second photographed image and a first depth image, including:
if the photographing mode is a second preset mode, after the depth precision of the 3D camera is adjusted to a preset precision value, the 3D camera is started to obtain a second photographed image and a first depth image, wherein the preset precision is the depth precision when the depth precision and the resolution of the images reach a balanced state;
the third shooting unit is used for simultaneously starting the RGB camera and the 3D camera and respectively acquiring the first shot image, the second shot image and the first depth image if the shooting mode is a third preset mode; and mapping the depth information carried by the first depth image to the first shot image to obtain the first shot image carrying the depth information.
5. A terminal comprising two cameras, a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said two cameras comprise a 3D camera and an RGB camera, said processor when executing said computer program implementing the steps of the method according to any of claims 1 to 3.
6. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910013723.8A | 2019-01-03 | 2019-01-03 | Photographing method, photographing device, terminal and computer-readable storage medium
Publications (2)
Publication Number | Publication Date |
---|---|
CN109639983A CN109639983A (en) | 2019-04-16 |
CN109639983B (en) | 2020-09-04
Family
ID=66058057
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910013723.8A (granted as CN109639983B, Active) | Photographing method, photographing device, terminal and computer-readable storage medium | 2019-01-03 | 2019-01-03
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109639983B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102467341A (en) * | 2010-11-04 | 2012-05-23 | Lg电子株式会社 | Mobile terminal and method of controlling an image photographing therein |
CN108764052A (en) * | 2018-04-28 | 2018-11-06 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable storage medium and electronic equipment |
CN109005348A (en) * | 2018-08-22 | 2018-12-14 | Oppo广东移动通信有限公司 | The control method of electronic device and electronic device |
CN208257980U (en) * | 2018-06-04 | 2018-12-18 | 盎锐(上海)信息科技有限公司 | Filming apparatus based on 2D camera and 3D camera |
CN109040556A (en) * | 2018-08-22 | 2018-12-18 | Oppo广东移动通信有限公司 | Imaging device and electronic equipment |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9392262B2 (en) * | 2014-03-07 | 2016-07-12 | Aquifi, Inc. | System and method for 3D reconstruction using multiple multi-channel cameras |
US10009598B2 (en) * | 2015-05-01 | 2018-06-26 | Dell Products L.P. | Dynamic mode switching of 2D/3D multi-modal camera for efficient gesture detection |
- 2019-01-03: application CN201910013723.8A filed in China; granted as CN109639983B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10311649B2 (en) | Systems and method for performing depth based image editing | |
EP3067746B1 (en) | Photographing method for dual-camera device and dual-camera device | |
EP3531418B1 (en) | Electronic device displaying interface for editing video data and method for controlling same | |
CN107395898B (en) | Shooting method and mobile terminal | |
KR101727169B1 (en) | Method and apparatus for generating image filter | |
EP2975835B1 (en) | Imaging device, image processing method, image processing program, and recording medium | |
US11582391B2 (en) | Electronic device capable of controlling image display effect, and method for displaying image | |
CN106454086B (en) | Image processing method and mobile terminal | |
WO2020103604A1 (en) | Method and device for photographing, terminal, and computer-readable storage medium | |
JP2016165101A (en) | Device with adaptive camera array | |
US9332195B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
CN109040596B (en) | Method for adjusting camera, mobile terminal and storage medium | |
CN105578023A (en) | Image quick photographing method and device | |
CN111787224B (en) | Image acquisition method, terminal device and computer-readable storage medium | |
KR20200117695A (en) | Electronic device and method for controlling camera using external electronic device | |
CN113810604A (en) | Document shooting method and device | |
CN105574834B (en) | Image processing method and device | |
KR20200113522A (en) | Method for performing fucntion according to gesture input and electronic device performing thereof | |
CN109479087B (en) | Image processing method and device | |
CN111988526B (en) | Mobile terminal and image data processing method | |
CN109639983B (en) | Photographing method, photographing device, terminal and computer-readable storage medium | |
US12081879B2 (en) | Method for generating image and electronic device therefor | |
CN107682556B (en) | Information display method and equipment | |
CN118052752A (en) | Image processing method, device and storage medium | |
CN118250561A (en) | Image processing method and device, electronic equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |