CN113489903A - Shooting method, shooting device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN113489903A
CN113489903A (application CN202110754835.6A)
Authority
CN
China
Prior art keywords
real time
background image
target object
preview picture
preview
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110754835.6A
Other languages
Chinese (zh)
Inventor
刘少华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou TCL Mobile Communication Co Ltd
Original Assignee
Huizhou TCL Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou TCL Mobile Communication Co Ltd filed Critical Huizhou TCL Mobile Communication Co Ltd
Priority to CN202110754835.6A
Publication of CN113489903A
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay

Abstract

The invention discloses a shooting method, a shooting apparatus, a terminal device, and a storage medium. By controlling a connection between the terminal device and a target device, the shooting method simultaneously acquires, in real time, a target object in a first preview picture captured by the terminal device and a background image captured by the target device, and combines the target object from the first preview picture with the background image into a second preview picture displayed in real time.

Description

Shooting method, shooting device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of electronic devices, and in particular to a shooting method, a shooting apparatus, a terminal device, and a storage medium.
Background
With the development of electronic technology, the photographing performance of terminal devices such as smartphones and tablet computers keeps improving, photographing with terminal devices has become increasingly popular, and the processing power of current terminal devices has grown to the point where they can perform some simple image-processing work.
However, current photographing software on terminal devices can only perform certain preset, relatively simple image-processing operations, such as applying a preset level of facial beautification, adding a preset background, or adding a preset sticker, and cannot complete a custom composite-photographing operation according to a user's needs. For example, suppose a user's friend is travelling at a scenic spot, the user is not there, but the user wants a picture of himself at that spot. In the conventional approach, after the friend photographs the scenery, the user is added to the scenery picture with Photoshop or other image-retouching software and the picture is retouched, finally yielding a picture of the user at the scenic spot. However, this approach usually requires a professional retoucher, and the composite picture cannot be obtained immediately, which degrades the user's shooting experience.
Disclosure of Invention
Embodiments of the present invention aim to provide a shooting method, a shooting apparatus, a terminal device, and a storage medium that can automatically blend an object shot by a user into a background image shot by a target device to obtain a composite image displayed in real time, making shooting more engaging.
In order to achieve the above object, in a first aspect, an embodiment of the present invention provides a shooting method applied to a terminal device, including the following steps:
controlling the terminal equipment to be connected with target equipment;
acquiring a first preview picture acquired by the terminal equipment in real time;
acquiring a background image acquired by the target equipment in real time;
synthesizing the first preview picture and the background image in real time to obtain a second preview picture synthesized in real time, and displaying the second preview picture;
and shooting the second preview picture when a shooting instruction is received.
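The five steps above can be sketched as one iteration of a real-time preview loop. This is a minimal sketch only; the function and parameter names are illustrative and not from the patent, and the four callables stand in for the camera, the wireless link, the compositor, and the display:

```python
def run_composite_preview(capture_local, receive_remote, compose, show):
    """One iteration of the claimed real-time loop (steps S102-S104).

    capture_local: returns the first preview picture from the terminal
        device's camera.
    receive_remote: returns the background image sent by the target device.
    compose: combines the two into the second preview picture.
    show: displays the second preview picture in real time.
    """
    first = capture_local()        # step 2: first preview picture
    background = receive_remote()  # step 3: background image
    second = compose(first, background)  # step 4: real-time synthesis
    show(second)                   # step 4: display the result
    return second                  # step 5 shoots the latest return value
```

Shooting (step 5) then amounts to saving the most recently returned second preview picture when the shooting instruction arrives.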
Further, the step of synthesizing the first preview screen and the background image in real time to obtain a second preview screen synthesized in real time and displaying the second preview screen includes:
when detecting that a target object exists in the first preview picture, extracting the target object;
synthesizing the target object and the background image in real time to obtain a second preview picture synthesized in real time;
and displaying the second preview picture in real time.
Further, the step of synthesizing the target object and the background image in real time to obtain a second preview picture synthesized in real time includes:
dividing the background image into a plurality of image layers according to the depth information in the three-dimensional data information of the background image;
and synthesizing the target object and the plurality of image layers of the background image to obtain a second preview image synthesized in real time.
Further, the terminal device is provided with at least two cameras, each camera has a different shooting focal length, and the step of extracting the target object includes:
determining depth-of-field information of the target object in the first preview picture according to the first preview pictures acquired by the cameras;
and performing edge segmentation on the target object in any one of the first preview pictures according to the depth-of-field information, so as to extract the target object.
Further, the step of synthesizing the target object and the background image in real time to obtain a second preview screen synthesized in real time includes:
replacing the first preview picture acquired by the higher-resolution camera among the at least two cameras of the terminal device with the background image acquired by the target device;
performing, according to the depth-of-field information, three-dimensional reconstruction on the target object extracted from the first preview picture acquired by the lower-resolution camera, to obtain a three-dimensional target object;
and synthesizing the three-dimensional target object and the background image to obtain a real-time synthesized second preview picture.
Further, the step of obtaining the background image acquired by the target device in real time includes:
acquiring, in real time, encapsulated-format data transmitted by the target device, wherein the encapsulated-format data comprises a raw H.264 stream;
and decoding the encapsulated-format data to obtain the background image.
Further, before the step of photographing the second preview screen, the photographing method includes:
and editing the target object on the second preview picture according to an editing instruction, wherein the editing instruction comprises at least one of layer switching, zooming, rotating and brightness adjusting.
In a second aspect, an embodiment of the present invention further provides a shooting apparatus, including: the device comprises a connecting module, a first obtaining module, a second obtaining module, a synthesizing module and a shooting module;
the connection module is used for controlling the connection between the terminal equipment and target equipment;
the first acquisition module is used for acquiring a first preview picture acquired by the terminal equipment in real time;
the second acquisition module is used for acquiring a background image acquired by the target equipment in real time;
the synthesis module is used for synthesizing the first preview picture and the background image in real time to obtain a second preview picture synthesized in real time and displaying the second preview picture;
and the shooting module is used for shooting the second preview picture when a shooting instruction is received.
In a third aspect, to solve the same technical problem, an embodiment of the present invention further provides a terminal device, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the memory is coupled to the processor, and the processor executes the computer program to implement any of the above-mentioned shooting methods.
In a fourth aspect, in order to solve the same technical problem, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, where the computer program, when running, controls an apparatus in which the computer-readable storage medium is located to execute any one of the above-mentioned shooting methods.
Embodiments of the invention provide a shooting method, a shooting apparatus, a terminal device, and a storage medium. By controlling a connection between the terminal device and a target device, the shooting method simultaneously acquires, in real time, a first preview picture captured by the terminal device and a background image captured by the target device, combines them into a second preview picture displayed in real time, and shoots the second preview picture when a shooting instruction is received. The first preview picture shot by the user is thus automatically merged into the background image shot by the target device to obtain a composite image displayed in real time, making shooting more enjoyable.
Drawings
Fig. 1 is a schematic flow chart of a shooting method according to an embodiment of the present invention;
fig. 2a is a schematic diagram of an extraction process for extracting a target object according to an embodiment of the present invention;
fig. 2b is a schematic process diagram of a map layer of a background image according to an embodiment of the present invention;
fig. 2c is a schematic diagram illustrating a target object and a background image according to an embodiment of the present invention;
fig. 3 is another schematic flow chart of a shooting method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a photographing device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a first structure of a terminal device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a second terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a shooting method according to an embodiment of the present invention, and as shown in fig. 1, the shooting method according to the embodiment of the present invention includes steps S101 to S105;
and step S101, controlling the terminal equipment to be connected with the target equipment.
In this embodiment, when a control instruction is received, the terminal device is controlled to connect with the target device. The control instruction includes an operation instruction, in which a touch or key operation on the terminal device triggers the connection with the target device, and a voice instruction, in which voice recognition triggers the connection.
As an alternative embodiment, the connection modes between the terminal device and the target device include a mobile-network communication connection, a Bluetooth connection, and a wireless LAN connection. For example, when the target device is very close to the terminal device (less than 10 m), the two are connected via Bluetooth; when the distance is moderate (less than 50 m), a wireless LAN connection is used; and when the distance is large (more than 50 m), a mobile-network communication connection is used.
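The distance-based selection above can be written as a small dispatch function. The thresholds (10 m and 50 m) come from the example in the text; the function name and the return labels are illustrative assumptions:

```python
def choose_connection(distance_m: float) -> str:
    """Pick a link type from the distance between terminal and target device.

    Thresholds follow the example in the text: Bluetooth under 10 m,
    wireless LAN under 50 m, mobile network otherwise.
    """
    if distance_m < 10:
        return "bluetooth"
    if distance_m < 50:
        return "wlan"
    return "mobile_network"
```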
As another alternative embodiment, the terminal device provided in the embodiment of the present invention has multiple shooting modes, for example a portrait mode, a video mode, a slow-motion mode, and a composite shooting mode. The control instruction therefore further includes an instruction to switch the terminal device's shooting mode, such as an instruction to switch to the composite shooting mode.
And step S102, acquiring a first preview picture acquired by the terminal equipment in real time.
In this embodiment, after receiving the control instruction, the camera function of the terminal device is started, and a first preview picture acquired by a camera of the terminal device is acquired in real time.
And step S103, acquiring a background image acquired by the target equipment in real time.
In this embodiment, after the first preview picture is obtained, the encapsulated-format data transmitted by the target device is acquired in real time over the wireless link, and the data is then decoded to obtain the background image, where the encapsulated-format data comprises a raw H.264 stream. The background image data collected by the target device is encoded in H.264 format; transmitting the resulting raw H.264 stream to the terminal device shortens transmission time and improves transmission efficiency.
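One plausible receive-side step, not spelled out in the patent, is splitting the raw Annex-B H.264 byte stream into NAL units on its 0x000001 / 0x00000001 start codes before handing each unit to a hardware decoder. A minimal sketch under that assumption:

```python
def split_nal_units(stream: bytes) -> list[bytes]:
    """Split an Annex-B raw H.264 stream into NAL unit payloads.

    Scans for 3-byte start codes (00 00 01); a preceding zero byte
    (the 4-byte start code 00 00 00 01) is trimmed from the previous
    unit. Real code would pass each unit to an H.264 decoder.
    """
    units, start = [], None
    i, n = 0, len(stream)
    while i < n - 2:
        if stream[i] == 0 and stream[i + 1] == 0 and stream[i + 2] == 1:
            if start is not None:
                # Trim the extra zero of a 4-byte start code, if present.
                end = i - 1 if i >= 1 and stream[i - 1] == 0 else i
                units.append(stream[start:end])
            i += 3
            start = i
        else:
            i += 1
    if start is not None:
        units.append(stream[start:])
    return units
```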
And step S104, synthesizing the first preview picture and the background image in real time to obtain and display a second preview picture synthesized in real time.
It should be noted that when the data sent by the target device is a raw H.264 stream, the terminal device decodes it with an H.264 decoder to obtain the corresponding background image. If the data received from the target device is already a background image, no decoding is needed.
In this embodiment, step S104 specifically includes: when the target object is detected to exist in the first preview picture, the target object is extracted, the target object and the background image are synthesized in real time to obtain a second preview picture which is synthesized in real time, and the second preview picture is displayed in real time.
When shooting with the method provided by this embodiment, the user can select in advance the target object to be composited, so that the terminal device automatically identifies and extracts it and combines it with the background image. For example, when the pre-selected target object is the user himself, the terminal device automatically extracts the user from the first preview picture and combines that extraction with the background image to obtain a second preview picture synthesized in real time, in which both the background and the user are displayed dynamically, following the dynamic background transmitted in real time and the user's movements.
In one embodiment, the method for synthesizing the target object and the background image in real time to obtain the second preview picture synthesized in real time includes: and dividing the background image into a plurality of layers according to the depth information in the three-dimensional data information of the background image, and synthesizing the target object and the plurality of layers of the background image to obtain a second preview picture synthesized in real time.
Because the target device is provided with a plurality of cameras, it can capture an image carrying three-dimensional data. The background image with three-dimensional data acquired by the target device can therefore be divided into a plurality of layers according to the distance information of the different scene elements in the image, and the target object can then be placed in a target layer selected by the user, achieving real-time dynamic synthesis of the target object and the background image.
The target layer comprises any one of a plurality of layers or a newly-built layer on any one of the plurality of layers.
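Dividing the background into layers by depth can be sketched as quantizing a per-pixel depth map against a set of distance thresholds. The patent does not specify how the depth range is partitioned, so the thresholds and names here are illustrative:

```python
def split_into_layers(depth, thresholds):
    """Assign each pixel a layer index from its depth value.

    depth: 2-D list of per-pixel distances (e.g. metres).
    thresholds: sorted upper bounds for each layer except the last.
    Returns a 2-D list of layer indices, 0 being the nearest layer.
    """
    def layer_of(d):
        for k, t in enumerate(thresholds):
            if d < t:
                return k
        return len(thresholds)  # farther than every threshold
    return [[layer_of(d) for d in row] for row in depth]
```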
In an embodiment of the present invention, when the terminal device has a single camera, the target object extracted from the first preview picture is a two-dimensional image. The synthesis in this embodiment therefore treats the two-dimensional target-object image as a new layer and combines that new layer with the layers of the background image: specifically, the new layer may be placed within, above, or below any one of the background image's layers, or between any two adjacent layers.
In another embodiment of the present invention, when the terminal device has a plurality of cameras, the target object extracted from the first preview picture carries three-dimensional data. The synthesis in this embodiment therefore performs three-dimensional modelling from that data to obtain a three-dimensional target object and combines it with the background image: specifically, the three-dimensional object may be placed within, above, or below any one of the background image's layers, or between any two adjacent layers.
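Inserting the object between two adjacent layers and flattening the stack can be sketched with a painter's-algorithm compositor. Both helpers are hypothetical; layers here are sparse dicts mapping pixel coordinates to labels, listed back to front (later layers are nearer and overwrite earlier ones):

```python
def insert_object(layers, obj, index):
    """Return a new back-to-front layer stack with obj inserted so that
    it sits just in front of layers[index]."""
    stack = list(layers)
    stack.insert(index + 1, obj)
    return stack

def composite(layers):
    """Flatten a back-to-front stack of sparse layers; nearer layers
    painted later overwrite farther ones at shared pixels."""
    frame = {}
    for layer in layers:
        frame.update(layer)
    return frame
```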
In step S105, when a shooting instruction is received, a second preview screen is shot.
In this embodiment, when the user is satisfied with the effect of combining the target object and the background image, i.e. with the current second preview picture, the user performs a shooting operation that causes the terminal device to generate a shooting instruction; the terminal device then captures the current second preview picture according to that instruction, obtaining a static composite image that matches the second preview picture at the moment of shooting.
For example, refer to fig. 2a to 2c, which illustrate the shooting method in one application scenario. Scenario: user A's yard has no trees, and user A wants to buy a tree sold by user B. To let user A see what the yard would look like after planting a tree, and thus decide which of user B's goods to buy, user B uses the shooting method of this embodiment. User B controls terminal device B to connect to user A's target device A via a control instruction and starts the camera function on terminal device B. As shown in fig. 2a, terminal device B obtains a first preview picture 10 containing a tree and, following a target-object selection operation, extracts the selected target object 11 (the tree) from the first preview picture 10. As shown in fig. 2b, terminal device B receives the background image 20 (the yard) transmitted by target device A; because target device A has multiple cameras (e.g. a dual-camera phone), its images carry three-dimensional data (e.g. distance information) for the background image 20, so terminal device B divides the background image into three layers (layer 21, layer 22, layer 23) according to the distance of each scene element (house, open ground, front fence). Then, as shown in fig. 2c, it places the target object 11 (the tree) in layer 22, corresponding to the open ground, thereby synthesizing the target object 11 and the background image 20 in real time into a second preview picture 30 (the yard containing the tree) displayed in real time. User B then shoots the second preview picture 30 via a shooting instruction, obtains a picture of the yard with the tree, and sends it to user A, letting user A see the effect of the planted tree.
The embodiment above mainly relies on the target device's multiple cameras to synthesize the target object and the background image; the following embodiment relies instead on the terminal device's multiple cameras. Referring to fig. 3, another flow diagram of the shooting method according to an embodiment of the present invention, the method of this embodiment includes steps S301 to S309.
and step S301, controlling the terminal device to be connected with the target device.
Step S302, a first preview picture collected by the terminal equipment is obtained in real time.
In this embodiment, since the terminal device has a plurality of cameras (at least two cameras), the first preview screen acquired by the terminal device is a screen having three-dimensional data information.
Step S303, a background image acquired by the target device in real time is acquired.
Step S304, when the target object is detected to exist in the first preview picture, determining depth information of the target object in the first preview picture according to the first preview picture acquired by each camera of at least two cameras of the terminal equipment.
In this embodiment, because the terminal device has at least two cameras and each camera has a different shooting focal length, the depth-of-field information of the target object can be obtained from the differences between the first preview pictures captured by the cameras, i.e. from the data differences among those pictures.
Specifically, different distance information of the target object can be obtained by at least twice different focusing, and then the depth information of the target object can be determined according to the different distance information.
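The patent gives no formula for turning the two views into depth, but the classic two-camera relation Z = f · B / d (focal length in pixels, baseline between cameras, pixel disparity between the views) makes the idea concrete. A sketch under that standard stereo assumption:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a point from its disparity between two camera views.

    Standard stereo relation Z = f * B / d; shown for illustration,
    not taken from the patent text.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```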
In step S305, edge segmentation is performed on the target object in any of the first preview pictures according to the depth information to extract the target object.
In this embodiment, because the terminal device has at least two cameras, it can record complete depth information using the two cameras and a dual-camera algorithm, and can then use that depth information to identify the target object and the background quickly and accurately, so edge segmentation between the two can be completed rapidly.
It should be noted that the first preview pictures captured by the different cameras show the same scene; they differ only in each camera's focal length and image resolution. Therefore, when choosing a first preview picture for edge segmentation to extract the target object, any of the pictures captured by the multiple cameras may be used.
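In its simplest form, depth-based segmentation is a threshold on the depth map: pixels nearer than a cutoff belong to the target object, the rest to the background. This is a minimal stand-in for the edge segmentation described above; a real implementation would refine the mask boundary (e.g. with matting), and the names are illustrative:

```python
def foreground_mask(depth, max_depth):
    """Binary mask: True where a pixel is nearer than max_depth.

    depth is a 2-D list of per-pixel distances; the mask marks the
    target-object region to be cut out of the first preview picture.
    """
    return [[d < max_depth for d in row] for row in depth]
```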
And step S306, replacing the first preview picture acquired by the camera with higher resolution in the at least two cameras of the terminal equipment with the background image acquired by the target equipment.
As an optional embodiment, when the terminal device has at least two cameras, they usually serve at least two different functions: a first camera that captures high-resolution pictures and a second camera that captures low-resolution pictures. The data acquired by the higher-resolution camera is therefore replaced with the background image acquired by the target device, removing the high-resolution data and avoiding the low synthesis efficiency that high-resolution data would cause in the subsequent synthesis step.
Step S307, according to the depth of field information, performing three-dimensional reconstruction on the target object extracted from the first preview picture acquired by the camera with the lower resolution to obtain a three-dimensional target object.
Because the first preview pictures captured by the different cameras show the same scene, the target object in each is the same, and only the resolution differs. When choosing which extraction to reconstruct in three dimensions, the target object extracted from the low-resolution first preview picture is selected, which speeds up three-dimensional reconstruction and improves the efficiency of the subsequent synthesis.
Step S308, the three-dimensional target object is synthesized with the background image to obtain a second preview screen synthesized in real time.
Existing dual-camera phones already synthesize the two data paths captured by the two cameras automatically into the image finally shown to the user. This embodiment replaces one path with the three-dimensional target object and the other with the background image, so the terminal device automatically synthesizes them into a second preview picture in real time. This effectively improves the efficiency of combining the target object and the background image and removes the need for retouching software such as Photoshop, whose use would make obtaining the final composite take too long and hurt the user's shooting experience.
In one embodiment, the target device has only one camera, so the image it acquires is two-dimensional. Because the target object is three-dimensional while the background image is two-dimensional, after the second preview picture is displayed the shooting method provided in this embodiment further includes: editing the target object on the second preview picture according to an editing instruction, where the editing instruction comprises at least one of layer switching, zooming, rotating, and brightness adjustment.
Through the editing instruction, the position or the form of the target object in the second preview picture can be adjusted, so that the display effect of the second preview picture meets the requirements of the user, and therefore, through the editing instruction, the shooting pleasure of the shooting method provided by the embodiment of the invention can be further improved, and the user experience is greatly improved.
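The four editing instructions can be modelled as updates to the target object's display state. The state keys (`layer`, `scale`, `angle_deg`, `brightness`) are hypothetical; the patent names only the instruction kinds:

```python
def apply_edit(state: dict, instruction: str, value) -> dict:
    """Apply one editing instruction to the target object's state.

    Supported instructions mirror the list in the text: layer switching,
    zooming, rotating, and brightness adjustment. Returns a new state
    dict; the input is left unmodified.
    """
    new = dict(state)
    if instruction == "layer":
        new["layer"] = value                              # switch layer
    elif instruction == "zoom":
        new["scale"] = new.get("scale", 1.0) * value      # relative zoom
    elif instruction == "rotate":
        new["angle_deg"] = (new.get("angle_deg", 0) + value) % 360
    elif instruction == "brightness":
        new["brightness"] = value
    else:
        raise ValueError(f"unknown instruction: {instruction}")
    return new
```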
In another embodiment, the target device is provided with a plurality of cameras for acquiring data, where the acquired data is three-dimensional data, and the target object is also a three-dimensional object, so that, in order to make the three-dimensional target object and the three-dimensional background image more naturally fused, the shooting method provided in this embodiment further includes: and editing the target object or the background image on the second preview screen according to an editing instruction, wherein the editing instruction comprises at least one of layer switching, zooming, rotating and brightness adjusting.
Through the editing instruction, the positions or the forms of the target object and the background image in the second preview picture can be adjusted simultaneously, so that the display effect of the second preview picture further meets the requirements of the user, and therefore, through the editing instruction, the shooting pleasure of the shooting method provided by the embodiment of the invention can be further improved, and the user experience is improved to a great extent.
In step S309, when the photographing instruction is received, the second preview screen is photographed.
In summary, with the shooting method provided by the embodiment of the present invention, by controlling the connection between the terminal device and the target device, the target object in the first preview picture captured by the terminal device and the background image captured by the target device can be acquired simultaneously in real time, and the target object and the background image are synthesized into a second preview picture displayed in real time; when a shooting instruction is received, the second preview picture is shot, so that the object shot by the user is automatically blended into the background image shot by the target device to obtain a composite image displayed in real time, making shooting more engaging.
On the basis of the method described in the foregoing embodiments, this embodiment further describes the solution from the perspective of a shooting apparatus. The shooting apparatus may be implemented as a stand-alone entity, or may be integrated in an electronic device such as a terminal, where the terminal may include a mobile phone, a tablet computer, and the like.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of a shooting apparatus according to an embodiment of the present invention. As shown in fig. 4, the shooting apparatus includes a connection module 401, a first obtaining module 402, a second obtaining module 403, a synthesis module 404, and a shooting module 405. The connection module 401 is configured to control the terminal device to connect with the target device; the first obtaining module 402 is configured to obtain, in real time, a first preview picture acquired by the terminal device; the second obtaining module 403 is configured to obtain, in real time, a background image acquired by the target device; the synthesis module 404 is configured to synthesize the first preview picture and the background image in real time to obtain, and display, a second preview picture synthesized in real time; and the shooting module 405 is configured to shoot the second preview picture when a shooting instruction is received.
In this embodiment, the second obtaining module 403 is further configured to: acquire, in real time, encapsulation-format data transmitted by the target device, where the encapsulation-format data includes an H.264 bare (elementary) stream, and decode the encapsulation-format data to obtain the background image.
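The patent does not specify how the bare H.264 stream is parsed before decoding. As one illustrative sketch (not the patent's implementation), an Annex-B elementary stream can be split into NAL units on its start codes before the units are handed to a hardware or software decoder:

```python
import re

# An Annex-B start code is 0x000001, optionally preceded by an extra zero byte.
START_CODE = re.compile(b"\x00\x00\x00\x01|\x00\x00\x01")

def split_nal_units(data: bytes):
    """Split an Annex-B H.264 elementary stream into NAL unit payloads."""
    matches = list(START_CODE.finditer(data))
    units = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(data)
        units.append(data[m.end():end])
    return units

def nal_unit_type(unit: bytes) -> int:
    """The low 5 bits of the first NAL header byte give the unit type
    (e.g. 7 = SPS, 8 = PPS, 5 = IDR slice)."""
    return unit[0] & 0x1F
```

On a mobile platform the resulting units would typically be fed to a hardware decoder (e.g. a MediaCodec-style API) to recover the background frames.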
In this embodiment, the synthesis module 404 is specifically configured to: when a target object is detected in the first preview picture, extract the target object, synthesize the target object and the background image in real time to obtain a second preview picture synthesized in real time, and display the second preview picture in real time.
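The real-time synthesis step can be pictured as per-pixel alpha compositing of the extracted target object over the background image. A minimal NumPy sketch, assuming the mask comes from the extraction step:

```python
import numpy as np

def composite(target, mask, background):
    """Blend the extracted target over the background:
    out = mask * target + (1 - mask) * background, per pixel.
    target, background: HxWxC arrays; mask: HxW with values in [0, 1]."""
    alpha = mask.astype(np.float32)[..., None]  # broadcast over channels
    return alpha * target + (1.0 - alpha) * background
```

A soft (fractional) mask at the object edges would give a more natural blend than the hard 0/1 mask used here.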
In an embodiment, when the terminal device is provided with at least two cameras with different shooting focal lengths, the synthesis module 404 is further configured to: divide the background image into a plurality of layers according to the depth information in the three-dimensional data information of the background image, and synthesize the target object with the plurality of layers of the background image to obtain a second preview picture synthesized in real time.
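A sketch of dividing the background image into layers by binning its depth map; the depth thresholds are assumed inputs, and pixels outside a layer are simply zeroed:

```python
import numpy as np

def split_into_layers(image, depth, thresholds):
    """Split a background image into depth layers.
    image: HxWxC, depth: HxW, thresholds: increasing depth boundaries.
    Returns len(thresholds) + 1 layers, nearest first."""
    bins = np.digitize(depth, thresholds)  # layer index per pixel
    layers = []
    for k in range(len(thresholds) + 1):
        mask = (bins == k)[..., None]      # broadcast over channels
        layers.append(np.where(mask, image, 0))
    return layers
```

With the background split this way, the target object can be inserted between two layers rather than merely pasted on top, which is what makes the layer-switching editing instruction possible.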
In another embodiment, when the terminal device is provided with at least two cameras with different shooting focal lengths, the synthesis module 404 is further specifically configured to: determine depth-of-field information of the target object in the first preview picture according to the first preview pictures acquired by the cameras; and perform, according to the depth-of-field information, edge segmentation on the target object in any one of the first preview pictures to extract the target object.
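A minimal sketch of depth-based extraction: keep the pixels whose depth falls inside the target's estimated depth-of-field range, then take the boundary of that mask as the segmentation edge. A real implementation would refine this edge (e.g. with a gradient-based edge detector); both function names are illustrative:

```python
import numpy as np

def target_mask(depth, near, far):
    """Boolean foreground mask from the target's depth-of-field range."""
    return (depth >= near) & (depth <= far)

def mask_edges(mask):
    """Edge pixels: foreground pixels with at least one background
    4-neighbour, found by comparing the mask with shifted copies."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] &
        padded[1:-1, :-2] & padded[1:-1, 2:]
    )
    return mask & ~interior
```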
In this embodiment, because the terminal device is provided with at least two cameras with different shooting focal lengths, the cameras usually serve at least two different functions: a first camera that shoots high-resolution pictures and a second camera that shoots low-resolution pictures. Therefore, to improve the efficiency of three-dimensional reconstruction, the synthesis module 404 is further specifically configured to: replace the first preview picture acquired by the higher-resolution camera of the terminal device with the background image acquired by the target device; perform, according to the depth-of-field information, three-dimensional reconstruction on the target object extracted from the first preview picture acquired by the lower-resolution camera to obtain a three-dimensional target object; and synthesize the three-dimensional target object and the background image to obtain a second preview picture synthesized in real time. Using the lower-resolution target object for three-dimensional reconstruction effectively reduces the time required for reconstruction and synthesis, improving subsequent synthesis efficiency.
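The reconstruction step itself is not detailed in the patent. A common starting point, sketched below under that assumption, is back-projecting the (lower-resolution) target's depth map into a point cloud with a pinhole camera model, where the intrinsics fx, fy, cx, cy are assumed known from calibration. Halving the preview resolution quarters the number of points, which is why reconstructing from the lower-resolution picture is faster:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map into an HxWx3 point cloud:
    x = (u - cx) * z / fx, y = (v - cy) * z / fy, z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```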
In an embodiment of the present invention, still referring to fig. 4, the shooting apparatus according to this embodiment further includes an editing module 406, where the editing module 406 is configured to edit the target object on the second preview screen according to an editing instruction, and the editing instruction includes at least one of layer switching, zooming, rotating, and brightness adjustment.
In a specific implementation, each of the above modules and/or units may be implemented as an independent entity, or may be implemented, in any combination, as one or several entities. For the specific implementation and the achievable beneficial effects of each module and/or unit, reference may be made to the foregoing method embodiments, and details are not described herein again.
In addition, referring to fig. 5, fig. 5 is a schematic view of a first structure of a terminal device according to an embodiment of the present invention, where the terminal device may be a mobile terminal such as a smart phone, a tablet computer, and the like. As shown in fig. 5, the terminal device 500 includes a processor 501 and a memory 502. The processor 501 is electrically connected to the memory 502.
The processor 501 is a control center of the terminal device 500, connects various parts of the entire electronic device with various interfaces and lines, executes various functions of the terminal device 500 and processes data by running or loading an application program stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the terminal device 500 as a whole.
In this embodiment, the processor 501 in the terminal device 500 loads instructions corresponding to processes of one or more application programs into the memory 502 according to the following steps, and the processor 501 runs the application programs stored in the memory 502, thereby implementing various functions:
controlling the terminal device to connect with the target device;
acquiring, in real time, a first preview picture acquired by the terminal device;
acquiring, in real time, a background image acquired by the target device;
synthesizing the first preview picture and the background image in real time to obtain a second preview picture synthesized in real time, and displaying the second preview picture;
and shooting the second preview picture when a shooting instruction is received.
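The five steps above can be sketched as a small pipeline object. All the device interfaces here are hypothetical stand-ins (plain callables), since the patent only names the steps, not their APIs:

```python
class ShootingPipeline:
    """Sketch of the five steps the processor performs each frame."""

    def __init__(self, connect, get_preview, get_background, synthesize):
        self.connect = connect                # step 1: link to target device
        self.get_preview = get_preview        # step 2: terminal's first preview
        self.get_background = get_background  # step 3: target's background image
        self.synthesize = synthesize          # step 4: real-time composition
        self.last_frame = None

    def tick(self):
        """Steps 2-4: acquire both streams and synthesize the second preview."""
        first = self.get_preview()
        background = self.get_background()
        self.last_frame = self.synthesize(first, background)
        return self.last_frame  # displayed as the live second preview

    def shoot(self):
        """Step 5: capture the currently displayed second preview."""
        return self.last_frame
```

In use, `tick()` runs once per display frame while `shoot()` fires only when the shooting instruction arrives.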
The terminal device 500 may implement the steps in any embodiment of the shooting method provided in the embodiment of the present invention, and therefore, beneficial effects that can be achieved by any shooting method provided in the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
Referring to fig. 6, fig. 6 is a block diagram of a second, more specific structure of a terminal device according to an embodiment of the present invention; this terminal device may be used to implement the shooting method provided in the foregoing embodiments. The terminal device 600 may be a mobile terminal such as a smart phone or a notebook computer.
The RF circuit 610 is used for receiving and transmitting electromagnetic waves and for converting between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices. The RF circuit 610 may include various existing circuit elements for performing these functions, such as an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so on. The RF circuit 610 may communicate with various networks, such as the internet, an intranet, or a wireless network, or may communicate with other devices over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., Institute of Electrical and Electronics Engineers (IEEE) standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other suitable protocols for e-mail, instant messaging, and short messages, and any other suitable communication protocol, even including protocols that have not yet been developed.
The memory 620 may be used to store software programs and modules, such as program instructions/modules corresponding to the shooting method in the above-described embodiments, and the processor 680 executes various functional applications and data processing by running the software programs and modules stored in the memory 620, so as to implement the following functions:
controlling the terminal device to connect with the target device;
acquiring, in real time, a first preview picture acquired by the terminal device;
acquiring, in real time, a background image acquired by the target device;
synthesizing the first preview picture and the background image in real time to obtain a second preview picture synthesized in real time, and displaying the second preview picture;
and shooting the second preview picture when a shooting instruction is received.
The memory 620 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 620 can further include memory located remotely from the processor 680, which can be connected to the terminal device 600 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 630 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 630 may include a touch sensitive surface 631 as well as other input devices 632. The touch sensitive surface 631, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on the touch sensitive surface 631 or near the touch sensitive surface 631 using any suitable object or attachment such as a finger, a stylus, etc.) on or near the touch sensitive surface 631 and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 631 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 680, and can receive and execute commands sent by the processor 680. In addition, the touch sensitive surface 631 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 630 may include other input devices 632 in addition to the touch-sensitive surface 631. In particular, other input devices 632 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 640 may be used to display information input by or provided to a user and various graphic user interfaces of the terminal apparatus 600, which may be configured by graphics, text, icons, video, and any combination thereof. The Display unit 640 may include a Display panel 641, and optionally, the Display panel 641 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 631 may overlay the display panel 641, and when the touch-sensitive surface 631 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 680 to determine the type of the touch event, and then the processor 680 provides a corresponding visual output on the display panel 641 according to the type of the touch event. Although in the figure, the touch-sensitive surface 631 and the display panel 641 are shown as two separate components to implement input and output functions, in some embodiments, the touch-sensitive surface 631 and the display panel 641 may be integrated to implement input and output functions.
The terminal device 600 may also include at least one sensor 650, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that can adjust the brightness of the display panel 641 according to the brightness of ambient light, and a proximity sensor that can generate an interrupt when the flip cover is shut or closed. As one kind of motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the device is stationary, and can be used in applications that recognize the posture of the mobile phone (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and in vibration-recognition-related functions (such as a pedometer and tapping). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured in the terminal device 600, and are not further described herein.
The audio circuit 660, the speaker 661, and the microphone 662 can provide an audio interface between the user and the terminal device 600. The audio circuit 660 may transmit an electrical signal, converted from received audio data, to the speaker 661, which converts it into a sound signal for output. Conversely, the microphone 662 converts a collected sound signal into an electrical signal, which the audio circuit 660 receives and converts into audio data; the audio data is then output to the processor 680 for processing and sent via the RF circuit 610 to, for example, another terminal, or output to the memory 620 for further processing. The audio circuit 660 may also include an earbud jack to allow peripheral headphones to communicate with the terminal device 600.
Through the transmission module 670 (e.g., a Wi-Fi module), the terminal device 600 can provide the user with wireless broadband internet access and assist the user in receiving requests, sending information, and the like. Although the transmission module 670 is shown in the figure, it is understood that it is not an essential component of the terminal device 600 and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 680 is a control center of the terminal device 600, connects various parts of the entire cellular phone using various interfaces and lines, and performs various functions of the terminal device 600 and processes data by operating or executing software programs and/or modules stored in the memory 620 and calling data stored in the memory 620, thereby integrally monitoring the electronic device. Optionally, processor 680 may include one or more processing cores; in some embodiments, processor 680 may integrate an application processor, which handles primarily the operating system, user interface, applications, etc., and a modem processor, which handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 680.
Terminal device 600 also includes a power supply 690 (e.g., a battery) that provides power to the various components, which in some embodiments may be logically coupled to processor 680 via a power management system, such that the functions of managing charging, discharging, and power consumption may be performed via the power management system. The power supply 690 may also include any component including one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the terminal device 600 further includes a camera (e.g., a front camera, a rear camera), a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the display unit of the electronic device is a touch screen display, the mobile terminal further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for:
controlling the terminal device to connect with the target device;
acquiring, in real time, a first preview picture acquired by the terminal device;
acquiring, in real time, a background image acquired by the target device;
synthesizing the first preview picture and the background image in real time to obtain a second preview picture synthesized in real time, and displaying the second preview picture;
and shooting the second preview picture when a shooting instruction is received.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be performed by instructions, or by related hardware controlled by instructions, where the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, an embodiment of the present invention provides a storage medium in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps of any embodiment of the shooting method provided by the embodiments of the present invention.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any of the shooting methods provided in the embodiments of the present invention, the beneficial effects that can be achieved by any of the shooting methods provided in the embodiments of the present invention can be achieved, for details, see the foregoing embodiments, and are not described herein again.
The shooting method, apparatus, terminal device, and storage medium provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in specific implementations and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application. Moreover, it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention, and such modifications and adaptations are intended to fall within the scope of the invention.

Claims (10)

1. A shooting method is characterized by being applied to terminal equipment and comprising the following steps:
controlling the terminal equipment to be connected with target equipment;
acquiring a first preview picture acquired by the terminal equipment in real time;
acquiring a background image acquired by the target equipment in real time;
synthesizing the first preview picture and the background image in real time to obtain a second preview picture synthesized in real time and displaying the second preview picture;
and shooting the second preview picture when a shooting instruction is received.
2. The photographing method according to claim 1, wherein the step of synthesizing the first preview picture and the background image in real time to obtain a second preview picture synthesized in real time and displaying the second preview picture includes:
when detecting that a target object exists in the first preview picture, extracting the target object;
synthesizing the target object and the background image in real time to obtain a second preview picture synthesized in real time;
and displaying the second preview picture in real time.
3. The photographing method according to claim 2, wherein the background image acquired by the target device is an image having three-dimensional data information, and the step of synthesizing the target object and the background image in real time to obtain a real-time synthesized second preview screen includes:
dividing the background image into a plurality of image layers according to the depth information in the three-dimensional data information of the background image;
and synthesizing the target object and the plurality of image layers of the background image to obtain a second preview image synthesized in real time.
4. The photographing method according to claim 2, wherein the terminal device is provided with at least two cameras each having a different photographing focal length, and the step of extracting the target object includes:
determining depth-of-field information of a target object in a first preview picture according to the first preview picture acquired by each camera;
and performing edge segmentation on the target object in any one of the first preview pictures according to the depth information so as to extract the target object.
5. The photographing method according to claim 4, wherein the step of synthesizing the target object with the background image in real time to obtain a real-time synthesized second preview screen includes:
replacing a first preview picture acquired by a camera with higher resolution in at least two cameras of the terminal equipment with a background image acquired by the target equipment;
according to the depth of field information, performing three-dimensional reconstruction on the target object extracted from a first preview picture acquired by a camera with lower resolution to obtain a three-dimensional target object;
and synthesizing the three-dimensional target object and the background image to obtain a real-time synthesized second preview picture.
6. The photographing method according to claim 4, wherein the step of acquiring the background image acquired by the target device in real time includes:
acquiring packaging format data transmitted by the target equipment in real time, wherein the packaging format data comprises an H264 bare code stream;
and decoding the packaging format data to obtain a background image.
7. The photographing method according to claim 2, wherein, before the step of photographing the second preview screen, the photographing method includes:
and editing the target object on the second preview picture according to an editing instruction, wherein the editing instruction comprises at least one of layer switching, zooming, rotating and brightness adjusting.
8. A camera, comprising: the device comprises a connecting module, a first obtaining module, a second obtaining module, a synthesizing module and a shooting module;
the connection module is used for controlling the connection between the terminal equipment and target equipment;
the first acquisition module is used for acquiring a first preview picture acquired by the terminal equipment in real time;
the second acquisition module is used for acquiring a background image acquired by the target equipment in real time;
the synthesis module is used for synthesizing the first preview picture and the background image in real time to obtain a second preview picture synthesized in real time and displaying the second preview picture;
and the shooting module is used for shooting the second preview picture when a shooting instruction is received.
9. A terminal device, comprising a processor, a memory coupled to the processor, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the photographing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the photographing method according to any one of claims 1 to 7.
CN202110754835.6A 2021-07-02 2021-07-02 Shooting method, shooting device, terminal equipment and storage medium Pending CN113489903A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110754835.6A CN113489903A (en) 2021-07-02 2021-07-02 Shooting method, shooting device, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113489903A true CN113489903A (en) 2021-10-08

Family

ID=77940647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110754835.6A Pending CN113489903A (en) 2021-07-02 2021-07-02 Shooting method, shooting device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113489903A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102625129A (en) * 2012-03-31 2012-08-01 福州一点通广告装饰有限公司 Method for realizing remote reality three-dimensional virtual imitated scene interaction
CN107197144A (en) * 2017-05-24 2017-09-22 珠海市魅族科技有限公司 Filming control method and device, computer installation and readable storage medium storing program for executing
CN107404617A (en) * 2017-07-21 2017-11-28 努比亚技术有限公司 A kind of image pickup method and terminal, computer-readable storage medium
CN107507239A (en) * 2017-08-23 2017-12-22 维沃移动通信有限公司 A kind of image partition method and mobile terminal
CN108322644A (en) * 2018-01-18 2018-07-24 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN110009555A (en) * 2018-01-05 2019-07-12 广东欧珀移动通信有限公司 Image weakening method, device, storage medium and electronic equipment
CN112532887A (en) * 2020-12-18 2021-03-19 惠州Tcl移动通信有限公司 Shooting method, device, terminal and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211008