CN114040108B - Auxiliary shooting method, device, terminal and computer readable storage medium - Google Patents

Publication number: CN114040108B
Authority: CN (China)
Prior art keywords: scene, terminal, shooting, construction, image
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202111375248.2A
Other languages: Chinese (zh)
Other versions: CN114040108A (en)
Inventor: 吴俊�
Current assignee: Hangzhou Douku Software Technology Co Ltd
Original assignee: Hangzhou Douku Software Technology Co Ltd
Events:
    • Application filed by Hangzhou Douku Software Technology Co Ltd
    • Priority to CN202111375248.2A
    • Publication of CN114040108A
    • Priority to PCT/CN2022/121755 (WO2023087929A1)
    • Application granted
    • Publication of CN114040108B
    • Status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80: Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Abstract

The application belongs to the technical field of shooting, and particularly relates to an auxiliary shooting method, an auxiliary shooting device, a terminal and a computer readable storage medium.

Description

Auxiliary shooting method, device, terminal and computer readable storage medium
Technical Field
The application belongs to the technical field of shooting, and particularly relates to an auxiliary shooting method, an auxiliary shooting device, a terminal and a computer readable storage medium.
Background
At present, to enhance the effect of a captured image, a terminal's shooting application can apply its built-in image processing functions, such as filters, stickers, and special effects, to perform a series of processing operations on the image after it has been captured.
However, these image processing methods depend mainly on the image processing functions of the terminal itself and therefore have certain limitations.
Disclosure of Invention
The embodiments of the present application provide an auxiliary shooting method, an auxiliary shooting device, a terminal, and a computer readable storage medium, which enable a terminal to obtain a processed image by shooting without depending on its own image processing function.
An embodiment of the present application provides an auxiliary shooting method, where the auxiliary shooting method is applied to a first terminal, and includes:
generating a scene construction instruction carrying scene construction information when the first terminal is in an interactive shooting mode;
sending the scene construction instruction to a second terminal to obtain a target shooting scene, where the scene construction instruction is used to instruct the second terminal to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene, and the target shooting scene comprises the auxiliary shooting scene obtained by the second terminal through scene construction according to the scene construction information.
A second aspect of an embodiment of the present application provides an auxiliary photographing method, which is applied to a second terminal, including:
receiving a scene construction instruction carrying scene construction information sent by a first terminal, wherein the scene construction instruction is generated by the first terminal when the first terminal is in an interactive shooting mode;
and constructing the scene according to the scene construction information to obtain an auxiliary shooting scene.
A third aspect of an embodiment of the present application provides an auxiliary photographing apparatus, configured to a first terminal, including:
the generating unit is used for generating a scene construction instruction carrying scene construction information when the first terminal is in the interactive shooting mode;
the sending unit is used for sending the scene construction instruction to the second terminal so as to obtain a target shooting scene; the scene construction instruction is used for instructing the second terminal to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene; the target shooting scene comprises the auxiliary shooting scene obtained by the second terminal through scene construction according to the scene construction information.
A fourth aspect of the present application provides an auxiliary photographing apparatus, configured in a second terminal, including:
a receiving unit, configured to receive a scene construction instruction carrying scene construction information sent by the first terminal, where the scene construction instruction is generated by the first terminal when the first terminal is in an interactive shooting mode;
and the construction unit is used for constructing the scene according to the scene construction information to obtain an auxiliary shooting scene.
A fifth aspect of the embodiments of the present application provides a terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of the first aspect when executing the computer program.
A sixth aspect of the embodiments of the present application provides a terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of the second aspect when executing the computer program.
A seventh aspect of the embodiments of the present application provides a computer readable storage medium storing a computer program which, when executed, implements the steps of the method of the first aspect or the steps of the method of the second aspect.
In the embodiments of the present application, when the first terminal is in the interactive shooting mode, it sends a scene construction instruction to the second terminal to interact with it, so that the second terminal performs scene construction according to the scene construction information. This processes (for example, beautifies) the current shooting scene of the first terminal, such as by overlaying multimedia information like images or sounds, to yield a processed target shooting scene. The first terminal can then shoot in the target shooting scene constructed with the second terminal's assistance and obtain a target shot image corresponding to that scene. The first terminal thus obtains a processed image by shooting without depending on its own image processing function, and the authenticity of image shooting can be improved.
Drawings
To illustrate the auxiliary shooting method of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be construed as limiting its scope; other related drawings may be obtained from these drawings by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a first implementation of an auxiliary shooting method according to an embodiment of the present application;
fig. 2 is a schematic diagram of the construction of a target shooting scene according to an embodiment of the present application;
FIGS. 3a-3b are schematic diagrams of an interactive capture control provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a robot for adjusting a posture according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a robot displaying an image of a target scene provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a second implementation flow of an auxiliary shooting method according to an embodiment of the present application;
fig. 7 is an interaction schematic diagram among a first terminal, a second terminal, and a cloud server according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a scene build type selection interface provided by an embodiment of the application;
fig. 9 is a first schematic structural diagram of an auxiliary shooting device according to an embodiment of the present application;
fig. 10 is a second schematic structural diagram of an auxiliary shooting device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
"And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, in the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more, and "at least one" or "one or more" means one, two, or more.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in various places throughout this specification are not necessarily all the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the related art, when a terminal shoots an image with a shooting application, it can apply a filter to the captured image, add stickers to it, or apply various special effects, thereby processing the image. However, these image processing methods are mainly implemented by the terminal's own image processing function and have certain limitations.
Based on the above technical problems, embodiments of the present application provide an auxiliary shooting method, an apparatus, a terminal, and a computer readable storage medium, which can enable the terminal to shoot and obtain a processed image without relying on its own image processing function.
The following is a description of the embodiments of the present application by way of example. It should be noted that, the image capturing referred to in the embodiment of the present application may refer to capturing a single photo, or may refer to capturing a video.
Fig. 1 is a schematic diagram of a first implementation flow of an auxiliary shooting method according to an embodiment of the present application. The auxiliary shooting method is applied to the first terminal, can be executed by an auxiliary shooting device of the first terminal, and can be realized in the following steps 101 to 102.
Step 101, when a first terminal is in an interactive shooting mode, generating a scene construction instruction carrying scene construction information.
And 102, sending a scene construction instruction to a second terminal to obtain a target shooting scene.
In the embodiments of the present application, the first terminal may be any of various types of terminals with an image capturing function, for example, a mobile phone, a tablet computer, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the embodiments of the present application do not limit the specific type of the first terminal.
The second terminal may be a terminal such as an intelligent automobile, an intelligent television, an intelligent robot, an intelligent household appliance, an intelligent toy, an advertisement machine, a stage media control device, etc., and the embodiment of the application does not limit the specific type of the second terminal.
The first terminal may establish a communication connection with the second terminal through a wireless connection method such as Bluetooth, Wi-Fi, ZigBee, or UWB, or through a wired connection method.
In the embodiments of the present application, the interactive shooting mode is a shooting mode in which the first terminal can interact with the second terminal so that the second terminal performs scene construction according to the scene construction information, thereby processing the current shooting scene of the first terminal and obtaining a processed target shooting scene.
In the interactive shooting mode, the first terminal and the second terminal are in a communication connection state. After a scene construction instruction carrying scene construction information is generated, it can therefore be sent to the second terminal, and the second terminal can perform scene construction according to the scene construction information to obtain the processed target shooting scene.
In the embodiment of the present application, the above-mentioned scene construction instruction is used to instruct the second terminal to perform scene construction according to the scene construction information. The scene construction information may be determined according to a scene type of the scene to be constructed.
For example, when the scene type of the scene to be constructed is the image type of the scene image to be displayed, the scene construction information may include image information corresponding to the scene image to be displayed; the scene construction instruction may be used to instruct the second terminal to display the scene image to be displayed according to the image information. The image information may be the number of the image or the content of the image itself.
For another example, when the scene type of the scene to be constructed is a voice type of the voice to be played, the scene construction information may include voice information of the voice to be played, and the scene construction instruction may be used to instruct the second terminal to play the voice to be played according to the voice information. The voice information may be the number of the voice or the voice content itself.
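As a concrete illustration of the two cases above, the scene construction instruction can be pictured as a small serialized message whose payload is interpreted according to the scene type. The following Python sketch is purely illustrative; the class name, field names, and JSON encoding are assumptions, not part of the patent:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SceneConstructionInstruction:
    # "image": payload is an image number or the image content itself;
    # "voice": payload is a voice number or the voice content itself.
    scene_type: str
    payload: str

    def to_message(self) -> bytes:
        # Serialize for transmission (e.g. over Bluetooth or Wi-Fi).
        return json.dumps(asdict(self)).encode("utf-8")

    @staticmethod
    def from_message(raw: bytes) -> "SceneConstructionInstruction":
        # The second terminal decodes the message to learn what to construct.
        return SceneConstructionInstruction(**json.loads(raw.decode("utf-8")))
```

Under this sketch, the first terminal would build `SceneConstructionInstruction(scene_type="image", payload="snowflake_21")`, serialize it with `to_message()`, and the second terminal would reconstruct it with `from_message()` before performing the corresponding scene construction.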
In the embodiments of the present application, sending the scene construction instruction to the second terminal to obtain the target shooting scene may mean sending the scene construction instruction to the second terminal so that the second terminal performs scene construction according to the scene construction information to obtain an auxiliary shooting scene, which is then superimposed on the actual scene of the environment where the first terminal is located to obtain the target shooting scene. The target shooting scene comprises the auxiliary shooting scene obtained by the second terminal through scene construction according to the scene construction information.
For example, suppose the first terminal is a mobile phone, the second terminal is an automobile, and the mobile phone user sits in the automobile. When a scene construction instruction carrying image information corresponding to a scene image to be displayed is sent to the automobile, the automobile can display the scene image on a display screen embedded in a car window. The mobile phone user sitting in the automobile can then shoot the target shooting scene obtained by superimposing the scene image displayed on the window's display screen onto the actual scene outside the window.
For example, if the scene image to be displayed is the snowflake image 21 shown in fig. 2 and the actual scene outside the vehicle window is the house 22 shown in fig. 2, the target shooting scene is the scene obtained by superimposing the snowflake image 21 on the house 22, and the image 23 can be obtained by shooting this target scene with the mobile phone.
In the embodiments of the present application, when the first terminal is in the interactive shooting mode, it sends the scene construction instruction to the second terminal to interact with it, so that the second terminal performs scene construction according to the scene construction information. This processes (for example, beautifies) the current shooting scene of the first terminal, such as by overlaying multimedia information like images or sounds, to yield the processed target shooting scene. The first terminal can then shoot in the target shooting scene constructed with the second terminal's assistance and obtain a target shot image corresponding to that scene, so that it obtains a processed image without depending on its own image processing function, and the authenticity of image shooting can be improved.
The following illustrates four implementation manners of how to set the shooting mode of the first terminal to the interactive shooting mode according to the embodiment of the present application:
First kind:
Alternatively, in some embodiments of the present application, before step 101, it may be determined whether the first terminal and the second terminal are in a communication connection state, and when they are, the shooting mode of the first terminal is set to the interactive shooting mode.
For example, after a shooting application is started, the first terminal can actively perform device scanning to establish a communication connection with the second terminal, and when the two terminals are in a communication connection state, the first terminal is automatically set to the interactive shooting mode. Alternatively, after the user enables Bluetooth or another connection function and selects the terminal to be connected (the second terminal), the communication connection between the first terminal and the second terminal is completed; when the two terminals are in a communication connection state, the shooting mode of the first terminal is set to the interactive shooting mode.
That is, when the first terminal and the second terminal are in a communication connection state, the shooting mode of the first terminal is automatically set to the interactive shooting mode without any further operation, regardless of whether the first terminal has opened the shooting application.
For example, if the first terminal establishes a communication connection with the second terminal while performing other functions, so that the two terminals are in a communication connection state, then when a shooting application is later started on the first terminal, its shooting mode can be automatically set to the interactive shooting mode without any mode selection operation.
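This first manner, in which the shooting mode simply tracks the communication connection state, can be pictured with a minimal sketch. The class and attribute names below are illustrative assumptions, not the patent's implementation:

```python
class FirstTerminal:
    """Minimal sketch: the shooting mode follows the connection state."""

    def __init__(self):
        self.shooting_mode = "default"

    def on_connection_changed(self, connected: bool) -> None:
        # When a communication connection with the second terminal exists,
        # the interactive shooting mode is selected automatically -- no
        # extra user operation, whether or not the shooting app is open.
        self.shooting_mode = "interactive" if connected else "default"
```

A Bluetooth pairing completing would then call `on_connection_changed(True)`, and the shooting application, whenever started, would already find itself in the interactive shooting mode.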
Second kind:
Alternatively, in other embodiments of the present application, the shooting mode of the first terminal may be set to the interactive shooting mode when an interactive shooting control provided on the shooting interface of the first terminal is triggered.
That is, setting the first terminal's interactive shooting mode is independent of whether the first terminal and the second terminal are in a communication connection state. The first terminal may set its shooting mode to the interactive shooting mode in response to the user triggering the interactive shooting control on its shooting interface either after the communication connection with the second terminal is established or before it is established.
For example, the shooting interface of the first terminal's shooting application may be preset with an interactive shooting control. When this control is triggered, the shooting mode of the first terminal is set to the interactive shooting mode; if no communication connection has been established between the first terminal and the second terminal at that point, establishing the connection is triggered automatically, so that the two terminals enter a communication connection state.
For example, as shown in fig. 3a, the shooting interface of the first terminal's shooting application may be preset with an interactive shooting control 301. When the interactive shooting control 301 is clicked or pressed, it is determined that the user of the first terminal wants to shoot images in the interactive shooting mode, and the shooting mode of the first terminal is set accordingly. In some embodiments of the present application, when the first terminal enters the interactive shooting mode in this way and has not yet established a communication connection with the second terminal, it is automatically triggered to establish one, so that the two terminals enter a communication connection state.
For example, the first terminal sends a Bluetooth scanning signal to identify a second terminal in its current environment and establishes a communication connection with it; or the first terminal receives a Bluetooth scanning signal sent by the second terminal and thereby establishes the connection.
It should be noted that fig. 3a merely illustrates one possible location and shape of the interactive shooting control; the present application limits neither. In some embodiments of the present application, the interactive shooting control may be hidden in a popup window of the shooting interface instead of being displayed directly on the interface.
For example, as shown in fig. 3b, the interactive shooting control is hidden in a popup window of the shooting interface. The user clicks the "more" control, causing the first terminal to load the shooting mode selection window 32 on the shooting interface; this window contains the shooting mode selection controls hidden by the shooting application, and when the user slides the interactive shooting control 321 to the left, the first terminal enters the interactive shooting mode.
Third kind:
In other embodiments of the present application, when the second terminal can construct multiple types of scenes, setting the shooting mode of the first terminal to the interactive shooting mode may be implemented through the following steps A01 to A05.
And A01, acquiring scene construction type information corresponding to the second terminal when the first terminal and the second terminal are in a communication connection state.
And step A02, generating a scene construction type selection interface based on the scene construction type information.
And step A03, determining the scene type of the scene to be constructed in response to a scene construction type selection operation triggered by a user on a scene construction type selection interface.
For example, when the first terminal and the second terminal are in a communication connection state, the first terminal acquires device type information of the second terminal, determines the scene construction type information corresponding to the second terminal based on that device type information, generates a scene construction type selection interface containing one or more scene construction type selection controls based on the scene construction type information, and then determines the scene type of the scene to be constructed in response to the user triggering one of those controls on the interface.
And step A04, determining whether the second terminal meets the construction conditions of the scene to be constructed or not based on the scene type of the scene to be constructed.
And step A05, if the second terminal meets the construction conditions of the scene to be constructed, setting the shooting mode of the first terminal as an interactive shooting mode.
In the embodiments of the present application, different scene types may require different resources. Therefore, after the scene type of the scene to be constructed is determined, it can be judged whether the second terminal meets the construction condition for that scene, i.e., whether it has the resources needed to construct it. When the second terminal meets this condition, the shooting mode of the first terminal is set to the interactive shooting mode; a scene construction instruction carrying scene construction information is then generated and sent to the second terminal, so that the second terminal directly performs scene construction according to the scene construction information to obtain the target shooting scene.
Correspondingly, if the second terminal does not meet the construction condition of the scene to be constructed, then when the shooting application of the first terminal is started, the first terminal may simply be placed in its default shooting mode, as in the related art. For example, the shooting mode of the first terminal may be set to a default photo shooting mode or video shooting mode; the present application does not limit this.
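Steps A04 and A05 amount to a capability check followed by a mode decision. The sketch below assumes a hypothetical mapping from scene types to the resources they need; all names and the fallback mode are illustrative, not drawn from the patent:

```python
# Hypothetical mapping from scene type to the resources needed to build it.
REQUIRED_RESOURCES = {
    "image": {"display"},            # a scene image needs a display surface
    "voice": {"speaker"},            # a voice scene needs audio output
    "image_and_voice": {"display", "speaker"},
}

def meets_construction_condition(scene_type: str, resources: set) -> bool:
    """Step A04: does the second terminal have what the scene requires?"""
    return REQUIRED_RESOURCES.get(scene_type, set()) <= resources

def select_shooting_mode(scene_type: str, resources: set) -> str:
    """Step A05: interactive mode only when the construction condition is
    met; otherwise fall back to a default mode, as in the related art."""
    if meets_construction_condition(scene_type, resources):
        return "interactive"
    return "photo"  # default fallback
```

For instance, a second terminal reporting both a display and a speaker would satisfy an image-and-voice scene and put the first terminal in the interactive shooting mode, while a display-only terminal asked for the same scene would leave it in the default mode.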
Fourth kind:
Optionally, in some embodiments of the present application, when the second terminal can construct multiple types of scenes, the following steps B01 to B05 may also be used to determine whether to place the first terminal in the interactive shooting mode.
And B01, acquiring scene construction type information corresponding to the second terminal when the first terminal and the second terminal are in a communication connection state.
And step B02, generating a scene construction type selection interface based on the scene construction type information.
And step B03, determining the scene type of the scene to be constructed in response to a scene construction type selection operation triggered by a user on a scene construction type selection interface.
And step B04, determining whether the second terminal meets the construction conditions of the scene to be constructed or not based on the scene type of the scene to be constructed.
In this embodiment, steps B01 to B04 are implemented in the same manner as steps A01 to A04, and the description is not repeated here.
And step B05, if the second terminal meets the construction conditions of the scene to be constructed, adding an interactive shooting control on a shooting interface of the first terminal, and setting the shooting mode of the first terminal as an interactive shooting mode when the interactive shooting control is triggered.
In the embodiments of the present application, when the second terminal meets the construction condition of the scene to be constructed, the interactive shooting control 301 shown in fig. 3a or the interactive shooting control 321 shown in fig. 3b is added to the shooting interface of the first terminal. The user can thus select the scene to be constructed in advance and trigger the interactive shooting control only when ready to shoot; only then does the first terminal enter the interactive shooting mode, generate the scene construction instruction carrying the scene construction information, and send it to the second terminal. This prevents the second terminal from starting to execute the scene construction action corresponding to the scene construction information before the user of the first terminal is ready to shoot.
Similarly, if the second terminal does not meet the construction condition of the scene to be constructed, then when the shooting application of the first terminal is started, the first terminal is placed in the shooting mode used when the construction condition is not met, as in the related art. For example, the first terminal may be set to a photo shooting mode or a video shooting mode; the present application does not limit this.
It should be noted that the foregoing is merely an example of how to set the shooting mode of the first terminal to the interactive shooting mode, and is not meant to limit the scope of the present application, and in other embodiments of the present application, the first terminal may be in the interactive shooting mode based on other implementation manners.
It can be appreciated that, in some embodiments of the present application, after determining that the first terminal is in the interactive shooting mode based on the first and second manners, the steps a01 to a04 or the steps B01 to B04 may be further executed to determine whether the second terminal meets the construction condition of the scene to be constructed, and when the second terminal meets the construction condition of the scene to be constructed, a scene construction instruction carrying scene construction information is generated, which will not be described herein.
Optionally, in an application scenario, the scenario construction information in the foregoing embodiments may include a preview frame image of the first terminal, and the scenario construction instruction may be used to instruct the second terminal to perform image recognition on the preview frame image, and execute a preset type of auxiliary shooting action according to a recognition result, so that the first terminal obtains the target shooting scenario.
Optionally, the above-mentioned preset types of auxiliary shooting actions may include one or more of adjusting a posture, outputting a shooting suggestion, and acquiring and displaying a target scene image corresponding to the preview frame image.
Optionally, when the first terminal is a mobile phone, the second terminal is a robot, and the above scene construction information is a preview frame image of the first terminal, the mobile phone may send the preview frame image displayed on the preview frame image display interface of the photographing application to the robot. The robot may then perform image recognition on the preview frame image and, when recognizing that the preview frame image includes the robot, execute one or more types of auxiliary photographing actions according to the positions and actions of the robot and the other people in the group photo in the preview frame image, such as adjusting the posture of the robot, outputting a photographing suggestion, and acquiring and displaying a target scene image corresponding to the preview frame image.
For example, as shown in fig. 4, the mobile phone may send the preview frame image 41 displayed on the preview frame image display interface of the photographing application to the robot. By performing image recognition on the preview frame image, the robot identifies itself (411) and the mobile phone user 412 who is in the group photo with it, and adjusts its own motion and expression according to the position and motion of the mobile phone user 412, so that the preview frame image displayed on the preview frame image display interface of the mobile phone is updated to the preview frame image 42, thereby cooperating with the mobile phone user in image photographing.
Optionally, while adjusting its motion and expression, the robot may also output a corresponding shooting suggestion, for example prompting the user to stand more than one meter away from the robot because it is about to perform a dance, or prompting the user to stand on the left side of the robot and put a hand on the robot's shoulder, and so on.
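The robot-side behavior described above can be sketched as a small dispatcher that maps an image-recognition result to the preset auxiliary shooting actions. This is purely illustrative: the recognition-result fields and action names are assumptions, not part of the patent.

```python
# Hypothetical dispatcher for the preset auxiliary shooting actions:
# posture adjustment, shooting suggestions, and target-scene-image display.
def auxiliary_actions(recognition: dict) -> list:
    """Map an assumed recognition result to a list of (action, payload) pairs."""
    actions = []
    if recognition.get("robot_in_frame"):
        # Pose the robot relative to the people it is photographed with.
        actions.append(("adjust_posture", recognition.get("user_positions", [])))
        # Advise the user, e.g. to stand more than one meter from the robot.
        actions.append(("output_suggestion", "stand more than one meter away"))
    if recognition.get("has_display"):
        # Only a second terminal with a display can show a target scene image.
        actions.append(("display_target_scene", recognition.get("user_style")))
    return actions
```

Note that the display action is gated on the terminal actually having a screen, matching the text's remark that the image is displayed only "when the robot is provided with a display screen".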
Alternatively, when the robot is provided with a display screen, a target scene image corresponding to the preview frame image may also be acquired and displayed.
The target scene image is an image matched with the style of the target user in the preview frame image.
For example, as shown in fig. 5, a display screen 511a is provided on the chest of the robot, so that the robot 51 can acquire and display a matched target scene image according to the clothing style, body shape, age, and other information of the user 512 who is in the group photo with the robot in the preview frame image, and different users taking a group photo with the same robot can thus obtain images of different styles.
It should be noted that the foregoing merely illustrates the preset type of auxiliary shooting actions, and in other embodiments of the present application, more or fewer types of auxiliary shooting actions may be included, and specific execution procedures of each type of auxiliary shooting actions may be related to the type of the second terminal.
Optionally, when the second terminal is an automobile, if the preview frame image is a preview frame image obtained by shooting a scene outside the automobile window by a user, the second terminal may obtain different target scene images according to different scenes obtained by recognition, and display the target scene images on a display of the automobile window.
For example, when the scene outside the vehicle window is identified as blue sky and white clouds, a target scene image matching the blue sky and white clouds can be acquired and displayed to obtain a target shooting scene, for example one containing kites of different sizes flying in the blue sky; when the scene outside the vehicle window is identified as the night sky, a target scene image matching the night sky can be acquired to obtain a target shooting scene, for example a night-sky target shooting scene containing a sky full of stars; and when grassland is identified outside the vehicle window, a target scene image matching the grassland can be acquired and displayed to obtain a target shooting scene, for example one containing cattle and sheep grazing.
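The scene matching in the car-window example amounts to a lookup from a recognized scene to a matching target scene image. A minimal sketch, with an invented mapping table (the entries only echo the examples in the text):

```python
# Hypothetical mapping from a recognized out-of-window scene to a matching
# target scene image; names and entries are illustrative only.
SCENE_MATCHES = {
    "blue sky": "kites of different sizes flying in the sky",
    "night sky": "a sky full of stars",
    "grassland": "cattle and sheep grazing",
}

def target_scene_image(recognized_scene: str, default: str = "") -> str:
    """Return the target scene image matched to the recognized scene."""
    return SCENE_MATCHES.get(recognized_scene, default)
```

A real system would presumably return image assets rather than strings, and fall back to showing nothing (or the raw preview) when no match exists.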
Alternatively, when the above-mentioned second terminal is a refrigerator, adjusting its posture may refer to an action executable by the refrigerator, such as opening the refrigerator door.
It should be noted that, the foregoing merely illustrates content included in the scene construction information, and in some embodiments of the present application, the scene construction information may further include voice information of a voice to be played; the scene construction instruction is also used for instructing the second terminal to play the voice according to the voice information of the voice to be played so as to output the voice for video shooting.
It should be further noted that, in some embodiments of the present application, the above-mentioned recognition of the preview frame image of the first terminal may be performed not only on the second terminal but also on the first terminal. In that case, the first terminal obtains a recognition result by recognizing the preview frame image, generates, according to the recognition result, a scene construction instruction carrying one or more kinds of scene construction information such as a gesture adjustment parameter and image information corresponding to the target scene image, and sends the scene construction instruction to the second terminal, so that the second terminal performs scene construction according to the scene construction information and the target shooting scene is obtained. The target scene image is a scene image corresponding to the preview frame image.
For example, when the first terminal is a mobile phone and the second terminal is a robot, the mobile phone may perform image recognition on the preview frame image displayed on the preview frame image display interface of the photographing application and, when recognizing that the preview frame image includes the robot, send to the robot a scene construction instruction carrying one or more kinds of scene construction information, such as a gesture adjustment parameter and image information corresponding to the target scene image, according to the positions and actions of the robot and the other people in the group photo in the preview frame image. In addition, a shooting suggestion can also be output to the mobile phone user.
For another example, when the second terminal is an automobile, if the preview frame image is a preview frame image obtained by shooting a scene outside the automobile window by a user, the first terminal may obtain different target scene images according to different scenes obtained by recognition, and send a scene construction instruction carrying image information of the target scene images to the automobile, so that the automobile displays the target scene images on a display of the automobile window.
For example, when the scene outside the vehicle window is recognized as blue sky and white clouds, a target scene image matching the blue sky and white clouds may be acquired; when it is recognized as the night sky, a target scene image matching the night sky may be acquired; and when it is recognized as grassland, a target scene image matching the grassland may be acquired. A scene construction instruction carrying the image information of the target scene image is then sent to the automobile, so that the automobile displays the target scene image on the display of the vehicle window.
Optionally, in some embodiments of the present application, when the first terminal and the second terminal are in a communication connection state and the first terminal is in an interactive shooting mode, a user of the first terminal may directly select, according to an actual application scenario, a scenario type of a scenario to be constructed, and send a scenario construction instruction carrying scenario construction information corresponding to the scenario type of the scenario to be constructed to the second terminal, so as to control the second terminal to perform scenario construction.
For example, the scene type of the scene to be constructed may include an image type of an image of the scene to be displayed and/or the scene type of the scene to be constructed may include a voice type of the voice to be played.
Optionally, in some embodiments of the present application, in the step a04 and the step B04, when determining, based on a scene type of a scene to be constructed, whether the second terminal meets a construction condition of the scene to be constructed, if the scene type of the scene to be constructed includes an image type of an image of the scene to be displayed, it may be determined whether the second terminal includes a display screen for displaying the image of the scene to be displayed, and if the second terminal includes the display screen for displaying the image of the scene to be displayed, it is determined that the second terminal meets the construction condition of the scene to be constructed.
Correspondingly, in the step 102, generating a scene construction instruction carrying scene construction information may refer to generating a scene construction instruction carrying image information corresponding to the scene image to be displayed. The scene construction instruction is used for indicating the second terminal to display the scene image to be displayed according to the image information, and an auxiliary shooting scene is obtained.
Optionally, in some embodiments of the present application, in step a04 and step B04, when determining, based on a scene type of a scene to be constructed, whether the second terminal meets a construction condition of the scene to be constructed, if the scene type of the scene to be constructed includes a voice type of a voice to be played, whether the second terminal includes a speaker for playing the voice to be played is detected, and if the second terminal includes the speaker for playing the voice to be played, it is determined that the second terminal meets the construction condition of the scene to be constructed.
Correspondingly, in the step 102, generating a scene construction instruction carrying scene construction information may refer to generating a scene construction instruction carrying voice information corresponding to the voice to be played. The scene construction instruction is used for indicating the second terminal to play the voice to be played according to the voice information, and an auxiliary shooting scene is obtained.
It should be noted that, in the embodiment of the present application, the scene types of the scene to be constructed may include multiple scene types, for example, two or more of the image types of the scene image to be displayed, the voice types of the voice to be played, and the gesture types corresponding to the target gesture, which is not limited in this aspect of the present application.
Accordingly, the determining whether the second terminal meets the construction condition of the scene to be constructed based on the scene type of the scene to be constructed may refer to: and respectively determining whether the second terminal meets the construction conditions of the scenes to be constructed or not based on the scene type of each scene to be constructed.
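The condition check described in the steps above pairs each scene type with a hardware capability of the second terminal (a display screen for image types, a speaker for voice types), and the second terminal must satisfy the condition for every scene type selected. A hedged sketch, with assumed field names:

```python
# Illustrative construction-condition check: each scene type requires a
# matching capability on the second terminal. Capability names are assumed.
REQUIRED_CAPABILITY = {
    "image": "display_screen",  # image of a scene to be displayed
    "voice": "speaker",         # voice to be played
}

def meets_construction_condition(scene_types, capabilities) -> bool:
    """True only if every selected scene type is supported by the terminal."""
    return all(REQUIRED_CAPABILITY[t] in capabilities for t in scene_types)
```

The `all(...)` mirrors the text's requirement that, when multiple scene types are selected, the condition is determined for each scene type respectively.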
In the embodiment of the application, when the first terminal is in the interactive shooting mode, a scene construction instruction is sent to the second terminal to realize interaction with the second terminal, so that the second terminal performs scene construction according to the scene construction information. Multimedia information such as images or sounds is thereby superimposed on the current shooting scene of the first terminal to obtain a processed target shooting scene, and the first terminal can shoot under the target shooting scene constructed with the assistance of the second terminal to obtain a target shooting image corresponding to the target shooting scene. In this way, the first terminal can obtain a processed image without depending on its own image processing function.
Fig. 6 is a schematic diagram illustrating a second implementation flow of an auxiliary shooting method according to an embodiment of the present application. The auxiliary photographing method applied to the second terminal may be performed by an auxiliary photographing device of the second terminal, and may be implemented in the following manner of steps 601 to 602.
Step 601, receiving a scene construction instruction carrying scene construction information sent by a first terminal, wherein the scene construction instruction is generated by the first terminal when the first terminal is in an interactive shooting mode;
step 602, performing scene construction according to the scene construction information to obtain an auxiliary shooting scene.
In the embodiment of the application, the auxiliary shooting scene is overlapped with the actual scene of the environment where the first terminal is located, so that the target shooting scene can be obtained.
Optionally, in some embodiments of the present application, when the scene construction information includes a preview frame image of the first terminal, the performing scene construction according to the scene construction information to obtain the auxiliary shooting scene may be implemented in the following manner: and carrying out image recognition on the preview frame image, and executing auxiliary shooting actions of a preset type according to the recognition result to obtain an auxiliary shooting scene.
The preset types of auxiliary shooting actions may include one or more of adjusting a posture, outputting a shooting suggestion, and acquiring and displaying a target scene image corresponding to the preview frame image.
Optionally, in some embodiments of the present application, the second terminal may be a terminal including a display screen, and the scene construction information may include image information corresponding to an image of a scene to be displayed;
the above-mentioned scene construction according to the scene construction information, obtain the auxiliary shooting scene, include: and displaying the image according to the image information to obtain an auxiliary shooting scene.
Optionally, in some embodiments of the present application, the second terminal may be a terminal including a speaker, and the scene construction information includes voice information corresponding to a voice to be played; the above-mentioned scene construction according to the scene construction information, obtain the auxiliary shooting scene, include: and performing voice playing according to the voice information to obtain an auxiliary shooting scene.
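Step 602 on the second terminal can be sketched as dispatching on whichever kinds of scene construction information the instruction carries: a preview frame triggers the recognition path, image information triggers the display path, and voice information triggers the playback path. The handler interface below is an assumption for illustration.

```python
# Hypothetical second-terminal dispatch for step 602. The three callbacks
# stand in for the terminal's real recognition, display, and audio subsystems.
def construct_scene(info: dict, display_image, play_voice, recognize_and_act):
    """Build the auxiliary shooting scene from the scene construction info."""
    if "preview_frame" in info:
        # Image-recognition path: recognize and perform auxiliary actions.
        recognize_and_act(info["preview_frame"])
    if "image" in info:
        # Display-screen path: show the scene image to be displayed.
        display_image(info["image"])
    if "voice" in info:
        # Speaker path: play the voice to be played.
        play_voice(info["voice"])
```

Using independent `if` branches (rather than `elif`) reflects the text's note that an instruction may carry several kinds of scene construction information at once.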
In the embodiment of the present application, the specific implementation process of the steps 601 to 602 and the embodiments further obtained based thereon may refer to the descriptions of the embodiments corresponding to fig. 1 to 5, and are not repeated herein.
Compared to the auxiliary shooting method in which the execution subject is the first terminal, in some embodiments of the present application, in the auxiliary shooting method in which the execution subject is the second terminal, the second terminal may also collect the voice played by the first terminal, or collect the voice of the user of the first terminal, and output the voice for auxiliary video shooting based on the collected voice.
For example, when the first terminal is a mobile phone and the second terminal is a robot, the mobile phone user can talk with the robot or sing with the robot in the process of shooting video images, the robot can collect the voice content of the mobile phone user or the voice played by the mobile phone, and output corresponding voice to cooperate with the user to record video.
In the embodiment of the application, the second terminal receives the scene construction instruction carrying the scene construction information generated and sent by the first terminal when the first terminal is in the interactive shooting mode, and then carries out scene construction according to the scene construction information, so that the current shooting scene of the first terminal is superimposed with multimedia information such as images or sounds to obtain a processed target shooting scene, and then the first terminal can shoot under the target shooting scene obtained by the auxiliary construction of the second terminal, and a target shooting image corresponding to the target shooting scene is obtained, so that the first terminal can shoot to obtain the processed image without depending on the image processing function of the first terminal.
Optionally, in some embodiments of the present application, part of the steps of the foregoing embodiments may be performed by a plug-in configured at the first terminal or the second terminal, and the plug-in may be stored in a cloud server, as shown in fig. 7, which is an interaction schematic diagram between the first terminal, the second terminal, and the cloud server provided in the embodiment of the present application, and the auxiliary shooting method includes the following steps:
step 701, a first terminal establishes communication connection with a second terminal;
step 702, the first terminal obtains scene construction type information corresponding to the second terminal. I.e. the scene construction type supported by the second terminal.
Specifically, after the shooting application is started, the first terminal can perform device scanning in a wireless connection mode such as Bluetooth, Wi-Fi, or ZigBee, and obtain device information such as the device type, device type number, and version information of the terminal to be connected. The first terminal then sends a data request carrying the device information to the cloud server, and receives the authentication information corresponding to the device information and the scene construction type information corresponding to the terminal to be connected, which are sent by the cloud server according to the stored device information table and shooting scene table. Finally, a physical data channel is established with the terminal to be connected based on the authentication information; the terminal to be connected is the second terminal, and its scene construction type information is the scene construction type information corresponding to the second terminal.
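The cloud lookup in this step can be sketched as a query keyed on the scanned device type number, returning the authentication information and the scene names that the device supports. The table contents below only echo the example tables in the text; the data shapes and function names are assumptions.

```python
# Hypothetical cloud-server lookup against the device information table and
# the shooting scene table (cf. the example tables in the text).
DEVICE_TABLE = {
    "UUID_XXXXX": {"device_type": "Mobile phone A", "auth": "auth-x"},
    "UUID_YYYYY": {"device_type": "Robot A", "auth": "auth-y"},
}
SCENE_TABLE = [
    {"scene_no": "100001", "device_type_no": "UUID_XXXXX", "name": "Snowflake image"},
    {"scene_no": "100002", "device_type_no": "UUID_XXXXX", "name": "Falling-flower image"},
    {"scene_no": "100003", "device_type_no": "UUID_YYYYY", "name": "Dancing angel"},
]

def lookup(device_type_no: str):
    """Return (authentication info, supported scene names) for a device."""
    device = DEVICE_TABLE[device_type_no]
    scenes = [s["name"] for s in SCENE_TABLE
              if s["device_type_no"] == device_type_no]
    return device["auth"], scenes
```

The first terminal would use the returned authentication information to open the physical data channel, and the scene names to populate the selection interface of step 703.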
For example, the device information table stored in the cloud server is shown in table one below, and the shooting scene table stored in the cloud server is shown in table two below.
Table one:

Device type | Device type number | Discovery protocol | Data protocol | Authentication information | Version information
Mobile phone A | UUID_XXXXX | Bluetooth | BLE | Authentication information | Version information
Robot A | UUID_YYYYY | Bluetooth | WIFI P2P | Authentication information | Version information
Table two:

Scene number | Device type number | Scene name
100001 | UUID_XXXXX | Snowflake image
100002 | UUID_XXXXX | Falling-flower image
100003 | UUID_YYYYY | Dancing angel
Step 703, determining a scene type of the scene to be constructed based on the scene construction type information corresponding to the second terminal.
Optionally, the first terminal generates a scene construction type selection interface according to the scene construction type information corresponding to the second terminal, and then determines the scene type of the scene to be constructed in response to a scene construction type selection operation triggered by the user on the scene construction type selection interface.
For example, the first terminal is a mobile phone, the second terminal is an automobile, and the scene construction type information corresponding to the automobile may include: displaying scene images such as snowflake images, star images, falling-flower images, cattle-and-sheep-grazing images, and dancing angels, and playing voices such as voice A, voice B, and voice C. When the user selects the snowflake scene image on the scene construction type selection interface 81 shown in fig. 8, it is determined that the scene type of the scene to be constructed includes the image type of the scene image to be displayed, and that the scene image to be displayed is a snowflake image.
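Step 703 can be sketched as flattening the second terminal's scene construction type information into selectable options and recording which one the user picks. The option format and function names are illustrative assumptions.

```python
# Hypothetical scene-type selection for step 703: build the option list for
# the selection interface, then resolve the user's choice.
def build_selection(type_info: dict) -> list:
    """type_info maps a category ('image', 'voice', ...) to option names."""
    return [(category, name)
            for category, names in type_info.items()
            for name in names]

def select(options: list, index: int) -> dict:
    """Resolve the option the user tapped into a scene-to-construct record."""
    category, name = options[index]
    return {"scene_type": category, "option": name}
```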
In step 704, the first terminal downloads a first plug-in corresponding to the scene to be constructed, for example plug-in K, from the cloud server, and stores and runs it, so as to determine, based on the scene type of the scene to be constructed, whether the second terminal meets the construction condition of the scene to be constructed. When the second terminal meets the construction condition of the scene to be constructed, the shooting mode of the first terminal is set to the interactive shooting mode, a scene construction instruction carrying scene construction information is generated, and the scene construction instruction is sent to the second terminal.
For example, the plug-in K detects whether the second terminal includes a display screen for displaying an image of a scene to be displayed, and if the second terminal includes a display screen for displaying an image of a scene to be displayed, determines that the second terminal satisfies a construction condition of the scene to be constructed.
Step 705, the second terminal downloads a second plug-in corresponding to the scene to be constructed, for example, plug-in J, from the cloud server, and stores and runs the second plug-in to perform scene construction according to the scene construction information, so as to obtain an auxiliary shooting scene.
In the embodiment of the application, as shown in table three below, plug-ins for constructing different scenes, corresponding to different types of terminals, can be stored in the cloud server.
Table three:

Plug-in number | Scene number | Device type | Plug-in name | Plug-in multimedia resources
PLUG_00001 | 100001 | Mobile phone A | K | Effect animations, pictures, fonts, scripts, etc.
PLUG_00002 | 100001 | Automobile A | J | Effect animations, pictures, fonts, scripts, etc.
PLUG_00003 | 100003 | Mobile phone B | P | Effect animations, pictures, fonts, scripts, etc.
PLUG_00004 | 100003 | Automobile B | Q | Effect animations, pictures, fonts, scripts, etc.
Optionally, in some embodiments, the second terminal may further download, through the first terminal, a second plug-in corresponding to the scene to be built from the cloud server, and then obtain the second plug-in from the first terminal.
Step 706, uninstalling the first plug-in and the second plug-in.
For example, after the first terminal completes shooting, the plug-in K and the plug-in J are unloaded.
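The plug-in lifecycle in steps 704 to 706 (download, store and run, uninstall when shooting is done) can be sketched as follows; the class and method names are assumptions, and the records loosely follow table three.

```python
# Hypothetical plug-in lifecycle manager for steps 704-706.
class PluginManager:
    def __init__(self):
        self.installed = {}  # plug-in number -> multimedia resources

    def download(self, plugin_no: str, resources):
        """Fetch a plug-in (e.g. PLUG_00001) from the cloud and store it."""
        self.installed[plugin_no] = resources

    def uninstall(self, plugin_no: str):
        """Remove a plug-in after shooting completes; safe if already gone."""
        self.installed.pop(plugin_no, None)
```

Making `uninstall` a no-op for an absent plug-in keeps step 706 safe to run regardless of which plug-ins were actually downloaded.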
In the embodiment of the application, the first plug-in and the second plug-in corresponding to the scene to be constructed are stored in the cloud server, the equipment information and the plug-in are managed by utilizing the cloud, so that shooting business logic can be separated from the first terminal and the second terminal, capability matching interaction is realized in a bidirectional plug-in mode, better shooting experience can be provided, and flexible configuration of interconnection capability between terminals can be realized.
In the present application, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, and some steps may be performed in other order in some embodiments of the present application.
Fig. 9 is a schematic structural diagram of an auxiliary shooting device 900 according to an embodiment of the present application, where the auxiliary shooting device is configured in a first terminal, and includes a generating unit 901 and a transmitting unit 902.
The generating unit 901 is configured to generate a scene construction instruction carrying scene construction information when the first terminal is in an interactive shooting mode;
a sending unit 902, configured to send a scene construction instruction to the second terminal, so as to obtain a target shooting scene; the scene construction instruction is used for indicating the second terminal to carry out scene construction according to the scene construction information so as to obtain an auxiliary shooting scene; the target shooting scene comprises an auxiliary shooting scene obtained by the second terminal through scene construction according to scene construction information.
Optionally, in some embodiments of the present application, the auxiliary photographing device 900 is further configured to:
when the first terminal and the second terminal are in a communication connection state, setting a shooting mode of the first terminal as an interactive shooting mode;
or,
when an interactive shooting control set on a shooting interface of the first terminal is triggered, setting a shooting mode of the first terminal as an interactive shooting mode.
Optionally, in some embodiments of the present application, the auxiliary photographing device 900 is further configured to:
When the first terminal and the second terminal are in a communication connection state, scene construction type information corresponding to the second terminal is acquired;
generating a scene construction type selection interface based on the scene construction type information;
responding to a scene construction type selection operation triggered by a user on a scene construction type selection interface, and determining the scene type of a scene to be constructed based on the scene construction type selection operation;
determining whether the second terminal meets the construction conditions of the scene to be constructed or not based on the scene type of the scene to be constructed;
if the second terminal meets the construction conditions of the scene to be constructed, setting the shooting mode of the first terminal as an interactive shooting mode;
or,
if the second terminal meets the construction conditions of the scene to be constructed, an interactive shooting control is added to a shooting interface of the first terminal, and when the interactive shooting control is triggered, the shooting mode of the first terminal is set to be an interactive shooting mode.
Optionally, in some embodiments of the present application, the scene construction information includes a preview frame image of the first terminal;
the scene construction instruction is used for indicating the second terminal to perform image recognition on the preview frame image, and executing auxiliary shooting actions of a preset type according to the recognition result to obtain an auxiliary shooting scene;
The preset types of auxiliary shooting actions include one or more of adjusting a posture, outputting a shooting suggestion, and acquiring and displaying a target scene image corresponding to the preview frame image.
Optionally, in some embodiments of the present application, the generating unit is specifically configured to:
identifying the preview frame image of the first terminal to obtain an identification result;
and generating, according to the recognition result, a scene construction instruction carrying one or more kinds of scene construction information such as a gesture adjustment parameter and image information of the target scene image corresponding to the preview frame image.
Optionally, in some embodiments of the present application, the scene type of the scene to be constructed includes an image type of an image of the scene to be displayed, and when determining whether the second terminal meets the construction condition of the scene to be constructed based on the scene type of the scene to be constructed, the auxiliary shooting device 900 is specifically configured to:
detecting whether the second terminal comprises a display screen for displaying an image of a scene to be displayed;
if the second terminal comprises a display screen for displaying the scene image to be displayed, determining that the second terminal meets the construction conditions of the scene to be constructed;
the generating unit is specifically configured to:
generating a scene construction instruction carrying image information corresponding to a scene image to be displayed; the scene construction instruction is used for indicating the second terminal to display the scene image to be displayed according to the image information, and an auxiliary shooting scene is obtained.
Optionally, in some embodiments of the present application, the scene type of the scene to be constructed includes a voice type of the voice to be played; when determining whether the second terminal meets the construction condition of the scene to be constructed based on the scene type of the scene to be constructed, the auxiliary shooting device 900 is specifically configured to:
detecting whether the second terminal comprises a loudspeaker for playing the voice to be played or not;
if the second terminal comprises a loudspeaker for playing the voice to be played, determining that the second terminal meets the construction conditions of the scene to be constructed;
the generating unit is specifically configured to:
generating a scene construction instruction carrying voice information corresponding to the voice to be played; the scene construction instruction is used for indicating the second terminal to play the voice to be played according to the voice information, and an auxiliary shooting scene is obtained.
Fig. 10 is a schematic structural diagram of another auxiliary photographing device 110 according to an embodiment of the present application, where the auxiliary photographing device is configured at a second terminal, and includes a receiving unit 111 and a constructing unit 112.
A receiving unit 111, configured to receive a scene construction instruction carrying scene construction information sent by a first terminal, where the scene construction instruction is generated by the first terminal when the first terminal is in an interactive shooting mode;
The construction unit 112 is configured to perform scene construction according to the scene construction information, so as to obtain an auxiliary shooting scene.
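A minimal sketch of the receiving unit 111 and construction unit 112 described above, under the assumption of a simple JSON instruction format defined inline here; the `display` and `speaker` callables stand in for the second terminal's actual display screen and loudspeaker.

```python
import json

class ConstructionUnit:
    """Hypothetical second-terminal side: dispatch on the scene
    construction information the received instruction carries."""

    def __init__(self, display=None, speaker=None):
        self.display = display  # callable that shows a scene image
        self.speaker = speaker  # callable that plays a voice clip

    def construct(self, raw_instruction):
        msg = json.loads(raw_instruction)
        if msg.get("type") == "image" and self.display is not None:
            self.display(msg["image_info"])
            return "image-scene"
        if msg.get("type") == "voice" and self.speaker is not None:
            self.speaker(msg["voice_info"])
            return "voice-scene"
        return None  # construction condition not met

shown = []
unit = ConstructionUnit(display=shown.append)
result = unit.construct(json.dumps({"type": "image", "image_info": "beach.png"}))
```

Returning `None` when no matching output device is present mirrors the construction-condition check performed on the first-terminal side.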
Optionally, in some embodiments of the present application, the scene construction information includes a preview frame image of the first terminal;
performing scene construction according to the scene construction information to obtain an auxiliary shooting scene, including:
performing image recognition on the preview frame image, and executing auxiliary shooting actions of a preset type according to a recognition result to obtain an auxiliary shooting scene;
the auxiliary shooting actions of the preset type comprise one or more of: posture adjustment, shooting suggestion output, and acquisition and display of a target scene image corresponding to the preview frame image.
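The preset-type dispatch described above can be sketched as follows; the action names and the shape of the recognition result are illustrative assumptions rather than the patent's concrete recognition pipeline.

```python
def perform_auxiliary_actions(recognition_result, handlers):
    """Run whichever preset auxiliary shooting actions the image-recognition
    result calls for; names here are illustrative only."""
    preset = ("posture_adjustment", "shooting_suggestion", "show_target_scene")
    performed = []
    for action in recognition_result:
        # Only actions that are both preset and supported by this
        # second terminal are executed.
        if action in preset and action in handlers:
            handlers[action]()
            performed.append(action)
    return performed

log = []
done = perform_auxiliary_actions(
    ["shooting_suggestion", "unrecognized_action"],
    {"shooting_suggestion": lambda: log.append("suggestion shown")},
)
```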
Optionally, in some embodiments of the present application, the second terminal may be a terminal including a display screen, and the scene construction information includes image information corresponding to an image of a scene to be displayed;
the construction unit is specifically configured to: display the image according to the image information, so as to obtain an auxiliary shooting scene.
Optionally, in some embodiments of the present application, the second terminal may be a terminal including a speaker, and the scene construction information includes voice information corresponding to a voice to be played;
the construction unit is specifically configured to: play the voice according to the voice information, so as to obtain an auxiliary shooting scene.
It should be noted that, for convenience and brevity of description, the specific working processes of the auxiliary photographing device 900 and the auxiliary photographing device 110 described above may refer to the corresponding processes of the auxiliary photographing methods described above, which are not described herein again.
Fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application, where the terminal may be the first terminal or the second terminal, and the terminal may include: a processor 121, a memory 122, one or more input devices 123 (only one shown in fig. 11), and one or more output devices 124 (only one shown in fig. 11). Processor 121, memory 122, input device 123, and output device 124 are connected by bus 125.
Alternatively, when the second terminal is a terminal including a display screen, the display screen may be a display screen having a transparent display function, and when the second terminal is an automobile, the display screen may be located in a door window and/or a windshield and/or a sunroof of the automobile.
It should be appreciated that in embodiments of the present application, the processor 121 may be a central processing unit (Central Processing Unit, CPU), which may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSPs), application specific integrated circuits (Application Specific Integrated Circuit, ASICs), field programmable gate arrays (Field-Programmable Gate Array, FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The input device 123 may include a virtual keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of a fingerprint), a microphone, etc., and the output device 124 may include a display, a speaker, etc.
The memory 122 stores a computer program executable on the processor 121, for example, a program implementing an auxiliary photographing method. When the computer program is executed by the processor 121, the steps in the auxiliary photographing method embodiments above are implemented, for example, steps 101 to 102 shown in fig. 1 or steps 601 to 602 shown in fig. 6; likewise, when the computer program is executed by the processor 121, the functions in the apparatus embodiments above are implemented, for example, the functions of units 901 to 902 shown in fig. 9 or of units 111 to 112 shown in fig. 10.
The computer program may be divided into one or more modules/units, which are stored in the memory 122 and executed by the processor 121 to accomplish the present application. The one or more modules/units may be a series of instruction segments of a computer program capable of performing a specific function, for describing an execution procedure of the computer program in the first terminal or the second terminal performing the auxiliary photographing.
Illustratively, embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed, implements the steps of the respective auxiliary photographing methods described above.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units or modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the system may be the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the auxiliary photographing method. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/user terminal and method may be implemented in other manners. For example, the apparatus/user terminal embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the methods of the above embodiments may be completed by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some of the method features can be replaced equivalently; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (17)

1. An auxiliary photographing method, wherein the auxiliary photographing method is applied to a first terminal, comprising:
generating a scene construction instruction carrying scene construction information when the first terminal is in an interactive shooting mode;
the scene construction instruction is sent to a second terminal to obtain a target shooting scene; the scene construction instruction is used for instructing the second terminal to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene; the target shooting scene comprises the auxiliary shooting scene obtained by the second terminal through scene construction according to the scene construction information; the scene construction information comprises image information corresponding to a scene image to be displayed or voice information of voice to be played; the target shooting scene is obtained by overlapping the auxiliary shooting scene with an actual scene of the environment where the first terminal is located;
The method further comprises the steps of:
shooting is carried out under the target shooting scene so as to obtain a target shooting image.
2. The auxiliary photographing method of claim 1, wherein the auxiliary photographing method further comprises:
when the first terminal and the second terminal are in a communication connection state, scene construction type information corresponding to the second terminal is acquired;
generating a scene construction type selection interface based on the scene construction type information;
responding to a scene construction type selection operation triggered by a user on the scene construction type selection interface, and determining the scene type of a scene to be constructed based on the scene construction type selection operation;
determining whether the second terminal meets the construction conditions of the scene to be constructed or not based on the scene type of the scene to be constructed;
if the second terminal meets the construction conditions of the scene to be constructed, setting the shooting mode of the first terminal as an interactive shooting mode;
or,
if the second terminal meets the construction conditions of the scene to be constructed, an interactive shooting control is added to a shooting interface of the first terminal, and when the interactive shooting control is triggered, the shooting mode of the first terminal is set to be an interactive shooting mode.
3. The auxiliary photographing method of claim 1 or 2, wherein the scene construction information includes a preview frame image of the first terminal;
the scene construction instruction is used for instructing the second terminal to perform image recognition on the preview frame image, and executing auxiliary shooting actions of a preset type according to a recognition result to obtain the auxiliary shooting scene;
the auxiliary shooting actions of the preset type comprise one or more types of auxiliary shooting actions among posture adjustment, shooting suggestion output, and acquisition and display of the target scene image corresponding to the preview frame image.
4. The auxiliary photographing method of claim 1 or 2, wherein the generating a scene construction instruction carrying scene construction information includes:
identifying the preview frame image of the first terminal to obtain an identification result;
and generating, according to the identification result, a scene construction instruction carrying one or more kinds of scene construction information among posture adjustment parameters and the target scene image corresponding to the preview frame image.
5. The auxiliary photographing method of claim 2, wherein the scene type of the scene to be constructed includes an image type of an image of the scene to be displayed, and the determining whether the second terminal satisfies the construction condition of the scene to be constructed based on the scene type of the scene to be constructed includes:
Detecting whether the second terminal comprises a display screen for displaying the scene image to be displayed;
if the second terminal comprises a display screen for displaying the scene image to be displayed, determining that the second terminal meets the construction conditions of the scene to be constructed;
the generating a scene construction instruction carrying scene construction information comprises the following steps:
generating a scene construction instruction carrying image information corresponding to the scene image to be displayed; the scene construction instruction is used for instructing the second terminal to display the scene image to be displayed according to the image information, so as to obtain the auxiliary shooting scene.
6. The auxiliary photographing method of claim 2, wherein the scene type of the scene to be constructed includes a voice type of voice to be played; the determining whether the second terminal meets the construction condition of the scene to be constructed based on the scene type of the scene to be constructed includes:
detecting whether the second terminal comprises a loudspeaker for playing the voice to be played or not;
if the second terminal comprises a loudspeaker for playing the voice to be played, determining that the second terminal meets the construction condition of the scene to be constructed;
The generating a scene construction instruction carrying scene construction information comprises the following steps:
generating a scene construction instruction carrying voice information corresponding to the voice to be played; the scene construction instruction is used for instructing the second terminal to play the voice to be played according to the voice information, so as to obtain the auxiliary shooting scene.
7. An auxiliary photographing method, wherein the auxiliary photographing method is applied to a second terminal, comprising:
receiving a scene construction instruction carrying scene construction information sent by a first terminal, wherein the scene construction instruction is generated by the first terminal when the first terminal is in an interactive shooting mode; the scene construction information comprises image information corresponding to a scene image to be displayed or voice information corresponding to voice to be played;
performing scene construction according to the scene construction information to obtain an auxiliary shooting scene;
the auxiliary shooting scene is used for generating a target shooting scene so that the first terminal shoots under the target shooting scene to obtain a target shooting image; the target shooting scene is obtained by overlapping the auxiliary shooting scene with an actual scene of the environment where the first terminal is located.
8. The auxiliary photographing method of claim 7, wherein the scene construction information includes a preview frame image of the first terminal;
performing scene construction according to the scene construction information to obtain an auxiliary shooting scene, including:
performing image recognition on the preview frame image, and executing auxiliary shooting actions of a preset type according to a recognition result to obtain the auxiliary shooting scene;
the auxiliary shooting actions of the preset type comprise one or more types of auxiliary shooting actions among posture adjustment, shooting suggestion output, and acquisition and display of the target scene image corresponding to the preview frame image.
9. The auxiliary shooting method as claimed in claim 7, wherein the second terminal is a terminal including a display screen, and the scene construction information includes image information corresponding to an image of a scene to be displayed;
performing scene construction according to the scene construction information to obtain an auxiliary shooting scene, including:
and displaying the image according to the image information to obtain the auxiliary shooting scene.
10. The auxiliary shooting method of claim 7, wherein the second terminal is a terminal including a speaker, and the scene construction information includes voice information corresponding to a voice to be played;
Performing scene construction according to the scene construction information to obtain an auxiliary shooting scene, including:
and performing voice playing according to the voice information to obtain the auxiliary shooting scene.
11. An auxiliary photographing apparatus, wherein the auxiliary photographing apparatus is configured at a first terminal, comprising:
the generating unit is used for generating a scene construction instruction carrying scene construction information when the first terminal is in the interactive shooting mode;
the sending unit is used for sending the scene construction instruction to the second terminal so as to obtain a target shooting scene; the scene construction instruction is used for instructing the second terminal to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene; the target shooting scene comprises the auxiliary shooting scene obtained by the second terminal through scene construction according to the scene construction information; the scene construction information comprises image information corresponding to a scene image to be displayed or voice information of voice to be played; the target shooting scene is obtained by overlapping the auxiliary shooting scene with an actual scene of the environment where the first terminal is located;
the device further comprises a shooting unit, wherein the shooting unit is configured to shoot in the target shooting scene so as to obtain a target shooting image.
12. An auxiliary photographing apparatus, wherein the auxiliary photographing apparatus is disposed at a second terminal, comprising:
the device comprises a receiving unit, a first terminal and a second terminal, wherein the receiving unit is used for receiving a scene construction instruction carrying scene construction information sent by the first terminal, and the scene construction instruction is generated by the first terminal when the first terminal is in an interactive shooting mode; the scene construction information comprises image information corresponding to a scene image to be displayed or voice information of voice to be played;
the construction unit is used for constructing the scene according to the scene construction information to obtain an auxiliary shooting scene;
the auxiliary shooting scene is used for generating a target shooting scene so that the first terminal shoots under the target shooting scene to obtain a target shooting image; the target shooting scene is obtained by overlapping the auxiliary shooting scene with an actual scene of the environment where the first terminal is located.
13. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1-6 when executing the computer program.
14. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 7-10 when executing the computer program.
15. The terminal of claim 14, wherein the terminal comprises a display screen having a transparent display function.
16. The terminal of claim 15, wherein the terminal is an automobile;
the display screen is located in a door window and/or a windscreen and/or a sunroof of the automobile.
17. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed, implements the steps of the method according to any of claims 1-10.
CN202111375248.2A 2021-11-19 2021-11-19 Auxiliary shooting method, device, terminal and computer readable storage medium Active CN114040108B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111375248.2A CN114040108B (en) 2021-11-19 2021-11-19 Auxiliary shooting method, device, terminal and computer readable storage medium
PCT/CN2022/121755 WO2023087929A1 (en) 2021-11-19 2022-09-27 Assisted photographing method and apparatus, and terminal and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111375248.2A CN114040108B (en) 2021-11-19 2021-11-19 Auxiliary shooting method, device, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN114040108A CN114040108A (en) 2022-02-11
CN114040108B true CN114040108B (en) 2023-12-01

Family

ID=80138349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111375248.2A Active CN114040108B (en) 2021-11-19 2021-11-19 Auxiliary shooting method, device, terminal and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN114040108B (en)
WO (1) WO2023087929A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040108B (en) * 2021-11-19 2023-12-01 Hangzhou Douku Software Technology Co Ltd Auxiliary shooting method, device, terminal and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104104875A (en) * 2014-07-23 2014-10-15 深圳市中兴移动通信有限公司 Method and device for setting shooting parameters in image shooting process
CN105872361A (en) * 2016-03-28 2016-08-17 努比亚技术有限公司 Shooting guidance device, system and method
CN106210517A (en) * 2016-07-06 2016-12-07 北京奇虎科技有限公司 The processing method of a kind of view data, device and mobile terminal
CN106303194A (en) * 2015-05-28 2017-01-04 中兴通讯股份有限公司 Remotely image pickup method, master control camera terminal, controlled camera terminal and camera system
CN107835364A (en) * 2017-10-30 2018-03-23 维沃移动通信有限公司 One kind is taken pictures householder method and mobile terminal
CN110177204A (en) * 2019-04-29 2019-08-27 上海掌门科技有限公司 Photographic method, electronic equipment and computer-readable medium


Also Published As

Publication number Publication date
WO2023087929A1 (en) 2023-05-25
CN114040108A (en) 2022-02-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant