WO2023087929A1 - Auxiliary shooting method and apparatus, terminal, and computer-readable storage medium - Google Patents


Info

Publication number
WO2023087929A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
terminal
shooting
construction
image
Prior art date
Application number
PCT/CN2022/121755
Other languages
English (en)
French (fr)
Inventor
吴俊�
Original Assignee
杭州逗酷软件科技有限公司
Priority date
Filing date
Publication date
Application filed by 杭州逗酷软件科技有限公司
Publication of WO2023087929A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80: Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Definitions

  • the present application belongs to the technical field of photographing, and in particular relates to an auxiliary photographing method, device, terminal and computer-readable storage medium.
  • a series of processing operations can be performed on the image through the built-in image processing functions such as filters, stickers, and special effects in the shooting application of the terminal to realize the processing of the image.
  • Embodiments of the present application provide an auxiliary shooting method, device, terminal, and computer-readable storage medium, which enable the terminal to capture processed images without relying on its own image processing function.
  • the first aspect of the embodiment of the present application provides an auxiliary shooting method, the auxiliary shooting method is applied to a first terminal, including:
  • the scene construction instruction is used to instruct the second terminal to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene;
  • the target shooting scene includes the auxiliary shooting scene obtained by performing scene construction by the second terminal according to the scene construction information.
  • the second aspect of the embodiment of the present application provides an auxiliary shooting method, the auxiliary shooting method is applied to a second terminal, including:
  • Scene construction is performed according to the scene construction information to obtain an auxiliary shooting scene.
  • the third aspect of the embodiment of the present application provides an auxiliary shooting device, the auxiliary shooting device is configured on the first terminal, including:
  • a generating unit configured to generate a scene construction instruction carrying scene construction information when the first terminal is in the interactive shooting mode;
  • a sending unit configured to send the scene construction instruction to the second terminal to obtain a target shooting scene; the scene construction instruction is used to instruct the second terminal to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene; the target shooting scene includes the auxiliary shooting scene obtained by the second terminal performing scene construction according to the scene construction information.
  • the fourth aspect of the embodiment of the present application provides an auxiliary shooting device, the auxiliary shooting device is configured on the second terminal, including:
  • the receiving unit is configured to receive a scene construction instruction carrying scene construction information sent by the first terminal, the scene construction instruction is generated by the first terminal when it is in an interactive shooting mode;
  • a construction unit configured to construct a scene according to the scene construction information to obtain an auxiliary shooting scene.
  • the fifth aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the method of the above-mentioned first aspect are implemented.
  • the sixth aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the method of the above-mentioned second aspect are implemented.
  • the seventh aspect of the embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed, the steps of the method of the above-mentioned first aspect or the steps of the method of the above-mentioned second aspect are implemented.
  • FIG. 1 is a schematic flowchart of a first implementation of the auxiliary shooting method provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the construction of the target shooting scene provided by an embodiment of the present application;
  • FIG. 3a-3b are schematic diagrams of the interactive shooting control provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of the posture adjustment of the robot provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a target scene image displayed by a robot provided by an embodiment of the present application;
  • FIG. 6 is a schematic flowchart of a second implementation of the auxiliary shooting method provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of interaction among the first terminal, the second terminal, and the cloud server provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a scene construction type selection interface provided by an embodiment of the present application;
  • FIG. 9 is a first structural schematic diagram of the auxiliary shooting device provided by an embodiment of the present application;
  • FIG. 10 is a second structural schematic diagram of the auxiliary shooting device provided by an embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • References to "one embodiment" or "some embodiments" and the like in this specification mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • Appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • the terminal when the terminal uses a shooting application to capture images, it may perform filter processing on the captured images, or add stickers to the images, or perform some special effect processing to realize image processing.
  • these image processing methods mainly rely on the image processing function of the terminal itself, which has certain limitations.
  • embodiments of the present application provide an auxiliary shooting method, device, terminal and computer-readable storage medium, enabling the terminal to capture processed images without relying on its own image processing function.
  • FIG. 1 shows a schematic flowchart of a first implementation of a method for assisting shooting provided by an embodiment of the present application.
  • the auxiliary photographing method is applied to the first terminal, and may be executed by an auxiliary photographing device of the first terminal.
  • the auxiliary photographing method may be implemented in steps 101 to 102 as follows.
  • Step 101: when the first terminal is in an interactive shooting mode, generate a scene construction instruction carrying scene construction information.
  • Step 102: send the scene construction instruction to the second terminal to obtain the target shooting scene.
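As a rough illustration, the flow of steps 101 and 102 on the first terminal can be sketched as follows. The JSON envelope, the field names, and the `send` callback are all assumptions made for illustration; the patent does not specify any wire format or encoding.

```python
import json

def generate_scene_construction_instruction(scene_construction_info):
    """Step 101: generate a scene construction instruction carrying the
    scene construction information. The JSON envelope is hypothetical."""
    instruction = {"type": "scene_construction", "info": scene_construction_info}
    return json.dumps(instruction).encode("utf-8")

def send_scene_construction_instruction(instruction, send):
    """Step 102: send the instruction to the second terminal; `send` stands
    in for a write on the established Bluetooth/WiFi connection."""
    send(instruction)

# The first terminal, in interactive shooting mode, asks the second
# terminal to display scene image number 21 (e.g. the snowflake image).
outbox = []
payload = generate_scene_construction_instruction({"image_id": 21})
send_scene_construction_instruction(payload, outbox.append)
```

On receipt, the second terminal would decode the envelope and perform scene construction according to the carried information.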
  • The first terminal may be any of various types of terminals with an image capturing function; for example, the first terminal is a mobile phone, a tablet computer, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like. The embodiment of this application does not impose any limitation on the specific type of the first terminal.
  • the above-mentioned second terminal may be a terminal such as a smart car, a smart TV, a smart robot, a smart home appliance, a smart toy, an advertising machine, and a stage media control device.
  • the embodiment of the present application does not impose any limitation on the specific type of the second terminal.
  • the above-mentioned first terminal may establish a communication connection with the second terminal through a wireless connection manner such as Bluetooth, WiFi, ZigBee, UWB, or a wired connection manner.
  • The interactive shooting mode is a shooting mode in which the first terminal can interact with the second terminal so that the second terminal constructs a scene according to the scene construction information, thereby processing the current shooting scene of the first terminal and obtaining the processed target shooting scene.
  • The first terminal and the second terminal are in a state of communication connection; therefore, after generating the scene construction instruction carrying the scene construction information, the first terminal can send the scene construction instruction to the second terminal, so that the second terminal constructs the scene according to the scene construction information and the processed target shooting scene is obtained.
  • the above scene construction instruction is used to instruct the second terminal to perform scene construction according to the scene construction information.
  • the scene construction information may be determined according to the scene type of the scene to be constructed.
  • In some embodiments, the scene construction information may include image information corresponding to the scene image to be displayed; the scene construction instruction may be used to instruct the second terminal to display the scene image to be displayed.
  • the image information may be the serial number of the image or the content of the image itself.
  • In some embodiments, the scene construction information may include the voice information of the voice to be played, and the above scene construction instruction may be used to instruct the second terminal to play the voice according to the voice information.
  • the voice information may be the number of the voice or the content of the voice itself.
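A minimal sketch of how this scene construction information might be represented, assuming two optional fields (the field names are illustrative, not from the patent); as stated above, each field may carry either a serial number or the raw content itself:

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class SceneConstructionInfo:
    # Serial number of the scene image to display, or the image content itself.
    image_info: Optional[Union[int, bytes]] = None
    # Serial number of the voice to play, or the voice content itself.
    voice_info: Optional[Union[int, bytes]] = None

# Instruct the second terminal to display scene image 21 and play voice 3.
info = SceneConstructionInfo(image_info=21, voice_info=3)
```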
  • Sending the scene construction instruction to the second terminal to obtain the target shooting scene may refer to sending the scene construction instruction to the second terminal so that the second terminal can construct an auxiliary shooting scene according to the above scene construction information, and the auxiliary shooting scene is superimposed with the actual scene of the environment where the first terminal is located to obtain the above target shooting scene.
  • the target shooting scene includes an auxiliary shooting scene obtained by scene construction performed by the second terminal according to the scene construction information.
  • For example, the first terminal is a mobile phone, the second terminal is a car, and the mobile phone user sits in the car. When the mobile phone sends the scene construction instruction carrying the image information corresponding to the scene image to be displayed to the car, the car can display the scene image to be displayed on the display screen of the car window, so that the mobile phone user sitting in the car can capture the scene image of the target shooting scene obtained by superimposing the actual scene outside the car window and the scene image displayed on the display screen of the car window. For example, the scene image to be displayed is a snowflake image 21 as shown in FIG. 2, and the actual scene outside the car window is a house 22 as shown in FIG. 2; when the mobile phone shoots the target shooting scene, an image 23 can be obtained.
  • When the first terminal is in the interactive shooting mode, it realizes the interaction with the second terminal by sending the scene construction instruction to the second terminal, so that the second terminal constructs the scene according to the scene construction information. In this way, the second terminal performs beautification and other processing on the current shooting scene of the first terminal, for example, superimposing multimedia information such as images or sounds, to obtain a processed target shooting scene. The first terminal can then shoot in the target shooting scene obtained with the assistance of the second terminal and obtain the target shooting image corresponding to the target shooting scene. Thus the first terminal can capture processed images without relying on its own image processing function, which can improve the authenticity of image shooting.
  • Before the above step 101, it may be determined whether the first terminal and the second terminal are in a communication connection state, and when they are, the shooting mode of the first terminal may be set to the interactive shooting mode.
  • For example, the first terminal can actively scan for devices to establish a communication connection with the second terminal, and when the two terminals are in a communication connection state, automatically set the first terminal to the interactive shooting mode. Alternatively, after the user turns on Bluetooth or another connection function and selects the terminal to be connected (the second terminal), the communication connection between the first terminal and the second terminal is completed, and when the two terminals are in a communication connection state, the shooting mode of the first terminal is set to the interactive shooting mode.
  • In this way, as long as the first terminal and the second terminal are in a communication connection state, the shooting mode of the first terminal will be automatically set to the interactive shooting mode, and no other operation is required: when the shooting application is started on the first terminal, its shooting mode can be automatically set to the interactive shooting mode, without performing operations such as mode selection.
  • The second way is as follows:
  • the shooting mode of the first terminal may be set to the interactive shooting mode.
  • the setting of the interactive shooting mode of the first terminal has nothing to do with whether the first terminal and the second terminal are in a communication connection state.
  • For example, the first terminal may set its shooting mode to the interactive shooting mode in response to the user's trigger operation on the interactive shooting control set on the shooting interface of the first terminal; alternatively, before the first terminal establishes a communication connection with the second terminal, the shooting mode of the first terminal is set to the interactive shooting mode in response to the user's trigger operation on that interactive shooting control.
  • the shooting interface of the shooting application of the first terminal may be preset with an interactive shooting control.
  • When the interactive shooting control is triggered, the shooting mode of the first terminal is set to the interactive shooting mode; and when the first terminal is in the interactive shooting mode, if no communication connection has been established between the first terminal and the second terminal, the establishment of the communication connection between them is automatically triggered, so that the first terminal and the second terminal are in a communication connection state.
  • the shooting interface of the shooting application of the first terminal may be preset with an interactive shooting control 301.
  • When the interactive shooting control 301 is clicked or pressed, it is determined that the user of the first terminal wishes to shoot an image in the interactive shooting mode; at this time, the shooting mode of the first terminal is set to the interactive shooting mode.
  • When the first terminal is in the interactive shooting mode in this way, if the first terminal has not established a communication connection with the second terminal, the first terminal is automatically triggered to establish a communication connection with the second terminal, so that the first terminal and the second terminal are in a communication connection state.
  • For example, the first terminal sends a Bluetooth scanning signal to identify a second terminal in the environment where the first terminal is currently located and establishes a communication connection with it; or the first terminal receives a Bluetooth scanning signal sent by the second terminal to establish the communication connection. This application does not limit the connection mode between the first terminal and the second terminal.
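The first way above, entering the interactive shooting mode automatically whenever the two terminals are in a communication connection state, can be sketched as follows; the mode names are illustrative, not taken from the patent:

```python
def resolve_shooting_mode(connected_to_second_terminal, default_mode="photo"):
    """Return the shooting mode of the first terminal: interactive when a
    communication connection with the second terminal exists, otherwise
    the default mode (e.g. photo or video shooting)."""
    return "interactive" if connected_to_second_terminal else default_mode
```

Under this sketch, the shooting application would call `resolve_shooting_mode` on startup and again whenever the connection state changes.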
  • In some embodiments, the interactive shooting control is hidden in a pop-up window of the shooting interface: the user clicks the control "More" to make the first terminal load the shooting mode selection window 32 on the shooting interface, and the shooting mode selection window 32 is provided with shooting mode selection controls. When the user slides the interactive shooting control 321 to the left, the first terminal is in the interactive shooting mode.
  • The third way is as follows:
  • the following steps A01 to A05 can be used to realize setting the shooting mode of the first terminal as the interactive shooting mode.
  • Step A01: when the first terminal and the second terminal are in a communication connection state, acquire scene construction type information corresponding to the second terminal.
  • Step A02: generate a scene construction type selection interface based on the scene construction type information.
  • Step A03: in response to the scene construction type selection operation triggered by the user on the scene construction type selection interface, determine the scene type of the scene to be constructed.
  • In some embodiments, the device type information of the second terminal is obtained first, the scene construction type information corresponding to the second terminal is determined based on the device type information, and a scene construction type selection interface is generated based on the scene construction type information. The scene construction type selection interface includes one or more scene construction type selection controls; then, in response to the user's trigger operation on a scene construction type selection control on the scene construction type selection interface, the scene type of the scene to be constructed is determined.
  • Step A04: based on the scene type of the scene to be constructed, determine whether the second terminal satisfies the construction conditions of the scene to be constructed.
  • Step A05: if the second terminal satisfies the construction conditions of the scene to be constructed, set the shooting mode of the first terminal to the interactive shooting mode.
  • Since different scene types may need to call different resources, after determining the scene type of the scene to be constructed, it may first be judged whether the second terminal satisfies the construction conditions of the scene to be constructed, that is, whether it has the resources for constructing the scene to be constructed. When the second terminal meets the resource conditions, the shooting mode of the first terminal is set to the interactive shooting mode; then a scene construction instruction carrying scene construction information is generated and sent to the second terminal, so that the second terminal directly constructs a scene according to the scene construction information, so as to obtain a target shooting scene.
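Steps A04 and A05 can be sketched as a simple capability check; the capability names and the mapping from scene type to required resource are assumptions for illustration only (the patent's examples are a display screen for image scenes and a speaker for voice scenes):

```python
# Hypothetical mapping from scene type to the resource the second terminal
# must have in order to satisfy the construction condition.
CONSTRUCTION_CONDITIONS = {
    "display_scene_image": "display_screen",
    "play_voice": "speaker",
}

def satisfies_construction_condition(scene_type, capabilities):
    """Step A04: judge whether the second terminal has the resource
    needed to construct a scene of the given type."""
    required = CONSTRUCTION_CONDITIONS.get(scene_type)
    return required is not None and required in capabilities

def choose_shooting_mode(scene_type, capabilities):
    """Step A05: set the interactive shooting mode only when the second
    terminal satisfies the construction condition; otherwise keep a
    default photo shooting mode."""
    if satisfies_construction_condition(scene_type, capabilities):
        return "interactive"
    return "photo"
```

For a car whose window has a display screen, `choose_shooting_mode("display_scene_image", {"display_screen"})` would select the interactive shooting mode.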
  • Otherwise, the shooting mode of the first terminal may remain the same as the default shooting mode of the first terminal in the related art; for example, the shooting mode of the first terminal can be set to the default photo shooting mode or video shooting mode, which is not limited in this application.
  • The fourth way is as follows:
  • the following steps B01 to B05 can also be used to determine whether the first terminal is in the interactive shooting mode.
  • Step B01: when the first terminal and the second terminal are in a communication connection state, acquire scene construction type information corresponding to the second terminal.
  • Step B02: generate a scene construction type selection interface based on the scene construction type information.
  • Step B03: in response to the scene construction type selection operation triggered by the user on the scene construction type selection interface, determine the scene type of the scene to be constructed.
  • Step B04: based on the scene type of the scene to be constructed, determine whether the second terminal satisfies the construction conditions of the scene to be constructed.
  • the implementation manner of the above steps B01 to B04 is the same as the implementation manner of the above steps A01 to A04, and will not be repeated here.
  • Step B05: if the second terminal satisfies the construction conditions of the scene to be constructed, add an interactive shooting control to the shooting interface of the first terminal, and set the shooting mode of the first terminal to the interactive shooting mode when the interactive shooting control is triggered.
  • For example, an interactive shooting control 301 as shown in FIG. 3a or an interactive shooting control 321 as shown in FIG. 3b is added to the shooting interface of the first terminal. The user can select the scene to be constructed in advance and, when shooting is required, trigger the interactive shooting control to put the first terminal in the interactive shooting mode and generate a scene construction instruction carrying scene construction information; the scene construction instruction is then sent to the second terminal, so that the second terminal can start to execute the scene construction action corresponding to the scene construction information before the user of the first terminal is ready to shoot.
  • If the interactive shooting control is not triggered, the shooting mode of the first terminal remains the shooting mode set when the shooting application of the first terminal is started, for example, a photo shooting mode or a video shooting mode, which is not limited in this application.
  • the first terminal may be in an interactive shooting mode based on other implementation manners.
  • In some embodiments, the scene construction information in each of the above embodiments may include the preview frame image of the first terminal, and the above scene construction instruction may be used to instruct the second terminal to perform image recognition on the preview frame image and, according to the recognition result, perform a preset type of auxiliary shooting action, so that the first terminal obtains the above-mentioned target shooting scene.
  • The aforementioned preset types of auxiliary shooting actions may include one or more of: posture adjustment, outputting shooting suggestions, and acquiring and displaying a target scene image corresponding to the preview frame image.
  • For example, when the second terminal is a robot, the mobile phone can send the preview frame image displayed on the preview frame image display interface of the shooting application to the robot, and the robot can perform image recognition on the preview frame image. When the robot recognizes that the preview frame image contains itself, it can, according to the position and movement of the robot and the other people being photographed together in the preview frame image, perform one or more types of auxiliary shooting actions: adjusting its own posture, outputting shooting suggestions, and acquiring and displaying the target scene image corresponding to the preview frame image.
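On the second terminal's side, the dispatch from recognition result to preset-type auxiliary shooting actions might look like the sketch below; the recognition-result keys and the action names are illustrative assumptions, not part of the patent:

```python
def choose_auxiliary_actions(recognition_result, has_display_screen=False):
    """Pick auxiliary shooting actions from the preset types according to
    the image recognition result of the preview frame image."""
    actions = []
    if recognition_result.get("contains_self"):
        # Match the position and movement of the people in the frame.
        actions.append("adjust_posture")
        actions.append("output_shooting_suggestion")
    if has_display_screen:
        # E.g. a robot with a chest display showing a style-matched image.
        actions.append("display_target_scene_image")
    return actions
```

A robot that finds itself in the frame and has a display screen would thus perform all three action types; a screenless device would skip the display action.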
  • As shown in FIG. 4, the mobile phone can send the preview frame image 41 displayed on the preview frame image display interface of the shooting application to the robot, and the robot can determine, by performing image recognition on the preview frame image, that the preview frame image contains the robot itself 411 and the mobile phone user 412 taking a photo with it. According to the position and action of the mobile phone user 412, the robot adjusts its own actions and expressions, so that the preview frame image displayed on the mobile phone's preview frame image display interface is updated to the preview frame image 42, so as to cooperate with the mobile phone user in image capture.
  • The robot can also output corresponding shooting suggestions, for example, reminding the user to stand one meter away from it because it is about to show a dance, or suggesting that the user shoot from its left side, put a hand on its shoulder, and so on.
  • When the robot is provided with a display screen, it can also acquire and display the target scene image corresponding to the preview frame image.
  • the target scene image is an image that matches the style of the target user in the preview frame image.
  • For example, as shown in FIG. 5, a display screen 511a is set on the chest of the robot 51, so that the robot 51 can, according to the clothing style, body shape, age, and other information of the user 512 taking a photo with it in the preview frame image, obtain and display a matching target scene image; in this way, different users taking pictures with the same robot can get images of different styles.
  • The above auxiliary shooting actions are only examples of the preset types of auxiliary shooting actions. In other embodiments of the present application, more or fewer types of auxiliary shooting actions may be included, and the specific execution process of each type of auxiliary shooting action may be related to the type of the second terminal.
  • When the above-mentioned second terminal is a car, if the above-mentioned preview frame image is a preview frame image obtained by the user shooting the scene outside the car window, the second terminal can obtain different target scene images according to different recognized scenes and display the target scene image on the display screen in the car window.
  • If blue sky and white clouds are recognized outside the car window, the target scene image matching the blue sky and white clouds can be obtained and displayed to obtain the target shooting scene, for example, a target shooting scene containing kites of different sizes flying in the blue sky;
  • if the night sky is recognized outside the car window, the target scene image matching the night sky can be obtained to get, for example, a night sky target shooting scene containing stars;
  • if a grass field is recognized outside the car window, the matched target scene image can be obtained and displayed to get a target shooting scene, for example, a target shooting scene including cattle and sheep grazing.
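The scene-to-image matching in the car example can be sketched as a simple lookup; the mapping entries mirror the examples above, and all names are illustrative:

```python
from typing import Optional

# Recognized scene outside the car window -> matching target scene image.
TARGET_SCENE_IMAGES = {
    "blue_sky_white_clouds": "kites_flying_in_blue_sky",
    "night_sky": "night_sky_with_stars",
    "grassland": "cattle_and_sheep_grazing",
}

def match_target_scene_image(recognized_scene: str) -> Optional[str]:
    """Return the target scene image to show on the car-window display
    for the recognized scene, or None when nothing matches."""
    return TARGET_SCENE_IMAGES.get(recognized_scene)
```

A real system could of course use a learned matcher rather than a fixed table; the lookup just makes the recognize-then-display flow concrete.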
  • When the second terminal is a refrigerator, adjusting its own posture may refer to actions that can be performed by the refrigerator, such as opening the refrigerator door.
  • In some embodiments, the above scene construction information may also include the voice information of the voice to be played; the scene construction instruction is also used to instruct the second terminal to play the voice according to the voice information of the voice to be played, so as to output the voice for video shooting.
  • the above-mentioned process of identifying the preview frame image of the first terminal can be performed not only on the second terminal, but also on the first terminal.
  • The first terminal can obtain a recognition result based on the recognition of the preview frame image and, according to the recognition result, generate a scene construction instruction carrying one or more pieces of scene construction information among posture adjustment parameters and the image information corresponding to the target scene image, and send it to the second terminal, so that the second terminal constructs the scene according to the scene construction information to obtain the target shooting scene.
  • the target scene image is a scene image corresponding to the preview frame image.
  • For example, the mobile phone can perform image recognition on the preview frame image displayed on the preview frame image display interface of the shooting application, and when it recognizes that the preview frame image contains a robot, send to the robot, according to the position and action of the robot and the other people being photographed together in the preview frame image, a scene construction instruction carrying one or more pieces of scene construction information among posture adjustment parameters and the image information corresponding to the target scene image.
  • shooting suggestions can also be output to mobile phone users.
  • When the second terminal is a car, the first terminal can acquire different target scene images according to different recognized scenes and send the scene construction instruction carrying the image information of the target scene image to the car, so that the car displays the target scene image on the display screen of its window.
  • For example, if blue sky and white clouds are recognized, the target scene image matching the blue sky and white clouds can be obtained; if grass is recognized, the target scene image matching the grass can be obtained and displayed; and the scene construction instruction carrying the image information of the target scene image is sent to the car, so that the car displays the target scene image on the display screen of the car window.
  • In some embodiments, the user of the first terminal can directly select the scene type of the scene to be constructed and send the scene construction instruction carrying the scene construction information corresponding to that scene type to the second terminal, so as to control the second terminal to construct the scene.
  • the scene type of the scene to be constructed may include the image type of the scene image to be displayed and/or the scene type of the scene to be constructed may include the voice type of the voice to be played.
  • In step A04 and step B04 above, when determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions of that scene: if the scene type includes the image type of a scene image to be displayed, the first terminal detects whether the second terminal includes a display screen for displaying that scene image; if it does, the second terminal is determined to satisfy the construction conditions of the scene to be constructed.
  • In this case, generating a scene construction instruction carrying scene construction information may mean generating a scene construction instruction carrying the image information corresponding to the scene image to be displayed; the instruction is used to instruct the second terminal to display the scene image according to the image information, so as to obtain the auxiliary shooting scene.
  • Similarly, in step A04 and step B04 above, if the scene type of the scene to be constructed includes the voice type of a voice to be played, the first terminal detects whether the second terminal includes a speaker for playing that voice; if it does, the second terminal is determined to satisfy the construction conditions of the scene to be constructed.
  • In this case, generating a scene construction instruction carrying scene construction information may mean generating a scene construction instruction carrying the voice information corresponding to the voice to be played; the instruction is used to instruct the second terminal to play the voice according to the voice information, so as to obtain the auxiliary shooting scene.
  • The scene type of the scene to be constructed may also include multiple scene types, for example simultaneously including the image type of a scene image to be displayed, the voice type of a voice to be played, and the gesture type corresponding to a target gesture.
  • In that case, determining whether the second terminal satisfies the construction conditions of the scene to be constructed based on its scene type may mean determining whether the second terminal satisfies the construction conditions for each scene type of the scene to be constructed.
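  • The multi-type condition check described above can be sketched as follows. This is only an illustrative sketch: `SceneType`, `REQUIRED_CAPABILITY` and the capability names are assumptions for illustration, not terms from the application.

```python
from enum import Enum, auto

class SceneType(Enum):
    IMAGE = auto()    # a scene image to be displayed
    VOICE = auto()    # a voice to be played
    GESTURE = auto()  # a target gesture to be performed

# Hypothetical capability each scene type requires of the second terminal
REQUIRED_CAPABILITY = {
    SceneType.IMAGE: "display_screen",
    SceneType.VOICE: "speaker",
    SceneType.GESTURE: "movable_body",
}

def meets_construction_conditions(scene_types, capabilities):
    """A scene combining several scene types is constructible only if the
    second terminal satisfies the condition for every one of its types."""
    return all(REQUIRED_CAPABILITY[t] in capabilities for t in scene_types)
```

  • For instance, a car whose window has a display but which has no speaker would satisfy an image-only scene but not an image-plus-voice scene.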
  • In the embodiments of the present application, when in the interactive shooting mode, the first terminal interacts with the second terminal by sending it the scene construction instruction, so that the second terminal constructs the scene according to the scene construction information and multimedia information such as images or sounds is superimposed on the first terminal's current shooting scene to obtain a processed target shooting scene.
  • The first terminal can then shoot in the target shooting scene obtained with the assistance of the second terminal and obtain the corresponding target shooting image, so that it can capture a processed image without relying on its own image processing functions.
  • FIG. 6 shows a schematic flowchart of a second implementation of the auxiliary shooting method provided by an embodiment of the present application. The method is applied to the second terminal, may be executed by an auxiliary shooting device of the second terminal, and may be implemented in steps 601 to 602 as follows.
  • Step 601: receive a scene construction instruction carrying scene construction information sent by the first terminal, the instruction being generated by the first terminal when it is in the interactive shooting mode.
  • Step 602: perform scene construction according to the scene construction information to obtain an auxiliary shooting scene.
  • The auxiliary shooting scene and the actual scene of the environment where the first terminal is located are superimposed on each other to obtain the target shooting scene.
  • Optionally, the scene construction information includes the preview frame image of the first terminal; scene construction according to this information can then be implemented by performing image recognition on the preview frame image and performing a preset type of auxiliary shooting action according to the recognition result, to obtain the auxiliary shooting scene.
  • The preset types of auxiliary shooting actions may include one or more of: posture adjustment, outputting shooting suggestions, and acquiring and displaying a target scene image corresponding to the preview frame image.
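  • As a rough sketch of how the second terminal might dispatch these preset auxiliary shooting actions after recognizing the preview frame image; all class, field and action names here are hypothetical stand-ins, not from the application:

```python
class RecognitionResult:
    """Stand-in for the result of image recognition on a preview frame."""
    def __init__(self, contains_self, style):
        self.contains_self = contains_self  # the second terminal itself appears in the frame
        self.style = style                  # e.g. the style of the user in the frame

def perform_auxiliary_actions(result, enabled_actions, scene_images):
    """Return a log of the auxiliary shooting actions actually performed."""
    performed = []
    if "posture_adjustment" in enabled_actions and result.contains_self:
        performed.append("adjusted posture")
    if "shooting_suggestion" in enabled_actions:
        performed.append("output shooting suggestion")
    if "display_target_scene_image" in enabled_actions:
        # choose a target scene image matching the recognized style
        performed.append("displayed " + scene_images.get(result.style, "default image"))
    return performed
```

  • A robot with a chest display, for example, would enable both posture adjustment and target scene image display, while a car window display would enable only the latter.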
  • Optionally, the second terminal may be a terminal including a display screen, with the scene construction information including image information corresponding to the scene image to be displayed; scene construction based on the scene construction information then consists of displaying the image according to the image information to obtain the auxiliary shooting scene.
  • Alternatively, the second terminal may be a terminal including a speaker, with the scene construction information including voice information corresponding to the voice to be played; scene construction then consists of playing the voice according to the voice information to obtain the auxiliary shooting scene.
  • Optionally, during video shooting, the second terminal can also collect the voice played by the first terminal, or the voice of the first terminal's user, and output a voice for assisting the video shooting based on the collected voice.
  • For example, the mobile phone user can have a conversation with the robot or sing a duet with it while shooting video; the robot collects the voice content of the phone user, or the voice played by the phone, and outputs a corresponding voice to cooperate with the user in video recording.
  • In the embodiments of the present application, the second terminal receives the scene construction instruction carrying the scene construction information, generated and sent by the first terminal when it is in the interactive shooting mode, and then constructs the scene according to the scene construction information.
  • Multimedia information such as images or sounds is thereby superimposed on the first terminal's current shooting scene to obtain a processed target shooting scene; the first terminal can then shoot in the target shooting scene obtained with the assistance of the second terminal and obtain the target shooting image corresponding to that scene, which makes it possible for the first terminal to capture processed images without relying on its own image processing functions.
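  • The flow of steps 601 to 602 on the second terminal can be sketched as follows, under assumed field names (`scene_construction_info`, `image_info`, `voice_info`) and with a stand-in terminal class that merely records what would be done:

```python
class SecondTerminal:
    """Stand-in that records what the second terminal would do."""
    def __init__(self):
        self.log = []
    def display(self, image_info):   # requires a display screen
        self.log.append(("display", image_info))
    def play(self, voice_info):      # requires a speaker
        self.log.append(("play", voice_info))

def handle_scene_construction(instruction, terminal):
    """Steps 601-602: receive the instruction, then construct the scene
    according to whichever kinds of construction information it carries."""
    info = instruction["scene_construction_info"]
    if "image_info" in info:
        terminal.display(info["image_info"])
    if "voice_info" in info:
        terminal.play(info["voice_info"])
    return terminal
```

  • The auxiliary shooting scene built this way is then superimposed with the real environment of the first terminal, as described above.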
  • Optionally, the steps in the above embodiments can be executed by a plug-in configured on the first terminal or the second terminal, and the plug-in can be stored in a cloud server. As shown in FIG. 7, a schematic diagram of the interaction among the first terminal, the second terminal and the cloud server provided in an embodiment of the present application, the auxiliary shooting method includes the following steps:
  • Step 701: the first terminal establishes a communication connection with the second terminal.
  • Step 702: the first terminal obtains the scene construction type information corresponding to the second terminal, that is, the scene construction types supported by the second terminal.
  • For example, after starting the shooting application, the first terminal can scan for devices through wireless connection methods such as Bluetooth, WiFi or ZigBee, and after scanning obtain device information of the terminal to be connected, such as its device type, device type number and version information.
  • The first terminal then sends a data request carrying the device information to the cloud server, and receives from the cloud server, according to the device information table and shooting scene table stored on it, the authentication information corresponding to the device information and the scene construction type information corresponding to the terminal to be connected.
  • A physical data channel is then established with the terminal to be connected based on the authentication information. Here, the terminal to be connected is the above-mentioned second terminal, and the scene construction type information corresponding to the terminal to be connected is the scene construction type information corresponding to the second terminal.
  • Table 1 below is the device information table stored in the cloud server, and Table 2 below is the shooting scene table stored in the cloud server.
  • Table 1: Equipment type | Device type number | Discovery protocol | Data protocol | Certification information | Version information
  • Mobile A | UUID_XXXX | Bluetooth | BLE | Certification information | Version information
  • Robot A | UUID_YYYY | Bluetooth | WIFI P2P | Certification information | Version information
  • Table 2: Scene number | Device type number | Scene name
  • 100001 | UUID_XXXXX | snowflake image
  • 100002 | UUID_XXXX | falling flower image
  • 100003 | UUID_YYYYY | backup dancer angel
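  • The cloud server lookup of step 702 against tables like the ones above might look as follows. The dictionary structure and field names are assumptions for illustration, and the `UUID_...` values are copied verbatim from the example rows (including their differing lengths):

```python
# Illustrative in-memory versions of Table 1 and Table 2
DEVICE_TABLE = {
    "UUID_XXXX": {"type": "Mobile A", "discovery": "Bluetooth", "data": "BLE"},
    "UUID_YYYY": {"type": "Robot A", "discovery": "Bluetooth", "data": "WIFI P2P"},
}
SHOOTING_SCENE_TABLE = [
    {"scene_number": "100001", "device_type_number": "UUID_XXXXX", "scene_name": "snowflake image"},
    {"scene_number": "100002", "device_type_number": "UUID_XXXX", "scene_name": "falling flower image"},
    {"scene_number": "100003", "device_type_number": "UUID_YYYYY", "scene_name": "backup dancer angel"},
]

def scene_construction_types(device_type_number):
    """What the cloud server might return for step 702: the scene
    construction types supported by the terminal to be connected."""
    return [row["scene_name"] for row in SHOOTING_SCENE_TABLE
            if row["device_type_number"] == device_type_number]
```

  • The first terminal would send the device type number obtained during scanning, and the server would answer with the matching scene names.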
  • Step 703: determine the scene type of the scene to be constructed based on the scene construction type information corresponding to the second terminal.
  • Specifically, the first terminal first generates a scene construction type selection interface according to the scene construction type information corresponding to the second terminal, and then determines the scene type of the scene to be constructed in response to the scene construction type selection operation triggered by the user on that interface.
  • For example, when the first terminal is a mobile phone and the second terminal is a car, the scene construction type information corresponding to the car may include: displaying scene images such as snowflake images, starry-sky images, falling flower images, grazing cattle and sheep images, and accompanying dancer angels, and playing voices such as voice A, voice B and voice C.
  • Through the scene construction type selection interface 81 shown in FIG. 8, it is determined that the scene type of the scene to be constructed includes the image type of a scene image to be displayed, and that this image type is a snowflake image.
  • Step 704: the first terminal downloads the first plug-in corresponding to the scene to be constructed (for example, plug-in K) from the cloud server, stores and runs it, and uses it to execute method steps such as: determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions of that scene; when it does, setting the shooting mode of the first terminal to the interactive shooting mode, generating a scene construction instruction carrying the scene construction information, and sending that instruction to the second terminal.
  • For example, plug-in K detects whether the second terminal includes a display screen for displaying the scene image to be displayed; if it does, the second terminal is determined to satisfy the construction conditions of the scene to be constructed.
  • Step 705: the second terminal downloads the second plug-in corresponding to the scene to be constructed (for example, plug-in J) from the cloud server, then stores and runs it to construct the scene according to the scene construction information and obtain the auxiliary shooting scene.
  • The cloud server may store plug-ins for constructing different scenes corresponding to different types of terminals.
  • Optionally, the second terminal may also download the second plug-in corresponding to the scene to be constructed from the cloud server through the first terminal, and then obtain the second plug-in from the first terminal.
  • Step 706: uninstall the first plug-in and the second plug-in, i.e., plug-in K and plug-in J.
  • In this way, the business logic of shooting can be separated from the first terminal and the second terminal, achieving capability-matching interaction in the form of two-way plug-ins, which can provide a better shooting experience and enables flexible configuration of interconnection capabilities between terminals.
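  • The two-way plug-in lifecycle of steps 704 to 706 can be sketched as follows; `CloudServer`, `Terminal` and the role/scene keys are stand-ins for illustration, not part of the application:

```python
class CloudServer:
    """Stand-in cloud server holding per-terminal plug-ins for each scene."""
    PLUGINS = {("first", "snowflake image"): "plug-in K",
               ("second", "snowflake image"): "plug-in J"}
    def download(self, role, scene):
        return self.PLUGINS[(role, scene)]

class Terminal:
    def __init__(self, role):
        self.role = role
        self.plugin = None
    def install(self, server, scene):   # steps 704/705: download, store, run
        self.plugin = server.download(self.role, scene)
    def uninstall(self):                # step 706
        self.plugin = None

server = CloudServer()
first, second = Terminal("first"), Terminal("second")
first.install(server, "snowflake image")
second.install(server, "snowflake image")
installed = (first.plugin, second.plugin)
# ... shooting happens while both plug-ins run ...
first.uninstall()
second.uninstall()
```

  • Keeping the plug-ins only for the duration of the shot is what lets the shooting business logic stay out of both terminals.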
  • FIG. 9 shows a schematic structural diagram of an auxiliary shooting device 900 provided by an embodiment of the present application. The device is configured on the first terminal and includes a generating unit 901 and a sending unit 902.
  • The generating unit 901 is configured to generate a scene construction instruction carrying scene construction information when the first terminal is in the interactive shooting mode.
  • The sending unit 902 is configured to send the scene construction instruction to the second terminal to obtain a target shooting scene; the scene construction instruction is used to instruct the second terminal to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene, and the target shooting scene includes the auxiliary shooting scene obtained by the second terminal performing scene construction according to the scene construction information.
  • Optionally, the auxiliary shooting device 900 is further configured to: when the first terminal and the second terminal are in a communication connection state, set the shooting mode of the first terminal to the interactive shooting mode.
  • Optionally, the auxiliary shooting device 900 is further configured to: add an interactive shooting control to the shooting interface of the first terminal, and set the shooting mode of the first terminal to the interactive shooting mode when the interactive shooting control is triggered.
  • Optionally, the scene construction information includes a preview frame image of the first terminal, and the scene construction instruction is used to instruct the second terminal to perform image recognition on the preview frame image and to perform a preset type of auxiliary shooting action according to the recognition result, to obtain the auxiliary shooting scene.
  • The preset types of auxiliary shooting actions include one or more of posture adjustment, outputting shooting suggestions, and obtaining and displaying a target scene image corresponding to the preview frame image.
  • The generating unit is specifically configured to generate, according to the recognition result, a scene construction instruction carrying one or more types of scene construction information among the posture adjustment parameters and the target scene image corresponding to the preview frame image.
  • Optionally, the scene type of the scene to be constructed includes the image type of a scene image to be displayed; when determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions of that scene, the device is specifically configured to: detect whether the second terminal includes a display screen for displaying the scene image to be displayed; if it does, determine that the second terminal satisfies the construction conditions of the scene to be constructed.
  • The generating unit is then specifically configured to generate a scene construction instruction carrying the image information corresponding to the scene image to be displayed; the scene construction instruction is used to instruct the second terminal to display the scene image according to the image information to obtain the auxiliary shooting scene.
  • Optionally, the scene type of the scene to be constructed includes the voice type of a voice to be played; when determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions of that scene, the device is specifically configured to: detect whether the second terminal includes a speaker for playing the voice to be played; if it does, determine that the second terminal satisfies the construction conditions of the scene to be constructed.
  • The generating unit is then specifically configured to generate a scene construction instruction carrying the voice information corresponding to the voice to be played; the scene construction instruction is used to instruct the second terminal to play the voice according to the voice information to obtain the auxiliary shooting scene.
  • FIG. 10 shows a schematic structural diagram of another auxiliary shooting device 110 provided by an embodiment of the present application. The device is configured on the second terminal and includes a receiving unit 111 and a construction unit 112.
  • The receiving unit 111 is configured to receive a scene construction instruction carrying scene construction information sent by the first terminal, the instruction being generated by the first terminal when it is in the interactive shooting mode.
  • The construction unit 112 is configured to construct a scene according to the scene construction information to obtain an auxiliary shooting scene.
  • Optionally, the scene construction information includes a preview frame image of the first terminal, and scene construction according to the scene construction information is carried out by performing image recognition on the preview frame image and performing a preset type of auxiliary shooting action according to the recognition result, to obtain the auxiliary shooting scene.
  • The preset types of auxiliary shooting actions include one or more of posture adjustment, outputting shooting suggestions, and obtaining and displaying a target scene image corresponding to the preview frame image.
  • Optionally, the second terminal may be a terminal including a display screen, with the scene construction information including image information corresponding to the scene image to be displayed; the construction unit is then specifically configured to display the image according to the image information to obtain the auxiliary shooting scene.
  • Alternatively, the second terminal may be a terminal including a speaker, with the scene construction information including voice information corresponding to the voice to be played; the construction unit is then specifically configured to play the voice according to the voice information to obtain the auxiliary shooting scene.
  • For the specific working processes of the auxiliary shooting device 900 and the auxiliary shooting device 110 described above, reference may be made to the corresponding processes of the auxiliary shooting methods described above, which will not be repeated here.
  • FIG. 11 shows a schematic structural diagram of a terminal provided by an embodiment of the present application; the terminal may be the above-mentioned first terminal or the above-mentioned second terminal.
  • The terminal may include: a processor 121, a memory 122, one or more input devices 123 (only one is shown in FIG. 11) and one or more output devices 124 (only one is shown in FIG. 11); the processor 121, the memory 122, the input device 123 and the output device 124 are connected through a bus 125.
  • Optionally, the above-mentioned display screen can be a display screen with a transparent display function, and, when the second terminal is a car, the display screen can be located at the door window and/or windshield and/or sunroof of the car.
  • The processor 121 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and the like.
  • The input device 123 may include a virtual keyboard, a touch panel, a fingerprint sensor (for collecting the user's fingerprint information and fingerprint direction information), a microphone, etc.; the output device 124 may include a display, a speaker, and the like.
  • The memory 122 stores a computer program that can run on the processor 121, for example, a program for the auxiliary shooting method.
  • When the processor executes the computer program, the steps in the above embodiments of the auxiliary shooting method are implemented, for example, steps 101 to 102 shown in FIG. 1 or steps 601 to 602 shown in FIG. 6; or, the functions in the above device embodiments are implemented, for example, the functions of units 901 to 902 shown in FIG. 9 or the functions of units 111 to 112 shown in FIG. 10.
  • The above computer program may be divided into one or more modules/units, which are stored in the memory 122 and executed by the processor 121 to implement the present application.
  • The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program in the first terminal or second terminal for assisting shooting.
  • In addition, an embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed, the steps of the above auxiliary shooting methods are implemented.
  • The disclosed device/user terminal and method may be implemented in other ways.
  • The device/user terminal embodiments described above are only illustrative; for example, the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • The mutual coupling, direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical or in other forms.
  • A unit described as a separate component may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit; it may be located in one place or distributed over multiple network units, and some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If an integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on this understanding, the present application implements all or part of the processes in the methods of the above embodiments, which can also be completed by instructing the relevant hardware through a computer program; the computer program can be stored in a computer-readable storage medium.
  • The computer program includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form.
  • The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained on computer-readable media may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

This application belongs to the technical field of photographing and in particular relates to an auxiliary shooting method, device, terminal and computer-readable storage medium. In the embodiments of this application, when the first terminal is in the interactive shooting mode, it sends a scene construction instruction to the second terminal to interact with it, so that the second terminal constructs a scene according to the scene construction information, thereby beautifying or otherwise processing the first terminal's current shooting scene, for example by superimposing multimedia information such as images or sounds, to obtain a processed target shooting scene. The first terminal can then shoot in the target shooting scene constructed with the assistance of the second terminal and obtain the target shooting image corresponding to that scene, so that it can capture processed images without relying on its own image processing functions, which can improve the realism of image shooting.

Description

An auxiliary shooting method, device, terminal and computer-readable storage medium

This application claims priority to the Chinese patent application filed with the China Patent Office on November 19, 2021, with application number 202111375248.2 and entitled "An auxiliary shooting method, device, terminal and computer-readable storage medium", the entire contents of which are incorporated herein by reference.

Technical field

This application belongs to the technical field of photographing, and in particular relates to an auxiliary shooting method, device, terminal and computer-readable storage medium.

Background

At present, in order to enhance the shooting effect of an image, after the image has been captured, a series of processing operations can be performed on it through image processing functions built into the terminal's shooting application, such as filters, stickers and special effects, so as to process the image.
Summary

The embodiments of this application provide an auxiliary shooting method, device, terminal and computer-readable storage medium, which enable a terminal to capture processed images without relying on its own image processing functions.

A first aspect of the embodiments of this application provides an auxiliary shooting method, applied to a first terminal, including:

when the first terminal is in the interactive shooting mode, generating a scene construction instruction carrying scene construction information;

sending the scene construction instruction to the second terminal to obtain a target shooting scene; the scene construction instruction is used to instruct the second terminal to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene; the target shooting scene includes the auxiliary shooting scene obtained by the second terminal performing scene construction according to the scene construction information.

A second aspect of the embodiments of this application provides an auxiliary shooting method, applied to a second terminal, including:

receiving a scene construction instruction carrying scene construction information sent by the first terminal, the scene construction instruction being generated by the first terminal when it is in the interactive shooting mode;

performing scene construction according to the scene construction information to obtain an auxiliary shooting scene.

A third aspect of the embodiments of this application provides an auxiliary shooting device, configured on a first terminal, including:

a generating unit, configured to generate a scene construction instruction carrying scene construction information when the first terminal is in the interactive shooting mode;

a sending unit, configured to send the scene construction instruction to the second terminal to obtain a target shooting scene; the scene construction instruction is used to instruct the second terminal to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene; the target shooting scene includes the auxiliary shooting scene obtained by the second terminal performing scene construction according to the scene construction information.

A fourth aspect of the embodiments of this application provides an auxiliary shooting device, configured on a second terminal, including:

a receiving unit, configured to receive a scene construction instruction carrying scene construction information sent by the first terminal, the scene construction instruction being generated by the first terminal when it is in the interactive shooting mode;

a construction unit, configured to perform scene construction according to the scene construction information to obtain an auxiliary shooting scene.

A fifth aspect of the embodiments of this application provides a terminal, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the method of the first aspect are implemented.

A sixth aspect of the embodiments of this application provides a terminal, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the method of the second aspect are implemented.

A seventh aspect of the embodiments of this application provides a computer-readable storage medium storing a computer program; when the computer program is executed, the steps of the method of the first aspect or the steps of the method of the second aspect are implemented.
Brief description of the drawings

To explain the auxiliary shooting method of the embodiments of this application more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings only show certain embodiments of this application and should therefore not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic flowchart of a first implementation of the auxiliary shooting method provided by an embodiment of this application;

FIG. 2 is a schematic diagram of the construction of a target shooting scene provided by an embodiment of this application;

FIGS. 3a-3b are schematic diagrams of the interactive shooting control provided by an embodiment of this application;

FIG. 4 is a schematic diagram of a robot adjusting its posture provided by an embodiment of this application;

FIG. 5 is a schematic diagram of a robot displaying a target scene image provided by an embodiment of this application;

FIG. 6 is a schematic flowchart of a second implementation of the auxiliary shooting method provided by an embodiment of this application;

FIG. 7 is a schematic diagram of the interaction among the first terminal, the second terminal and the cloud server provided by an embodiment of this application;

FIG. 8 is a schematic diagram of the scene construction type selection interface provided by an embodiment of this application;

FIG. 9 is a first schematic structural diagram of the auxiliary shooting device provided by an embodiment of this application;

FIG. 10 is a second schematic structural diagram of the auxiliary shooting device provided by an embodiment of this application;

FIG. 11 is a schematic structural diagram of the terminal provided by an embodiment of this application.
Detailed description

To make the purpose, technical solutions and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.

"And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more than two, and "at least one" or "one or more" means one, two or more.

References in this specification to "one embodiment" or "some embodiments" mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", etc., appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "including", "comprising", "having" and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
In the related art, when a terminal captures images with a shooting application, it can apply filters to the captured image, add stickers to it, or apply special effects, thereby processing the image. However, these image processing methods mainly rely on the terminal's own image processing functions and therefore have certain limitations.

Based on the above technical problem, the embodiments of this application provide an auxiliary shooting method, device, terminal and computer-readable storage medium, which enable a terminal to capture processed images without relying on its own image processing functions.

The solutions of this application are illustrated below by way of embodiments. It should be noted that the image shooting mentioned in the embodiments of this application may refer to the shooting of a single photo or to video shooting.
By way of example, FIG. 1 shows a schematic flowchart of a first implementation of the auxiliary shooting method provided by an embodiment of this application. The method is applied to a first terminal, may be executed by an auxiliary shooting device of the first terminal, and may be implemented in steps 101 to 102 below.

Step 101: when the first terminal is in the interactive shooting mode, generate a scene construction instruction carrying scene construction information.

Step 102: send the scene construction instruction to the second terminal to obtain a target shooting scene.

In the embodiments of this application, the first terminal may be any of various types of terminal with an image shooting function, for example a mobile phone, a tablet computer, an in-vehicle device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the embodiments of this application place no restriction on the specific type of the first terminal.

The second terminal may be a terminal such as a smart car, smart TV, smart robot, smart home appliance, smart toy, advertising machine, or stage media control device; the embodiments of this application place no restriction on the specific type of the second terminal either.

Furthermore, the first terminal may establish a communication connection with the second terminal through a wireless connection such as Bluetooth, WiFi, ZigBee or UWB, or through a wired connection.

In the embodiments of this application, the interactive shooting mode is a shooting mode in which the first terminal can interact with the second terminal so that the second terminal constructs a scene according to the scene construction information, thereby processing the first terminal's current shooting scene to obtain a processed target shooting scene.

In the interactive shooting mode, the first terminal and the second terminal are in a communication connection state; therefore, after generating the scene construction instruction carrying the scene construction information, the first terminal can send it to the second terminal, so that the second terminal constructs the scene according to the scene construction information and the processed target shooting scene is obtained.
In the embodiments of this application, the scene construction instruction is used to instruct the second terminal to construct a scene according to the scene construction information, where the scene construction information can be determined according to the scene type of the scene to be constructed.

For example, when the scene type of the scene to be constructed is the image type of a scene image to be displayed, the scene construction information may include the image information corresponding to that scene image, and the scene construction instruction may be used to instruct the second terminal to display the scene image; the image information may be the number of the image or the image content itself.

As another example, when the scene type of the scene to be constructed is the voice type of a voice to be played, the scene construction information may include the voice information of the voice to be played, and the scene construction instruction may be used to instruct the second terminal to play that voice according to the voice information; the voice information may be the number of the voice or the voice content itself.

In the embodiments of this application, sending the scene construction instruction to the second terminal to obtain the target shooting scene may mean sending the instruction to the second terminal so that the second terminal performs scene construction according to the scene construction information to obtain the auxiliary shooting scene, and the auxiliary shooting scene and the actual scene of the environment where the first terminal is located are superimposed on each other to obtain the target shooting scene. The target shooting scene contains the auxiliary shooting scene constructed by the second terminal according to the scene construction information.

For example, the first terminal is a mobile phone, the second terminal is a car, and the phone user sits inside the car. When the scene construction instruction carrying the image information corresponding to the scene image to be displayed is sent to the car, the car can display that scene image on the display of its window, so that the phone user sitting inside the car can capture a scene image of the target shooting scene obtained by superimposing the actual scene outside the window and the scene image displayed on the window display.

For example, if the scene image to be displayed is the snowflake image 21 shown in FIG. 2 and the actual scene outside the window is the house 22 shown in FIG. 2, the target shooting scene is the scene obtained by superimposing the snowflake image 21 and the house 22; shooting this target shooting scene with the phone yields image 23.
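The snowflake example above follows the generic flow of steps 101 and 102: generate the scene construction instruction, then send it over the established connection. A minimal sketch, under assumed field names (`command`, `scene_construction_info`, `image_info`) and with a list standing in for the communication channel, might be:

```python
def generate_instruction(scene_type, payload):
    """Step 101: the payload may be an image/voice number or the content itself."""
    return {"command": "construct_scene",
            "scene_construction_info": {scene_type: payload}}

def send_to_second_terminal(instruction, channel):
    """Step 102: the channel stands in for the Bluetooth/WiFi/wired link."""
    channel.append(instruction)
    return instruction

channel = []
send_to_second_terminal(generate_instruction("image_info", "snowflake image"), channel)
```

The second terminal would then read the instruction from the channel and display the snowflake image on its window display.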
In the embodiments of this application, when in the interactive shooting mode, the first terminal sends the scene construction instruction to the second terminal to interact with it, so that the second terminal constructs the scene according to the scene construction information, thereby beautifying or otherwise processing the first terminal's current shooting scene, for example by superimposing multimedia information such as images or sounds, to obtain a processed target shooting scene. The first terminal can then shoot in the target shooting scene constructed with the assistance of the second terminal and obtain the target shooting image corresponding to that scene, so that it can capture processed images without relying on its own image processing functions, which can improve the realism of image shooting.
The following illustrates four implementations of how the shooting mode of the first terminal may be set to the interactive shooting mode in the embodiments of this application:

First approach:

Optionally, in some embodiments of this application, before step 101, it may be determined whether the first terminal and the second terminal are in a communication connection state, and when they are, the shooting mode of the first terminal is set to the interactive shooting mode.

For example, after starting the shooting application, the first terminal may actively scan for devices to establish a communication connection with the second terminal, and automatically set itself to the interactive shooting mode once the two terminals are in a communication connection state. Alternatively, after the user turns on Bluetooth or another connection function and selects the terminal to be connected (the second terminal), the communication connection between the first terminal and the second terminal is completed, and, while the two terminals are in a communication connection state, the shooting mode of the first terminal is set to the interactive shooting mode.

That is, when the first terminal and the second terminal are in a communication connection state, the shooting mode of the first terminal is automatically set to the interactive shooting mode regardless of whether the shooting application is open, without any other operation.

For example, if the first terminal established a communication connection with the second terminal while performing other functions, then when the shooting application is opened on the first terminal, its shooting mode can automatically be set to the interactive shooting mode without mode selection or other operations.
Second approach:

Optionally, in other embodiments of this application, the shooting mode of the first terminal may be set to the interactive shooting mode when an interactive shooting control provided on the shooting interface of the first terminal is triggered.

That is, the setting of the interactive shooting mode of the first terminal is independent of whether the first terminal and the second terminal are in a communication connection state. The first terminal may set its shooting mode to the interactive shooting mode in response to the user triggering the interactive shooting control on its shooting interface either after establishing a communication connection with the second terminal or before doing so.

For example, the shooting interface of the first terminal's shooting application may be preset with an interactive shooting control; when the control is triggered, the shooting mode of the first terminal is set to the interactive shooting mode, and if no communication connection exists between the first terminal and the second terminal while the first terminal is in the interactive shooting mode, establishing the communication connection between the two terminals is triggered automatically so that they enter a communication connection state.

For example, as shown in FIG. 3a, the shooting interface of the first terminal's shooting application may be preset with an interactive shooting control 301. When the control 301 is clicked or pressed, it is determined that the user of the first terminal wishes to shoot in the interactive shooting mode, and the shooting mode of the first terminal is set to the interactive shooting mode. In some embodiments of this application, when the first terminal enters the interactive shooting mode in this way and has not established a communication connection with the second terminal, the first terminal is automatically triggered to establish one, so that the two terminals enter a communication connection state.

For example, the first terminal may send a Bluetooth scanning signal to identify the second terminal present in its current environment and establish a communication connection with it; or the first terminal may receive a Bluetooth scanning signal sent by the second terminal and thereby establish the connection. This application does not limit the connection method between the first terminal and the second terminal.

It should be noted that FIG. 3a merely illustrates the position and shape of the interactive shooting control; this application does not limit them, and in some embodiments of this application the control may be hidden in a pop-up window of the shooting interface instead of being displayed directly on the interface.

For example, as shown in FIG. 3b, the interactive shooting control is hidden in a pop-up window of the shooting interface. By tapping the "More" control, the user makes the first terminal load a shooting mode selection window 32 on the shooting interface; this window contains the shooting mode selection controls hidden by the shooting application, and when the user slides the interactive shooting control 321 to the left, the first terminal enters the interactive shooting mode.
第三种:
In other embodiments of the present application, when the second terminal can construct multiple kinds of scenes, the shooting mode of the first terminal may be set to the interactive shooting mode through steps A01 to A05 below.
Step A01: when the first terminal and the second terminal are in a communication connection state, obtain the scene construction type information corresponding to the second terminal.
Step A02: generate a scene construction type selection interface based on the scene construction type information.
Step A03: in response to a scene construction type selection operation triggered by the user on the scene construction type selection interface, determine the scene type of the scene to be constructed.
For example, when the first terminal and the second terminal are in a communication connection state, the first terminal first obtains the device type information of the second terminal, determines from it the scene construction type information corresponding to the second terminal, and generates a scene construction type selection interface containing one or more scene construction type selection controls; then, in response to the user triggering one of those controls on the interface, it determines the scene type of the scene to be constructed.
Step A04: determine, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions for that scene.
Step A05: if the second terminal satisfies the construction conditions for the scene to be constructed, set the shooting mode of the first terminal to the interactive shooting mode.
In the embodiments of the present application, different scene types may require different resources. Therefore, after the scene type of the scene to be constructed is determined, the first terminal may first check whether the second terminal satisfies the construction conditions for that scene, i.e. whether it has the resources required to construct it. If the second terminal satisfies these resource conditions, the shooting mode of the first terminal is set to the interactive shooting mode; a scene construction instruction carrying the scene construction information is then generated and sent to the second terminal, so that the second terminal constructs the scene directly according to that information, yielding the target shooting scene.
Correspondingly, if the second terminal does not satisfy the construction conditions for the scene to be constructed, the shooting mode in which the first terminal operates when its shooting application is opened may be the same as the default shooting mode of the first terminal in the related art. For example, if the second terminal does not satisfy the construction conditions, the first terminal may be set to the default photo shooting mode or video shooting mode when its shooting application is opened; the present application places no restriction on this.
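Steps A01 to A05 can be sketched as a capability check followed by a mode decision. All names below are illustrative assumptions, not part of this application; the mapping from scene types to required hardware follows the display-screen and speaker conditions described later in this document:

```python
class SecondTerminal:
    """Stand-in for the second terminal's hardware capabilities."""

    def __init__(self, has_screen, has_speaker):
        self.has_screen, self.has_speaker = has_screen, has_speaker

    def scene_construction_types(self):                       # step A01
        types = []
        if self.has_screen:
            types.append("image")   # can display a scene image
        if self.has_speaker:
            types.append("voice")   # can play speech
        return types

    def satisfies_construction_conditions(self, scene_type):  # step A04
        # Image scenes need a display screen, voice scenes a speaker.
        return (scene_type == "image" and self.has_screen) or \
               (scene_type == "voice" and self.has_speaker)


def set_shooting_mode(second, chosen_type):
    """Steps A04-A05: enter interactive mode only if the second
    terminal can actually construct the chosen scene type."""
    if second.satisfies_construction_conditions(chosen_type):
        return "interactive"
    return "default"  # fall back to the ordinary photo/video mode


car = SecondTerminal(has_screen=True, has_speaker=True)
fridge = SecondTerminal(has_screen=False, has_speaker=False)
print(set_shooting_mode(car, "image"))     # interactive
print(set_shooting_mode(fridge, "image"))  # default
```

Steps A02/A03 (building the selection UI and reading the user's choice) are omitted here; `chosen_type` stands for their result.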
The fourth approach:
Optionally, in some implementations of the present application, when the second terminal can construct multiple kinds of scenes, whether the first terminal is in the interactive shooting mode may also be determined through steps B01 to B05 below.
Step B01: when the first terminal and the second terminal are in a communication connection state, obtain the scene construction type information corresponding to the second terminal.
Step B02: generate a scene construction type selection interface based on the scene construction type information.
Step B03: in response to a scene construction type selection operation triggered by the user on the scene construction type selection interface, determine the scene type of the scene to be constructed.
Step B04: determine, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions for that scene.
In this embodiment, steps B01 to B04 are implemented in the same way as steps A01 to A04 above and are not repeated here.
Step B05: if the second terminal satisfies the construction conditions for the scene to be constructed, add an interactive shooting control to the shooting interface of the first terminal, and set the shooting mode of the first terminal to the interactive shooting mode when that control is triggered.
In the embodiments of the present application, when the second terminal satisfies the construction conditions for the scene to be constructed, an interactive shooting control such as control 301 in Fig. 3a or control 321 in Fig. 3b is added to the shooting interface of the first terminal. The user can thus choose the scene to be constructed in advance and trigger the interactive shooting control only when ready to shoot, putting the first terminal into the interactive shooting mode, generating a scene construction instruction carrying the scene construction information, and sending that instruction to the second terminal. This prevents the second terminal from starting the scene construction actions corresponding to the scene construction information before the user of the first terminal is ready to shoot.
Similarly, if the second terminal does not satisfy the construction conditions for the scene to be constructed, the shooting mode in which the first terminal operates when its shooting application is opened may be the same as in the related art. For example, if the second terminal does not satisfy the construction conditions, the first terminal is set to the photo shooting mode or the video shooting mode when its shooting application is opened; the present application places no restriction on this.
It should be noted that the foregoing merely exemplifies how the shooting mode of the first terminal may be set to the interactive shooting mode and does not limit the protection scope of the present application; in other implementations of the present application, the first terminal may enter the interactive shooting mode in other ways.
It will be appreciated that, in some implementations of the present application, after the first terminal is determined to be in the interactive shooting mode via the first or second approach above, steps A01 to A04 or steps B01 to B04 may additionally be performed to determine whether the second terminal satisfies the construction conditions for the scene to be constructed, and, if so, a scene construction instruction carrying the scene construction information is generated; this is not repeated here.
Optionally, in one application scenario, the scene construction information in the foregoing embodiments may include a preview frame image of the first terminal, and the scene construction instruction may instruct the second terminal to perform image recognition on the preview frame image and execute preset types of auxiliary shooting actions according to the recognition result, so that the first terminal obtains the target shooting scene described above.
Optionally, the preset types of auxiliary shooting actions may include one or more of: posture adjustment, outputting shooting suggestions, and obtaining and displaying a target scene image corresponding to the preview frame image.
Optionally, when the first terminal is a mobile phone, the second terminal is a robot, and the scene construction information is a preview frame image of the first terminal, the phone may send the preview frame image shown on the preview display interface of its shooting application to the robot. The robot may perform image recognition on it and, upon recognizing that the preview frame image contains the robot itself, execute one or more of the auxiliary shooting actions above — adjusting its own posture, outputting shooting suggestions, or obtaining and displaying a target scene image corresponding to the preview frame image — according to the positions and movements of the robot and the people posing with it in the preview frame image.
For example, as shown in Fig. 4, the phone may send the preview frame image 41 displayed on its preview interface to the robot. By recognizing this image, the robot determines that it contains the robot itself 411 and the phone user 412 posing with it, and adjusts its own movements and expression according to the position and movements of user 412, so that the preview displayed on the phone updates to preview frame image 42, cooperating with the phone user in image capture.
Optionally, while adjusting its movements and expression, the robot may also output corresponding shooting suggestions, for example prompting the user to stand more than one meter away because it is about to perform a dance, or prompting the user to stand on its left side and rest a hand on its shoulder.
Optionally, when the robot is equipped with a display screen, it may also obtain and display the target scene image corresponding to the preview frame image.
Here, the target scene image is an image matching the style of the target user in the preview frame image.
For example, as shown in Fig. 5, the robot has a display screen 511a on its chest, so that robot 51 can obtain and display a matching target scene image according to information such as the clothing style, body shape and age of the user 512 posing with it in the preview frame image; different users photographed with the same robot thus obtain images of different styles.
It should be noted that the above merely exemplifies the preset types of auxiliary shooting actions; other embodiments of the present application may include more or fewer types, and the specific execution of each type of auxiliary shooting action may depend on the type of the second terminal.
Optionally, when the second terminal is a car and the preview frame image is one obtained by the user shooting the scenery outside the car window, the second terminal may obtain different target scene images according to the different scenery recognized, and display the target scene image on the display of the car window.
For example, when a blue sky with white clouds is recognized outside the window, a matching target scene image may be obtained and displayed to yield the target shooting scene, e.g. one containing kites of different sizes flying in the blue sky; when a night sky is recognized, a matching target scene image may be obtained, e.g. yielding a target shooting scene of a night sky full of stars; when grassland is recognized, a matching target scene image may be obtained and displayed, e.g. yielding a target shooting scene containing cattle and sheep grazing.
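The scenery-to-image matching in the car example above amounts to a lookup from a recognized label to an overlay image. A minimal sketch, in which the labels and file names are purely illustrative assumptions:

```python
# Hypothetical mapping from a recognized scenery label to the target
# scene image shown on the car-window display.
SCENE_IMAGES = {
    "blue_sky": "kites_in_blue_sky.png",
    "night_sky": "starry_night.png",
    "grassland": "grazing_cattle_and_sheep.png",
}


def pick_target_scene_image(recognized_label):
    """Return the overlay image matching the recognized scenery, or
    None when no matching target scene image is available."""
    return SCENE_IMAGES.get(recognized_label)


print(pick_target_scene_image("night_sky"))  # starry_night.png
```

A real implementation would feed `recognized_label` from an image-recognition model run on the preview frame; the dictionary stands in for whatever scene-matching logic the terminal actually uses.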
Optionally, when the second terminal is a refrigerator, performing a posture adjustment may refer to actions the refrigerator can execute, such as opening its door.
It should be noted that the above merely exemplifies the content of the scene construction information. In some embodiments of the present application, the scene construction information may also include voice information of speech to be played; the scene construction instruction then further instructs the second terminal to play that speech according to the voice information, so as to output speech for video shooting.
It should also be noted that, in some embodiments of the present application, the recognition of the first terminal's preview frame image may be performed not only on the second terminal but also on the first terminal. The first terminal may obtain a recognition result from the preview frame image and, according to that result, generate a scene construction instruction carrying one or more kinds of scene construction information among posture adjustment parameters and image information corresponding to the target scene image, and send it to the second terminal, which then constructs the scene according to that scene construction information to obtain the target shooting scene. Here, the target scene image is the scene image corresponding to the preview frame image.
For example, when the first terminal is a phone and the second terminal is a robot, the phone may perform image recognition on the preview frame image displayed on the preview interface of its shooting application and, upon recognizing that the image contains the robot, send the robot a scene construction instruction carrying one or more kinds of scene construction information among posture adjustment parameters and image information corresponding to the target scene image, according to the positions and movements of the robot and the people posing with it in the preview frame image. In addition, shooting suggestions may be output to the phone user.
As another example, when the second terminal is a car and the preview frame image is one obtained by the user shooting the scenery outside the car window, the first terminal may obtain different target scene images according to the different scenery recognized, and send the car a scene construction instruction carrying the image information of the target scene image, so that the car displays it on the display of its window.
For example, when a blue sky with white clouds is recognized outside the window, a target scene image matching the blue sky may be obtained; when a night sky is recognized, a target scene image matching the night sky may be obtained; when grassland is recognized, a target scene image matching the grassland may be obtained. The first terminal then sends the car a scene construction instruction carrying the image information of the target scene image, so that the car displays it on the display of its window.
Optionally, in some implementations of the present application, when the first terminal and the second terminal are in a communication connection state and the first terminal is in the interactive shooting mode, the user of the first terminal may, according to the actual application scenario, directly pick the scene type of the scene to be constructed and control the second terminal's scene construction by sending it a scene construction instruction carrying the scene construction information corresponding to that scene type.
For example, the scene type of the scene to be constructed may include the image type of a scene image to be displayed, and/or the voice type of speech to be played.
Optionally, in some implementations of the present application, in steps A04 and B04 above, when determining based on the scene type of the scene to be constructed whether the second terminal satisfies the construction conditions, if the scene type includes the image type of a scene image to be displayed, it may be detected whether the second terminal contains a display screen for displaying that scene image; if it does, the second terminal is determined to satisfy the construction conditions for the scene to be constructed.
Correspondingly, in step 102 above, generating the scene construction instruction carrying the scene construction information may refer to generating a scene construction instruction carrying the image information corresponding to the scene image to be displayed. This scene construction instruction instructs the second terminal to display the scene image according to the image information, yielding the auxiliary shooting scene.
Optionally, in some implementations of the present application, in steps A04 and B04 above, when determining based on the scene type of the scene to be constructed whether the second terminal satisfies the construction conditions, if the scene type includes the voice type of speech to be played, it is detected whether the second terminal contains a speaker for playing that speech; if it does, the second terminal is determined to satisfy the construction conditions for the scene to be constructed.
Correspondingly, in step 102 above, generating the scene construction instruction carrying the scene construction information may refer to generating a scene construction instruction carrying the voice information corresponding to the speech to be played. This scene construction instruction instructs the second terminal to play the speech according to the voice information, yielding the auxiliary shooting scene.
It should be noted that, in the embodiments of the present application, the scene to be constructed may involve multiple scene types — for example, two or more of the image type of a scene image to be displayed, the voice type of speech to be played, and the posture type corresponding to a target posture; the present application places no restriction on this.
Correspondingly, determining based on the scene type of the scene to be constructed whether the second terminal satisfies the construction conditions may mean: determining separately, for each scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions.
In the embodiments of the present application, when in the interactive shooting mode, the first terminal interacts with the second terminal by sending it the scene construction instruction, so that the second terminal constructs a scene according to the scene construction information. Multimedia information such as images or sound is thereby superimposed on the first terminal's current shooting scene, producing a processed target shooting scene. The first terminal can then shoot in the target shooting scene constructed with the second terminal's assistance and obtain the target shot image corresponding to that scene, so that the first terminal can capture processed images without relying on its own image-processing functions.
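The scene construction instruction described above could be serialized as a small structured message. The application specifies no wire format, so the JSON layout and field names below are assumptions for illustration only:

```python
import json


def build_scene_construction_instruction(scene_type, payload):
    """Assemble a scene construction instruction as a JSON message.
    One payload field is filled per scene type."""
    info = {"scene_type": scene_type}
    if scene_type == "image":
        info["image_info"] = payload   # scene image to display
    elif scene_type == "voice":
        info["voice_info"] = payload   # speech to play
    return json.dumps({"cmd": "construct_scene",
                       "scene_construction_info": info})


msg = build_scene_construction_instruction("image", "snowflakes")
decoded = json.loads(msg)
print(decoded["scene_construction_info"]["image_info"])  # snowflakes
```

A multi-type scene (e.g. image plus voice) would simply carry several payload fields in `scene_construction_info`, matching the per-type condition checks described above.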
By way of example, Fig. 6 shows a schematic flowchart of a second implementation of an auxiliary shooting method provided by an embodiment of the present application. This auxiliary shooting method is applied to the second terminal, may be executed by an auxiliary shooting apparatus of the second terminal, and may be implemented through steps 601 and 602 below.
Step 601: receive a scene construction instruction carrying scene construction information sent by the first terminal, the scene construction instruction having been generated by the first terminal while in the interactive shooting mode;
Step 602: construct a scene according to the scene construction information to obtain an auxiliary shooting scene.
In the embodiments of the present application, the auxiliary shooting scene is superimposed on the actual scene of the environment in which the first terminal is located, yielding the target shooting scene described above.
Optionally, in some embodiments of the present application, when the scene construction information includes a preview frame image of the first terminal, constructing a scene according to the scene construction information to obtain the auxiliary shooting scene may be implemented as follows: perform image recognition on the preview frame image, and execute preset types of auxiliary shooting actions according to the recognition result, to obtain the auxiliary shooting scene.
Here, the preset types of auxiliary shooting actions may include one or more of: posture adjustment, outputting shooting suggestions, and obtaining and displaying a target scene image corresponding to the preview frame image.
Optionally, in some embodiments of the present application, the second terminal may be a terminal containing a display screen, and the scene construction information may include image information corresponding to a scene image to be displayed;
constructing a scene according to the scene construction information to obtain the auxiliary shooting scene then includes: displaying an image according to the image information to obtain the auxiliary shooting scene.
Optionally, in some embodiments of the present application, the second terminal may be a terminal containing a speaker, and the scene construction information includes voice information corresponding to speech to be played; constructing a scene according to the scene construction information to obtain the auxiliary shooting scene then includes: playing speech according to the voice information to obtain the auxiliary shooting scene.
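The second terminal's handling of step 602 is essentially a dispatch over the kinds of scene construction information received. A minimal sketch, where the class, key names and recognition stub are all illustrative assumptions:

```python
class AssistTerminal:
    """Stand-in second terminal with a screen, a speaker and a
    trivial image-recognition stub."""
    has_screen = True
    has_speaker = True

    def recognize(self, frame):
        # Placeholder for real image recognition on the preview frame.
        return "user_detected"


def construct_scene(terminal, info):
    """Sketch of step 602: execute one action per kind of scene
    construction information present in the instruction."""
    actions = []
    if "preview_frame" in info:
        result = terminal.recognize(info["preview_frame"])
        actions.append(("auxiliary_action", result))
    if "image_info" in info and terminal.has_screen:
        actions.append(("display", info["image_info"]))
    if "voice_info" in info and terminal.has_speaker:
        actions.append(("play", info["voice_info"]))
    return actions  # the executed actions make up the auxiliary scene


print(construct_scene(AssistTerminal(),
                      {"image_info": "snow", "voice_info": "songA"}))
# [('display', 'snow'), ('play', 'songA')]
```

Note that each branch is also guarded by the corresponding hardware capability, mirroring the display-screen and speaker conditions stated above.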
In the embodiments of the present application, for the specific implementation of steps 601 and 602 above and of the embodiments further derived from them, reference may be made to the descriptions of the embodiments corresponding to Figs. 1 to 5 above, which are not repeated here.
Compared with the auxiliary shooting method whose executing entity is the first terminal, in some implementations of the present application the second terminal, as the executing entity of the auxiliary shooting method, may also capture speech played by the first terminal, or capture speech of the first terminal's user, and output speech for assisting video shooting based on the captured speech.
For example, when the first terminal is a phone and the second terminal is a robot, the phone user may, while shooting video, converse with the robot or sing a duet with it; the robot may capture the user's speech, or the speech played by the phone, and output corresponding speech to cooperate with the user in video recording.
In the embodiments of the present application, the second terminal receives the scene construction instruction carrying the scene construction information, generated and sent by the first terminal while in the interactive shooting mode, and constructs a scene according to that information, thereby superimposing multimedia information such as images or sound on the first terminal's current shooting scene to obtain a processed target shooting scene. The first terminal can then shoot in the target shooting scene constructed with the second terminal's assistance and obtain the corresponding target shot image, capturing processed images without relying on its own image-processing functions.
Optionally, in some implementations of the present application, some of the steps of the foregoing implementations may be executed by plugins configured on the first terminal or the second terminal, and these plugins may be stored on a cloud server. Fig. 7 is a schematic diagram of the interaction among the first terminal, the second terminal and the cloud server provided by an embodiment of the present application; the auxiliary shooting method includes the following steps.
Step 701: the first terminal establishes a communication connection with the second terminal;
Step 702: the first terminal obtains the scene construction type information corresponding to the second terminal, i.e. the scene construction types supported by the second terminal.
Specifically, after launching its shooting application, the first terminal may scan for devices via a wireless connection method such as Bluetooth, Wi-Fi or ZigBee, and obtain device information such as the device type, device type ID and version information of the terminals found by the scan. The first terminal then sends the cloud server a data request carrying this device information, and receives from the cloud server — according to its stored device information table and shooting scene table — the authentication information corresponding to the device information, as well as the scene construction type information corresponding to the terminal to be connected. Finally, a physical data channel is established with the terminal to be connected based on the authentication information; this terminal to be connected is then the second terminal, and its scene construction type information is the scene construction type information corresponding to the second terminal.
For example, Table 1 below is the device information table stored on the cloud server, and Table 2 below is the shooting scene table stored on the cloud server.
Table 1:
| Device type | Device type ID | Discovery protocol | Data protocol | Authentication info | Version info |
| Phone A | UUID_XXXXX | Bluetooth | BLE | authentication info | version info |
| Robot A | UUID_YYYYY | Bluetooth | Wi-Fi P2P | authentication info | version info |
Table 2:
| Scene ID | Device type ID | Scene name |
| 100001 | UUID_XXXXX | Snowflake image |
| 100002 | UUID_XXXXX | Falling-petal image |
| 100003 | UUID_YYYYY | Dancing angel |
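The cloud server's handling of the data request in step 702 can be sketched as a lookup against in-memory stand-ins for Tables 1 and 2. The dictionary layout and the `auth-*` values are assumptions for illustration; the table contents mirror the tables above:

```python
# In-memory stand-ins for the device information table (Table 1) and
# the shooting scene table (Table 2).
DEVICE_TABLE = {
    "UUID_XXXXX": {"type": "Phone A", "auth": "auth-x"},
    "UUID_YYYYY": {"type": "Robot A", "auth": "auth-y"},
}
SCENE_TABLE = [
    {"scene_id": "100001", "device_type_id": "UUID_XXXXX", "name": "Snowflake image"},
    {"scene_id": "100002", "device_type_id": "UUID_XXXXX", "name": "Falling-petal image"},
    {"scene_id": "100003", "device_type_id": "UUID_YYYYY", "name": "Dancing angel"},
]


def handle_data_request(device_type_id):
    """Cloud side of step 702: return the authentication information
    and the scene construction types for the scanned device."""
    device = DEVICE_TABLE[device_type_id]
    scenes = [row["name"] for row in SCENE_TABLE
              if row["device_type_id"] == device_type_id]
    return device["auth"], scenes


auth, scenes = handle_data_request("UUID_YYYYY")
print(scenes)  # ['Dancing angel']
```

The first terminal would then use the returned authentication information to open the physical data channel, and the returned scene names to build the selection interface of step 703.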
Step 703: determine the scene type of the scene to be constructed based on the scene construction type information corresponding to the second terminal.
Optionally, the first terminal first generates a scene construction type selection interface from the scene construction type information corresponding to the second terminal, and then, in response to a scene construction type selection operation triggered by the user on that interface, determines the scene type of the scene to be constructed.
For example, the first terminal is a phone and the second terminal is a car. The scene construction type information corresponding to the car may include: displaying scene images such as snowflake images, starry-sky images, falling-petal images, grazing-cattle-and-sheep images and a dancing angel, as well as playing speech A, speech B, speech C and so on. When the user triggers the snowflake image on the scene construction type selection interface 81 shown in Fig. 8, the scene type of the scene to be constructed is determined to include the image type of a scene image to be displayed, that image type being the snowflake image.
Step 704: the first terminal downloads from the cloud server a first plugin corresponding to the scene to be constructed, e.g. plugin K, then stores and runs it, thereby performing method steps such as determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions for that scene; setting the shooting mode of the first terminal to the interactive shooting mode when the second terminal satisfies the construction conditions; generating the scene construction instruction carrying the scene construction information; and sending that instruction to the second terminal.
For example, plugin K detects whether the second terminal contains a display screen for displaying the scene image to be displayed; if it does, the second terminal is determined to satisfy the construction conditions for the scene to be constructed.
Step 705: the second terminal downloads from the cloud server a second plugin corresponding to the scene to be constructed, e.g. plugin J, then stores and runs it, so as to construct a scene according to the scene construction information and obtain the auxiliary shooting scene.
In the embodiments of the present application, as shown in Table 3 below, the cloud server may store plugins, corresponding to different types of terminals, for constructing different scenes.
Table 3:
| Plugin ID | Scene ID | Device type | Plugin name | Plugin multimedia resources |
| PLUG_00001 | 100001 | Phone A | K | effect animations, images, fonts, scripts, etc. |
| PLUG_00002 | 100001 | Car A | J | effect animations, images, fonts, scripts, etc. |
| PLUG_00003 | 100003 | Phone B | P | effect animations, images, fonts, scripts, etc. |
| PLUG_00004 | 100003 | Car B | Q | effect animations, images, fonts, scripts, etc. |
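The plugin selection in steps 704 and 705 keys on both the chosen scene and the requesting device's type, as Table 3 shows. A minimal sketch with the table mirrored in memory (field names are illustrative):

```python
# In-memory stand-in for the plugin table (Table 3).
PLUGIN_TABLE = [
    {"plugin_id": "PLUG_00001", "scene_id": "100001", "device": "Phone A", "name": "K"},
    {"plugin_id": "PLUG_00002", "scene_id": "100001", "device": "Car A",   "name": "J"},
    {"plugin_id": "PLUG_00003", "scene_id": "100003", "device": "Phone B", "name": "P"},
    {"plugin_id": "PLUG_00004", "scene_id": "100003", "device": "Car B",   "name": "Q"},
]


def find_plugin(scene_id, device):
    """Cloud side of steps 704/705: each terminal downloads the plugin
    matching both the chosen scene and its own device type."""
    for row in PLUGIN_TABLE:
        if row["scene_id"] == scene_id and row["device"] == device:
            return row["name"]
    return None  # no plugin published for this scene/device pair


print(find_plugin("100001", "Phone A"), find_plugin("100001", "Car A"))  # K J
```

For scene 100001, the phone (first terminal) thus fetches plugin K while the car (second terminal) fetches plugin J, matching the bidirectional-plugin design described below.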
Optionally, in some embodiments, the second terminal may also have the first terminal download the second plugin corresponding to the scene to be constructed from the cloud server, and then obtain the second plugin from the first terminal.
Step 706: uninstall the first plugin and the second plugin.
For example, after the first terminal finishes shooting, plugin K and plugin J are uninstalled.
In the embodiments of the present application, by storing the first and second plugins corresponding to the scene to be constructed on a cloud server and managing device information and plugins in the cloud, the business logic of shooting can be separated from the first and second terminals, and capability matching and interaction are realized in the form of bidirectional plugins. This provides a better shooting experience and allows flexible configuration of the interconnection capabilities between terminals.
In the present application, each of the foregoing method embodiments is, for simplicity of description, expressed as a series of action combinations; however, those skilled in the art will appreciate that the present application is not limited by the described order of actions, and in some embodiments of the present application certain steps may be performed in other orders.
By way of example, Fig. 9 shows a schematic structural diagram of an auxiliary shooting apparatus 900 provided by an embodiment of the present application. The auxiliary shooting apparatus is configured on the first terminal and includes a generating unit 901 and a sending unit 902.
The generating unit 901 is configured to generate, when the first terminal is in the interactive shooting mode, a scene construction instruction carrying scene construction information;
The sending unit 902 is configured to send the scene construction instruction to the second terminal to obtain a target shooting scene; the scene construction instruction instructs the second terminal to construct a scene according to the scene construction information to obtain an auxiliary shooting scene, and the target shooting scene contains the auxiliary shooting scene constructed by the second terminal according to the scene construction information.
Optionally, in some embodiments of the present application, the auxiliary shooting apparatus 900 is further configured to:
set the shooting mode of the first terminal to the interactive shooting mode when the first terminal and the second terminal are in a communication connection state;
or,
set the shooting mode of the first terminal to the interactive shooting mode when an interactive shooting control provided on the shooting interface of the first terminal is triggered.
Optionally, in some embodiments of the present application, the auxiliary shooting apparatus 900 is further configured to:
obtain, when the first terminal and the second terminal are in a communication connection state, the scene construction type information corresponding to the second terminal;
generate a scene construction type selection interface based on the scene construction type information;
in response to a scene construction type selection operation triggered by the user on the scene construction type selection interface, determine the scene type of the scene to be constructed based on that operation;
determine, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions for the scene to be constructed;
if the second terminal satisfies the construction conditions for the scene to be constructed, set the shooting mode of the first terminal to the interactive shooting mode;
or,
if the second terminal satisfies the construction conditions for the scene to be constructed, add an interactive shooting control to the shooting interface of the first terminal, and set the shooting mode of the first terminal to the interactive shooting mode when that control is triggered.
Optionally, in some embodiments of the present application, the scene construction information includes a preview frame image of the first terminal;
the scene construction instruction instructs the second terminal to perform image recognition on the preview frame image and execute preset types of auxiliary shooting actions according to the recognition result, to obtain the auxiliary shooting scene;
the preset types of auxiliary shooting actions include one or more of posture adjustment, outputting shooting suggestions, and obtaining and displaying a target scene image corresponding to the preview frame image.
Optionally, in some embodiments of the present application, the generating unit is specifically configured to:
recognize the preview frame image of the first terminal to obtain a recognition result;
generate, according to the recognition result, a scene construction instruction carrying one or more kinds of scene construction information among posture adjustment parameters and the target scene image corresponding to the preview frame image.
Optionally, in some embodiments of the present application, the scene type of the scene to be constructed includes the image type of a scene image to be displayed, and when determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions for that scene, the apparatus is specifically configured to:
detect whether the second terminal contains a display screen for displaying the scene image to be displayed;
if the second terminal contains a display screen for displaying the scene image to be displayed, determine that the second terminal satisfies the construction conditions for the scene to be constructed;
and the generating unit is specifically configured to:
generate a scene construction instruction carrying the image information corresponding to the scene image to be displayed; the scene construction instruction instructs the second terminal to display the scene image according to the image information, to obtain the auxiliary shooting scene.
Optionally, in some embodiments of the present application, the scene type of the scene to be constructed includes the voice type of speech to be played; when determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions for that scene, the apparatus is specifically configured to:
detect whether the second terminal contains a speaker for playing the speech to be played;
if the second terminal contains a speaker for playing the speech to be played, determine that the second terminal satisfies the construction conditions for the scene to be constructed;
and the generating unit is specifically configured to:
generate a scene construction instruction carrying the voice information corresponding to the speech to be played; the scene construction instruction instructs the second terminal to play the speech according to the voice information, to obtain the auxiliary shooting scene.
By way of example, Fig. 10 shows a schematic structural diagram of another auxiliary shooting apparatus 110 provided by an embodiment of the present application. The auxiliary shooting apparatus is configured on the second terminal and includes a receiving unit 111 and a construction unit 112.
The receiving unit 111 is configured to receive a scene construction instruction carrying scene construction information sent by the first terminal, the scene construction instruction having been generated by the first terminal while in the interactive shooting mode;
The construction unit 112 is configured to construct a scene according to the scene construction information, to obtain an auxiliary shooting scene.
Optionally, in some implementations of the present application, the scene construction information includes a preview frame image of the first terminal;
constructing a scene according to the scene construction information to obtain an auxiliary shooting scene includes:
performing image recognition on the preview frame image and executing preset types of auxiliary shooting actions according to the recognition result, to obtain the auxiliary shooting scene;
the preset types of auxiliary shooting actions include one or more of posture adjustment, outputting shooting suggestions, and obtaining and displaying a target scene image corresponding to the preview frame image.
Optionally, in some implementations of the present application, the second terminal may be a terminal containing a display screen, and the scene construction information includes image information corresponding to a scene image to be displayed;
the construction unit is specifically configured to: display an image according to the image information, to obtain the auxiliary shooting scene.
Optionally, in some implementations of the present application, the second terminal may be a terminal containing a speaker, and the scene construction information includes voice information corresponding to speech to be played;
the construction unit is specifically configured to: play speech according to the voice information, to obtain the auxiliary shooting scene.
It should be noted that, for convenience and brevity of description, for the specific working processes of the auxiliary shooting apparatus 900 and the auxiliary shooting apparatus 110 described above, reference may be made to the corresponding processes of the auxiliary shooting methods described above, which are not repeated here.
By way of example, Fig. 11 shows a schematic structural diagram of a terminal provided by an embodiment of the present application. The terminal may be the first terminal or the second terminal described above, and may include: a processor 121, a memory 122, one or more input devices 123 (only one shown in Fig. 11) and one or more output devices 124 (only one shown in Fig. 11). The processor 121, memory 122, input device 123 and output device 124 are connected via a bus 125.
Optionally, when the second terminal is a terminal containing a display screen, the display screen may be one with a transparent display function, and when the second terminal is a car, the display screen may be located on the car's door windows and/or windshield and/or sunroof.
It should be understood that, in the embodiments of the present application, the processor 121 may be a central processing unit (CPU); it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input devices 123 may include a virtual keyboard, a touchpad, a fingerprint sensor (for collecting the user's fingerprint information and the orientation information of the fingerprint), a microphone, etc.; the output devices 124 may include a display, a speaker, etc.
The memory 122 stores a computer program runnable on the processor 121, for example a program of an auxiliary shooting method. When executing the computer program, the processor 121 implements the steps of the auxiliary shooting method embodiments above, e.g. steps 101 and 102 shown in Fig. 1, or steps 601 and 602 shown in Fig. 6; when executing the computer program, the processor 121 also implements the functions in the apparatus embodiments above, e.g. the functions of units 901 and 902 shown in Fig. 9, or of units 111 and 112 shown in Fig. 10.
The computer program may be divided into one or more modules/units, which are stored in the memory 122 and executed by the processor 121 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of accomplishing specific functions, the instruction segments describing the execution of the computer program in the first terminal or second terminal performing auxiliary shooting.
By way of example, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed, implements the steps of each of the auxiliary shooting methods above.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the division into the functional units above is merely illustrative; in practical applications, the functions above may be assigned to different functional units and modules as needed, i.e. the internal structure of the apparatus may be divided into different functional units or modules to accomplish all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from one another and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the system above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the embodiments above, the description of each embodiment has its own emphasis; for parts not detailed or described in a given embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or in software depends on the specific application of the auxiliary shooting method and the design constraints. Skilled artisans may implement the described functions using different methods for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/user terminal and method may be implemented in other ways. For example, the apparatus/user-terminal embodiments described above are merely illustrative; e.g. the division into modules or units is only a logical functional division, and there may be other divisions in actual implementation, e.g. multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes of the method embodiments above by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor it can implement the steps of each of the method embodiments above. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately added to or removed from in accordance with the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The embodiments above are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent substitutions for some of the method features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.

Claims (20)

  1. An auxiliary shooting method, wherein the auxiliary shooting method is applied to a first terminal and comprises:
    generating, when the first terminal is in an interactive shooting mode, a scene construction instruction carrying scene construction information;
    sending the scene construction instruction to a second terminal to obtain a target shooting scene; the scene construction instruction is used to instruct the second terminal to construct a scene according to the scene construction information to obtain an auxiliary shooting scene; the target shooting scene contains the auxiliary shooting scene constructed by the second terminal according to the scene construction information.
  2. The auxiliary shooting method of claim 1, wherein the auxiliary shooting method comprises:
    setting the shooting mode of the first terminal to the interactive shooting mode when the first terminal and the second terminal are in a communication connection state.
  3. The auxiliary shooting method of claim 1, wherein the auxiliary shooting method comprises:
    setting the shooting mode of the first terminal to the interactive shooting mode when an interactive shooting control provided on a shooting interface of the first terminal is triggered.
  4. The auxiliary shooting method of claim 1, wherein the auxiliary shooting method comprises:
    obtaining, when the first terminal and the second terminal are in a communication connection state, scene construction type information corresponding to the second terminal;
    generating a scene construction type selection interface based on the scene construction type information;
    in response to a scene construction type selection operation triggered by a user on the scene construction type selection interface, determining a scene type of a scene to be constructed based on the scene construction type selection operation;
    determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies construction conditions for the scene to be constructed;
    if the second terminal satisfies the construction conditions for the scene to be constructed, setting the shooting mode of the first terminal to the interactive shooting mode;
    or,
    if the second terminal satisfies the construction conditions for the scene to be constructed, adding an interactive shooting control to the shooting interface of the first terminal, and setting the shooting mode of the first terminal to the interactive shooting mode when the interactive shooting control is triggered.
  5. The auxiliary shooting method of claim 4, wherein the auxiliary shooting method comprises:
    if the second terminal does not satisfy the construction conditions for the scene to be constructed, setting the first terminal to a photo shooting mode or a video shooting mode when a shooting application of the first terminal is opened.
  6. The auxiliary shooting method of any one of claims 1, 4 and 5, wherein the scene construction information comprises a preview frame image of the first terminal;
    the scene construction instruction is used to instruct the second terminal to perform image recognition on the preview frame image and to execute preset types of auxiliary shooting actions according to a recognition result, to obtain the auxiliary shooting scene;
    the preset types of auxiliary shooting actions comprise one or more of posture adjustment, outputting shooting suggestions, and obtaining and displaying a target scene image corresponding to the preview frame image.
  7. The auxiliary shooting method of any one of claims 1, 4 and 5, wherein generating the scene construction instruction carrying scene construction information comprises:
    recognizing a preview frame image of the first terminal to obtain a recognition result;
    generating, according to the recognition result, a scene construction instruction carrying one or more kinds of scene construction information among posture adjustment parameters and a target scene image corresponding to the preview frame image.
  8. The auxiliary shooting method of claim 4, wherein the scene type of the scene to be constructed comprises an image type of a scene image to be displayed, and determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions for the scene to be constructed comprises:
    detecting whether the second terminal contains a display screen for displaying the scene image to be displayed;
    if the second terminal contains a display screen for displaying the scene image to be displayed, determining that the second terminal satisfies the construction conditions for the scene to be constructed;
    generating the scene construction instruction carrying scene construction information comprises:
    generating a scene construction instruction carrying image information corresponding to the scene image to be displayed; the scene construction instruction is used to instruct the second terminal to display the scene image to be displayed according to the image information, to obtain the auxiliary shooting scene.
  9. The auxiliary shooting method of claim 4, wherein the scene type of the scene to be constructed comprises a voice type of speech to be played; determining, based on the scene type of the scene to be constructed, whether the second terminal satisfies the construction conditions for the scene to be constructed comprises:
    detecting whether the second terminal contains a speaker for playing the speech to be played;
    if the second terminal contains a speaker for playing the speech to be played, determining that the second terminal satisfies the construction conditions for the scene to be constructed;
    generating the scene construction instruction carrying scene construction information comprises:
    generating a scene construction instruction carrying voice information corresponding to the speech to be played; the scene construction instruction is used to instruct the second terminal to play the speech to be played according to the voice information, to obtain the auxiliary shooting scene.
  10. An auxiliary shooting method, wherein the auxiliary shooting method is applied to a second terminal and comprises:
    receiving a scene construction instruction carrying scene construction information sent by a first terminal, the scene construction instruction being generated by the first terminal while in an interactive shooting mode;
    constructing a scene according to the scene construction information, to obtain an auxiliary shooting scene.
  11. The auxiliary shooting method of claim 10, wherein the scene construction information comprises a preview frame image of the first terminal;
    constructing a scene according to the scene construction information to obtain an auxiliary shooting scene comprises:
    performing image recognition on the preview frame image and executing preset types of auxiliary shooting actions according to a recognition result, to obtain the auxiliary shooting scene;
    the preset types of auxiliary shooting actions comprise one or more of posture adjustment, outputting shooting suggestions, and obtaining and displaying a target scene image corresponding to the preview frame image.
  12. The auxiliary shooting method of claim 10, wherein the second terminal is a terminal containing a display screen, and the scene construction information comprises image information corresponding to a scene image to be displayed;
    constructing a scene according to the scene construction information to obtain an auxiliary shooting scene comprises:
    displaying an image according to the image information, to obtain the auxiliary shooting scene.
  13. The auxiliary shooting method of claim 10, wherein the second terminal is a terminal containing a speaker, and the scene construction information comprises voice information corresponding to speech to be played;
    constructing a scene according to the scene construction information to obtain an auxiliary shooting scene comprises:
    playing speech according to the voice information, to obtain the auxiliary shooting scene.
  14. An auxiliary shooting apparatus, wherein the auxiliary shooting apparatus is configured on a first terminal and comprises:
    a generating unit, configured to generate, when the first terminal is in an interactive shooting mode, a scene construction instruction carrying scene construction information;
    a sending unit, configured to send the scene construction instruction to a second terminal to obtain a target shooting scene; the scene construction instruction is used to instruct the second terminal to construct a scene according to the scene construction information to obtain an auxiliary shooting scene; the target shooting scene contains the auxiliary shooting scene constructed by the second terminal according to the scene construction information.
  15. An auxiliary shooting apparatus, wherein the auxiliary shooting apparatus is configured on a second terminal and comprises:
    a receiving unit, configured to receive a scene construction instruction carrying scene construction information sent by a first terminal, the scene construction instruction being generated by the first terminal while in an interactive shooting mode;
    a construction unit, configured to construct a scene according to the scene construction information, to obtain an auxiliary shooting scene.
  16. A terminal, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 9.
  17. A terminal, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 10 to 13.
  18. The terminal of claim 17, wherein the terminal comprises a display screen having a transparent display function.
  19. The terminal of claim 18, wherein the terminal is a car;
    the display screen is located on a door window and/or windshield and/or sunroof of the car.
  20. A computer-readable storage medium storing a computer program, wherein the computer program, when executed, implements the steps of the method of any one of claims 1 to 13.
PCT/CN2022/121755 2021-11-19 2022-09-27 Auxiliary shooting method and apparatus, terminal, and computer-readable storage medium WO2023087929A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111375248.2 2021-11-19
CN202111375248.2A CN114040108B (zh) 2021-11-19 2021-11-19 Auxiliary shooting method and apparatus, terminal, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2023087929A1 true WO2023087929A1 (zh) 2023-05-25

Family

ID=80138349

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/121755 WO2023087929A1 (zh) 2021-11-19 2022-09-27 一种辅助拍摄方法、装置、终端和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN114040108B (zh)
WO (1) WO2023087929A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040108B (zh) 2021-11-19 2023-12-01 杭州逗酷软件科技有限公司 Auxiliary shooting method and apparatus, terminal, and computer-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104104875A (zh) * 2014-07-23 2014-10-15 深圳市中兴移动通信有限公司 Method and device for setting shooting parameters during image capture
CN105872361A (zh) * 2016-03-28 2016-08-17 努比亚技术有限公司 Shooting guidance device, system and method
CN106210517A (zh) * 2016-07-06 2016-12-07 北京奇虎科技有限公司 Image data processing method and device, and mobile terminal
CN106303194A (zh) * 2015-05-28 2017-01-04 中兴通讯股份有限公司 Remote shooting method, master shooting terminal, controlled shooting terminal and shooting system
CN107835364A (zh) * 2017-10-30 2018-03-23 维沃移动通信有限公司 Photographing assistance method and mobile terminal
CN110177204A (zh) * 2019-04-29 2019-08-27 上海掌门科技有限公司 Photographing method, electronic device and computer-readable medium
CN114040108A (zh) * 2021-11-19 2022-02-11 杭州逗酷软件科技有限公司 Auxiliary shooting method and apparatus, terminal and computer-readable storage medium


Also Published As

Publication number Publication date
CN114040108A (zh) 2022-02-11
CN114040108B (zh) 2023-12-01
