CN112887601A - Shooting method and device and electronic equipment - Google Patents

Shooting method and device and electronic equipment

Info

Publication number
CN112887601A
Authority
CN
China
Prior art keywords
target object
picture
electronic device
shooting
target
Prior art date
Legal status
Granted
Application number
CN202110105249.9A
Other languages
Chinese (zh)
Other versions
CN112887601B (en)
Inventor
于雷
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110105249.9A priority Critical patent/CN112887601B/en
Publication of CN112887601A publication Critical patent/CN112887601A/en
Application granted granted Critical
Publication of CN112887601B publication Critical patent/CN112887601B/en
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device and electronic equipment, belonging to the field of communication technology. The method comprises the following steps: determining shooting parameters of a first target object; sending the shooting parameters to second electronic equipment so that the second electronic equipment shoots a second target object using those shooting parameters to obtain a second picture; and synthesizing the second target object in the second picture into a first picture shot with the same shooting parameters to obtain a target picture. When different target objects are shot with the same shooting parameters, the characteristics of the captured objects are similar; when a plurality of such target objects are synthesized into the same picture, their characteristics match closely, so the synthesized picture is more harmonious.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a shooting method, a shooting device and electronic equipment.
Background
The development of Internet technology has made linkage shooting possible: through a plurality of electronic devices, target objects located at different positions can be captured into the same picture. The basic principle of linkage shooting is that target objects at different positions are shot by different electronic devices to obtain a plurality of pictures; one of the pictures is selected as a background picture, the target objects in the other pictures are extracted, and the extracted target objects are synthesized into the background picture through image synthesis technology to obtain the target picture.
In the process of implementing the present application, the inventor found that the prior art has at least the following problem: in the image synthesis process, differences between the pictures shot by different electronic devices cause the target objects in the synthesized target picture to appear inconsistent with one another, so a picture meeting the user's requirements cannot be obtained through linkage shooting.
Summary of the application
The embodiment of the application aims to provide a shooting method, a shooting device and electronic equipment, which can solve the problem of inconsistency among target objects during linkage shooting.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a shooting method, where the method includes:
determining shooting parameters of a first target object;
sending the shooting parameters to second electronic equipment so that the second electronic equipment shoots a second target object by using the shooting parameters to obtain a second picture;
receiving image data of the second target object included in the second picture sent by the second electronic device;
and synthesizing the second target object in the second picture into a first picture according to the image data of the second target object to obtain a target picture, wherein the first picture is obtained by shooting the first target object based on the shooting parameters.
In a second aspect, an embodiment of the present application provides a shooting method, including:
receiving shooting parameters sent by first electronic equipment;
shooting a second target object based on the shooting parameters to obtain a second picture;
sending the image data of the second target object included in the second picture to the first electronic device, so that the first electronic device synthesizes the second target object in the second picture into the first picture to obtain a target picture;
the first picture is obtained by shooting at least one first target object by the first electronic device based on the shooting parameters.
In a third aspect, an embodiment of the present application provides an image capturing apparatus, including:
the determining module is used for determining shooting parameters of the first target object;
the sending module is used for sending the shooting parameters to second electronic equipment so that the second electronic equipment can shoot a second target object by using the shooting parameters to obtain a second picture;
a receiving module, configured to receive image data of the second target object included in the second picture sent by the second electronic device;
and the synthesis module is used for synthesizing the second target object in the second picture into a first picture to obtain a target picture, and the first picture is obtained by shooting the first target object based on the shooting parameters.
In a fourth aspect, an embodiment of the present application provides an image capturing apparatus, including:
the receiving module is used for receiving shooting parameters sent by the first electronic equipment;
the shooting module is used for shooting a second target object based on the shooting parameters to obtain a second picture;
a sending module, configured to send, to the first electronic device, image data of the second target object included in the second picture, so that the first electronic device synthesizes the second target object in the second picture into a first picture, and obtains a target picture;
the first picture is obtained by shooting at least one first target object by the first electronic device based on the shooting parameters.
In a fifth aspect, an embodiment of the present application provides an electronic device comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, the first electronic device determines shooting parameters of a first target object, sends the shooting parameters to the second electronic device, so that the second electronic device shoots a second target object by using the shooting parameters to obtain a second picture, and synthesizes the second target object in the second picture into a first picture shot based on the same shooting parameters to obtain a target picture. The first electronic device and the second electronic device can shoot target objects located at different positions respectively through the same shooting parameters, when different target objects are shot through the same shooting parameters, the characteristics of the different target objects obtained through shooting are similar, and when the plurality of target objects are synthesized into the same picture, the characteristics of the plurality of target objects are close, so that the synthesized picture is more harmonious.
Drawings
FIG. 1 is a flow chart illustrating steps of a photographing method according to an exemplary embodiment;
FIG. 2 is a schematic illustration of a shot provided in accordance with an exemplary embodiment;
FIG. 3 is a flow chart of steps of another photographing method provided in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram of a picture composition provided in accordance with an exemplary embodiment;
FIG. 5 is a block diagram of an image capture device provided in accordance with an exemplary embodiment;
FIG. 6 is a block diagram of another image capture device provided in accordance with an exemplary embodiment;
FIG. 7 is a block diagram of an electronic device provided in accordance with an exemplary embodiment;
fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a photographing method according to an exemplary embodiment, which may include, as shown in fig. 1:
step 101, determining shooting parameters of a first target object.
In this embodiment, the shooting method is executed by a first electronic device, which may be a terminal device such as a mobile phone, a camera, a wearable device, or a computer. The first electronic device may first determine a first target object to be shot and the shooting parameters of the first target object, and then shoot the first target object based on those shooting parameters.
Illustratively, the first electronic device may include a display screen and a camera, display a shooting preview interface on the display screen, and show, through that interface, the preview image acquired by the camera in real time. The first electronic device may perform target detection on the preview image, and when a target object of a preset category is detected in the preview image, that object may be used as the first target object. For example, if the first electronic device detects a human body in the shooting preview interface, the human body may be used as the first target object; if it detects an animal in the preview image, the animal may be used as the first target object; both the human body and the animal are target objects of preset categories. The first target object may also be determined by user selection. For example, when the user observes a target object in the shooting preview interface and clicks on it, the electronic device may respond to the click by detecting within a preset range around the click position and using the target object detected there as the first target object. The method by which the first electronic device determines the first target object may include, but is not limited to, detection by a target detection model; existing target detection technologies, as well as target detection technologies that may appear in the future, may be applied to this embodiment to detect and determine the first target object.
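The selection flow described above (preset-category detection plus optional user tap) can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the `Detection` type, the `PRESET_CATEGORIES` set, and the stubbed-out detector output are all assumptions:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

PRESET_CATEGORIES = {"person", "animal"}  # assumed preset object categories

@dataclass
class Detection:
    label: str
    bbox: Tuple[int, int, int, int]  # x, y, width, height in preview pixels

def pick_first_target(detections: List[Detection],
                      tap: Optional[Tuple[int, int]] = None) -> Optional[Detection]:
    """Return the first target object: the detection the user tapped on if
    any, otherwise the first detection whose label is a preset category."""
    if tap is not None:
        tx, ty = tap
        for d in detections:  # user selection takes priority
            x, y, w, h = d.bbox
            if x <= tx <= x + w and y <= ty <= y + h:
                return d
    for d in detections:  # fall back to preset-category detection
        if d.label in PRESET_CATEGORIES:
            return d
    return None

detections = [Detection("tree", (0, 0, 50, 200)),
              Detection("person", (60, 20, 40, 120))]
print(pick_first_target(detections).label)                 # person
print(pick_first_target(detections, tap=(10, 10)).label)   # tree
```

In a real device the detections would come from a target detection model running on each preview frame; here they are hard-coded to keep the sketch self-contained.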
Optionally, the shooting parameters may include a shooting distance between the first target object and the first electronic device.
In one embodiment, the first electronic device may determine a shooting distance between the first target object and the first electronic device while determining the first target object. The shooting distance may be an actual distance between the first electronic device and the first target object, or a focal distance obtained after focusing the first target object. For example, after the first electronic device determines the human body, an actual distance between the human body and the first electronic device, that is, a shooting distance, may be detected by the distance sensor. Or, after the human body is determined, the first electronic device may focus the human body, and a focal distance obtained by focusing is used as the shooting distance.
In another embodiment, the first electronic device may also acquire a depth image of the first target object and determine the actual distance between the first target object and the first electronic device from the depth image. For example, the first electronic device may integrate binocular cameras. During image acquisition, it may simultaneously capture two images of the shooting scene through two cameras separated by a certain distance, find corresponding pixel points in the two images through a stereo matching algorithm, and compute disparity information according to the triangulation principle; the disparity information is then converted into a depth image representing the depth of objects in the shooting scene, in which the gray value of each pixel represents the distance from a point in the scene to the cameras. After the depth image is obtained, target detection may be performed on it to determine the position of the first target object in the depth image, for example the rectangular region occupied by a human body; the average distance between all pixel points in the rectangular region and the first electronic device may be used as the shooting distance, or the maximum or minimum distance among those pixel points may be used instead. The depth-image acquisition method and the specific method of determining the shooting distance from the depth image may be set as required, and this embodiment does not limit them.
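The depth-based distance estimate described above reduces to simple statistics over the target's rectangular region. A minimal Python sketch, assuming the depth image is already available as a per-pixel distance array (the array values and region below are made up for the example):

```python
import numpy as np

def shooting_distance(depth, bbox, mode="mean"):
    """depth: H x W array of per-pixel distances (e.g. meters);
    bbox: (x, y, w, h) rectangular region of the target.
    Returns the mean, max, or min distance over the region."""
    x, y, w, h = bbox
    region = depth[y:y + h, x:x + w]
    if mode == "mean":
        return float(region.mean())
    if mode == "max":
        return float(region.max())
    return float(region.min())

depth = np.full((4, 4), 2.0)   # background at 2.0 m
depth[1:3, 1:3] = 1.5          # target region is closer
print(shooting_distance(depth, (1, 1, 2, 2)))  # 1.5
```

Which statistic to use (mean, max, or min) is left open by the text, so the `mode` parameter here is an illustrative choice.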
It should be noted that the number of the first target objects may be one or more, the first target objects may be target objects in the categories of animals, human bodies, vehicles, buildings, plants, and the like, and the specific category of the first target objects is not limited in this embodiment.
In one embodiment, after determining the shooting parameters, the first electronic device may directly shoot the first target object using the shooting parameters, resulting in a first picture including the first target object. For example, the first electronic device may capture a first target object based on the capture parameters in response to a first input, resulting in a first picture. The first input is used for controlling the first electronic equipment to shoot a first target object to obtain a first picture. The first input is, for example, a click operation of clicking a physical key in the first electronic device or clicking a virtual key in the display screen. The first electronic device may, after receiving the first input, photograph the first target object based on the determined photographing parameters in response to the first input, resulting in a first picture.
Illustratively, as shown in fig. 2, fig. 2 is a schematic diagram of photographing according to an exemplary embodiment, a photographing distance 200 is an actual distance between a first electronic device 201 and a first target object 202, and after receiving a first input, the first electronic device 201 may directly photograph the first target object 202 in response to the first input, resulting in a first picture. When the shooting distance is the focal distance, the first electronic device may keep the focal distance unchanged, and shoot the first target object 202 to obtain the first picture.
It should be noted that, the first electronic device may detect the first target object and determine the shooting parameters before receiving the first input, and shoot according to the shooting parameters after receiving the first input; or after receiving the first input, first detecting to obtain the first target object and determining the shooting parameters, and then shooting to obtain the first picture, where this embodiment does not limit the shooting process of the first picture.
Step 102, sending the shooting parameters to the second electronic equipment so that the second electronic equipment shoots the second target object by using the shooting parameters to obtain a second picture.
The second electronic device may be an electronic device that establishes a communication connection with the first electronic device in advance before shooting, the second electronic device may be the same as or different from the first electronic device, and the second electronic device may be one or more devices. In the shooting process, the first electronic device is a master device, and the second electronic device is a slave device. The process of establishing the communication connection between the first electronic device and the second electronic device may be set according to requirements, and this embodiment does not limit this.
Illustratively, after establishing the communication connection with the first electronic device, the second electronic device may display the shooting preview interface, detect an object in the shooting preview interface, and determine a second target object in the shooting preview interface. The second target object may be a target object of the same type as the first target object or may be a target object of a different type from the first target object. The method for determining the second target object may refer to the first target object, which is not described in detail in this embodiment.
In this embodiment, the first electronic device may send the shooting parameters to the second electronic device, so that the second electronic device shoots the second target object based on the same shooting parameters, and obtains the second picture including the second target object. With reference to the above example, after receiving the first input, the first electronic device may send a shooting instruction and shooting parameters to the second electronic device in response to the first input. Correspondingly, after receiving the shooting instruction and the shooting parameters, the second electronic device can adjust the shooting parameters of the second target object to be consistent with the shooting parameters of the first target object in response to the shooting instruction, and shoot the second target object according to the adjusted shooting parameters.
In one embodiment, when the shooting parameters include the shooting distance, step 102 may be implemented as follows:
and sending the shooting distance to the second electronic equipment so that the second electronic equipment shoots a second target object which is away from the second electronic equipment by the shooting distance to obtain a second picture.
As shown in fig. 2, after receiving the shooting distance sent by the first electronic device, the second electronic device 203 may compare the detected actual distance 205 between the second target object 204 and the second electronic device 203 with the received shooting distance 200. If the difference between the actual distance 205 and the shooting distance 200 is less than or equal to a preset distance threshold, the second electronic device may directly control the camera to shoot and obtain the second picture. Conversely, if the difference is greater than the preset distance threshold, the second electronic device may output a prompt message prompting the user to move the second electronic device or the second target object, and control the camera to shoot the second picture once the difference between the actual distance and the shooting distance 200 is less than or equal to the preset distance threshold; the dotted-line area 206 in fig. 2 is the second target object after movement.
For another example, if the shooting distance is a focal length, the second electronic device may adjust the focal length of the camera to be consistent with the shooting distance. At this time, if the user is not in the focal distance, the second electronic device may output a prompt message to prompt the user to move the second electronic device or to move the second target object, so that the second target object is located in the focal distance. The form of the prompt message may include, but is not limited to, voice message, text message, pattern message, and the like, and the specific form of the prompt message is not limited in this embodiment.
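The slave-side distance check described in the two paragraphs above can be sketched as a small decision function. The threshold value and the prompt strings are illustrative assumptions, not taken from the patent:

```python
def check_distance(actual, target, threshold=0.2):
    """Compare the detected distance to the second target object (meters)
    with the shooting distance received from the first electronic device,
    and decide whether to shoot or to prompt the user to move."""
    diff = actual - target
    if abs(diff) <= threshold:
        return "shoot"
    # actual > target: object is too far away, so it should come closer
    return "prompt: move closer" if diff > 0 else "prompt: move farther away"

print(check_distance(2.0, 2.1))  # shoot
print(check_distance(3.0, 2.0))  # prompt: move closer
```

On a real device the prompt would be voice, text, or a pattern as the text notes; the string return value stands in for that output channel.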
Step 103, receiving image data of a second target object included in a second picture sent by the second electronic equipment.
In one embodiment, after the second electronic device takes the second picture, the second electronic device may directly send the second picture to the first electronic device, so as to send the image data of the second target object included in the second picture to the first electronic device. Or the second electronic device may perform target detection on the second picture, determine a rectangular region where the second target object is located in the second picture, extract image data in the rectangular region, obtain image data of the second target object, and send the image data of the second target object to the first electronic device. The specific process of the second electronic device sending the image data of the second target object to the first electronic device may be set according to the requirement, which is not limited in this embodiment.
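Extracting the second target object's image data from its detected rectangular region, as described above, is a plain crop. A minimal sketch, assuming the picture is available as a NumPy array and the bounding box is given as (x, y, width, height):

```python
import numpy as np

def extract_target(picture, bbox):
    """Crop the rectangular region (x, y, w, h) of the target out of the
    picture; the copy is the image data sent to the first electronic device."""
    x, y, w, h = bbox
    return picture[y:y + h, x:x + w].copy()

picture = np.arange(36).reshape(6, 6)   # stand-in for the second picture
patch = extract_target(picture, (2, 1, 3, 2))
print(patch.shape)  # (2, 3)
```

A real color picture would be an H x W x 3 array, but the slicing is identical.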
Step 104, synthesizing a second target object in the second picture into the first picture according to the image data of the second target object to obtain a target picture.
In this embodiment, the first electronic device may combine the second target object obtained by shooting with the second electronic device into the first picture, so as to obtain a target picture including the first target object and the second target object. With reference to the foregoing example, after receiving the image data of the second target object sent by the second electronic device, the first electronic device may synthesize the image data of the second target object and the first picture, so as to synthesize the second target object into the first picture, thereby obtaining the target image. Or after receiving the second picture, the first electronic device may perform target detection on the second picture, extract image data of a second target object in the second picture, and synthesize the image data of the second target object into the first picture to obtain the target picture. The specific method for the first electronic device to synthesize the image data of the second target object into the first picture may be set according to requirements, and this embodiment does not limit this.
In practical application, when different target objects are shot at the same shooting distance, the captured objects have consistent proportions; when a plurality of such target objects are synthesized into the same picture, their proportions match, so the synthesized picture is more harmonious.
When the first electronic device synthesizes the image data of the second target object into the first picture, the second target object can be synthesized into other areas except the area where the first target object is located, so that the first target object and the second target object are prevented from overlapping to influence the viewing effect.
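The non-overlapping placement described above can be sketched as scanning candidate positions and rejecting any that intersect the first target object's region. The scan order and the grayscale stand-in "pictures" below are illustrative assumptions; a real implementation would operate on color images and likely use a smarter placement heuristic:

```python
import numpy as np

def rects_overlap(a, b):
    """Axis-aligned rectangle intersection test; rects are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def composite(first_pic, target_patch, first_bbox):
    """Paste target_patch into first_pic at the first scanned position that
    does not overlap the first target object's region (first_bbox)."""
    h, w = target_patch.shape[:2]
    H, W = first_pic.shape[:2]
    for y in range(0, H - h + 1):          # scan top-to-bottom,
        for x in range(0, W - w + 1):      # left-to-right
            if not rects_overlap((x, y, w, h), first_bbox):
                out = first_pic.copy()
                out[y:y + h, x:x + w] = target_patch
                return out, (x, y)
    raise ValueError("no non-overlapping position found")

pic = np.zeros((8, 8), dtype=int)          # first picture (grayscale stand-in)
patch = np.ones((2, 2), dtype=int)         # second target's image data
out, pos = composite(pic, patch, first_bbox=(0, 0, 4, 8))  # left half occupied
print(pos)  # (4, 0)
```

The first free position to the right of the occupied left half is chosen, so the two objects never overlap in the target picture.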
It should be noted that the first electronic device and the second electronic device may be located in different geographical areas, for example the first electronic device in city A and the second electronic device in city B; in this case the two devices can implement remote shooting, capturing target objects located in different geographical areas into the same picture. Alternatively, the first electronic device and the second electronic device may be located in the same place, in which case they may respectively shoot target objects located in different directions and capture them into the same picture. Because the two devices capture target objects at different positions directly into the same picture, the editing cost of the subsequent image synthesis process can be reduced.
In summary, in this embodiment, the first electronic device determines the shooting parameters of the first target object, sends the shooting parameters to the second electronic device, so that the second electronic device shoots the second target object by using the shooting parameters to obtain a second picture, and synthesizes the second target object in the second picture into the first picture shot based on the same shooting parameters to obtain the target picture. The first electronic device and the second electronic device can shoot target objects located at different positions respectively through the same shooting parameters, when different target objects are shot through the same shooting parameters, the characteristics of the different target objects obtained through shooting are similar, and when the plurality of target objects are synthesized into the same picture, the characteristics of the plurality of target objects are close, so that the synthesized picture is more harmonious.
Referring to fig. 3, fig. 3 is a flowchart illustrating steps of another photographing method provided according to an exemplary embodiment, which may include, as shown in fig. 3:
step 301, the first electronic device determines shooting parameters of a first target object.
In this embodiment, the first electronic device may first display a shooting preview interface, and determine a first target object in the shooting preview interface and shooting parameters of the first target object. The first electronic equipment can display the first target object through the shooting preview interface, and a user can conveniently observe the picture effect through the shooting preview interface.
Step 302, the first electronic device sends the shooting parameters to the second electronic device.
In this embodiment, before shooting the first picture, the first electronic device may send the shooting parameters to the second electronic device, so that the second electronic device shoots the second target object based on the shooting parameters to obtain a preview image, and return image data of the second target object included in the preview image to the first electronic device.
Optionally, the shooting parameters include one or more of a shooting distance between the first target object and the first electronic device, position information of the first target object in a shooting preview interface of the first electronic device, illumination intensity information of an environment where the first electronic device is located, and motion information of the first target object.
Optionally, before step 302, the method may further comprise at least one of:
determining that a preview image displayed in a shooting preview interface of first electronic equipment meets a preset condition;
it is determined that the photographing confirmation operation input by the user is received.
In one embodiment, in implementing the linkage shooting process, the master device and the slave device, i.e., the first electronic device or the second electronic device, in the shooting process may be determined first, and then the shooting is started. For example, the first electronic device may send the shooting parameters to the second electronic device to enable the second electronic device to shoot a second picture when the first electronic device is determined to be the main device in the shooting process. Specifically, the first electronic device may identify a preview image displayed in a shooting preview interface, determine a category of the preview image, and if the category of the preview image is an image of a preset category such as a landscape category and a people category, determine that the preview image meets a preset condition, where the first electronic device is a main device in a shooting process. Or the first electronic device may perform target detection on target objects in a preset category in the preview image, and if it is determined that the number of the target objects is greater than or equal to the preset number, it is determined that the preview image meets the preset condition, where the first electronic device is a main device in a shooting process. When the first electronic device determines that the first electronic device is the master device, a control request may be sent to the second electronic device, requesting to set the second electronic device as the slave device in the shooting process. Accordingly, the second electronic device may set the second electronic device as a slave device in response to the control request after receiving the control request, and take the second picture based on the photographing parameters when receiving the photographing parameters transmitted by the first electronic device.
In another embodiment, the first electronic device may also determine that it is the master device upon receiving a shooting confirmation operation input by the user, and send a control request to the second electronic device requesting that the second electronic device be set as the slave device. The shooting confirmation operation is, for example, a click on a shooting confirmation button; the first electronic device may determine that it is the master device in the shooting process upon detecting that the user has clicked the shooting confirmation button. Alternatively, the first electronic device may determine that it is the master device only when both a shooting confirmation operation input by the user is received and the preview image is determined to meet the preset condition.
In practical application, the first electronic device and the second electronic device can determine the master device and the slave device in the shooting process based on the preview image and/or user operation, and the master device and the slave device can be flexibly set, so that the shooting by a user is facilitated.
It should be noted that the roles of the first electronic device and the second electronic device are relative: in the shooting process, when either electronic device determines that it is the master device, it may send a control request to the other device, requesting that the other device be set as the slave device.
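The master/slave decision described above can be sketched as follows. This is a minimal illustration only; `PRESET_CATEGORIES`, `PRESET_COUNT`, and both function names are hypothetical stand-ins for the unspecified detection models and thresholds.

```python
# Hypothetical sketch of the master-device decision. The preset
# category set and preset object count are assumed values.
PRESET_CATEGORIES = {"landscape", "people"}
PRESET_COUNT = 2

def preview_meets_condition(category: str, object_count: int) -> bool:
    """Preset condition from the text: the preview belongs to a preset
    category, or contains at least a preset number of target objects."""
    return category in PRESET_CATEGORIES or object_count >= PRESET_COUNT

def should_be_master(category: str, object_count: int,
                     confirm_pressed: bool) -> bool:
    # A device becomes master when the preview meets the preset
    # condition and/or the user pressed the shooting-confirm button.
    return preview_meets_condition(category, object_count) or confirm_pressed
```

A device for which `should_be_master` returns true would then send the control request setting the peer as the slave.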
And step 303, the first electronic device displays the second target object in the preview image in the shooting preview interface.
In an embodiment, after receiving the shooting parameters sent by the first electronic device, the second electronic device may first shoot the second target object based on the shooting parameters to obtain a preview image, and send image data of the second target object included in the preview image to the first electronic device. The process of taking the preview image by the second electronic device is the same as the process of taking the second picture, which is not described in detail in this embodiment.
Correspondingly, after receiving the image data of the second target object in the preview image sent by the second electronic device, the first electronic device can directly synthesize the image data of the second target object into the shooting preview interface and display the second target object there, so that the user can conveniently view the composition effect through the shooting preview interface. If the user determines that the image effect displayed in the shooting preview interface meets the requirement, the user may perform the first input, which controls the first electronic device to shoot a first picture and to send a shooting instruction to the second electronic device; upon receiving the shooting instruction, the second electronic device responds to it by shooting the second target object based on the shooting parameters to obtain a second picture.
In one embodiment, the first electronic device may display the second target object in the shooting preview interface based on depth information of the second target object. For example, after receiving the shooting parameters sent by the first electronic device, the second electronic device may first shoot a preview image of the second target object, where the preview image includes a depth image and a color image of the second target object, and send both images to the first electronic device. The first electronic device may then display the color image of the second target object in the shooting preview interface based on the depth image. Specifically, the first electronic device may first determine depth information of the second target object from the depth image sent by the second electronic device, where the depth information represents the actual distance between the second target object and the second electronic device. Then, the depth information of the second target object can be compared with that of each target object in the shooting preview interface to determine the relative depth relationship between the second target object and the target objects included in the shooting preview interface. As shown in fig. 4, which is a schematic diagram of a shooting preview interface provided according to an exemplary embodiment, the first electronic device 401 includes a target object 402 and a first target object 403 in its shooting preview interface, the second electronic device 404 includes a second target object 405 in its preview image, and the depth of the second target object 405 is greater than the depths of the first target object 403 and the target object 402.
When the first electronic device displays the shooting preview interface, the depth of the second target object 405 in the shooting preview interface may be kept consistent with its depth in the preview image; that is, the second target object 405 is located behind the target object 402. The dashed box 406 shows the relative depth relationship between objects in the shooting preview interface, in which the depth of the second target object 405 is greater than the depth of the target object 402. If the target object 402 and the second target object 405 overlap, the overlapping portion of the second target object may be deleted so that the second target object 405 in the shooting preview interface appears behind the target object 402, making the target picture more harmonious.
In practical application, the first electronic device displays the second target object shot by the second electronic device in advance through the shooting preview interface, so that a user can watch the image synthesis effect in advance, and the user can conveniently adjust the shooting parameters in time.
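The occlusion rule above (drop the remote object's pixels wherever a nearer local object overlaps them) can be sketched as follows. This is an assumed simplification: per-object scalar depths and boolean masks stand in for the depth and color images the text describes.

```python
# Hypothetical sketch of depth-ordered compositing. A pixel of the
# remote (second) target object is kept only where no nearer local
# object occupies it; greater depth means farther from the camera.

def composite_by_depth(base_mask, base_depth, remote_mask, remote_depth):
    """Return the visible mask of the remote object after compositing.
    Masks are 2-D lists of booleans; depths are per-object scalars."""
    h, w = len(base_mask), len(base_mask[0])
    visible = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if remote_mask[y][x]:
                # The remote pixel is occluded if a local object covers
                # it and that local object is nearer (smaller depth).
                occluded = base_mask[y][x] and base_depth < remote_depth
                visible[y][x] = not occluded
    return visible
```

In the fig. 4 scenario, the second target object 405 has the greater depth, so its pixels overlapping the target object 402 would be removed, leaving it behind 402.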
And step 304, the second electronic device shoots a second target object based on the shooting parameters to obtain a second picture.
In one embodiment, when the shooting parameters include position information of the first target object in the shooting preview interface of the first electronic device, step 304 may be implemented by:
outputting position prompt information corresponding to the position information to prompt a user to move a second target object to a target area, wherein the target area is other areas except the area corresponding to the position information in a shooting preview interface of the second electronic equipment;
and shooting a second target object to obtain a second picture.
In this embodiment, the first electronic device may determine the position of the first target object in the shooting preview interface, obtain the position information of the first target object, and send the position information to the second electronic device. In connection with step 101, when detecting the first target object in the shooting preview interface, the first electronic device may simultaneously determine the position information of the first target object in the shooting preview interface. For example, the first electronic device may perform target detection on an image acquired in real time through the target detection model and determine the coordinate information of the rectangular area where the first target object is located in the image, where the coordinate information represents the upper-left and lower-right corner coordinates of that rectangular area, that is, the position information of the first target object in the shooting preview interface, and send the position information to the second electronic device. The specific form of the position information may be set as required, which is not limited in this embodiment. The method for detecting the position information of the first target object may include, but is not limited to, detection through a target detection model; any method known in the art or developed in the future may be applied to this embodiment to determine the position information of the first target object.
Correspondingly, after receiving the position information sent by the first electronic device, the second electronic device may output position prompt information to prompt the user to move the second target object, and then shoot the second target object to obtain the second picture. With reference to the above example, after receiving the coordinate information (20, 20) and (90, 90), the second electronic device may detect the position of the second target object in its shooting preview interface; if the second target object is inside the area indicated by the coordinates (20, 20) and (90, 90), it outputs position prompt information to prompt the user to move the second electronic device, or to move the second target object, so that the second target object is located in the target area of the shooting preview interface. After detecting that the second target object is located in the target area, the second electronic device can start shooting the second target object to obtain the second picture. The target area is the area other than the rectangular area defined by the coordinates (20, 20) and (90, 90). The position prompt information may be a voice prompt or a text prompt, and its specific form may be set as required, which is not limited in this embodiment.
In practical application, the first electronic device sends the position information of the first target object to the second electronic device, and the second electronic device outputs corresponding position prompt information, so that the user can move the second target object to other areas except the area where the first target object is located in the shooting preview interface. In the process of synthesizing the second target object into the first picture, the first electronic device may synthesize the second target object into another region except for the region where the first target object is located according to the position of the second target object in the second picture, so as to avoid overlapping of the first target object and the second target object.
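The reserved-area check behind this prompt can be sketched as a rectangle-overlap test. This is a minimal illustration under assumed conventions: each rectangle is given as (x1, y1, x2, y2) with (x1, y1) the upper-left corner, matching the coordinate form described above; the prompt string is hypothetical.

```python
# Hypothetical sketch: the first device sends the rectangle occupied by
# the first target object; the second device prompts the user until its
# own subject leaves that region.

def rects_overlap(a, b):
    """Each rect is (x1, y1, x2, y2), (x1, y1) being the top-left."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def position_prompt(first_rect, second_rect):
    # Prompt only while the second subject sits inside the reserved area.
    if rects_overlap(first_rect, second_rect):
        return "Please move the subject out of the reserved area"
    return None  # subject is in the target area; shooting may start
```

With the example coordinates, `first_rect` would be (20, 20, 90, 90), and the second device keeps prompting until `position_prompt` returns `None`.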
In one embodiment, when the shooting parameters include illumination intensity information of the environment where the first electronic device is located, step 304 may be implemented as follows:
under the condition that the illumination intensity of the environment where the first electronic equipment is located is larger than that of the current environment, supplementary lighting is carried out so that the lighting amount of the second electronic equipment is matched with that of the first electronic equipment; or, under the condition that the illumination intensity of the environment where the first electronic device is located is smaller than the illumination intensity of the current environment, reducing the lighting amount so as to enable the lighting amount of the second electronic device to be matched with the lighting amount of the first electronic device;
and shooting a second target object to obtain a second picture.
In this embodiment, before the first electronic device takes the first picture, the first electronic device may determine the illumination intensity of the environment where the first electronic device is located, obtain the illumination intensity information, and send the illumination intensity information to the second electronic device. For example, the first electronic device may collect the illumination intensity information in real time through the illumination intensity sensor and send the illumination intensity information to the second electronic device. The specific method for acquiring the illumination intensity information by the first electronic device may be set according to requirements, and this embodiment does not limit this.
Correspondingly, after receiving the illumination intensity information sent by the first electronic device, the second electronic device can adjust its lighting amount accordingly and shoot the second target object after the adjustment to obtain the second picture. For example, the second electronic device may collect the illumination intensity of the current environment in real time through an illumination intensity sensor and, after receiving the illumination intensity information sent by the first electronic device, compare the collected value with the received one. If the collected illumination intensity is smaller than that sent by the first electronic device, the second electronic device may turn on a fill light to supplement light, so that the difference between the illumination intensity of the current environment and that of the environment where the first electronic device is located becomes smaller than or equal to a preset intensity threshold; that is, the illumination intensity of the environment where the second electronic device is located is brought close to that of the environment where the first electronic device is located, thereby increasing the light collection amount of the second electronic device and matching it with the light collection amount of the first electronic device.
Conversely, when the collected illumination intensity is greater than the received one, the light collection amount of the camera can be reduced so that the difference between the illumination intensity collected by the second electronic device and that of the environment where the first electronic device is located is smaller than or equal to the preset intensity threshold, and the light collection amount of the second electronic device again matches that of the first electronic device. When the lighting amounts of the two devices match, the lighting amount of the second electronic device when shooting the preview image or the second picture is close to that of the first electronic device, so that the brightness of the picture shot by the second electronic device matches the brightness of the picture shot by the first electronic device.
In practical application, the second electronic device adjusts the lighting amount of the second electronic device according to the illumination intensity information sent by the first electronic device, so that the brightness of the picture shot by the second electronic device is equivalent to that shot by the first electronic device, the illumination intensity of the first target object and the illumination intensity of the second target object in the target picture are close, and the effect of the target picture is better.
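The threshold comparison driving this adjustment can be sketched as below. The lux unit and the threshold value are assumptions; the action labels are hypothetical placeholders for turning the fill light on or lowering the light intake.

```python
# Hypothetical sketch of the lighting-amount match: compare the two
# illumination readings and supplement or reduce light until the gap
# falls within the preset intensity threshold.

INTENSITY_THRESHOLD = 50  # preset intensity threshold, in lux (assumed)

def lighting_action(local_lux: float, remote_lux: float) -> str:
    """Decide how the second device adjusts given its own reading
    (local_lux) and the first device's reading (remote_lux)."""
    diff = remote_lux - local_lux
    if abs(diff) <= INTENSITY_THRESHOLD:
        return "shoot"             # lighting amounts already match
    if diff > 0:
        return "supplement_light"  # e.g. turn on the fill light
    return "reduce_light"          # e.g. lower the light-collection amount
```

The second device would repeat this check after each adjustment until `"shoot"` is returned.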
In one embodiment, when the motion information of the first target object is included in the shooting parameters, step 304 may be implemented as follows:
outputting action prompt information corresponding to the action information to prompt a user to adjust the action of the second target object to be consistent with the action of the first target object;
and shooting a second target object to obtain a second picture.
In this embodiment, the first electronic device may determine the motion of the first target object and send the motion information of the first target object to the second electronic device. For example, if the first target object is a human body, the first electronic device may detect the acquired image through the target detection model and determine motion information such as gesture motions, expression motions, and limb motions of the human body in the image, where a gesture motion is, for example, an "OK" gesture, an expression motion is, for example, a smile or a grimace, and a limb motion is, for example, leaning back or bending over. The first electronic device may encode the detected motion to obtain the motion information and transmit it to the second electronic device. The specific processes of detecting and sending the motion information may be set according to requirements, and this embodiment is not limited thereto.
Correspondingly, after receiving the motion information sent by the first electronic device, the second electronic device may output motion prompt information corresponding to the received motion information, and shoot the second target object to obtain the second picture. With reference to the above example, after receiving the motion information, the second electronic device may determine the corresponding motions, such as an "OK" gesture, a smile, a grimace, leaning back, or bending over. The second electronic device may then output corresponding motion prompt information to prompt the user to make an "OK" gesture, smile, make a grimace, lean back, or bend over, so that the motion of the second target object is consistent with that of the first target object. When it detects that the user's motion corresponds to the motion information sent by the first electronic device, the second electronic device may shoot the second target object to obtain the second picture. The specific form of the motion prompt information may be set according to requirements, which is not limited in this embodiment.
In practical application, the second electronic device prompts the user to adjust the action of the second target object to be consistent with the action of the first target object according to the action information sent by the first electronic device, so that the actions of the first target object and the second target object in the target picture can be consistent. At this time, the motions of the target objects included in the target picture are consistent, so that the target picture can be more harmonious.
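The encode-prompt-match loop described above can be sketched as follows. The action codes and prompt strings are invented for illustration; the text does not specify an encoding.

```python
# Hypothetical sketch: actions are sent as short codes; the second
# device maps each code to a prompt and waits until every requested
# action is detected on its own subject.

ACTION_PROMPTS = {
    "gesture_ok": "Please make an 'OK' gesture",
    "expr_smile": "Please smile",
    "pose_lean_back": "Please lean back",
}

def prompts_for(action_codes):
    """Prompts to output for the received motion information."""
    return [ACTION_PROMPTS[c] for c in action_codes if c in ACTION_PROMPTS]

def actions_match(sent_codes, detected_codes):
    # Shooting starts once every requested action is detected.
    return set(sent_codes) <= set(detected_codes)
```

Once `actions_match` holds for the detected motions, the second device shoots the second picture.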
It should be noted that, when the shooting parameters include multiple information of the shooting distance, the position information, the illumination intensity information, and the motion information, the second electronic device may shoot based on the multiple parameters during the shooting process to obtain the second picture.
Step 305, the second electronic device sends the image data of the second target object included in the second picture to the first electronic device.
And step 306, the first electronic device synthesizes the second target object in the second picture into the first picture according to the image data of the second target object to obtain the target picture.
Optionally, the image data of the second target object includes depth information of the second target object;
accordingly, step 306 may be implemented as follows:
determining the relative depth relation between the second target object and the target object included in the first picture according to the depth information of the second target object and the depth information of the target object included in the first picture;
and synthesizing a second target object in the second picture into the first picture based on the relative depth relation to obtain a target picture.
Optionally, on the basis of the relative depth relationship, a second target object in the second picture is synthesized into the first picture, and the step of obtaining the target picture may be implemented as follows:
and in the case that the depth of the second target object is greater than the depth of the target object, synthesizing the second target object behind the target object.
Illustratively, as shown in conjunction with fig. 4, the second picture taken by the second electronic device includes a depth image and a color image of the second target object. In the process of synthesizing the second target object into the first picture, the first electronic device may first determine the depth information of the second target object from the depth image included in the second picture. Then, the depth information of the second target object can be compared with that of each target object in the first picture to determine the relative depth relationship between the second target object and the target objects included in the first picture. The first picture taken by the first electronic device 401 includes a target object 402 and a first target object 403, the second picture taken by the second electronic device 404 includes a second target object 405, and the depth of the second target object 405 is greater than the depths of the first target object 403 and the target object 402. In the process of synthesizing the target picture, the first electronic device keeps the depth of the second target object 405 in the target picture consistent with its depth in the second picture; that is, the second target object 405 is located behind the target object 402. The dashed box 406 shows the relative depth relationship between objects in the synthesized target picture, in which the depth of the second target object 405 is greater than the depth of the target object 402. If the target object 402 and the second target object 405 overlap, the overlapping portion of the second target object can be deleted so that the second target object 405 in the target picture is located behind the target object 402, making the target picture more harmonious.
In practical application, the second target object shot by the second electronic device and the first target object shot by the first electronic device are synthesized into one picture according to the depth information, so that the proportions of the objects shot by the two devices are coordinated and the resulting picture is more harmonious. In addition, when the depth of the second target object is greater than the depth of the target object, synthesizing the second target object behind the target object keeps the proportions of objects at different depths coordinated, making the picture more natural.
In one embodiment, the depths of target objects of the same category may be made the same during image composition. As shown in fig. 4, if the first target object 403 and the second target object 405 are of the same category, the depth of the second target object 405 in the target picture may be set to be the same as the depth of the first target object 403, based on the relative depth relationship between the two. Giving target objects of the same category the same depth in the target picture makes them appear more harmonious.
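The same-category depth rule can be sketched as follows. The dict representation of an object (with `category` and `depth` keys) is an assumption made for illustration.

```python
# Hypothetical sketch of the same-category depth rule: when the remote
# object shares a category with a local object, its composited depth is
# snapped to the local object's depth so like objects render in scale.

def composited_depth(remote_obj, local_objs):
    """remote_obj and each entry of local_objs are dicts with
    'category' and 'depth' keys."""
    for obj in local_objs:
        if obj["category"] == remote_obj["category"]:
            return obj["depth"]   # align depths of same-category objects
    return remote_obj["depth"]    # otherwise keep the measured depth
```

In fig. 4 terms, if 403 and 405 are both of the "person" category, 405 would be composited at 403's depth rather than its own measured depth.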
Optionally, the method may further include:
and synthesizing the second target object into other areas except the first target object in the first picture.
For example, the image data of the second target object may include position information of the second target object, that is, the coordinate information of the second target object in the second picture. In connection with step 304, the second picture is taken after the user moves the second target object to the target area. The first electronic device may then determine the position of the second target object in the second picture during composition and synthesize the second target object into the first picture according to the coordinate information of the second target object in the second picture. Because the target area where the second target object is located does not coincide with the area where the first target object is located, the first target object and the second target object do not overlap.
In practical application, the second target object is synthesized into other areas except the first target object in the first picture, so that the first target object and the second target object can be prevented from being overlapped, and a clearer target picture can be obtained.
In summary, in this embodiment, the first electronic device determines the shooting parameters of the first target object and sends them to the second electronic device, so that the second electronic device shoots the second target object using the shooting parameters to obtain a second picture; the second target object in the second picture is then synthesized into the first picture, which was shot based on the same shooting parameters, to obtain the target picture. The first electronic device and the second electronic device can thus shoot target objects at different locations with the same shooting parameters. When different target objects are shot with the same parameters, their imaging characteristics are similar, so when multiple such target objects are synthesized into the same picture, the synthesized picture is more harmonious.
Optionally, the first electronic device may further determine color information of the shooting preview interface, and transmit the color information to the second electronic device.
Correspondingly, the second electronic device can receive the color information sent by the first electronic device and adjust the color of the second picture, so that the color of the second picture conforms to the color of the first picture.
In this embodiment, the first electronic device may detect the color of the shooting preview interface while detecting the first target object, and determine the color information of the shooting preview interface. For example, the first electronic device may collect statistics on the colors of all pixel points in the shooting preview interface, determine the average color value of all pixel points, and send the average color value to the second electronic device. After receiving the average color value, the second electronic device may apply a filter when capturing the second picture, so that the color of the second picture is close to the color of the first picture.
In practical application, the second electronic device adjusts the color of the second picture according to the color information sent by the first electronic device, so that the color of the second picture is consistent with the color of the first picture, the colors of the first target object and the second target object in the target picture can be close to each other, and the color of the target picture is more harmonious.
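The average-color statistic described above can be sketched as below. This is a minimal illustration; the RGB tuple representation of pixels and the integer averaging are assumptions, and the filter selection on the receiving side is left out.

```python
# Hypothetical sketch of the color hint: the first device averages the
# RGB values of all preview pixels and sends the result; the second
# device would then pick a filter that pulls its picture toward it.

def average_color(pixels):
    """pixels: iterable of (r, g, b) tuples from the preview image."""
    n = 0
    totals = [0, 0, 0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
        n += 1
    # Integer average per channel (assumes at least one pixel).
    return tuple(t // n for t in totals)
```

The resulting tuple is the color information sent to the second electronic device.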
It should be noted that, in the shooting method provided in the embodiment of the present application, the execution subject may be an image shooting apparatus, or a control module in the image shooting apparatus for executing the shooting method. In the embodiment of the present application, an image shooting apparatus executing the shooting method is taken as an example to describe the shooting method provided in the embodiment of the present application.
Referring to fig. 5, fig. 5 is a block diagram of an image photographing apparatus according to an exemplary embodiment, and as shown in fig. 5, the apparatus 500 may include: a determination module 501, a sending module 502, a receiving module 503, and a synthesis module 504.
The determining module 501 is used for determining shooting parameters of the first target object.
The sending module 502 is configured to send the shooting parameters to the second electronic device, so that the second electronic device shoots the second target object by using the shooting parameters to obtain a second picture.
The receiving module 503 is configured to receive image data of a second target object included in a second picture sent by a second electronic device.
The synthesizing module 504 is configured to synthesize a second target object in the second picture into a first picture according to image data of the second target object, so as to obtain a target picture, where the first picture is obtained by shooting the first target object based on the shooting parameters.
Optionally, the image data of the second target object includes depth information of the second target object;
the synthesis module 504 includes:
the determining unit is used for determining the relative depth relation between the second target object and the target object included in the first picture according to the depth information of the second target object and the depth information of the target object included in the first picture;
and the synthesizing unit is used for synthesizing a second target object in the second picture into the first picture based on the relative depth relation to obtain a target picture.
Optionally, the synthesizing unit is specifically configured to synthesize the second target object behind the target object when the depth of the second target object is greater than the depth of the target object.
Optionally, the synthesizing module 504 is further configured to display the second target object in the preview image in a shooting preview interface of the first electronic device.
Optionally, the shooting parameters include a shooting distance between the first target object and the first electronic device;
the sending module 503 is specifically configured to send the shooting distance to the second electronic device, so that the second electronic device shoots a second target object that is away from the second electronic device by the shooting distance, to obtain a second picture.
Optionally, the shooting parameters include position information of the first target object in a shooting preview interface of the first electronic device;
the sending module 503 is specifically configured to send the position information to the second electronic device, so that the second electronic device outputs position prompt information corresponding to the position information, and takes a second target object to obtain a second picture;
the position prompting information is used for prompting a user to move a second target object to a target area, and the target area is other areas except the area corresponding to the position information in a shooting preview interface of the second electronic device.
Optionally, the shooting parameters include illumination intensity information of an environment where the first electronic device is located;
the sending module 503 is specifically configured to send the illumination intensity information to the second electronic device, so that the second electronic device adjusts the lighting amount to match the lighting amount of the first electronic device, and takes a second target object to obtain the second picture.
Optionally, the shooting parameters include motion information of the first target object;
the sending module 503 is specifically configured to send the motion information to the second electronic device, so that the second electronic device outputs motion prompt information corresponding to the motion information, and takes a second target object to obtain a second picture;
the action prompt information is used for prompting the user to adjust the action of the second target object to be consistent with the action of the first target object.
Optionally, the determining module 501 is further configured to determine that a preview image displayed in a shooting preview interface of the first electronic device meets a preset condition; and/or determining that a shooting confirmation operation input by a user is received.
Referring to fig. 6, fig. 6 is a block diagram of an image photographing apparatus according to an exemplary embodiment, and as shown in fig. 6, the apparatus 600 may include: a receiving module 601, a shooting module 602, and a sending module 603.
The receiving module 601 is configured to receive a shooting parameter sent by a first electronic device.
The shooting module 602 is configured to shoot a second target object based on the shooting parameters to obtain a second picture.
The sending module 603 is configured to send image data of a second target object included in the second picture to the first electronic device, so that the first electronic device synthesizes the second target object in the second picture into the first picture to obtain a target picture.
The first picture is obtained by shooting at least one first target object by the first electronic device based on shooting parameters.
Optionally, the shooting parameters include a shooting distance between the first target object and the first electronic device;
the shooting module 602 is specifically configured to shoot a second target object that is away from the second electronic device by a shooting distance, so as to obtain a second picture.
Optionally, the shooting parameters include position information of the first target object in a shooting preview interface of the first electronic device;
the shooting module 602 is specifically configured to output position prompt information corresponding to the position information to prompt a user to move a second target object to a target area, where the target area is an area other than the area corresponding to the position information in a shooting preview interface of the second electronic device; and shooting a second target object to obtain a second picture.
Optionally, the shooting parameters include illumination intensity information of an environment where the first electronic device is located;
the shooting module 602 is specifically configured to perform supplementary lighting when the illumination intensity of the environment where the first electronic device is located is greater than the illumination intensity of the current environment, so that the lighting amount of the second electronic device matches the lighting amount of the first electronic device; or, under the condition that the illumination intensity of the environment where the first electronic device is located is smaller than the illumination intensity of the current environment, reducing the lighting amount so as to enable the lighting amount of the second electronic device to be matched with the lighting amount of the first electronic device; and shooting a second target object to obtain a second picture.
Optionally, the shooting parameters include motion information of the first target object;
the shooting module 602 is specifically configured to output motion prompt information corresponding to the motion information to prompt a user to adjust a motion of the second target object to be consistent with a motion of the first target object; and shooting a second target object to obtain a second picture.
In the embodiment of the application, the first electronic device determines shooting parameters of a first target object and sends them to the second electronic device, so that the second electronic device shoots a second target object using those parameters to obtain a second picture; the second target object in the second picture is then synthesized into a first picture shot with the same parameters, yielding a target picture. In this way, the first electronic device and the second electronic device can each shoot target objects located at different positions with the same shooting parameters. Different target objects shot with the same parameters have similar characteristics, so when multiple target objects are synthesized into the same picture their characteristics are close, and the synthesized picture is more harmonious.
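The two-device flow described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the `ShootingParams` fields mirror the parameter types named in the embodiment (distance, position, illumination, motion), but every class, function, and value here is hypothetical.

```python
# Hypothetical sketch of the first-device flow: determine parameters,
# delegate capture to the second device, then synthesize the results.
from dataclasses import dataclass


@dataclass
class ShootingParams:
    distance_m: float        # shooting distance between target and device
    position: tuple          # (x, y) of target in the preview interface
    illumination_lux: float  # ambient illumination intensity
    motion: str              # description of the first target's action


def first_device_flow(params, second_device_shoot, synthesize):
    """First device: send params, receive remote capture, synthesize."""
    second_picture = second_device_shoot(params)  # remote capture on device 2
    first_picture = {"objects": ["first_target"], "params": params}
    return synthesize(first_picture, second_picture)


# Minimal stand-ins for the remote capture and synthesis steps.
def fake_remote_shoot(params):
    return {"objects": ["second_target"], "params": params}


def fake_synthesize(first, second):
    return {"objects": first["objects"] + second["objects"],
            "params": first["params"]}


params = ShootingParams(2.0, (120, 340), 800.0, "jumping")
target = first_device_flow(params, fake_remote_shoot, fake_synthesize)
```

Both captures see the same `params` object, which is the point of the scheme: the shared parameters are what make the later synthesis look consistent.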
The image capturing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited.
The image capturing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; embodiments of the present application are not specifically limited.
The image capturing device provided in the embodiment of the present application can implement each process implemented by the image capturing device in the method embodiments of fig. 1 or fig. 3, and is not described herein again to avoid repetition.
Optionally, an electronic device is further provided in an embodiment of the present application, as shown in fig. 7, fig. 7 is a structural diagram of an electronic device provided according to an exemplary embodiment, where the electronic device includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, and when the program or the instruction is executed by the processor 701, the process of the shooting method embodiment is implemented, and the same technical effect can be achieved, and details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 810 via a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, combine some components, or arrange components differently; details are not repeated here.
The processor 810 is configured to determine shooting parameters of a first target object;
sending the shooting parameters to second electronic equipment so that the second electronic equipment shoots a second target object by using the shooting parameters to obtain a second picture;
and receiving image data of a second target object included in a second picture sent by the second electronic equipment.
And synthesizing the second target object in the second picture into a first picture according to the image data of the second target object to obtain a target picture, wherein the first picture is obtained by shooting the first target object based on the shooting parameters.
In the embodiment of the application, the first electronic device determines shooting parameters of a first target object and sends them to the second electronic device, so that the second electronic device shoots a second target object using those parameters to obtain a second picture; the second target object in the second picture is then synthesized into a first picture shot with the same parameters, yielding a target picture. In this way, the first electronic device and the second electronic device can each shoot target objects located at different positions with the same shooting parameters. Different target objects shot with the same parameters have similar characteristics, so when multiple target objects are synthesized into the same picture their characteristics are close, and the synthesized picture is more harmonious.
Optionally, the image data of the second target object includes depth information of the second target object;
the processor 810 is specifically configured to determine, according to the depth information of the second target object and the depth information of the target object included in the first picture, a relative depth relationship between the second target object and the target object included in the first picture; and synthesizing a second target object in the second picture into the first picture based on the relative depth relation to obtain a target picture.
In practical application, a second target object shot by the second electronic device and a first target object shot by the first electronic device are synthesized into one picture according to the depth information, so that the proportion of the objects shot by the two devices is coordinated, and the shot pictures are more harmonious.
Optionally, the processor 810 is specifically configured to, in a case that the depth of the second target object is greater than the depth of the target object, synthesize the second target object behind the target object.
In practical application, when the depth of the second target object is greater than the depth of the target object, synthesizing the second target object behind the target object keeps the proportions of objects at different depths coordinated, so that the picture looks more natural.
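The relative-depth rule above amounts to a painter's-algorithm draw order: objects with greater depth are composited first, so nearer objects occlude them. The sketch below is illustrative only — the object representation (name, depth) pairs and the function name are assumptions, not from the patent.

```python
# Illustrative depth-ordered synthesis: sort by depth descending, so the
# farthest object is drawn first (behind) and the nearest last (in front).
def composite_by_depth(objects):
    """objects: iterable of (name, depth). Returns draw order, far to near."""
    return [name for name, depth in
            sorted(objects, key=lambda o: o[1], reverse=True)]


order = composite_by_depth([("first_target", 1.5), ("second_target", 3.0)])
# second_target has the greater depth, so it is drawn first, i.e. behind.
```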
Optionally, the display unit 806 is further configured to display the second target object in the preview image in the shooting preview interface of the first electronic device.
In practical application, the first electronic device displays the second target object shot by the second electronic device in advance through the shooting preview interface, so that a user can watch the image synthesis effect in advance, and the user can conveniently adjust the shooting parameters in time.
Optionally, the shooting parameters include a shooting distance between the first target object and the first electronic device;
the processor 810 is specifically configured to send the shooting distance to the second electronic device, so that the second electronic device shoots a second target object that is away from the second electronic device by the shooting distance, and obtains a second picture.
In practical application, when different target objects are shot at the same shooting distance, they appear at the same proportion in their respective pictures; when multiple such target objects are synthesized into the same picture, their proportions are coordinated, making the synthesized picture more harmonious.
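The proportion argument can be made concrete under a simple pinhole-camera assumption (mine, not the patent's): the imaged size of an object scales as focal length times real size divided by distance, so two objects of equal real size shot at the same distance occupy the same proportion of the frame.

```python
# Pinhole-model sketch (illustrative assumption): imaged size on the
# sensor is proportional to focal_length * real_size / distance.
def imaged_size(focal_length_mm, real_size_m, distance_m):
    return focal_length_mm * real_size_m / distance_m


a = imaged_size(26.0, 1.7, 2.0)  # first target at the shooting distance
b = imaged_size(26.0, 1.7, 2.0)  # second target, same distance: same size
c = imaged_size(26.0, 1.7, 4.0)  # doubled distance: half the imaged size
```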
Optionally, the shooting parameters include position information of the first target object in a shooting preview interface of the first electronic device;
the processor 810 is specifically configured to send position information to the second electronic device, so that the second electronic device outputs position prompt information corresponding to the position information, and takes a second target object to obtain a second picture;
the position prompt information is used for prompting the user to move the second target object to a target area, and the target area is an area other than the area corresponding to the position information in the shooting preview interface of the second electronic device.
In practical application, the first electronic device sends the position information of the first target object to the second electronic device, and the second electronic device outputs corresponding position prompt information, so that the user can move the second target object to other areas except the area where the first target object is located in the shooting preview interface. In the process of synthesizing the second target object into the first picture, the first electronic device may synthesize the second target object into another region except for the region where the first target object is located according to the position of the second target object in the second picture, so as to avoid overlapping of the first target object and the second target object.
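The overlap-avoidance check implied above can be sketched with an axis-aligned rectangle test: the second device would keep prompting until the second target's preview region no longer intersects the region the first target occupies. The rectangle representation and function names here are hypothetical.

```python
# Illustrative region check: rectangles as (left, top, right, bottom)
# in preview-interface coordinates.
def rects_overlap(r1, r2):
    """True if the two axis-aligned rectangles intersect."""
    return not (r1[2] <= r2[0] or r2[2] <= r1[0] or
                r1[3] <= r2[1] or r2[3] <= r1[1])


first_region = (0, 0, 100, 200)   # area from the first device's position info
candidate = (120, 0, 220, 200)    # second target placed in another area
needs_prompt = rects_overlap(first_region, candidate)
```

When `needs_prompt` is false, the second target sits in a target area disjoint from the first target's region, so the later synthesis cannot overlap the two objects.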
Optionally, the shooting parameters include illumination intensity information of an environment where the first electronic device is located;
the processor 810 is specifically configured to send the illumination intensity information to the second electronic device, so that the second electronic device adjusts the lighting amount to match the lighting amount of the first electronic device, and takes a second target object to obtain a second picture.
In practical application, the second electronic device adjusts the lighting amount of the second electronic device according to the illumination intensity information sent by the first electronic device, so that the brightness of the picture shot by the second electronic device is equivalent to that shot by the first electronic device, the illumination intensity of the first target object and the illumination intensity of the second target object in the target picture are close, and the effect of the target picture is better.
Optionally, the shooting parameters include motion information of the first target object;
the processor 810 is specifically configured to send the motion information to the second electronic device, so that the second electronic device outputs motion prompt information corresponding to the motion information, and takes a second target object to obtain a second picture;
the action prompt information is used for prompting the user to adjust the action of the second target object to be consistent with the action of the first target object.
In practical application, the second electronic device prompts the user, according to the action information sent by the first electronic device, to adjust the action of the second target object to be consistent with the action of the first target object. The actions of the target objects included in the target picture are then consistent, which makes the target picture more harmonious.
Optionally, the processor 810 is further configured to determine that a preview image displayed in a shooting preview interface of the first electronic device meets a preset condition; and/or determining that a shooting confirmation operation input by a user is received.
In practical application, the first electronic device and the second electronic device can determine the master device and the slave device in the shooting process based on the preview image and/or user operation, and the master device and the slave device can be flexibly set, so that the shooting by a user is facilitated.
The present embodiment also provides another electronic device as shown in fig. 8.
The processor 810 is configured to receive a shooting parameter sent by a first electronic device;
shooting a second target object based on the shooting parameters to obtain a second picture;
sending image data of a second target object included in the second picture to the first electronic device, so that the first electronic device synthesizes the second target object in the second picture into the first picture, and obtaining a target picture;
the first picture is obtained by shooting at least one first target object by the first electronic device based on shooting parameters.
Optionally, the shooting parameters include a shooting distance between the first target object and the first electronic device;
the processor 810 is specifically configured to shoot a second target object that is away from the second electronic device by the shooting distance, so as to obtain a second picture.
In practical application, when different target objects are shot at the same shooting distance, they appear at the same proportion in their respective pictures; when multiple such target objects are synthesized into the same picture, their proportions are coordinated, making the synthesized picture more harmonious.
Optionally, the shooting parameters include position information of the first target object in a shooting preview interface of the first electronic device;
the processor 810 is specifically configured to output location prompt information corresponding to the location information, so as to prompt a user to move the second target object to a target area, where the target area is an area other than the area corresponding to the location information in the shooting preview interface of the second electronic device; and shooting a second target object to obtain a second picture.
In practical application, the first electronic device sends the position information of the first target object to the second electronic device, and the second electronic device outputs corresponding position prompt information, so that the user can move the second target object to other areas except the area where the first target object is located in the shooting preview interface. In the process of synthesizing the second target object into the first picture, the first electronic device may synthesize the second target object into another region except for the region where the first target object is located according to the position of the second target object in the second picture, so as to avoid overlapping of the first target object and the second target object.
Optionally, the shooting parameters include illumination intensity information of an environment where the first electronic device is located;
the processor 810 is specifically configured to, when the illumination intensity of the environment where the first electronic device is located is greater than the illumination intensity of the current environment, perform supplementary lighting to match the lighting amount of the second electronic device with the lighting amount of the first electronic device; or, under the condition that the illumination intensity of the environment where the first electronic device is located is smaller than the illumination intensity of the current environment, reducing the lighting amount so that the lighting amount of the second electronic device is matched with the lighting amount of the first electronic device; and shooting a second target object to obtain a second picture.
In practical application, the second electronic device adjusts the lighting amount of the second electronic device according to the illumination intensity information sent by the first electronic device, so that the brightness of the picture shot by the second electronic device is equivalent to that shot by the first electronic device, the illumination intensity of the first target object and the illumination intensity of the second target object in the target picture are close, and the effect of the target picture is better.
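The two-branch lighting decision described above (supplementary lighting when the first device's environment is brighter, reducing the lighting amount when it is darker) can be sketched as a simple comparison. The function name and lux values are illustrative assumptions.

```python
# Illustrative sketch of the lighting-amount decision on the second
# device, given the illumination intensity sent by the first device.
def lighting_adjustment(first_lux, current_lux):
    """Return which adjustment the second device should make."""
    if first_lux > current_lux:
        return "supplement"  # add fill light to match the first device
    if first_lux < current_lux:
        return "reduce"      # lower the lighting amount to match
    return "none"            # environments already match


action = lighting_adjustment(800.0, 300.0)  # first environment is brighter
```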
Optionally, the shooting parameters include motion information of the first target object;
the processor 810 is specifically configured to output motion prompt information corresponding to the motion information to prompt a user to adjust the motion of the second target object to be consistent with the motion of the first target object; and shooting a second target object to obtain a second picture.
In practical application, the second electronic device prompts the user, according to the action information sent by the first electronic device, to adjust the action of the second target object to be consistent with the action of the first target object. The actions of the target objects included in the target picture are then consistent, which makes the target picture more harmonious.
It should be understood that in the embodiment of the present application, the input unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042; the graphics processing unit 8041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 809 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may not be integrated into the processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in reverse order, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. A shooting method is applied to a first electronic device and comprises the following steps:
determining shooting parameters of a first target object;
sending the shooting parameters to second electronic equipment so that the second electronic equipment shoots a second target object by using the shooting parameters to obtain a second picture;
receiving image data of the second target object included in the second picture sent by the second electronic device;
and synthesizing the second target object in the second picture into a first picture according to the image data of the second target object to obtain a target picture, wherein the first picture is obtained by shooting the first target object based on the shooting parameters.
2. The method of claim 1, wherein the image data of the second target object includes depth information of the second target object;
the synthesizing the second target object in the second picture into the first picture according to the image data of the second target object to obtain a target picture, including:
determining a relative depth relation between the second target object and the target object included in the first picture according to the depth information of the second target object and the depth information of the target object included in the first picture;
and synthesizing the second target object in the second picture into the first picture based on the relative depth relation to obtain the target picture.
3. The method according to claim 2, wherein the synthesizing the second target object in the second picture into the first picture based on the relative depth relationship to obtain the target picture comprises:
synthesizing the second target object behind the target object if the depth of the second target object is greater than the depth of the target object.
4. The method according to claim 1, further comprising, before said obtaining the target picture:
and displaying the second target object in a shooting preview interface of the first electronic equipment.
5. The method according to claim 1, wherein the shooting parameters include a shooting distance between the first target object and the first electronic device;
the sending the shooting parameters to a second electronic device to enable the second electronic device to use the shooting parameters to shoot a second target object to obtain a second picture, includes:
and sending the shooting distance to the second electronic equipment so that the second electronic equipment shoots a second target object which is away from the second electronic equipment by the shooting distance to obtain a second picture.
6. The method according to claim 1, wherein the shooting parameters include position information of the first target object in a shooting preview interface of the first electronic device;
the sending the shooting parameters to a second electronic device to enable the second electronic device to use the shooting parameters to shoot a second target object to obtain a second picture, includes:
sending the position information to the second electronic equipment so that the second electronic equipment outputs position prompt information corresponding to the position information, and shooting the second target object to obtain a second picture;
the position prompt information is used for prompting a user to move the second target object to a target area, and the target area is an area other than the area corresponding to the position information in a shooting preview interface of the second electronic device.
7. The method according to claim 1, wherein the shooting parameters include illumination intensity information of an environment in which the first electronic device is located;
the sending the shooting parameters to a second electronic device to enable the second electronic device to use the shooting parameters to shoot a second target object to obtain a second picture, includes:
and sending the illumination intensity information to the second electronic equipment so that the second electronic equipment adjusts the lighting amount to be matched with the lighting amount of the first electronic equipment, and shooting the second target object to obtain the second picture.
8. The method according to claim 1, wherein the shooting parameters include motion information of the first target object;
the sending the shooting parameters to a second electronic device to enable the second electronic device to use the shooting parameters to shoot a second target object to obtain a second picture, includes:
sending the motion information to the second electronic device to enable the second electronic device to output motion prompt information corresponding to the motion information, and shooting the second target object to obtain a second picture;
and the action prompt information is used for prompting a user to adjust the action of the second target object to be consistent with the action of the first target object.
9. The method according to any one of claims 1-8, wherein before said sending the shooting parameters to the second electronic device to make the second electronic device shoot the second target object using the shooting parameters to obtain the second picture, at least one of the following is further included:
determining that a preview image displayed in a shooting preview interface of the first electronic device meets a preset condition; determining that a shooting confirmation operation input by a user is received.
10. A shooting method, applied to a second electronic device, the method comprising:
receiving shooting parameters sent by a first electronic device;
shooting a second target object based on the shooting parameters to obtain a second picture;
sending the image data of the second target object included in the second picture to the first electronic device, so that the first electronic device synthesizes the second target object in the second picture into the first picture to obtain a target picture;
the first picture is obtained by shooting at least one first target object by the first electronic device based on the shooting parameters.
11. The method according to claim 10, wherein the shooting parameters include a shooting distance between the first target object and the first electronic device;
the shooting of the second target object based on the shooting parameters to obtain a second picture comprises:
and shooting a second target object located at the shooting distance from the second electronic device to obtain the second picture.
12. The method according to claim 10, wherein the shooting parameters include position information of the first target object in a shooting preview interface of the first electronic device;
the shooting of the second target object based on the shooting parameters to obtain a second picture comprises:
outputting position prompt information corresponding to the position information to prompt a user to move the second target object to a target area, the target area being an area of the shooting preview interface of the second electronic device other than the area corresponding to the position information;
and shooting the second target object to obtain the second picture.
13. A shooting device, which is provided in a first electronic device, the device comprising:
the determining module is used for determining shooting parameters of the first target object;
the sending module is used for sending the shooting parameters to a second electronic device, so that the second electronic device shoots a second target object using the shooting parameters to obtain a second picture;
a receiving module, configured to receive image data of the second target object included in the second picture sent by the second electronic device;
and the synthesis module is used for synthesizing the second target object in the second picture into a first picture according to the image data of the second target object to obtain a target picture, and the first picture is obtained by shooting the first target object based on the shooting parameters.
14. A shooting device, provided in a second electronic device, the device comprising:
the receiving module is used for receiving shooting parameters sent by a first electronic device;
the shooting module is used for shooting a second target object based on the shooting parameters to obtain a second picture;
a sending module, configured to send, to the first electronic device, image data of the second target object included in the second picture, so that the first electronic device synthesizes the second target object in the second picture into a first picture, and obtains a target picture;
the first picture is obtained by shooting at least one first target object by the first electronic device based on the shooting parameters.
15. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the shooting method according to any one of claims 1-12.
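The claims above describe a two-device protocol: the first device determines shooting parameters (distance, subject position, illumination, motion), the second device shoots a second target using those parameters and returns its image data, and the first device composites that target into its own picture. The sketch below is purely illustrative and is not taken from the patent text; every class, field, and method name here is hypothetical, and real implementations would exchange these messages over a wireless link and perform actual image capture and synthesis.

```python
# Hypothetical sketch of the claimed message flow between a "first" and
# "second" electronic device. Names and data shapes are assumptions.
from dataclasses import dataclass


@dataclass
class ShootingParameters:
    shooting_distance_m: float   # distance between first target and first device (claims 2, 11)
    position: tuple              # (x, y, w, h) of the first target in the preview (claims 6, 12)
    illumination_lux: float      # ambient light at the first device (claim 7)
    motion: str                  # description of the first target's motion (claim 8)


class SecondDevice:
    def shoot(self, params: ShootingParameters) -> dict:
        # Prompt the user to place the second target outside the region the
        # first target already occupies (claims 6 and 12), match the first
        # device's lighting amount (claim 7), then capture.
        prompt = f"Move subject outside region {params.position}"
        return {"image_data": b"...", "prompt": prompt}


class FirstDevice:
    def compose(self, first_picture: dict, second_image_data: bytes) -> dict:
        # Synthesize the second target's image data into the first picture
        # to obtain the target picture (claims 1, 10, 13, 14).
        target = dict(first_picture)
        target["overlay"] = second_image_data
        return target


params = ShootingParameters(2.0, (100, 80, 200, 400), 300.0, "wave")
second_result = SecondDevice().shoot(params)
target_picture = FirstDevice().compose({"pixels": b"..."}, second_result["image_data"])
print(second_result["prompt"])
```

The placeholder byte strings stand in for real captured frames; the point is only the order of operations: parameters flow first-to-second, image data flows back, and synthesis happens on the first device.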
CN202110105249.9A 2021-01-26 2021-01-26 Shooting method and device and electronic equipment Active CN112887601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110105249.9A CN112887601B (en) 2021-01-26 2021-01-26 Shooting method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112887601A true CN112887601A (en) 2021-06-01
CN112887601B CN112887601B (en) 2022-09-16

Family

ID=76053592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110105249.9A Active CN112887601B (en) 2021-01-26 2021-01-26 Shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112887601B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390206A (en) * 2022-02-10 2022-04-22 维沃移动通信有限公司 Shooting method and device and electronic equipment
CN114520878A (en) * 2022-02-11 2022-05-20 维沃移动通信有限公司 Video shooting method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635558A (en) * 2015-06-25 2016-06-01 宇龙计算机通信科技(深圳)有限公司 Multi-terminal linked shooting method and device
CN108495032A (en) * 2018-03-26 2018-09-04 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108718383A (en) * 2018-04-24 2018-10-30 天津字节跳动科技有限公司 Cooperate with image pickup method, device, storage medium and terminal device
JP2019180016A (en) * 2018-03-30 2019-10-17 パナソニックIpマネジメント株式会社 Image synthesizer, image composition method and program
CN112188100A (en) * 2020-09-29 2021-01-05 维沃移动通信有限公司 Combined shooting method, combined shooting device and electronic equipment

Also Published As

Publication number Publication date
CN112887601B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
WO2022042776A1 (en) Photographing method and terminal
CN108495032B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109788359B (en) Video data processing method and related device
CN112887601B (en) Shooting method and device and electronic equipment
CN113329172B (en) Shooting method and device and electronic equipment
CN113538696B (en) Special effect generation method and device, storage medium and electronic equipment
CN112422945A (en) Image processing method, mobile terminal and storage medium
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112333386A (en) Shooting method and device and electronic equipment
CN108718388B (en) Photographing method and mobile terminal
CN115278084B (en) Image processing method, device, electronic equipment and storage medium
CN112422798A (en) Photographing method and device, electronic equipment and storage medium
CN114500837B (en) Shooting method and device and electronic equipment
CN112784081A (en) Image display method and device and electronic equipment
CN110086998B (en) Shooting method and terminal
CN114025092A (en) Shooting control display method and device, electronic equipment and medium
CN113596574A (en) Video processing method, video processing apparatus, electronic device, and readable storage medium
CN112235510A (en) Shooting method, shooting device, electronic equipment and medium
CN114390206A (en) Shooting method and device and electronic equipment
CN112532904B (en) Video processing method and device and electronic equipment
CN111726531B (en) Image shooting method, processing method, device, electronic equipment and storage medium
CN115016688A (en) Virtual information display method and device and electronic equipment
CN114285988A (en) Display method, display device, electronic equipment and storage medium
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN111223114A (en) Image area segmentation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant