CN110545382A - Shooting control method, electronic equipment and related device - Google Patents

Shooting control method, electronic equipment and related device

Info

Publication number
CN110545382A
Authority
CN
China
Prior art keywords
image
images
brightness
reference image
preset condition
Prior art date
Legal status
Pending
Application number
CN201910858036.6A
Other languages
Chinese (zh)
Inventor
张海平
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910858036.6A
Publication of CN110545382A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces

Abstract

The embodiments of the present application disclose a shooting control method, electronic equipment, and a related device, applied to an electronic device that comprises a first display screen and a second display screen arranged oppositely. The method comprises the following steps: when it is detected that the camera is started, displaying a first function interface on the first display screen and a second function interface on the second display screen, wherein both function interfaces comprise shooting preview content; acquiring a first shooting control instruction and a second shooting control instruction, wherein the first shooting control instruction is generated by a first trigger operation on the first function interface and the second shooting control instruction is generated by a second trigger operation on the second function interface; and generating a target image according to the first shooting control instruction and the second shooting control instruction. The embodiments of the present application help improve the success rate and efficiency of photographing by the electronic equipment.

Description

Shooting control method, electronic equipment and related device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a shooting control method, an electronic device, and a related apparatus.
Background
With the continuous development of science and technology, photographing software of all kinds keeps emerging, and people often record the small moments of life by taking pictures. At present, when an electronic device takes a picture, shooting is generally completed under the control of the photographing user: for example, the photographing user clicks a shooting function button on the screen for real-time shooting, or sets a delayed-shooting mode for delayed shooting.
Disclosure of Invention
The embodiment of the application provides a shooting control method, electronic equipment and a related device, so as to improve the success rate and efficiency of shooting by the electronic equipment.
In a first aspect, an embodiment of the present application provides a shooting control method, which is applied to an electronic device, where the electronic device includes a first display screen and a second display screen that are arranged oppositely, and the method includes:
When the starting of a camera is detected, displaying a first function interface on the first display screen, and displaying a second function interface on the second display screen, wherein the first function interface comprises shooting preview content, and the second function interface comprises the shooting preview content;
Acquiring a first shooting control instruction and a second shooting control instruction, wherein the first shooting control instruction is an instruction generated by a first trigger operation for the first functional interface, and the second shooting control instruction is an instruction generated by a second trigger operation for the second functional interface;
And generating a target image according to the first shooting control instruction and the second shooting control instruction.
In a second aspect, an embodiment of the present application provides a shooting control apparatus applied to an electronic device, where the electronic device includes a first display screen and a second display screen that are oppositely disposed, and the apparatus includes a processing unit and a communication unit, where:
The processing unit is configured to transmit a start signal through the communication unit when it is detected that the camera is started, display a first function interface on the first display screen, and display a second function interface on the second display screen, where the first function interface includes shooting preview content and the second function interface includes the shooting preview content; to acquire a first shooting control instruction and a second shooting control instruction, where the first shooting control instruction is generated by a first trigger operation on the first function interface and the second shooting control instruction is generated by a second trigger operation on the second function interface; and to generate a target image according to the first shooting control instruction and the second shooting control instruction.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in any of the methods of the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to perform part or all of the steps described in any one of the methods of the first aspect of the present application, and the computer includes an electronic device.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the present application, when the electronic device detects that the camera is started, it first displays a first function interface on a first display screen and a second function interface on a second display screen, where both function interfaces include shooting preview content; it then obtains a first shooting control instruction and a second shooting control instruction, the first generated by a first trigger operation on the first function interface and the second generated by a second trigger operation on the second function interface; and it finally generates a target image according to the two instructions. Because the electronic device displays the shooting preview content on both display screens, which are arranged oppositely, both the photographing user and the photographed user can view the preview content, and shooting is performed only after the control instructions of the two users are integrated. Shooting while one user is dissatisfied is thereby avoided, which improves the shooting success rate and efficiency of the electronic device.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
Fig. 2A is a schematic flowchart of a shooting control method disclosed in an embodiment of the present application;
Fig. 2B is a schematic diagram of an application scenario provided in an embodiment of the present application;
Fig. 2C is a schematic view of a shooting interface of a first display screen according to an embodiment of the present application;
Fig. 2D is a schematic view of a shooting interface of a second display screen according to an embodiment of the present application;
Fig. 2E is a schematic diagram of another application scenario provided in an embodiment of the present application;
Fig. 2F is a schematic diagram of another application scenario provided in an embodiment of the present application;
Fig. 2G is a schematic diagram of another application scenario provided in an embodiment of the present application;
Fig. 3 is a schematic flowchart of another shooting control method disclosed in an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
Fig. 5 is a block diagram of functional units of a shooting control apparatus according to an embodiment of the present application.
Detailed Description
In order that the technical solutions of the present application may be better understood, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiments of the present application may be an electronic device with communication capability, and the electronic device may include various handheld devices with wireless communication function, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal equipment (terminal), and so on.
At present, the photographing function of an electronic device is generally completed under the control of the photographing user: for example, the photographing user clicks a photographing function button on the screen for real-time photographing, or sets a delayed-photographing mode for delayed photographing. Photographing in such a manner may reduce the success rate and efficiency of photographing.
In view of the above problems, the present application provides a shooting control method, and embodiments of the present application are described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the electronic device 10 is provided with a first display screen 101 and a second display screen 102. Both display screens are connected to a processor and can simultaneously display content and support human-computer interaction; the sizes and positions of the first display screen 101 and the second display screen 102 are not specifically limited. Referring to Fig. 2A, Fig. 2A is a schematic flowchart of a shooting control method provided in an embodiment of the present application, applied to an electronic device that includes a first display screen and a second display screen arranged oppositely. As shown in the figure, the shooting control method includes the following steps.
S101. When the electronic device detects that the camera is started, it displays a first function interface on the first display screen and a second function interface on the second display screen, where the first function interface includes shooting preview content and the second function interface includes the shooting preview content.
The first display screen is a front screen facing the photographing user, and the second display screen is a back screen facing the photographed subject. The two function interfaces may differ in content: for example, the first function interface may include preset function buttons in addition to the shooting preview content while the second function interface contains only the shooting preview content. Alternatively, the two interfaces may have identical content; this is not specifically limited.
Therefore, in this example, the electronic device displays the preview content on different display screens, one facing the photographing user and the other facing the photographed subject, so that the users facing the two screens can both see the preview content, which improves user satisfaction with shooting.
And S102, the electronic equipment acquires a first shooting control instruction and a second shooting control instruction, wherein the first shooting control instruction is an instruction generated by a first trigger operation aiming at the first functional interface, and the second shooting control instruction is an instruction generated by a second trigger operation aiming at the second functional interface.
The electronic device may obtain the first shooting control instruction and the second shooting control instruction in either order: it may obtain the first shooting control instruction first and then the second, or obtain the second first and then the first; the order of acquisition is not particularly limited. As can be seen, in this example, the electronic device can obtain the operation instructions corresponding to the different display screens.
S103, the electronic equipment generates a target image according to the first shooting control instruction and the second shooting control instruction.
The target image is the image corresponding to the shooting task.
The first shooting control instruction corresponds to the first trigger operation on the first display screen, and the second shooting control instruction corresponds to the second trigger operation on the second display screen. Only when both trigger operations have occurred can the current shooting task be completed and the target image be generated.
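As a non-limiting illustration, the dual-confirmation behavior described above (a target image is generated only once both trigger operations have occurred, in either order) can be sketched as follows; the class and method names are illustrative assumptions and are not part of the present application:

```python
from dataclasses import dataclass


@dataclass
class ShootingController:
    """Illustrative sketch of the dual-confirmation gate (names assumed)."""
    first_confirmed: bool = False
    second_confirmed: bool = False

    def on_first_trigger(self):
        # The photographing user confirms on the first (front) screen.
        self.first_confirmed = True
        return self._try_capture()

    def on_second_trigger(self):
        # The photographed subject confirms on the second (back) screen.
        self.second_confirmed = True
        return self._try_capture()

    def _try_capture(self):
        # A target image is generated only once BOTH triggers occurred.
        if self.first_confirmed and self.second_confirmed:
            return "target_image"
        return None
```

Either trigger may arrive first; capture completes only on the second confirmation.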
Optionally, the generating, by the electronic device, a target image according to the first shooting control instruction and the second shooting control instruction includes: the electronic equipment acquires a plurality of images of an object to be shot according to the first shooting control instruction and the second shooting control instruction; and the electronic equipment generates a target image according to the plurality of images.
Specifically, when the photographing user is satisfied with the shooting preview content, he or she performs the first trigger operation to confirm shooting; at this moment, the content the photographing user wants to capture is the shooting preview content within a short time before and after the first trigger operation. When the photographed subject is satisfied with the shooting preview content, he or she performs the second trigger operation to confirm shooting, and the content the photographed subject wants to capture is the shooting preview content corresponding to the second trigger operation. Therefore, to acquire the multiple images of the object to be photographed according to the first and second shooting control instructions, the electronic device may acquire images between a first time point before the first trigger operation occurs and a second time point after the second trigger operation occurs, and then select from them the images that include the object to be photographed. On this basis, the electronic device performs processing such as deletion and fusion on the multiple images of the object to be photographed to obtain the target image.
Therefore, in this example, the electronic device can acquire a plurality of images according to different shooting control instructions, and output the target image by combining the plurality of images, so that the satisfaction degree of the user on the shot finished product image is improved.
Optionally, the acquiring, by the electronic device, at least one image of the object to be photographed according to the first shooting control instruction and the second shooting control instruction includes: acquiring, by the electronic device, at least one first image according to the first shooting control instruction, where the at least one first image corresponds to the first trigger operation; and acquiring, by the electronic device, at least one second image according to the second shooting control instruction, where the at least one second image corresponds to the second trigger operation, and the second trigger operation includes at least one of the following: a trigger operation on a preset function button, a sliding operation, and a multi-point touch operation.
Taking the case where the second trigger operation is a trigger operation on a preset function button as an example: after the photographed subject is ready to be photographed according to the preview content, the subject clicks the preset function button to confirm shooting and then resumes the previously prepared photographing pose. In this process, the image the subject actually wants is the pose struck before clicking the preset function button, or the pose struck again after clicking it. However, a certain amount of time passes between the subject being ready according to the preview content and clicking the preset function button to confirm shooting, and again between clicking the button and resuming the pose; the farther the subject is from the electronic device, the more time this takes. Therefore, the at least one second image may be an image within a first preset time period before the second trigger operation occurs and/or within a second preset time period after it occurs. For example, the at least one second image may be an image within 2 s to 5 s before the second trigger operation occurs, an image within 5 s to 8 s after it occurs, or at least one image within 3 s to 5 s before and 6 s to 8 s after it occurs; the acquisition times and the number of images are not particularly limited.
Similarly, when the second trigger operation is a sliding operation, the at least one second image may be an image within a first preset time period before the sliding operation occurs and/or an image within a second preset time period after the sliding operation occurs. When the second trigger operation is a multi-touch operation, the at least one second image may be an image within a first preset time period before the multi-touch operation occurs and/or an image within a second preset time period after the multi-touch operation occurs.
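A minimal sketch of selecting preview frames within such preset windows around a trigger operation might look as follows; the function name and the default window lengths are illustrative assumptions, since the application leaves the exact values open:

```python
def frames_around_trigger(frames, trigger_time, before_s=3.0, after_s=6.0):
    """Select candidate frames in a window around a trigger operation.

    `frames` is a list of (timestamp_seconds, image) pairs. The window
    spans `before_s` seconds before the trigger to `after_s` seconds
    after it, mirroring the example ranges in the text (e.g. 2-5 s
    before, 5-8 s after); the exact bounds are configurable.
    """
    lo = trigger_time - before_s
    hi = trigger_time + after_s
    return [img for t, img in frames if lo <= t <= hi]
```

The same window logic applies whether the trigger is a button click, a sliding operation, or a multi-point touch operation.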
The first trigger operation may be at least one of a trigger operation on a preset function button, a sliding operation, and a multi-point touch operation; refer to the related description of the second trigger operation. The time interval between the time point of image acquisition and the first trigger operation may be preset by the system. It should be noted that the preview content captured by the photographing user changes little within a short time around the first trigger operation, whereas the photographed subject performing the second trigger operation actually intends to capture the picture presented before the second trigger operation or the image to be presented after it completes. Therefore, the time interval between the acquisition time of the at least one first image and the photographing user's click operation may be smaller than the time interval between the acquisition time of the at least one second image and the photographed subject's click operation.
For example, the first trigger operation is a trigger operation by the photographing user on a preset function button on the first display screen, and the second trigger operation is a trigger operation by the photographed subject (here, the photographed user) on a preset function button on the second display screen. Referring to Fig. 2B, a schematic view of a shooting scene of an electronic device, the electronic device 10 includes a first display screen 101 and a second display screen 102. After the camera is started, the electronic device 10 displays the first function interface on the first display screen 101 and the second function interface on the second display screen 102, and at the same time records a video of the shooting preview content. The first function interface is shown in Fig. 2C and the second function interface is shown in Fig. 2D. The photographing user selects the "shooting" function and clicks the corresponding icon on the first display screen 101; once the photographing user has clicked the virtual button on the first function interface, the photographed subject can confirm shooting by clicking the shooting icon or shooting function key in the second function interface, whereby the second shooting control instruction is obtained and the electronic device finally completes the shooting.
Further, the method includes: recording the shooting preview content when the electronic device detects that the camera is started; when the second trigger operation is detected, ending the recording according to the second trigger operation to obtain an initial video; and acquiring, by the electronic device, the at least one second image from the initial video according to the first shooting control instruction and the second shooting control instruction.
Therefore, in this example, the electronic device can not only eliminate the degraded picture quality caused by shaking during the click operation, but also obtain reference pictures in the optimal time period, improving user satisfaction with the captured image.
Optionally, the generating, by the electronic device, a target image according to the multiple images includes: selecting, by the electronic device, from the multiple images at least one image whose sharpness and human-eye state satisfy a first preset condition, where the first preset condition constrains the sharpness of the image to be within a preset sharpness range and constrains the human eyes in the image to be open; if the at least one image is a single image, determining that the single image is the target image; if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each of the at least two images; detecting, by the electronic device, the hand-action state of each of the at least two images; selecting, by the electronic device, from the at least two images one or more images whose hand-action state satisfies a second preset condition, where the second preset condition constrains the user in the image not to be in the course of the continuous motion of the second trigger operation; if the one or more images are a single image, adjusting the single image according to the reference brightness to obtain the target image; and if the one or more images are M images, M being an integer greater than 1, generating the target image according to the M images and the reference brightness.
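The selection cascade above (sharpness and eye state, then reference brightness, then hand-action state) can be sketched as follows; all predicates and helper functions are placeholders supplied by the caller, since the application does not fix their implementations:

```python
def pick_target(images, sharp_and_eyes_open, not_mid_gesture,
                reference_brightness, apply_brightness, fuse):
    """Illustrative sketch of the optional selection cascade.

    All callables are caller-supplied placeholders: the first preset
    condition (`sharp_and_eyes_open`), the second preset condition
    (`not_mid_gesture`), and the brightness/fusion helpers.
    Assumes at least one image passes the first condition.
    """
    # First preset condition: sharpness in range and eyes open.
    stage1 = [im for im in images if sharp_and_eyes_open(im)]
    if len(stage1) == 1:
        return stage1[0]                    # single candidate: done
    ref = reference_brightness(stage1)      # reference brightness
    # Second preset condition: not mid-way through the trigger gesture.
    stage2 = [im for im in stage1 if not_mid_gesture(im)]
    if len(stage2) == 1:
        return apply_brightness(stage2[0], ref)
    return fuse(stage2, ref)                # M > 1 images: fuse them
```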
A specific implementation in which the electronic device selects, from the multiple images, at least one image whose sharpness and human-eye state satisfy the first preset condition may be as follows. The electronic device selects the images with eyes open from the multiple images. If there is only one image with eyes open, that image is determined to be the target image. If there are two or more images with eyes open, a sharpness value of each of them is obtained according to the S3 algorithm, which specifically includes: obtaining the slope of the local magnitude spectrum of the image and producing a first sharpness map (the S1 map) by evaluation based on this slope, where x is the slope of the local amplitude spectrum; producing a second sharpness map (the S2 map) by evaluation based on local total variation, where i and j are pixels of adjacent 2×2 patches in image x; combining the S1 and S2 maps to produce a third map (the S3 map); and finally using this map to generate a sharpness value for the original image, where a larger value indicates a sharper image, the k-th element after sorting the elements of the S3 map by size is used, and N is 1% of the total number of elements of the S3 map. The preset sharpness can be set as needed and is not particularly limited.
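The formulas of the S3 measure are not reproduced above. As a simpler, commonly used stand-in for scoring sharpness (explicitly not the S3 algorithm itself), the variance of the Laplacian can be computed; the sketch below uses a plain NumPy convolution for self-containedness:

```python
import numpy as np


def laplacian_variance(gray):
    """Variance-of-Laplacian sharpness score (a simple stand-in, NOT S3).

    `gray` is a 2-D float array of luminance values; a larger return
    value indicates a sharper image, matching the convention above.
    """
    # 4-neighbor Laplacian kernel.
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * k)
    return float(out.var())
```

A flat image scores 0, while textured (sharp) content scores higher; in practice an optimized convolution (e.g. from an image-processing library) would replace the explicit loops.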
Optionally, the determining, by the electronic device, the reference brightness of the target image according to the brightness of each of the at least two images includes: performing, by the electronic device, face recognition on each of the at least two images; determining, by the electronic device, the foreground region and the background region of each of the at least two images according to the face recognition results; acquiring, by the electronic device, the brightness of the foreground region of each of the at least two images and generating from these the average brightness of the foreground regions, which is the reference brightness of the foreground region in the target image; and acquiring, by the electronic device, the brightness of the background region of each of the at least two images and generating from these the average brightness of the background regions, which is the reference brightness of the background region in the target image.
The electronic device may determine the foreground region and the background region of each of the at least two images according to the face recognition results by taking the photographed subject as the foreground region and the region other than the photographed subject as the background region.
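A sketch of computing the two reference-brightness values from per-image foreground masks follows; the masks are assumed to come from face recognition, which is not implemented here, and the function name is illustrative:

```python
import numpy as np


def region_reference_brightness(images, masks):
    """Average per-region mean luminance over all candidate images.

    `images` are 2-D luminance arrays; `masks` are boolean arrays of the
    same shape where True marks the photographed subject (foreground).
    Returns (foreground_reference, background_reference).
    """
    fg = [float(img[m].mean()) for img, m in zip(images, masks)]
    bg = [float(img[~m].mean()) for img, m in zip(images, masks)]
    return sum(fg) / len(fg), sum(bg) / len(bg)
```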
Optionally, the generating, by the electronic device, the target image according to the M images and the reference brightness includes determining whether the expression states in the M images satisfy a third preset condition, where the third preset condition constrains the expression of the user in the image to be a smiling expression. If none of the M images satisfies the third preset condition, any one of the M images is selected as the reference image; if exactly one of the M images satisfies the third preset condition, that image is determined to be the reference image; and if N images among the M images satisfy the third preset condition, N being a positive integer greater than 1 and less than or equal to M, any one of the N images is selected as the reference image. In each case, the electronic device then acquires the edge portions that are missing from the reference image but present in the background regions of the other images among the M images; stitches these edge portions to the background region of the reference image; crops the stitched reference image according to the position of the photographed subject in it; and adjusts the cropped reference image according to the reference brightness to obtain the target image.
Here, the cropping of the stitched reference image according to the position of the shot object in the stitched reference image may be: taking the position of the shot object as the center, cropping the stitched reference image to a preset size.
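The centered crop described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the patent does not specify how a crop window that extends past the image border is handled, so this version clamps the window to the image bounds; the image is represented as a plain list of pixel rows.

```python
def center_crop(image, center, size):
    """Crop `image` (a list of rows) to `size` = (height, width) pixels,
    centered on `center` = (row, col) of the shot object.

    The crop window is clamped so it stays inside the image; the patent
    only says the shot object's position is taken as the center position.
    """
    h, w = len(image), len(image[0])
    ch, cw = size
    row, col = center
    # Clamp the top-left corner so the preset-size window fits the image.
    top = max(0, min(row - ch // 2, h - ch))
    left = max(0, min(col - cw // 2, w - cw))
    return [r[left:left + cw] for r in image[top:top + ch]]
```

For example, cropping a 6x6 image to 2x2 around position (3, 3) returns the 2x2 block whose top-left corner is at (2, 2).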
Optionally, the acquiring, by the electronic device, at least one image of the object to be photographed according to the first shooting control instruction and the second shooting control instruction includes: the electronic device acquires at least one first image according to the first shooting control instruction, where the at least one first image is an image corresponding to the first trigger operation; and the electronic device acquires at least one second image according to the second shooting control instruction, where the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is an instant trigger gesture drawn by the object to be shot within a preset time after the first trigger operation occurs.
The first trigger operation may be at least one of the following: a trigger operation on a preset function button, a sliding operation, or a multi-point touch operation. For the second trigger operation, refer to the related description of the trigger operation on the preset function button, the sliding operation, or the multi-point touch operation; details are not repeated here.
For example, after the shooting user completes the first trigger operation, if the electronic device detects that the gesture currently drawn by the user to be shot is the instant trigger gesture, the shooting preview content images from the time point X seconds before the first trigger operation to the time point Y seconds after the first trigger operation are acquired, where X may be any value from 2 to 5 and Y may be any value from 2 to 5; this is not particularly limited.
The instantaneous trigger gesture is a gesture capable of triggering generation of the second shooting control instruction. When the shooting object is a single person, the instantaneous trigger gesture may be a gesture drawn by the shot person; when the shooting object is multiple persons, the instantaneous trigger gesture may be a gesture drawn by a person at a specific position among the shot persons, where the person at the specific position may be the first person on the left, the first person on the right, or the person in the middle of the shooting preview content. The instantaneous trigger gesture may be an "ok" gesture, a "v" gesture, or any other gesture.
Referring to fig. 2E, fig. 2E is another schematic diagram of a shooting scene of an electronic device. As shown in fig. 2E, the electronic device 10 includes a first display screen 101 and a second display screen 102. After the camera is started, the electronic device 10 displays a first function interface on the first display screen 101 and a second function interface on the second display screen 102, and simultaneously records a video of the shooting preview content. The shooting user selects the "shooting" function on the first function interface and clicks the shooting icon, that is, the first shooting control instruction starts to be acquired; within the preset 5 s, the electronic device 10 detects that the shot object draws the gesture "v", and since the gesture "v" is the instant trigger gesture, the electronic device generates the second shooting control instruction and finally completes shooting.
as can be seen, in this example, the electronic device can confirm the instantaneous trigger gesture drawn by the object to be photographed after the photographing user confirms photographing, and finally complete photographing.
Optionally, the generating, by the electronic device, a target image according to the multiple images includes: the electronic device selects, from the multiple images, at least one image whose sharpness and hand action state meet a first preset condition, where the first preset condition is used for constraining the sharpness of the image to be within a preset sharpness range and constraining the hand action in the image to be the instantaneous trigger gesture; if the at least one image is a single image, determining that the single image is the target image; if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each of the at least two images; detecting the human eye state of each of the at least two images; selecting, from the at least two images, one or more images whose human eye state meets a second preset condition, where the second preset condition is used for constraining the human eyes in the image to be in an open state; if the one or more images are a single image, adjusting the single image according to the reference brightness to obtain the target image; and if the one or more images are M images and M is an integer greater than 1, generating the target image according to the M images and the reference brightness.
Optionally, the determining, by the electronic device, the reference brightness of the target image according to the brightness of each of the at least two images includes: the electronic equipment carries out face recognition on each image in the at least two images; the electronic equipment determines a foreground area and a background area of each image in the at least two images according to the face recognition result; the electronic equipment acquires the brightness of the foreground region of each image in the at least two images, and generates the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; the electronic equipment acquires the brightness of the background area of each of the at least two images, and generates the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
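The reference-brightness computation above can be sketched as follows, assuming the face-recognition segmentation has already been done so that each image is reduced to its foreground and background luminance values; the per-pixel representation is an assumption for illustration, not something the patent specifies.

```python
def reference_brightness(images):
    """Given a list of images, each represented as a pair
    (foreground_pixels, background_pixels) of luminance-value lists,
    return (fg_ref, bg_ref): the average across images of each image's
    mean foreground brightness and mean background brightness.

    This mirrors the text: the average foreground brightness of the at
    least two images is the reference brightness of the foreground region
    in the target image, and likewise for the background region.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    fg_ref = mean([mean(fg) for fg, _ in images])
    bg_ref = mean([mean(bg) for _, bg in images])
    return fg_ref, bg_ref
```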
Optionally, the generating, by the electronic device, the target image according to the M images and the reference brightness includes: the electronic device determines whether the expression states in the M images meet a third preset condition, where the third preset condition is used for constraining the expression of the user in the image to be a smiling face expression; if no image in the M images meets the third preset condition, selecting any one of the M images as a reference image, acquiring the edge portions that the reference image lacks from the background regions of the images other than the reference image in the M images, stitching the edge portions to the background region of the reference image, cropping the stitched reference image according to the position of the shot object in the stitched reference image, and adjusting the cropped reference image according to the reference brightness to obtain the target image; if exactly one image in the M images meets the third preset condition, determining that image as the reference image and performing the same stitching, cropping, and brightness adjustment to obtain the target image; and if N images in the M images meet the third preset condition, where N is a positive integer greater than 1 and less than or equal to M, selecting any one of the N images as the reference image and likewise performing the stitching, cropping, and brightness adjustment to obtain the target image.
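The three cases above (no smiling image, exactly one, or N smiling images) reduce to one selection rule, which can be sketched as follows. The `is_smiling` predicate is a hypothetical stand-in for the expression-state detection, which the patent does not specify.

```python
def select_reference(images, is_smiling):
    """Select a reference image from the M candidate images.

    Rule distilled from the text: prefer any image whose expression
    meets the third preset condition (a smiling face); if none exists,
    fall back to any of the M images. "Any one" is resolved here as
    the first candidate, for determinism.
    """
    smiling = [img for img in images if is_smiling(img)]
    candidates = smiling if smiling else images
    return candidates[0]
```

The selected reference image would then go through the stitching, cropping, and brightness-adjustment steps described in the text.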
Optionally, the acquiring, by the electronic device, at least one image of the object to be photographed according to the first shooting control instruction and the second shooting control instruction includes: the electronic device acquires at least one first image according to the first shooting control instruction, where the at least one first image is an image corresponding to the first trigger operation; and the electronic device acquires at least one second image according to the second shooting control instruction, where the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is a time-delay shooting gesture drawn by the object to be shot within a preset time after the first trigger operation occurs.
The time-delay shooting gesture refers to a gesture that triggers taking a picture after a preset delay. When the shooting object is a single person, the time-delay shooting gesture may be a gesture drawn by the shot person; when the shooting object is multiple persons, the time-delay shooting gesture may be a gesture drawn by a person at a specific position among the shot persons, where the person at the specific position may be the first person on the left, the first person on the right, or the person in the middle of the shooting preview content. The time-delay shooting gesture may be an "ok" gesture or a "v" gesture, may be a dynamic gesture such as drawing a "w", and may be any other gesture.
If the user to be shot draws the time-delay shooting gesture, the acquired image is an image of the user to be shot in a prepared pose after the time-delay shooting gesture is completed.
For example, after the shooting user completes the first trigger operation, if the electronic device detects that the gesture currently drawn by the user to be shot is the delay trigger gesture, the shooting preview content images from the time point X seconds before the first trigger operation to the time point Z seconds after the delay trigger gesture is completed are acquired, where X may be any value from 2 to 5 and Z may be any value from 5 to 10; this is not particularly limited.
Referring to fig. 2F, fig. 2F is a schematic view of another shooting scene of the electronic device. As shown in fig. 2F, the electronic device 10 includes a first display screen 101 and a second display screen 102. After the camera is started, the electronic device 10 displays the first function interface on the first display screen 101 and the second function interface on the second display screen 102, and meanwhile records a video of the shooting preview content. The shooting user selects the "shooting" function on the first function interface and clicks the shooting icon, that is, the first shooting control instruction starts to be acquired; the object to be shot draws the gesture "ok" within the preset time, and since the gesture "ok" is the preset delay shooting gesture, the second shooting control instruction is acquired and the electronic device finally completes shooting.
Optionally, the electronic device may obtain the first touch operation of the shooting user after detecting that the user to be shot draws the delay shooting gesture, and obtain the multiple images according to the delay shooting gesture drawn by the user to be shot and the first touch operation of the shooting user.
As can be seen, in this example, after the shooting user confirms shooting, the electronic device can confirm the time-delay shooting gesture drawn by the object to be shot and finally complete shooting.
Optionally, the generating, by the electronic device, a target image according to the multiple images includes: the electronic device selects, from the multiple images, at least one image whose sharpness and human eye state meet a first preset condition, where the first preset condition is used for constraining the sharpness of the image to be within a preset sharpness range and constraining the human eyes in the image to be in an open state; if the at least one image is a single image, determining that the single image is the target image; if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each of the at least two images; the electronic device detects the hand action state of each of the at least two images; the electronic device selects, from the at least two images, one or more images whose hand action state meets a second preset condition, where the second preset condition is used for constraining the user in the image not to be in the middle of performing the gesture action of the second trigger operation; if the one or more images are a single image, adjusting the single image according to the reference brightness to obtain the target image; and if the one or more images are M images and M is an integer greater than 1, generating the target image according to the M images and the reference brightness.
In a specific implementation, the electronic device selecting one or more of the at least two images whose hand action state satisfies the second preset condition means that the electronic device screens out, from the at least two images, the images captured during the period from when the user to be shot raises a hand to draw the time-delay shooting gesture until the user settles into a fixed pose after the gesture is completed.
Optionally, the determining, by the electronic device, the reference brightness of the target image according to the brightness of each of the at least two images includes: performing face recognition on each image in the at least two images; determining a foreground region and a background region of each image in the at least two images according to the face recognition result; acquiring the brightness of a foreground region of each of the at least two images, and generating the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; and acquiring the brightness of the background area of each of the at least two images, and generating the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
Optionally, the generating, by the electronic device, the target image according to the M images and the reference brightness includes: the electronic device determines whether the expression states in the M images meet a third preset condition, where the third preset condition is used for constraining the expression of the user in the image to be a smiling face expression; if no image in the M images meets the third preset condition, selecting any one of the M images as a reference image, acquiring the edge portions that the reference image lacks from the background regions of the images other than the reference image in the M images, stitching the edge portions to the background region of the reference image, cropping the stitched reference image according to the position of the shot object in the stitched reference image, and adjusting the cropped reference image according to the reference brightness to obtain the target image; if exactly one image in the M images meets the third preset condition, determining that image as the reference image and performing the same stitching, cropping, and brightness adjustment to obtain the target image; and if N images in the M images meet the third preset condition, where N is a positive integer greater than 1 and less than or equal to M, selecting any one of the N images as the reference image and likewise performing the stitching, cropping, and brightness adjustment to obtain the target image.
Optionally, the acquiring, by the electronic device, at least one image of the object to be photographed according to the first shooting control instruction and the second shooting control instruction includes: the electronic device acquires at least one first image according to the first shooting control instruction, where the at least one first image is an image corresponding to the first trigger operation; and the electronic device acquires at least one second image according to the second shooting control instruction, where the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is the object to be shot remaining still within a preset time after the first trigger operation occurs.
For example, after the shooting user completes the first trigger operation, if the electronic device detects that the position of the user to be shot does not change, the shooting preview content images from the time point X seconds before the first trigger operation to the time point W seconds after the first trigger operation are acquired, where X may be any value from 2 to 5 and W may be any positive value less than 6; this is not particularly limited.
For example, after the shooting user clicks the virtual button on the first display interface, the electronic device generates the first shooting control instruction and, at the same time, detects the state of the object to be shot, where the object to be shot may be a person or an animal. If the object to be shot is detected to be still within a preset time interval, the second shooting control instruction is generated. At this time, the electronic device may obtain at least one image corresponding to the first shooting control instruction and at least one image corresponding to the second shooting control instruction. It should be noted that the first image corresponding to the first shooting control instruction may be an image captured before the shooting user clicks the virtual button on the first display interface, or an image captured after the click; the first image may also be one image selected from multiple images, or, if it is generated from an image captured before the click together with an image captured after the click, the first image may be multiple images. The second image corresponding to the second shooting control instruction is an image of the shot object while it is still.
Referring to fig. 2G, fig. 2G is a schematic view of another shooting scene of the electronic device. As shown in fig. 2G, the electronic device 10 includes a first display screen 101 and a second display screen 102. After the camera is started, the electronic device 10 displays the first function interface on the first display screen 101 and the second function interface on the second display screen 102, and meanwhile records a video of the shooting preview content. The shooting user selects the "shooting" function on the first function interface and clicks the shooting icon, that is, the first shooting control instruction starts to be acquired; the electronic device detects that the shot object is in a static state within the preset time, that is, the second shooting control instruction is acquired, and shooting is finally completed.
As can be seen, in this example, the electronic device can confirm that the object to be photographed is still after the photographing user confirms photographing, and finally completes photographing.
Optionally, the generating, by the electronic device, a target image according to the multiple images includes: the electronic device selects, from the multiple images, at least one image whose sharpness and human eye state meet a first preset condition, where the first preset condition is used for constraining the sharpness of the image to be within a preset sharpness range and constraining the human eyes in the image to be in an open state; if the at least one image is a single image, determining that the single image is the target image; and if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each of the at least two images, and generating the target image according to the at least two images and the reference brightness.
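The candidate filtering under the first preset condition can be sketched as follows. The patent does not name a sharpness metric, so plain pixel-value variance is used here as a stand-in proxy, and `eyes_open` is a hypothetical per-image predicate standing in for the eye-state detection.

```python
def filter_candidates(images, sharpness_range, eyes_open):
    """Keep the images whose sharpness lies within `sharpness_range`
    (lo, hi) and whose human eyes are open, per the first preset
    condition. Each image is a 2D list of luminance values.

    Sharpness proxy: the variance of the pixel values (an assumption;
    a real pipeline might use e.g. the variance of the Laplacian).
    """
    lo, hi = sharpness_range

    def variance(img):
        pixels = [p for row in img for p in row]
        m = sum(pixels) / len(pixels)
        return sum((p - m) ** 2 for p in pixels) / len(pixels)

    return [img for img in images
            if lo <= variance(img) <= hi and eyes_open(img)]
```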
Optionally, the determining, by the electronic device, the reference brightness of the target image according to the brightness of each of the at least two images includes: the electronic equipment carries out face recognition on each image in the at least two images; the electronic equipment determines a foreground area and a background area of each image in the at least two images according to the face recognition result; the electronic equipment acquires the brightness of the foreground region of each image in the at least two images, and generates the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; the electronic equipment acquires the brightness of the background area of each of the at least two images, and generates the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
Optionally, the generating the target image according to the at least two images and the reference brightness includes: the electronic device determines whether the expression states in the at least two images meet a third preset condition, where the third preset condition is used for constraining the expression of the user in the image to be a smiling face expression; if no image in the at least two images meets the third preset condition, selecting any one of the at least two images as a reference image, acquiring the edge portions that the reference image lacks from the background regions of the images other than the reference image in the at least two images, stitching the edge portions to the background region of the reference image, cropping the stitched reference image according to the position of the shot object in the stitched reference image, and adjusting the cropped reference image according to the reference brightness to obtain the target image; if exactly one image in the at least two images meets the third preset condition, determining that image as the reference image and performing the same stitching, cropping, and brightness adjustment to obtain the target image; and if M images in the at least two images meet the third preset condition, where M is a positive integer greater than 1, selecting any one of the M images as the reference image and likewise performing the stitching, cropping, and brightness adjustment to obtain the target image.
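The final brightness-adjustment step can be sketched as follows. The patent only says the cropped reference image is "adjusted according to the reference brightness"; applying a separate mean-matching gain to the foreground and background regions is one plausible reading, assumed here for illustration.

```python
def adjust_brightness(image, mask, fg_ref, bg_ref):
    """Scale the foreground and background pixels of `image` (a 2D list
    of luminance values) so each region's mean brightness matches its
    reference value. `mask[r][c]` is True for foreground pixels, as
    produced by the face-recognition-based segmentation.
    """
    h, w = len(image), len(image[0])
    fg = [image[r][c] for r in range(h) for c in range(w) if mask[r][c]]
    bg = [image[r][c] for r in range(h) for c in range(w) if not mask[r][c]]
    # Per-region gain so the region mean lands on the reference brightness.
    fg_gain = fg_ref / (sum(fg) / len(fg)) if fg else 1.0
    bg_gain = bg_ref / (sum(bg) / len(bg)) if bg else 1.0
    return [[image[r][c] * (fg_gain if mask[r][c] else bg_gain)
             for c in range(w)] for r in range(h)]
```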
Therefore, in the example, the electronic device generates the target image through two control instructions, and the satisfaction degree of the user is improved.
It can be seen that, in this embodiment of the application, when the electronic device detects that the camera is started, first, a first function interface is displayed on a first display screen and a second function interface is displayed on a second display screen, where both function interfaces include the shooting preview content; then, a first shooting control instruction and a second shooting control instruction are acquired, the first shooting control instruction being an instruction generated by a first trigger operation on the first function interface and the second shooting control instruction being an instruction generated by a second trigger operation on the second function interface; and finally, a target image is generated according to the first shooting control instruction and the second shooting control instruction. Since the first display screen and the second display screen are arranged oppositely, the electronic device can display the shooting preview content on both screens so that the shooting user and the shot user can each view it, and shooting is performed only after the shooting control instructions of both users are integrated. This avoids shooting while one of the users is dissatisfied, which would reduce the shooting success rate and efficiency, and thus improves the shooting success rate and efficiency of the electronic device.
Referring to fig. 3, fig. 3 is a schematic flow chart of another shooting control method according to an embodiment of the present application, and as shown in fig. 3, the shooting control method includes:
S201, when the starting of a camera is detected, the electronic equipment displays a first function interface on a first display screen, and displays a second function interface on a second display screen, wherein the first function interface comprises shooting preview content, and the second function interface comprises the shooting preview content;
S202, the electronic equipment acquires a first shooting control instruction and a second shooting control instruction, wherein the first shooting control instruction is an instruction generated by a first trigger operation aiming at the first functional interface, and the second shooting control instruction is an instruction generated by a second trigger operation aiming at the second functional interface;
S203, the electronic equipment acquires at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation;
S204, the electronic equipment acquires at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is an instant trigger gesture which is drawn by the object to be shot in a preset time after the first trigger operation occurs;
S205, the electronic equipment generates a target image according to the at least one first image and the at least one second image.
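Steps S201 to S205 can be sketched end to end as follows. Since the patent describes behavior rather than an API, each step is injected as a callable; all names here are hypothetical.

```python
def shooting_flow(detect_camera_start, show_interfaces, get_first_instruction,
                  wait_instant_gesture, capture_first, capture_second, merge):
    """Sketch of the S201-S205 flow. Returns the target image, or None if
    the camera is not started or the instant trigger gesture is not drawn
    within the preset time after the first trigger operation."""
    if not detect_camera_start():              # S201: camera started?
        return None
    show_interfaces()                          # S201: both screens show the preview
    first = get_first_instruction()            # S202: first trigger operation
    second = wait_instant_gesture()            # S202: instant gesture within preset time
    if second is None:
        return None
    first_images = capture_first(first)        # S203: images for the first instruction
    second_images = capture_second(second)     # S204: images for the second instruction
    return merge(first_images, second_images)  # S205: generate the target image
```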
It can be seen that, in this embodiment of the application, when the electronic device detects that the camera is started, first, a first function interface is displayed on a first display screen and a second function interface is displayed on a second display screen, where both function interfaces include the shooting preview content; then, a first shooting control instruction and a second shooting control instruction are acquired, the first shooting control instruction being an instruction generated by a first trigger operation on the first function interface and the second shooting control instruction being an instruction generated by a second trigger operation on the second function interface; and finally, a target image is generated according to the first shooting control instruction and the second shooting control instruction. Since the first display screen and the second display screen are arranged oppositely, the electronic device can display the shooting preview content on both screens so that the shooting user and the shot user can each view it, and shooting is performed only after the shooting control instructions of both users are integrated. This avoids shooting while one of the users is dissatisfied, which would reduce the shooting success rate and efficiency, and thus improves the shooting success rate and efficiency of the electronic device.
In addition, the electronic device can confirm that the object to be shot draws the instant trigger gesture after the shooting user confirms shooting, and complete shooting according to the operations of the shooting user and the object to be shot, so that the shooting intelligence of the electronic device is improved.
Referring to fig. 4, in accordance with the embodiment shown in fig. 2A, fig. 4 is a schematic structural diagram of an electronic device 400 provided in an embodiment of the present application. As shown in fig. 4, the electronic device 400 includes an application processor 410, a memory 420, a communication interface 430, and one or more programs 421, where the one or more programs 421 are stored in the memory 420 and configured to be executed by the application processor 410, and the one or more programs 421 include instructions for performing the following steps:
when it is detected that the camera is started, displaying a first function interface on the first display screen, and displaying a second function interface on the second display screen, wherein the first function interface includes shooting preview content, and the second function interface includes the shooting preview content;
acquiring a first shooting control instruction and a second shooting control instruction, wherein the first shooting control instruction is an instruction generated by a first trigger operation on the first function interface, and the second shooting control instruction is an instruction generated by a second trigger operation on the second function interface;
and generating a target image according to the first shooting control instruction and the second shooting control instruction.
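The cooperative capture flow in the steps above can be sketched as follows. This is a hypothetical Python illustration, not the patent's implementation; the class and method names (`DualScreenShootingController`, `on_first_trigger`, `on_second_trigger`) are assumptions introduced here. The point it shows is that the target image is produced only once trigger operations have arrived from both function interfaces.

```python
# Illustrative sketch: capture proceeds only after both the photographing
# user (first display screen) and the photographed subject (second display
# screen) have issued their trigger operations. All names are assumptions.

class DualScreenShootingController:
    def __init__(self):
        self.first_confirmed = False   # trigger on the first function interface
        self.second_confirmed = False  # trigger on the second function interface
        self.captured_frames = []

    def on_first_trigger(self):
        self.first_confirmed = True
        return self._try_capture()

    def on_second_trigger(self):
        self.second_confirmed = True
        return self._try_capture()

    def _try_capture(self):
        # Shooting happens only when both users have confirmed, which is the
        # mechanism the text credits for avoiding shots one party dislikes.
        if self.first_confirmed and self.second_confirmed:
            self.captured_frames.append("frame")
            return True
        return False
```

A single trigger leaves the controller waiting; the second trigger, from either side, completes the capture.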
It can be seen that, in this embodiment of the application, when the electronic device detects that the camera is started, first, a first function interface is displayed on a first display screen, and a second function interface is displayed on a second display screen, where the first function interface includes shooting preview content, and the second function interface includes shooting preview content, then, a first shooting control instruction and a second shooting control instruction are obtained, the first shooting control instruction is an instruction generated by a first trigger operation for the first function interface, the second shooting control instruction is an instruction generated by a second trigger operation for the second function interface, and finally, a target image is generated according to the first shooting control instruction and the second shooting control instruction. Therefore, the electronic equipment can display the shooting preview content on different display screens, and the first display screen and the second display screen are arranged relatively, so that the shooting user can look up the shooting preview content respectively, the shooting control instructions of the two users can be integrated to realize shooting, the shooting success rate and the shooting efficiency are reduced by avoiding shooting under the condition that one user is dissatisfied, and the shooting success rate and the shooting efficiency of the electronic equipment are improved.
In one possible example, in the generating the target image according to the first shooting control instruction and the second shooting control instruction, the instructions of the one or more programs 421 are specifically configured to: acquire a plurality of images of the object to be photographed according to the first shooting control instruction and the second shooting control instruction; and generate the target image according to the plurality of images.
In one possible example, in the acquiring the plurality of images of the object to be photographed according to the first shooting control instruction and the second shooting control instruction, the instructions of the one or more programs 421 are specifically configured to: acquire at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation; and acquire at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation includes at least one of the following: a trigger operation on a preset function button, a sliding operation, and a multi-point touch operation.
In one possible example, in the generating the target image according to the plurality of images, the instructions of the one or more programs 421 are specifically configured to: select, from the plurality of images, at least one image whose definition and human eye state meet a first preset condition, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the human eyes in the image to be in an open state; if the at least one image is a single image, determine that the single image is the target image; if the at least one image is at least two images, determine the reference brightness of the target image according to the brightness of each of the at least two images; detect the hand action state of each of the at least two images; select, from the at least two images, one or more images whose hand action state meets a second preset condition, wherein the second preset condition is used for restricting the user in the image from being in the middle of performing the continuous action of the second trigger operation; if the one or more images are a single image, adjust the single image according to the reference brightness to obtain the target image; and if the one or more images are M images, M being an integer greater than 1, generate the target image according to the M images and the reference brightness.
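The selection logic above (keep images whose definition and human eye state meet the first preset condition, then branch on how many candidates remain) can be sketched as follows. The frame representation, threshold values, and function names are illustrative assumptions; real code would estimate sharpness and eye state from the pixels rather than read pre-computed flags.

```python
# Hedged sketch of the first-preset-condition filter: sharpness within a
# preset range and eyes open. Frames are modelled as plain dicts with
# pre-computed 'sharpness' and 'eyes_open' fields (an assumption for brevity).

def select_candidates(frames, min_sharp=0.5, max_sharp=1.0):
    """Keep frames whose sharpness is in range and whose eyes are open."""
    return [f for f in frames
            if min_sharp <= f["sharpness"] <= max_sharp and f["eyes_open"]]

def pick_target(frames):
    candidates = select_candidates(frames)
    if len(candidates) == 1:
        return candidates[0]   # a single qualifying image is the target directly
    # With two or more candidates, the method goes on to compute a reference
    # brightness and filter by hand-action state before fusing them.
    return candidates          # hand off to the multi-image branch
```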
In one possible example, in the determining the reference brightness of the target image according to the brightness of each of the at least two images, the instructions of the one or more programs 421 are specifically configured to: perform face recognition on each of the at least two images; determine a foreground region and a background region of each of the at least two images according to the face recognition result; acquire the brightness of the foreground region of each of the at least two images, and generate the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; and acquire the brightness of the background region of each of the at least two images, and generate the average brightness of the background regions of the at least two images according to the brightness of the background region of each image, wherein the average brightness of the background regions of the at least two images is the reference brightness of the background region in the target image.
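The reference-brightness computation above reduces to a per-region average. In the sketch below each image is modelled as a (foreground brightness, background brightness) pair, an assumption made purely for illustration; in practice the two regions would be obtained from face recognition, as the text states, and the brightness of each region measured from its pixels.

```python
# Hedged sketch: the reference brightness of the target image is the mean of
# the per-image foreground brightnesses and the mean of the per-image
# background brightnesses. Region extraction is abstracted away.

def reference_brightness(images):
    """images: list of (foreground_brightness, background_brightness) tuples."""
    n = len(images)
    fg_ref = sum(fg for fg, _ in images) / n   # reference for foreground region
    bg_ref = sum(bg for _, bg in images) / n   # reference for background region
    return fg_ref, bg_ref
```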
In one possible example, in the generating the target image according to the M images and the reference brightness, the instructions of the one or more programs 421 are specifically configured to: determine whether the expression states in the M images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the image to be a smiling expression; if none of the M images meets the third preset condition, select any one of the M images as a reference image; if exactly one of the M images meets the third preset condition, determine that image as the reference image; if N of the M images meet the third preset condition, N being a positive integer greater than 1 and less than or equal to M, select any one of those N images as the reference image; then acquire, from the background regions of the images among the M images other than the reference image, the edge portions that the reference image lacks; stitch the edge portions to the background region of the reference image; crop the stitched reference image according to the position of the photographed object in the stitched reference image; and adjust the cropped reference image according to the reference brightness to obtain the target image.
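The branches above share the same tail (stitch missing background edges, crop around the photographed object, adjust to the reference brightness) and differ only in how the reference image is picked, which collapses to one rule: prefer an image meeting the smiling-expression condition, otherwise take any candidate. The sketch below is an interpretive illustration with assumed data structures (`smiling` flags, a dict-based image), not the patent's actual implementation; the stitch/crop step is a placeholder.

```python
# Hedged sketch of reference-image selection under the third preset condition
# (smiling expression), followed by the shared finishing steps.

def choose_reference(images):
    """images: list of dicts with a boolean 'smiling' flag."""
    smiling = [img for img in images if img["smiling"]]
    # N >= 1 smiling images: any one of them serves as the reference;
    # N == 0: any image at all serves as the reference.
    pool = smiling if smiling else images
    return pool[0]

def finish_target(reference, fg_ref, bg_ref):
    # Placeholder for: stitching the missing background edge portions from the
    # other images, cropping around the photographed object, then adjusting the
    # result to the reference brightness per region.
    target = dict(reference)
    target["fg_brightness"] = fg_ref
    target["bg_brightness"] = bg_ref
    return target
```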
In one possible example, in the acquiring the plurality of images of the object to be photographed according to the first shooting control instruction and the second shooting control instruction, the instructions of the one or more programs 421 are specifically configured to: acquire at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation; and acquire at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is an instant trigger gesture made by the object to be photographed within a preset time after the first trigger operation occurs.
In one possible example, in the generating the target image according to the plurality of images, the instructions of the one or more programs 421 are specifically configured to: select, from the plurality of images, at least one image whose definition and hand action state meet a first preset condition, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the hand action in the image to be the instant trigger gesture; if the at least one image is a single image, determine that the single image is the target image; if the at least one image is at least two images, determine the reference brightness of the target image according to the brightness of each of the at least two images; detect the human eye state of each of the at least two images; select, from the at least two images, one or more images whose human eye state meets a second preset condition, wherein the second preset condition is used for restricting the human eyes in the image to be in an open state; if the one or more images are a single image, adjust the single image according to the reference brightness to obtain the target image; and if the one or more images are M images, M being an integer greater than 1, generate the target image according to the M images and the reference brightness.
In one possible example, in the determining the reference brightness of the target image according to the brightness of each of the at least two images, the instructions of the one or more programs 421 are specifically configured to: perform face recognition on each of the at least two images; determine a foreground region and a background region of each of the at least two images according to the face recognition result; acquire the brightness of the foreground region of each of the at least two images, and generate the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; and acquire the brightness of the background region of each of the at least two images, and generate the average brightness of the background regions of the at least two images according to the brightness of the background region of each image, wherein the average brightness of the background regions of the at least two images is the reference brightness of the background region in the target image.
In one possible example, in the generating the target image according to the M images and the reference brightness, the instructions of the one or more programs 421 are specifically configured to: determine whether the expression states in the M images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the image to be a smiling expression; if none of the M images meets the third preset condition, select any one of the M images as a reference image; if exactly one of the M images meets the third preset condition, determine that image as the reference image; if N of the M images meet the third preset condition, N being a positive integer greater than 1 and less than or equal to M, select any one of those N images as the reference image; then acquire, from the background regions of the images among the M images other than the reference image, the edge portions that the reference image lacks; stitch the edge portions to the background region of the reference image; crop the stitched reference image according to the position of the photographed object in the stitched reference image; and adjust the cropped reference image according to the reference brightness to obtain the target image.
In one possible example, in the acquiring the plurality of images of the object to be photographed according to the first shooting control instruction and the second shooting control instruction, the instructions of the one or more programs 421 are specifically configured to: acquire at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation; and acquire at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is a delayed shooting gesture made by the object to be photographed within a preset time after the first trigger operation occurs.
In one possible example, in the generating the target image according to the plurality of images, the instructions of the one or more programs 421 are specifically configured to: select, from the plurality of images, at least one image whose definition and human eye state meet a first preset condition, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the human eyes in the image to be in an open state; if the at least one image is a single image, determine that the single image is the target image; if the at least one image is at least two images, determine the reference brightness of the target image according to the brightness of each of the at least two images; detect the hand action state of each of the at least two images; select, from the at least two images, one or more images whose hand action state meets a second preset condition, wherein the second preset condition is used for restricting the user in the image from being in the middle of performing the continuous action of the second trigger operation; if the one or more images are a single image, adjust the single image according to the reference brightness to obtain the target image; and if the one or more images are M images, M being an integer greater than 1, generate the target image according to the M images and the reference brightness.
In one possible example, in the determining the reference brightness of the target image according to the brightness of each of the at least two images, the instructions of the one or more programs 421 are specifically configured to: perform face recognition on each of the at least two images; determine a foreground region and a background region of each of the at least two images according to the face recognition result; acquire the brightness of the foreground region of each of the at least two images, and generate the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; and acquire the brightness of the background region of each of the at least two images, and generate the average brightness of the background regions of the at least two images according to the brightness of the background region of each image, wherein the average brightness of the background regions of the at least two images is the reference brightness of the background region in the target image.
In one possible example, in the generating the target image according to the M images and the reference brightness, the instructions of the one or more programs 421 are specifically configured to: determine whether the expression states in the M images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the image to be a smiling expression; if none of the M images meets the third preset condition, select any one of the M images as a reference image; if exactly one of the M images meets the third preset condition, determine that image as the reference image; if N of the M images meet the third preset condition, N being a positive integer greater than 1 and less than or equal to M, select any one of those N images as the reference image; then acquire, from the background regions of the images among the M images other than the reference image, the edge portions that the reference image lacks; stitch the edge portions to the background region of the reference image; crop the stitched reference image according to the position of the photographed object in the stitched reference image; and adjust the cropped reference image according to the reference brightness to obtain the target image.
In one possible example, in the acquiring the plurality of images of the object to be photographed according to the first shooting control instruction and the second shooting control instruction, the instructions of the one or more programs 421 are specifically configured to: acquire at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation; and acquire at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is the object to be photographed remaining still within a preset time after the first trigger operation occurs.
In one possible example, in the generating the target image according to the plurality of images, the instructions of the one or more programs 421 are specifically configured to: select, from the plurality of images, at least one image whose definition and human eye state meet a first preset condition, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the human eyes in the image to be in an open state; if the at least one image is a single image, determine that the single image is the target image; and if the at least one image is at least two images, determine the reference brightness of the target image according to the brightness of each of the at least two images, and generate the target image according to the at least two images and the reference brightness.
In one possible example, in the determining the reference brightness of the target image according to the brightness of each of the at least two images, the instructions of the one or more programs 421 are specifically configured to: perform face recognition on each of the at least two images; determine a foreground region and a background region of each of the at least two images according to the face recognition result; acquire the brightness of the foreground region of each of the at least two images, and generate the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; and acquire the brightness of the background region of each of the at least two images, and generate the average brightness of the background regions of the at least two images according to the brightness of the background region of each image, wherein the average brightness of the background regions of the at least two images is the reference brightness of the background region in the target image.
In one possible example, in the generating the target image according to the at least two images and the reference brightness, the instructions of the one or more programs 421 are specifically configured to: determine whether the expression states of the at least two images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the image to be a smiling expression; if none of the at least two images meets the third preset condition, select any one of the at least two images as a reference image; if exactly one of the at least two images meets the third preset condition, determine that image as the reference image; if M of the at least two images meet the third preset condition, M being a positive integer greater than 1, select any one of those M images as the reference image; then acquire, from the background regions of the images among the at least two images other than the reference image, the edge portions that the reference image lacks; stitch the edge portions to the background region of the reference image; crop the stitched reference image according to the position of the photographed object in the stitched reference image; and adjust the cropped reference image according to the reference brightness to obtain the target image.
It should be noted that portions of the above description overlap; such portions may be referred to one another, and repeated description is omitted.
Referring to fig. 5, in accordance with the embodiment shown in fig. 2A, fig. 5 is a block diagram of functional units of a shooting control apparatus provided in an embodiment of the present application, where the shooting control apparatus 500 is applied to an electronic device. As shown in fig. 5, the shooting control apparatus 500 includes a processing unit 501 and a communication unit 502, wherein
The processing unit 501 is configured to: transmit a start signal through the communication unit 502 when it is detected that the camera is started, display a first function interface on the first display screen, and display a second function interface on the second display screen, wherein the first function interface includes shooting preview content, and the second function interface includes the shooting preview content; acquire a first shooting control instruction and a second shooting control instruction, wherein the first shooting control instruction is generated according to a first trigger operation of the photographed object, and the second shooting control instruction is generated according to a second trigger operation of the photographed object; and generate a target image according to the first shooting control instruction and the second shooting control instruction.
The apparatus 500 may further include a storage unit 503 for storing program code and data of the electronic device. The processing unit 501 may be a processor, the communication unit 502 may be an internal communication interface, and the storage unit 503 may be a memory.
It can be seen that, in this embodiment of the application, when the electronic device detects that the camera is started, it first displays a first function interface on the first display screen and a second function interface on the second display screen, wherein both function interfaces include the shooting preview content; it then acquires a first shooting control instruction and a second shooting control instruction, the first shooting control instruction being generated by a first trigger operation on the first function interface and the second shooting control instruction being generated by a second trigger operation on the second function interface; and it finally generates a target image according to the first shooting control instruction and the second shooting control instruction. Because the electronic device displays the shooting preview content on two display screens arranged on opposite sides, the photographing user and the photographed subject can each view the shooting preview content, and the shooting control instructions of the two users are integrated before the shot is taken. This avoids shooting while one user is dissatisfied, a situation that would lower the success rate and efficiency of photographing, and thereby improves the photographing success rate and efficiency of the electronic device.
In one possible example, in the generating the target image according to the first shooting control instruction and the second shooting control instruction, the processing unit 501 is specifically configured to: acquire a plurality of images of the object to be photographed according to the first shooting control instruction and the second shooting control instruction; and generate the target image according to the plurality of images.
In one possible example, in the acquiring the plurality of images of the object to be photographed according to the first shooting control instruction and the second shooting control instruction, the processing unit 501 is specifically configured to: acquire at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation; and acquire at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation includes at least one of the following: a trigger operation on a preset function button, a sliding operation, and a multi-point touch operation.
In one possible example, in the generating the target image according to the plurality of images, the processing unit 501 is specifically configured to: select, from the plurality of images, at least one image whose definition and human eye state meet a first preset condition, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the human eyes in the image to be in an open state; if the at least one image is a single image, determine that the single image is the target image; if the at least one image is at least two images, determine the reference brightness of the target image according to the brightness of each of the at least two images; detect the hand action state of each of the at least two images; select, from the at least two images, one or more images whose hand action state meets a second preset condition, wherein the second preset condition is used for restricting the user in the image from being in the middle of performing the continuous action of the second trigger operation; if the one or more images are a single image, adjust the single image according to the reference brightness to obtain the target image; and if the one or more images are M images, M being an integer greater than 1, generate the target image according to the M images and the reference brightness.
In one possible example, in the determining the reference brightness of the target image according to the brightness of each of the at least two images, the processing unit 501 is specifically configured to: perform face recognition on each of the at least two images; determine a foreground region and a background region of each of the at least two images according to the face recognition result; acquire the brightness of the foreground region of each of the at least two images, and generate the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; and acquire the brightness of the background region of each of the at least two images, and generate the average brightness of the background regions of the at least two images according to the brightness of the background region of each image, wherein the average brightness of the background regions of the at least two images is the reference brightness of the background region in the target image.
In one possible example, in the aspect of generating the target image according to the M images and the reference brightness, the processing unit 501 is specifically configured to: determining whether the expression states in the M images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the image to be a smiling face expression; if no image in the M images meets the third preset condition, selecting any one of the M images as a reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image; if exactly one image in the M images meets the third preset condition, determining that image as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image; and if N images in the M images meet the third preset condition, where N is a positive integer greater than 1 and less than or equal to M, selecting any one of the N images meeting the third preset condition as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image.
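The three branches above differ only in how the reference image is chosen; the subsequent stitching, cropping, and brightness-adjustment steps are identical. A simplified sketch, in which the smile detector is reduced to a hypothetical boolean flag and the brightness adjustment to a simple gain, with the stitching and cropping steps omitted:

```python
# Simplified sketch of reference-image selection and brightness
# adjustment. The "smiling" flag stands in for a real expression
# detector; stitching of missing background edges and cropping around
# the photographed object are omitted here.

def choose_reference(images):
    """Prefer an image whose expression satisfies the smile condition;
    otherwise fall back to any candidate (here: the first). This covers
    all three branches: zero, one, or N smiling images."""
    smiling = [im for im in images if im["smiling"]]
    return smiling[0] if smiling else images[0]

def adjust_brightness(pixels, current, reference):
    """Scale pixel luminances so the image mean moves to the reference
    brightness, clamped to the 8-bit range."""
    gain = reference / current
    return [min(255, p * gain) for p in pixels]
```

In a real pipeline the gain would typically be applied per region (foreground and background separately), matching the per-region reference brightness described above.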
In a possible example, in the aspect of acquiring at least one image of the object to be photographed according to the first photographing control instruction and the second photographing control instruction, the processing unit 501 is specifically configured to: acquiring at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation; and acquiring at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is an instantaneous trigger gesture made by the object to be photographed within a preset time after the first trigger operation occurs.
In one possible example, in the aspect of generating the target image according to the multiple images, the processing unit 501 is specifically configured to: selecting at least one image of the plurality of images of which the definition and the hand action state meet a first preset condition, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the hand action in the image to be the instantaneous trigger gesture; if the at least one image is a single image, determining that the single image is the target image; if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each image in the at least two images; detecting the human eye state of each image in the at least two images; selecting one or more images of which the human eye states meet a second preset condition from the at least two images, wherein the second preset condition is used for restricting the human eyes in the image to be in an open state; if the one or more images are a single image, adjusting the single image according to the reference brightness to obtain the target image; and if the one or more images are M images and M is an integer greater than 1, generating the target image according to the M images and the reference brightness.
In a possible example, in the aspect that the reference brightness of the target image is determined according to the brightness of each of the at least two images, the processing unit 501 is specifically configured to: performing face recognition on each image in the at least two images; determining a foreground region and a background region of each image in the at least two images according to the face recognition result; acquiring the brightness of a foreground region of each of the at least two images, and generating the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; and acquiring the brightness of the background area of each of the at least two images, and generating the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
In one possible example, in the aspect of generating the target image according to the M images and the reference brightness, the processing unit 501 is specifically configured to: determining whether the expression states in the M images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the image to be a smiling face expression; if no image in the M images meets the third preset condition, selecting any one of the M images as a reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image; if exactly one image in the M images meets the third preset condition, determining that image as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image; and if N images in the M images meet the third preset condition, where N is a positive integer greater than 1 and less than or equal to M, selecting any one of the N images meeting the third preset condition as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image.
In a possible example, in the aspect of acquiring at least one image of the object to be photographed according to the first photographing control instruction and the second photographing control instruction, the processing unit 501 is specifically configured to: acquiring at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation; and acquiring at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is a delayed-shooting gesture made by the object to be photographed within a preset time after the first trigger operation occurs.
In one possible example, in the aspect of generating the target image according to the multiple images, the processing unit 501 is specifically configured to: selecting at least one image of which the definition and the human eye state meet a first preset condition from the plurality of images, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the human eyes in the image to be in an open state; if the at least one image is a single image, determining that the single image is the target image; if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each image in the at least two images; detecting the hand action state of each image in the at least two images; selecting one or more images of which the hand action states meet a second preset condition from the at least two images, wherein the second preset condition is used for restricting the user in the image not to be in the middle of the continuous motion of the second trigger operation; if the one or more images are a single image, adjusting the single image according to the reference brightness to obtain the target image; and if the one or more images are M images and M is an integer greater than 1, generating the target image according to the M images and the reference brightness.
In a possible example, in the aspect that the reference brightness of the target image is determined according to the brightness of each of the at least two images, the processing unit 501 is specifically configured to: performing face recognition on each image in the at least two images; determining a foreground region and a background region of each image in the at least two images according to the face recognition result; acquiring the brightness of a foreground region of each of the at least two images, and generating the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; and acquiring the brightness of the background area of each of the at least two images, and generating the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
In one possible example, in the aspect of generating the target image according to the M images and the reference brightness, the processing unit 501 is specifically configured to: determining whether the expression states in the M images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the image to be a smiling face expression; if no image in the M images meets the third preset condition, selecting any one of the M images as a reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image; if exactly one image in the M images meets the third preset condition, determining that image as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image; and if N images in the M images meet the third preset condition, where N is a positive integer greater than 1 and less than or equal to M, selecting any one of the N images meeting the third preset condition as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image.
In a possible example, in the aspect of acquiring at least one image of the object to be photographed according to the first photographing control instruction and the second photographing control instruction, the processing unit 501 is specifically configured to: acquiring at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation; and acquiring at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is the object to be photographed remaining still within a preset time after the first trigger operation occurs.
In one possible example, in the aspect of generating the target image according to the multiple images, the processing unit 501 is specifically configured to: selecting at least one image of which the definition and the human eye state meet a first preset condition from the plurality of images, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the human eyes in the image to be in an open state; if the at least one image is a single image, determining that the single image is a target image; and if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each image of the at least two images, and generating the target image according to the at least two images and the reference brightness.
In a possible example, in the aspect that the reference brightness of the target image is determined according to the brightness of each of the at least two images, the processing unit 501 is specifically configured to: performing face recognition on each image in the at least two images; determining a foreground region and a background region of each image in the at least two images according to the face recognition result; acquiring the brightness of a foreground region of each of the at least two images, and generating the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image; and acquiring the brightness of the background area of each of the at least two images, and generating the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
In one possible example, in the aspect of generating the target image according to the at least two images and the reference brightness, the processing unit 501 is specifically configured to: determining whether the expression states in the at least two images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the image to be a smiling face expression; if no image in the at least two images meets the third preset condition, selecting any one of the at least two images as a reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the at least two images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image; if exactly one image in the at least two images meets the third preset condition, determining that image as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the at least two images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image; and if M images in the at least two images meet the third preset condition, where M is a positive integer greater than 1, selecting any one of the M images meeting the third preset condition as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the at least two images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image.
It can be understood that, since the method embodiments and the apparatus embodiments are different presentations of the same technical concept, the content of the method embodiments in the present application applies correspondingly to the apparatus embodiments, and is not repeated here.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any method described in the method embodiments; the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods recited in the method embodiments. The computer program product may be a software installation package, and the computer includes an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and elements referred to are not necessarily required in this application.
In the embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the division of the units is only one type of logical-function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include: a flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application illustrates the principles and implementations of the present application; the above description of the embodiments is provided only to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may also include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal), and so on.

Claims (21)

1. A shooting control method, applied to electronic equipment, wherein the electronic equipment comprises a first display screen and a second display screen which are arranged oppositely, and the method comprises the following steps:
When the starting of a camera is detected, displaying a first function interface on the first display screen, and displaying a second function interface on the second display screen, wherein the first function interface comprises shooting preview content, and the second function interface comprises the shooting preview content;
Acquiring a first shooting control instruction and a second shooting control instruction, wherein the first shooting control instruction is an instruction generated by a first trigger operation for the first functional interface, and the second shooting control instruction is an instruction generated by a second trigger operation for the second functional interface;
And generating a target image according to the first shooting control instruction and the second shooting control instruction.
2. The method of claim 1, wherein the generating a target image according to the first and second photographing control instructions comprises:
Acquiring a plurality of images of an object to be shot according to the first shooting control instruction and the second shooting control instruction;
And generating a target image according to the plurality of images.
3. The method according to claim 2, wherein the acquiring at least one image of an object to be photographed according to the first photographing control instruction and the second photographing control instruction comprises:
Acquiring at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation;
Acquiring at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation comprises at least one of the following operations: a trigger operation for a preset function button, a sliding operation, and a multi-point touch operation.
4. The method of claim 3, wherein generating the target image from the plurality of images comprises:
Selecting at least one image of which the definition and the human eye state meet a first preset condition from the plurality of images, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the human eyes in the image to be in an open state;
If the at least one image is a single image, determining that the single image is a target image;
If the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each image in the at least two images; detecting the hand action state of each image in the at least two images; selecting one or more images of which the hand action states meet a second preset condition from the at least two images, wherein the second preset condition is used for restricting the user in the image not to be in the middle of the continuous motion of the second trigger operation; if the one or more images are a single image, adjusting the single image according to the reference brightness to obtain the target image; and if the one or more images are M images and M is an integer greater than 1, generating the target image according to the M images and the reference brightness.
5. The method of claim 4, wherein determining the reference brightness of the target image according to the brightness of each of the at least two images comprises:
Performing face recognition on each image in the at least two images;
Determining a foreground region and a background region of each image in the at least two images according to the face recognition result;
Acquiring the brightness of a foreground region of each of the at least two images, and generating the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image;
And acquiring the brightness of the background area of each of the at least two images, and generating the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
6. The method of claim 5, wherein generating the target image from the M images and the reference luminance comprises:
Determining whether the expression states in the M images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the image to be a smiling face expression;
If no image in the M images meets the third preset condition, selecting any one of the M images as a reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image;
If exactly one image in the M images meets the third preset condition, determining that image as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image;
If N images in the M images meet the third preset condition, where N is a positive integer greater than 1 and less than or equal to M, selecting any one of the N images meeting the third preset condition as the reference image; acquiring the edge portions, lacking in the reference image, of the background areas of the images in the M images other than the reference image; stitching the edge portions to the background area of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image.
7. The method according to claim 2, wherein the acquiring at least one image of an object to be photographed according to the first photographing control instruction and the second photographing control instruction comprises:
acquiring at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation;
and acquiring at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is an instantaneous trigger gesture made by the object to be photographed within a preset time after the first trigger operation occurs.
8. The method of claim 7, wherein generating the target image from the plurality of images comprises:
selecting at least one image of the plurality of images whose definition and hand action state meet a first preset condition, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the hand action in the image to be the instantaneous trigger gesture;
if the at least one image is a single image, determining that the single image is the target image;
if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each image in the at least two images; detecting the human eye state of each image in the at least two images; selecting one or more images of which the human eye states meet a second preset condition from the at least two images, wherein the second preset condition is used for restricting the human eyes in the images to be in an open state; if the one or more images are a single image, adjusting the single image according to the reference brightness to obtain the target image; and if the one or more images are M images and M is an integer greater than 1, generating the target image according to the M images and the reference brightness.
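Claim 8's two-stage selection (sharpness/gesture filter, then an eye-state filter anchored on a reference brightness) can be sketched as follows; the dict layout, predicate names, and the `adjust`/`merge` callbacks are assumptions made for illustration only:

```python
def pick_target(images, first_condition, eyes_open, adjust, merge):
    """Sketch of the claim-8 flow: filter, derive a reference
    brightness, filter again, then adjust or merge."""
    # stage 1: keep images meeting the first preset condition
    stage1 = [im for im in images if first_condition(im)]
    if len(stage1) == 1:
        return stage1[0]                 # single survivor is the target
    # reference brightness from the remaining candidates
    ref = sum(im["brightness"] for im in stage1) / len(stage1)
    # stage 2: keep images meeting the second preset condition (eyes open)
    stage2 = [im for im in stage1 if eyes_open(im)]
    if len(stage2) == 1:
        return adjust(stage2[0], ref)    # single image: brightness-adjust it
    return merge(stage2, ref)            # M > 1 images: composite per the M-image branch
```

The same skeleton fits claims 12 and 16; only the predicates plugged into the two stages change (hand-action state, eye state, and so on).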
9. The method of claim 8, wherein determining the reference brightness of the target image according to the brightness of each of the at least two images comprises:
performing face recognition on each image in the at least two images;
determining a foreground region and a background region of each image in the at least two images according to the face recognition result;
acquiring the brightness of a foreground region of each of the at least two images, and generating the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image;
and acquiring the brightness of the background area of each of the at least two images, and generating the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
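In other words, the reference brightness is just a per-region mean over the candidate images. A small sketch, assuming each image has already been split by the face-recognition step into foreground/background brightness values (the field names are illustrative):

```python
def reference_brightness(images):
    """Return (foreground_ref, background_ref): the average
    foreground and background brightness over `images`."""
    n = len(images)
    fg_ref = sum(im["fg_brightness"] for im in images) / n
    bg_ref = sum(im["bg_brightness"] for im in images) / n
    return fg_ref, bg_ref
```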
10. The method of claim 9, wherein generating the target image from the M images and the reference brightness comprises:
determining whether the expression states in the M images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the images to be a smiling face expression;
if no image meeting the third preset condition exists in the M images, selecting any one of the M images as a reference image; acquiring edge portions that are missing from the reference image from the background regions of the images other than the reference image among the M images; stitching the edge portions to the background region of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image;
if exactly one image meeting the third preset condition exists in the M images, determining the image meeting the third preset condition as the reference image; acquiring edge portions that are missing from the reference image from the background regions of the images other than the reference image among the M images; stitching the edge portions to the background region of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image;
if N images meeting the third preset condition exist in the M images, N being a positive integer greater than 1 and less than or equal to M, selecting any one of the N images meeting the third preset condition as the reference image; acquiring edge portions that are missing from the reference image from the background regions of the images other than the reference image among the M images; stitching the edge portions to the background region of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image.
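The stitch/crop/adjust tail shared by all three branches can be pictured with a toy NumPy example: a grayscale reference image missing some rightmost background columns borrows them from another image, is cropped (trivially here), and is scaled toward the reference brightness. The one-sided "edge = trailing columns" simplification and all names are illustrative, not the patent's actual method:

```python
import numpy as np

def stitch_crop_adjust(ref, donor, ref_brightness):
    """Stitch missing edge columns from `donor` onto `ref`,
    crop (kept trivial here), and scale the mean brightness
    toward `ref_brightness`."""
    missing = donor.shape[1] - ref.shape[1]
    if missing > 0:
        edge = donor[:, -missing:]                  # edge portion absent from ref
        ref = np.concatenate([ref, edge], axis=1)   # stitch onto the background
    cropped = ref  # real code would crop around the photographed object
    scale = ref_brightness / max(cropped.mean(), 1e-6)
    return np.clip(cropped * scale, 0.0, 255.0)
```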
11. The method according to claim 2, wherein the acquiring at least one image of an object to be photographed according to the first photographing control instruction and the second photographing control instruction comprises:
acquiring at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation;
and acquiring at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is a delayed shooting gesture made by the object to be photographed within a preset time after the first trigger operation occurs.
12. The method of claim 11, wherein generating the target image from the plurality of images comprises:
selecting at least one image of the plurality of images whose definition and human eye state meet a first preset condition, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the human eyes in the image to be in an open state;
if the at least one image is a single image, determining that the single image is the target image;
if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each image in the at least two images; detecting the hand action state of each image in the at least two images; selecting one or more images of which the hand action states meet a second preset condition from the at least two images, wherein the second preset condition is used for restricting the user in the images not to be in the middle of the continuous motion of the second trigger operation; if the one or more images are a single image, adjusting the single image according to the reference brightness to obtain the target image; and if the one or more images are M images and M is an integer greater than 1, generating the target image according to the M images and the reference brightness.
13. The method of claim 12, wherein determining the reference brightness of the target image according to the brightness of each of the at least two images comprises:
performing face recognition on each image in the at least two images;
determining a foreground region and a background region of each image in the at least two images according to the face recognition result;
acquiring the brightness of a foreground region of each of the at least two images, and generating the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image;
and acquiring the brightness of the background area of each of the at least two images, and generating the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
14. The method of claim 13, wherein generating the target image from the M images and the reference brightness comprises:
determining whether the expression states in the M images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the images to be a smiling face expression;
if no image meeting the third preset condition exists in the M images, selecting any one of the M images as a reference image; acquiring edge portions that are missing from the reference image from the background regions of the images other than the reference image among the M images; stitching the edge portions to the background region of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image;
if exactly one image meeting the third preset condition exists in the M images, determining the image meeting the third preset condition as the reference image; acquiring edge portions that are missing from the reference image from the background regions of the images other than the reference image among the M images; stitching the edge portions to the background region of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image;
if N images meeting the third preset condition exist in the M images, N being a positive integer greater than 1 and less than or equal to M, selecting any one of the N images meeting the third preset condition as the reference image; acquiring edge portions that are missing from the reference image from the background regions of the images other than the reference image among the M images; stitching the edge portions to the background region of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image.
15. The method according to claim 2, wherein the acquiring at least one image of an object to be photographed according to the first photographing control instruction and the second photographing control instruction comprises:
acquiring at least one first image according to the first shooting control instruction, wherein the at least one first image is an image corresponding to the first trigger operation;
and acquiring at least one second image according to the second shooting control instruction, wherein the at least one second image is an image corresponding to the second trigger operation, and the second trigger operation is the object to be photographed remaining still within a preset time after the first trigger operation occurs.
16. The method of claim 15, wherein generating the target image from the plurality of images comprises:
selecting at least one image of which the definition and the human eye state meet a first preset condition from the plurality of images, wherein the first preset condition is used for restricting the definition of the image to be within a preset definition range and restricting the human eyes in the image to be in an open state;
if the at least one image is a single image, determining that the single image is the target image;
and if the at least one image is at least two images, determining the reference brightness of the target image according to the brightness of each image of the at least two images, and generating the target image according to the at least two images and the reference brightness.
17. The method of claim 16, wherein determining the reference brightness of the target image according to the brightness of each of the at least two images comprises:
performing face recognition on each image in the at least two images;
determining a foreground region and a background region of each image in the at least two images according to the face recognition result;
acquiring the brightness of a foreground region of each of the at least two images, and generating the average brightness of the foreground regions of the at least two images according to the brightness of the foreground region of each image, wherein the average brightness of the foreground regions of the at least two images is the reference brightness of the foreground region in the target image;
and acquiring the brightness of the background area of each of the at least two images, and generating the average brightness of the background areas of the at least two images according to the brightness of the background area of each image, wherein the average brightness of the background areas of the at least two images is the reference brightness of the background area in the target image.
18. The method of claim 17, wherein generating the target image from the at least two images and the reference brightness comprises:
determining whether the expression states of the at least two images meet a third preset condition, wherein the third preset condition is used for restricting the expression of the user in the images to be a smiling face expression;
if no image meeting the third preset condition exists in the at least two images, selecting any one of the at least two images as a reference image; acquiring edge portions that are missing from the reference image from the background regions of the images other than the reference image among the at least two images; stitching the edge portions to the background region of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image;
if exactly one image meeting the third preset condition exists in the at least two images, determining the image meeting the third preset condition as the reference image; acquiring edge portions that are missing from the reference image from the background regions of the images other than the reference image among the at least two images; stitching the edge portions to the background region of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; adjusting the cropped reference image according to the reference brightness to obtain the target image;
if M images meeting the third preset condition exist in the at least two images, M being a positive integer greater than 1, selecting any one of the M images meeting the third preset condition as the reference image; acquiring edge portions that are missing from the reference image from the background regions of the images other than the reference image among the at least two images; stitching the edge portions to the background region of the reference image; cropping the stitched reference image according to the position of the photographed object in the stitched reference image; and adjusting the cropped reference image according to the reference brightness to obtain the target image.
19. A shooting control device, applied to an electronic device, wherein the electronic device comprises a first display screen and a second display screen which are arranged oppositely, and the shooting control device comprises a processing unit and a communication unit, wherein
the processing unit is used for, when it is detected that the camera is started, transmitting a start signal through the communication unit, displaying a first function interface on the first display screen, and displaying a second function interface on the second display screen, wherein the first function interface comprises shooting preview content, and the second function interface comprises the shooting preview content; for acquiring a first shooting control instruction and a second shooting control instruction, wherein the first shooting control instruction is generated according to a first trigger operation of the photographed object, and the second shooting control instruction is generated according to a second trigger operation of the photographed object; and for generating a target image according to the first shooting control instruction and the second shooting control instruction.
20. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any one of claims 1-18.
21. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-18.
CN201910858036.6A 2019-09-10 2019-09-10 Shooting control method, electronic equipment and related device Pending CN110545382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910858036.6A CN110545382A (en) 2019-09-10 2019-09-10 Shooting control method, electronic equipment and related device

Publications (1)

Publication Number Publication Date
CN110545382A true CN110545382A (en) 2019-12-06

Family

ID=68713427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910858036.6A Pending CN110545382A (en) 2019-09-10 2019-09-10 Shooting control method, electronic equipment and related device

Country Status (1)

Country Link
CN (1) CN110545382A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104539843A (en) * 2014-12-18 2015-04-22 深圳市金立通信设备有限公司 Terminal shooting method
CN104954678A (en) * 2015-06-15 2015-09-30 联想(北京)有限公司 Image processing method, image processing device and electronic equipment
CN105554372A (en) * 2015-10-30 2016-05-04 东莞酷派软件技术有限公司 Photographing method and device
CN106570110A (en) * 2016-10-25 2017-04-19 北京小米移动软件有限公司 De-overlapping processing method and apparatus of image
CN106603917A (en) * 2016-12-16 2017-04-26 努比亚技术有限公司 Shooting device and method
CN109862258A (en) * 2018-12-27 2019-06-07 维沃移动通信有限公司 A kind of image display method and terminal device

Similar Documents

Publication Publication Date Title
US11114130B2 (en) Method and device for processing video
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN106161939B (en) Photo shooting method and terminal
EP2981061A1 (en) Method and apparatus for displaying self-taken images
EP2779628A1 (en) Image processing method and device
CN109040474B (en) Photo display method, device, terminal and storage medium
CN113840070B (en) Shooting method, shooting device, electronic equipment and medium
CN113794834B (en) Image processing method and device and electronic equipment
CN112887617B (en) Shooting method and device and electronic equipment
EP2939411B1 (en) Image capture
CN111669495B (en) Photographing method, photographing device and electronic equipment
CN111064930B (en) Split screen display method, display terminal and storage device
CN110086998B (en) Shooting method and terminal
WO2024061134A1 (en) Photographing method and apparatus, electronic device, and medium
WO2024022349A1 (en) Image processing method and apparatus, and electronic device and storage medium
WO2023174009A1 (en) Photographic processing method and apparatus based on virtual reality, and electronic device
CN108683847B (en) Photographing method, device, terminal and storage medium
CN107743272B (en) Screenshot method and equipment
CN112653841B (en) Shooting method and device and electronic equipment
CN110545382A (en) Shooting control method, electronic equipment and related device
CN115134532A (en) Image processing method, image processing device, storage medium and electronic equipment
CN114245017A (en) Shooting method and device and electronic equipment
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium
CN114915730B (en) Shooting method and shooting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191206