CN114339047B - Shooting control method, shooting control device, electronic equipment and medium - Google Patents

Shooting control method, shooting control device, electronic equipment and medium

Info

Publication number: CN114339047B
Application number: CN202111671019.5A
Authority: CN (China)
Other versions: CN114339047A
Original language: Chinese (zh)
Prior art keywords: image, special effect, input, images, region
Inventors: 刘心怡, 黄春成
Original and current assignee: Vivo Mobile Communication Co Ltd
Legal status: Active
Classification (Landscapes): Studio Devices (AREA)
Abstract

The application discloses a shooting control method, a shooting control device, electronic equipment and a medium, and belongs to the technical field of communication. The method comprises the following steps: controlling a pan-tilt to move along at least one direction, and controlling a camera to acquire at least two images during the movement of the pan-tilt; and synthesizing the at least two images, and outputting a first image.

Description

Shooting control method, shooting control device, electronic equipment and medium
Technical Field
The application belongs to the technical field of communication, and particularly relates to a shooting control method, a shooting control device, electronic equipment and a medium.
Background
With the development of electronic devices, their functions are becoming more and more abundant; for example, an electronic device can perform special effect processing on captured images.
Specifically, in the related art, a user may trigger the electronic device to capture an image through an input, and then trigger the electronic device to perform special effect processing on the captured image; for example, the electronic device may locally stretch the image by changing some of its pixels.
However, because the above method applies the special effect by altering the pixels of the image itself, the display effect of the image after special effect processing is poor.
Disclosure of Invention
The embodiment of the application aims to provide a shooting control method, a shooting control device, electronic equipment and a medium, which can solve the problem that the display effect of an image subjected to special effect processing in the related art is poor.
In a first aspect, an embodiment of the present application provides a shooting control method, including: controlling a pan-tilt to move along at least one direction, and controlling a camera to acquire at least two images during the movement of the pan-tilt; and synthesizing the at least two images, and outputting a first image.
In a second aspect, an embodiment of the present application provides a shooting control apparatus, including a control module and a processing module. The control module is used for controlling a pan-tilt to move along at least one direction, and for controlling a camera to acquire at least two images during the movement of the pan-tilt; the processing module is used for synthesizing the at least two images and outputting a first image.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method as in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions to implement a method as in the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, the electronic device can control the pan-tilt to move along at least one direction, control the camera to acquire at least two images during the movement of the pan-tilt, synthesize the at least two images, and output a first image. Because the electronic device synthesizes the first image from at least two images acquired by the camera, rather than changing the pixels of a single image as in the related art, the shooting control method provided by the embodiment of the application can improve the display effect of special effect processing on images.
Drawings
Fig. 1 is a flowchart of a shooting control method provided in an embodiment of the present application;
fig. 2 is one of interface schematic diagrams of an application of a shooting control method according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of an interface to which the photographing control method according to the embodiment of the present application is applied;
FIG. 4 is a third exemplary interface diagram of an application of the photographing control method according to the embodiment of the present application;
FIG. 5 is a diagram illustrating an interface of an application of a photographing control method according to an embodiment of the present application;
FIG. 6 is a fifth exemplary diagram of an interface for application of a photographing control method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a shooting control apparatus according to an embodiment of the present application;
fig. 8 is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a schematic hardware diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly below with reference to the drawings. It is apparent that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that embodiments of the application may be practiced in orders other than those specifically illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The shooting control method, the shooting control device, the electronic equipment and the medium provided by the embodiment of the application are described in detail through specific embodiments and application scenes thereof by combining the attached drawings.
The shooting control method provided by the embodiment of the application can be applied to controlling a camera arranged on a cradle head to acquire an image.
Referring to fig. 1, an embodiment of the present application provides a shooting control method, which may include the following steps 101 and 102. The method is described below taking an electronic device as the execution subject.
In the embodiment of the application, the electronic device is an electronic device with a pan-tilt and a camera, and the camera is disposed on the pan-tilt.
Step 101: the electronic device controls the pan-tilt to move along at least one direction, and controls the camera to acquire at least two images during the movement of the pan-tilt.
Specifically, the electronic device may control the pan-tilt to move in at least one direction when a special effect shooting mode of the camera application is started and the image preview interface corresponding to the special effect shooting mode is displayed.
It may be appreciated that in the embodiment of the present application, a preview image may be displayed in the image preview interface, for example, the first preview image or the second preview image in the embodiment of the present application, and the preview image may include the target object.
Optionally, in the embodiment of the present application, the electronic device may include at least one pan-tilt, and at least one camera may be disposed on each pan-tilt, so that a moving pan-tilt drives the camera disposed on it. It can be understood that any moving member capable of driving the camera to move falls within the protection scope of the present application.
Optionally, the electronic device may control one pan-tilt to move along the at least one direction, i.e. one pan-tilt may correspond to multiple directions; or it may control each pan-tilt to move along one of the at least one direction, i.e. different pan-tilts may correspond to different directions.
Optionally, in the embodiment of the present application, the electronic device may control the camera to capture the target object.
Alternatively, the target object may be a human, an animal, a plant, or the like.
The number of target objects is not limited in the embodiment of the application; there may be one or more, specifically determined according to actual requirements.
For ease of explanation, the following description assumes that the number of pan-tilts, the number of cameras, and the number of directions are all one; the cases where they are plural are similar and are not repeated here to avoid repetition.
Step 102, the electronic equipment synthesizes at least two images and outputs a first image.
Optionally, in the embodiment of the present application, the electronic device may synthesize the at least two images, generate the first image, and output the first image.
In the embodiment of the application, the first image may be an image with a special effect; specifically, the first image may be an image after special effect processing on a partial region of the target object, for example, the target region in the present embodiment.
Alternatively, in the embodiment of the present application, the number of the first images may be less than the number of the at least two images, for example, the number of the first images is 1.
In the shooting control method provided by the embodiment of the application, the electronic device can control the pan-tilt to move along at least one direction, control the camera to acquire at least two images during the movement of the pan-tilt, synthesize the at least two images, and output a first image. That is, the electronic device can synthesize at least two images acquired by the camera into the first image, which can improve the display effect of special effect processing on images.
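The flow of steps 101 and 102 can be sketched as follows. This is a minimal illustration only: the `Gimbal`, `Camera`, and `Frame` names and their methods are hypothetical stand-ins, not an API from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    angle: float          # pan-tilt angle (degrees) at capture time
    pixels: list          # placeholder for image data

@dataclass
class Gimbal:
    angle: float = 0.0
    def move_by(self, step: float) -> None:
        self.angle += step

@dataclass
class Camera:
    def capture(self, gimbal: Gimbal) -> Frame:
        return Frame(angle=gimbal.angle, pixels=[])

def shoot(gimbal: Gimbal, camera: Camera, step: float, count: int) -> List[Frame]:
    """Step 101: move the pan-tilt in one direction, capturing a frame per step."""
    frames = []
    for _ in range(count):
        gimbal.move_by(step)
        frames.append(camera.capture(gimbal))
    return frames

def synthesize(frames: List[Frame]) -> Frame:
    """Step 102: combine at least two frames into one output image (stub)."""
    assert len(frames) >= 2
    return Frame(angle=frames[-1].angle, pixels=[f.pixels for f in frames])

frames = shoot(Gimbal(), Camera(), step=0.3, count=4)
first_image = synthesize(frames)
```

Here the synthesis is a stub; the actual combination described in steps 102a to 102c is discussed later in the document.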
In order to describe the shooting control method provided by the embodiment of the present application more clearly, it is described in detail below with reference to one possible implementation manner and another possible implementation manner.
One possible implementation.
Optionally, in the embodiment of the present application, before step 101, the photographing control method provided in the embodiment of the present application may further include steps 103 to 105 described below.
Step 103, the electronic device receives a first input of a special effect control from a user.
Wherein the special effects control may be used to indicate special effects areas and special effects features.
Optionally, in the embodiment of the present application, the special effect area may be at least one shooting location in the target object.
Illustratively, taking the case where the target object is a person as an example, each shooting part may be any one of the following: hands, arms, head, legs, neck, face, etc.
Alternatively, in the embodiment of the present application, the special effect feature may be a way of processing the special effect area, for example, stretching or widening.
Optionally, in an embodiment of the present application, at least one special effect control may be included in the image preview interface.
In the embodiment of the present application, each special effect control in the at least one special effect control may be used to indicate a special effect.
Illustratively, as shown in fig. 2 (a), the at least one special effect control may include the special effect controls indicated by the identifier 22: a long face special effect control, a long neck special effect control, a long arm special effect control, and the like. The special effect indicated by the long face special effect control lengthens the face; the special effect indicated by the long neck special effect control lengthens the neck.
In the embodiment of the present application, the first input may be an input of a user to at least one special effect control of the at least one special effect control.
Optionally, in the embodiment of the present application, the first input may be a touch input, a hover input, a preset gesture input or a voice input, which is specifically determined according to actual needs, and the embodiment of the present application is not limited.
For example, the first input may be a user click input to 1 special effects control in the image preview interface.
Step 104, the electronic device determines a first region in the first preview image and a first special effect feature of the object in the first region in response to the first input.
For example, if the first input is a click input on a "long neck" control in the image preview interface, the electronic device may determine, in response to the first input, that the first region in the first preview image is the region where the neck is located, and that the first special effect feature of the neck is elongation.
In the embodiment of the application, after the electronic device receives the first input, the first preview image can be identified, and the shooting object including the shooting part corresponding to the special effect control selected by the user is determined as the target object. If the first preview image does not include the shooting part corresponding to the special effect control selected by the user, prompting information can be output to prompt the user to input again.
Step 105, the electronic device determines at least one direction according to the first region and the first special effect feature.
Specifically, in the embodiment of the present application, the electronic device may determine at least one direction according to the extending direction of the object in the first area and the first special effect feature.
For example, suppose the electronic device determines from the first input that the first region is the neck and that the first special effect feature is elongation. If the neck extends vertically, the direction determined from the first region and the first special effect feature may be up or down; if the neck extends horizontally, the direction may be left or right.
The up/down and left/right directions here are described assuming that the screen of the electronic device displays the preview image and is held vertically, facing the user.
In the embodiment of the application, the user can trigger the electronic equipment to determine the at least one direction through the input of the special effect control, so that the operation process of the electronic equipment for determining the at least one direction can be simplified.
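Steps 103 to 105 can be illustrated with a small sketch. The control names and the mapping below are assumptions for illustration; the patent only specifies that a special effect control indicates a special effect area and a special effect feature, and that the direction follows the extending direction of the object in the first region.

```python
# Hypothetical mapping from special effect control -> (region, feature).
CONTROLS = {
    "long neck": ("neck", "elongate"),
    "long face": ("face", "elongate"),
    "long arm":  ("arm",  "elongate"),
}

def directions_for(control: str, region_axis: str) -> list:
    """Step 105: choose pan-tilt directions from the region's extension axis.

    region_axis is "vertical" or "horizontal", i.e. the extending direction
    of the object detected in the first region (step 104).
    """
    region, feature = CONTROLS[control]
    if feature == "elongate":
        # Elongation moves the pan-tilt along the region's extension axis.
        return ["up", "down"] if region_axis == "vertical" else ["left", "right"]
    return []

# A vertically extending neck yields the up/down directions, as in the text.
assert directions_for("long neck", "vertical") == ["up", "down"]
```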
Another possible implementation.
Optionally, in the embodiment of the present application, before the step 101, the photographing control method provided in the embodiment of the present application may further include the following steps 106 to 109.
Step 106, the electronic device receives a second input from the user for a second preview image.
Optionally, in the embodiment of the present application, the second input may be a touch input, a hover input, a preset gesture input or a voice input, which is specifically determined according to actual needs, and the embodiment of the present application is not limited.
Step 107, the electronic device determines a second area corresponding to an input position of the second input in the second preview image in response to the second input.
Optionally, in the embodiment of the present application, before the electronic device receives the second input, the electronic device may first receive an input of a user, where the input is used to trigger the electronic device to display a "custom area" control and a "custom direction" control; then, after the user inputs the "custom region" control, the user inputs a region in the second preview image, i.e. the second input, where special effects are required to be added.
Illustratively, as shown in fig. 3 (a), the user may tap the ".." control in the image preview interface; then, as shown in (b) of fig. 3, the electronic device can display a "custom region" control and a "custom direction" control in the image preview interface. The user can click on the "custom region" control and then click on the arm in the second preview image, i.e. the second input, to trigger the electronic device to determine the second region as the region where the arm is located in the preview image.
Step 108, the electronic device receives a third input from the user on the second area, the third input including at least one sub-input.
Optionally, in the embodiment of the present application, the third input may be a touch input, a hover input, a preset gesture input or a voice input, which is specifically determined according to actual needs, and the embodiment of the present application is not limited.
For example, the third input is a user sliding input on the second area.
Step 109, the electronic device determines an input direction of the at least one sub-input as at least one direction in response to the third input.
Wherein each sub-input may correspond to a direction.
Illustratively, as shown in fig. 4, after clicking the "custom direction" control, the user can slide toward the lower left on the preview image, i.e. the third input, to trigger the electronic device to determine the direction as the lower-left direction.
It can be understood that, in the embodiment of the present application, when the special effect indicated by the at least one special effect control does not meet the shooting requirement of the user, the user can customize the special effect through the second input and the third input.
Alternatively, in the embodiment of the present application, the number of second areas may be one or more. When there are multiple second areas, each second area corresponds to one direction, and the directions corresponding to different second areas may be the same or different.
For example, taking a case that the directions corresponding to the second areas are different, the user may first select one second area, then input the direction corresponding to the one second area, then select another second area, and input the direction corresponding to the other second area.
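Steps 108 and 109 — mapping each sliding sub-input to a direction — can be sketched as below. The patent does not specify how a direction is derived from a slide; taking the signs of the swipe vector's components, as here, is an assumption.

```python
def swipe_direction(start, end):
    """Map one sliding sub-input (step 109) to a direction string.

    start/end are (x, y) screen coordinates; screen y grows downward.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    horiz = "right" if dx > 0 else "left" if dx < 0 else ""
    vert = "down" if dy > 0 else "up" if dy < 0 else ""
    return (vert + " " + horiz).strip() or "none"

# A lower-left slide, as in the fig. 4 example, yields the lower-left direction.
assert swipe_direction((100, 100), (40, 160)) == "down left"
```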
Optionally, in the embodiment of the present application, in another possible implementation manner, after the step 102, the photographing control method provided in the embodiment of the present application may further include the following step a.
Step A, the electronic equipment generates and stores a first special effect control according to a second area corresponding to a second input and a direction corresponding to a third input; the first special effect control may correspond to the second region and special effect features corresponding to the second region.
For example, if the second region corresponding to the second input is an "arm" of the target object, and the third input is a sliding input of the user along the arm direction, the special effect feature corresponding to the second region is an "elongated arm", and the electronic device may generate a "long arm" control according to the second input and the third input.
In the embodiment of the application, the user can select the second area and at least one direction through the second input and the third input, namely, the user can customize the special effect by selecting the special effect area and the special effect direction, so that the operation flexibility and the man-machine interaction performance can be improved.
Alternatively, in the embodiment of the present application, the above step 102 may be specifically implemented by the following steps 102a to 102c.
Step 102a, the electronic device obtains first feature information of an area corresponding to the target area in each of the at least two images.
The target area may be the first area or the second area.
Alternatively, in the embodiment of the present application, the first feature information may be information corresponding to the target area.
Optionally, in the embodiment of the present application, the electronic device may extract, according to the target area selected by the user, first feature information in the target area in each image through a feature detection algorithm.
For example, assuming that the target area determined by the electronic device is a neck area and the electronic device acquires 4 images, the 4 images being the images a to d shown in fig. 5, the electronic device may acquire the images of the neck area indicated by the identifiers 61, 62, 63, 64 in the images a to d, respectively, as shown in fig. 6, to obtain 4 neck area images, that is, first feature information.
Step 102b, the electronic device acquires second feature information of the region except the third region in the second image.
The second image may be at least one image of the at least two images, and the third region may be a region corresponding to the target region in the second image.
Alternatively, in the embodiment of the present application, the second feature information may be information corresponding to all of the area other than the third area in the second image, or to a part of that area, which is specifically determined according to actual needs; the embodiment of the present application is not limited to this.
Illustratively, suppose the target area determined by the electronic device is the neck area and the electronic device acquires 4 images, namely images a to d shown in fig. 5. If the second images are image a and image d, the electronic device may acquire the area above the neck indicated by the reference numeral 65 in image a and the area below the neck indicated by the reference numeral 66 in image d, that is, the second feature information, as shown in fig. 6.
Alternatively, in the embodiment of the present application, the execution order of step 102a and step 102b is not limited: the electronic device may execute step 102a first and then step 102b; or step 102b first and then step 102a; or both simultaneously, which is specifically determined according to actual requirements; the embodiment of the present application is not limited.
Step 102c, the electronic device performs reprojection processing on the at least two second feature information and the first feature information to obtain and output a first image.
Reprojection processing is a known technique; to avoid repetition, its details are omitted here.
Optionally, in the embodiment of the present application, the electronic device may further group the at least two images with a target area or a direction as a unit, to obtain at least one image group, where each target area or each direction corresponds to one image group; for each image group, acquiring characteristic information corresponding to a target area in each image of one image group, acquiring first characteristic information corresponding to the one image group, and acquiring second characteristic information of areas except the target area in at least one image of the target image group, wherein the target image group is one image group in the at least one image group; and then, a re-projection technology is adopted, and a first image is generated according to the second characteristic information and the first characteristic information.
In the embodiment of the application, the electronic equipment can acquire the first characteristic information of the region corresponding to the target region in each of at least two images and the second characteristic information of the region except the third region in the second image, and re-project the at least two second characteristic information and the first characteristic information to obtain and output the first image, so that the display effect of the special effect processing of the images can be improved.
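A toy sketch of the synthesis in steps 102a to 102c, using the fig. 5/fig. 6 neck example: the area above the neck from one second image, the neck-area crops (first feature information) from each captured image, and the area below the neck from another second image are stacked into one output. Real synthesis uses reprojection; plain row stacking stands in for it here, with rows represented as lists of pixel labels.

```python
def synthesize(above_rows, neck_rows_per_image, below_rows):
    """Stack: area above the neck, one neck strip per captured image,
    then the area below the neck -- yielding an elongated neck."""
    out = list(above_rows)
    for neck_rows in neck_rows_per_image:   # first feature info, per image
        out.extend(neck_rows)
    out.extend(below_rows)
    return out

above = [["head"] * 4] * 3        # e.g. region 65 of image a (3 rows)
necks = [[["neck"] * 4]] * 4      # 4 images -> 4 one-row neck strips
below = [["body"] * 4] * 5        # e.g. region 66 of image d (5 rows)
result = synthesize(above, necks, below)
# The neck region is now 4 rows instead of 1 -> visually elongated.
```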
Optionally, in the embodiment of the present application, before step 102c, the photographing control method provided in the embodiment of the present application may further include step 110 described below.
Step 110: the electronic device performs feature alignment processing on the at least two pieces of second feature information and the first feature information.
Specifically, in the embodiment of the present application, the electronic device may perform the feature alignment processing on the at least two pieces of second feature information and the first feature information through a feature matching algorithm.
The feature matching algorithm and the feature alignment process are both known techniques and are not repeated here.
In the embodiment of the application, the electronic equipment can process at least two second characteristic information and the first characteristic information in a characteristic alignment mode, so that the display effect of special effect processing on the image can be improved.
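The feature alignment of step 110 can be illustrated with a deliberately simplified stand-in: finding the integer shift that best aligns two one-dimensional "feature profiles" by maximizing their overlap score. A real feature matching algorithm (e.g. keypoint descriptors plus a homography) is far more involved; this only conveys the idea of aligning features before synthesis.

```python
def best_shift(ref, moving, max_shift=3):
    """Return the integer shift of `moving` that best matches `ref`."""
    def score(shift):
        s = 0
        for i, v in enumerate(moving):
            j = i + shift
            if 0 <= j < len(ref):
                s += v * ref[j]    # overlap similarity at this shift
        return s
    return max(range(-max_shift, max_shift + 1), key=score)

ref    = [0, 0, 1, 5, 1, 0, 0]
moving = [0, 1, 5, 1, 0, 0, 0]     # same profile, shifted left by one
assert best_shift(ref, moving) == 1
```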
The photographing control method provided by the present application is exemplarily described below with reference to the accompanying drawings.
Illustratively, as shown in fig. 6, the electronic device may determine that the first feature information includes the images of the areas indicated by the marks 61, 62, 63 and 64, and that the second feature information includes the images of the areas indicated by the marks 65 and 66. The electronic device may then perform feature alignment processing on the first feature information to obtain the image e shown in fig. 6, and perform feature alignment processing on the second feature information and the image e to obtain the image f shown in fig. 6, that is, the first image.
Optionally, in the embodiment of the present application, before the step 101, the photographing control method provided in the embodiment of the present application may further include a step 111 and a step 112, where the step 101 may be specifically implemented by a step 101a described below.
Step 111, the electronic device determines the special effect intensity of special effect processing on the image acquired by the camera.
Optionally, in the embodiment of the present application, the special effect strength may be default for the system of the electronic device or determined according to the input of the user, specifically determined according to the actual requirement, which is not limited by the embodiment of the present application.
Alternatively, in the embodiment of the present application, the special effect intensity may be the stretch ratio of the target area.
For example, a special effect intensity may specify that the neck region, i.e. the target region, is stretched by 200%.
Optionally, in the embodiment of the present application, when the special effect intensity is determined according to the input of the user, the user may trigger the electronic device to determine the special effect intensity in the following two ways.
Mode 1
Optionally, in an embodiment of the present application, after determining the first area and the direction, the electronic device may display a special effect intensity adjustment control 24 in the image preview interface, as shown in (b) of fig. 2. The special effect intensity adjustment control 24 includes a sliding area and a slider disposed on the sliding area, where each position in the sliding area corresponds to one special effect intensity. The user can therefore trigger the electronic device to set the special effect intensity to the one corresponding to the slider's position by dragging the slider.
It can be understood that, in the embodiment of the present application, the maximum special effect intensity corresponding to the sliding area is determined according to the maximum rotation angle of the pan-tilt, and the minimum special effect intensity corresponding to the sliding area is determined according to the minimum rotation angle of the pan-tilt.
Illustratively, assume the rotation angle range of the pan-tilt is ±3° and its minimum rotation step is 0.3°. The total rotation span of the pan-tilt is then 6°, so the camera arranged on the pan-tilt can acquire one image at each of 20 angles within that range, i.e., the camera can acquire 20 images over the rotation angle range of the pan-tilt.
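The count in this example follows directly from the span and the step (the function below is just this arithmetic, with the assumption that one image is taken at every step position):

```python
def capture_count(half_range_deg: float, min_step_deg: float) -> int:
    """Number of capture positions for a pan-tilt sweeping +/-half_range_deg
    in steps of min_step_deg (one image per step position assumed)."""
    total_span = 2 * half_range_deg            # +/-3 deg -> 6 deg total span
    return int(round(total_span / min_step_deg))

print(capture_count(3.0, 0.3))  # 20 images over the rotation range
```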
Mode 2
Optionally, in an embodiment of the present application, in another possible implementation, i.e., when the target region is determined by a second input in which the user selects a second region on the preview image displayed in the image preview interface, the electronic device may determine the special effect intensity according to the length of the track of the user's third input within the second region, and the track length of the third input may be proportional to the special effect intensity.
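A possible reading of this proportionality, as a sketch (the clamping bound and the proportionality constant are assumptions not given in the application):

```python
def intensity_from_track(track_len_px: float, region_len_px: float,
                         max_intensity: float = 3.0) -> float:
    """Map the length of the third input's track inside the second region to
    a special effect intensity, proportionally, clamped to an assumed maximum
    that the pan-tilt's rotation range can support."""
    intensity = max_intensity * (track_len_px / region_len_px)
    return min(max_intensity, intensity)

print(intensity_from_track(150.0, 300.0))  # 1.5
```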
Step 112, the electronic device determines the motion amplitude of the cradle head and the image quantity of at least two images acquired by the camera according to the special effect intensity.
In step 101a, after the pan-tilt moves the target amplitude along each of the at least one direction, the electronic device controls the camera to acquire an image.
Alternatively, in the embodiment of the present application, the special effect intensity is proportional to the motion amplitude and the number of images, respectively.
In the embodiment of the application, the electronic equipment can determine the motion amplitude of the cradle head and the image quantity of at least two images acquired by the camera according to the special effect intensity, so that the accuracy of the images acquired by the electronic equipment can be improved.
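Under the stated proportionality, a minimal sketch (the base amplitude and base count are illustrative assumptions; the application gives no concrete constants):

```python
def plan_capture(intensity: float,
                 base_amplitude_deg: float = 0.3,
                 base_count: int = 10) -> tuple:
    """Scale the pan-tilt motion amplitude and the number of captured images
    linearly with the special effect intensity, keeping at least two images."""
    amplitude_deg = base_amplitude_deg * intensity
    count = max(2, int(round(base_count * intensity)))
    return amplitude_deg, count

print(plan_capture(2.0))  # (0.6, 20)
```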
It should be noted that, in the shooting control method provided in the embodiment of the present application, the execution subject may be a shooting control device, or a control module in the shooting control device for executing the shooting control method. In the embodiment of the present application, the shooting control device provided in the embodiment of the present application is described by taking the shooting control device executing the shooting control method as an example.
Referring to fig. 7, an embodiment of the present application provides a photographing control apparatus 700, and the photographing control apparatus 700 may include: a control module 701 and a processing module 702. The control module 701 can be used for controlling the cradle head to move along at least one direction and controlling the camera to acquire at least two images in the cradle head moving process; the processing module 702 may be configured to synthesize at least two images and output a first image.
Optionally, in an embodiment of the present application, the photographing control apparatus 700 may further include: a determining module; the determining module can be used for determining the special effect intensity of special effect processing on the image acquired by the camera before the control module 701 controls the cradle head to move along at least one direction; the determining module is also used for determining the motion amplitude of the cradle head and the image quantity of at least two images acquired by the camera according to the special effect intensity; the control module 701 is specifically configured to control the camera to collect an image after the pan-tilt moves the target amplitude along each of at least one direction.
Alternatively, in the embodiment of the present application, the special effect intensity is proportional to the motion amplitude and the number of images, respectively.
Optionally, in an embodiment of the present application, the photographing control apparatus 700 may further include: a receiving module and a determining module; the receiving module may be configured to receive a first input of a special effect control by a user before the control module 701 controls the pan-tilt to move along at least one direction, where the special effect control is used to indicate a special effect area and a special effect feature; a determining module operable to determine a first region in the first preview image and a first special effect feature of the object in the first region in response to the first input received by the receiving module; the determining module is further configured to determine at least one direction according to the first region and the first special effect feature.
Optionally, in an embodiment of the present application, the photographing control apparatus 700 may further include: a receiving module and a determining module; the receiving module may be configured to receive a second input from a user on a second preview image before the control module 701 controls the pan-tilt to move along at least one direction; the determining module can be used for responding to the second input received by the receiving module and determining a second area corresponding to the input position of the second input in the second preview image; the receiving module is further used for receiving a third input of a user on the second area, wherein the third input comprises at least one sub-input; the determining module may be further configured to determine, in response to the third input, an input direction of at least one sub-input as at least one direction, one direction for each sub-input.
Optionally, in the embodiment of the present application, the processing module 702 is specifically configured to obtain first feature information of a region corresponding to a target region in each of the at least two images, where the target region is the first region or the second region; to obtain second feature information of the region other than a third region in a second image, where the second image is at least one of the at least two images and the third region is the region corresponding to the target region in the second image; and to perform re-projection processing on the at least two pieces of second feature information and the first feature information, so as to obtain and output the first image.
Optionally, in the embodiment of the present application, the processing module 702 is further configured to perform feature alignment processing on the at least two pieces of second feature information and the first feature information before performing the re-projection processing to obtain and output the first image.
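As a greatly simplified sketch of the synthesis path just described (real feature extraction, alignment, and re-projection are replaced here by plain band copying; the geometry is an assumption):

```python
import numpy as np

def synthesize(images, top: int, bottom: int) -> np.ndarray:
    """Stand-in for the described synthesis: the target-region band
    (the 'first feature information') is taken from every captured frame,
    the surroundings (the 'second feature information') from the first
    frame, and the bands are stacked so the target region is stretched."""
    base = images[0]
    bands = [img[top:bottom] for img in images]       # one band per frame
    return np.concatenate([base[:top], *bands, base[bottom:]], axis=0)

frames = [np.full((50, 40, 3), i, dtype=np.uint8) for i in range(3)]
out = synthesize(frames, top=20, bottom=30)
print(out.shape)  # (70, 40, 3): one 10-row band from each of 3 frames
```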
With the shooting control device provided by the embodiment of the application, the device can control the cradle head to move along at least one direction, control the camera to acquire at least two images during the movement of the cradle head, synthesize the at least two images, and output a first image. That is, the shooting control device can synthesize the at least two images acquired by the camera into the first image, so that the display effect of special effect processing on images can be improved. For the beneficial effects of the various implementations in this embodiment, reference may be made to the beneficial effects of the corresponding implementations in the foregoing method embodiment; to avoid repetition, the description is omitted here.
The shooting control device in the embodiment of the application may be an electronic device or a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not particularly limited in the embodiments of the present application.
The shooting control device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The shooting control device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1; to avoid repetition, the description is not repeated here.
As shown in fig. 8, the embodiment of the present application further provides an electronic device 800, including a processor 802 and a memory 801, where the memory 801 stores a program or instructions executable on the processor 802. The program or instructions, when executed by the processor 802, implement the steps of the above shooting control method embodiment and achieve the same technical effects; to avoid repetition, the description is omitted here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: radio frequency unit 901, network module 902, audio output unit 903, input unit 904, sensor 905, display unit 906, user input unit 907, interface unit 908, memory 909, and processor 910.
Those skilled in the art will appreciate that the electronic device 900 may also include a power source (e.g., a battery) for powering the various components; the power source may be logically connected to the processor 910 via a power management system, so that functions such as charge management, discharge management, and power consumption management are performed through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange components differently, which is not described in detail here.
The processor 910 may be configured to control the pan-tilt to move along at least one direction, and control the camera to acquire at least two images during the movement of the pan-tilt; the processor 910 may also be configured to synthesize at least two images and output a first image.
Optionally, in the embodiment of the present application, the processor 910 may be further configured to determine, before controlling the pan-tilt to move along at least one direction, a special effect intensity for performing special effect processing on an image acquired by the camera; the processor 910 may be further configured to determine, according to the special effect intensity, a motion amplitude of the pan-tilt and an image number of at least two images acquired by the camera; the processor 910 is specifically configured to control the camera to acquire an image after the pan-tilt moves the target amplitude along each of at least one direction.
Alternatively, in the embodiment of the present application, the special effect intensity is proportional to the motion amplitude and the number of images, respectively.
Optionally, in the embodiment of the present application, the user input unit 907 may be configured to receive, before the processor 910 controls the pan-tilt to move in at least one direction, a first input of a special effect control, where the special effect control is used to indicate a special effect area and a special effect feature; the processor 910 is further configured to determine a first region in the first preview image and a first special effect feature of the object in the first region in response to a first input received by the user input unit 907; the processor 910 is further configured to determine at least one direction based on the first region and the first special effects feature.
Optionally, in an embodiment of the present application, the user input unit 907 may be configured to receive a second input of a second preview image from a user before the processor 910 controls the pan/tilt head to move in at least one direction; a processor 910, which is configured to determine, in response to a second input received by the user input unit 907, a second region in the second preview image corresponding to an input position of the second input; a user input unit 907, further operable to receive a third input from the user on the second area, the third input comprising at least one sub-input; the processor 910 may be further configured to determine an input direction of at least one sub-input as at least one direction in response to the third input, each sub-input corresponding to one direction.
Optionally, in the embodiment of the present application, the processor 910 is specifically configured to obtain first feature information of a region corresponding to a target region in each of the at least two images, where the target region is the first region or the second region; to obtain second feature information of the region other than a third region in a second image, where the second image is at least one of the at least two images and the third region is the region corresponding to the target region in the second image; and to perform re-projection processing on the at least two pieces of second feature information and the first feature information, so as to obtain and output the first image.
Optionally, in an embodiment of the present application, the processor 910 is further configured to perform feature alignment processing on the at least two pieces of second feature information and the first feature information before performing the re-projection processing to obtain and output the first image.
With the electronic device provided by the embodiment of the application, the electronic device can control the cradle head to move along at least one direction, control the camera to acquire at least two images during the movement of the cradle head, synthesize the at least two images, and output a first image. That is, the electronic device can synthesize the at least two images acquired by the camera into the first image, so that the display effect of special effect processing on images can be improved.
The beneficial effects of the various implementation manners in this embodiment may be specifically referred to the beneficial effects of the corresponding implementation manners in the foregoing method embodiment, and in order to avoid repetition, the description is omitted here.
It should be appreciated that, in embodiments of the present application, the input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042; the graphics processor 9041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 906 may include a display panel 9061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes at least one of a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 909 may be used to store software programs and various data. The memory 909 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 909 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct rambus RAM (DRRAM). The memory 909 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
Processor 910 may include one or more processing units; optionally, the processor 910 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 910.
The embodiment of the application also provides a readable storage medium, on which a program or instructions are stored; when executed by a processor, the program or instructions implement each process of the above shooting control method embodiment and achieve the same technical effects, which is not repeated here to avoid repetition.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. The embodiment of the application further provides a chip, including a processor and a communication interface, where the communication interface is coupled with the processor and the processor is configured to run programs or instructions to implement the processes of the above shooting control method embodiment and achieve the same technical effects; to avoid repetition, the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above-described shooting control method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; it may also include performing the functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and of course may also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (11)

1. A shooting control method, applied to a shooting device comprising a cradle head and a camera, the camera being arranged on the cradle head, wherein the method comprises the following steps:
controlling the cradle head to move along at least one direction, and controlling the camera to acquire at least two images in the moving process of the cradle head;
synthesizing the at least two images, and outputting a first image, wherein the first image is an image with a special effect;
wherein the motion amplitude of the cradle head and the number of images acquired by the camera are determined according to a special effect intensity, and the special effect intensity is the intensity of special effect processing performed by the shooting device on the images acquired by the camera;
before the controlling the cradle head to move along at least one direction, the method further comprises:
determining, according to the special effect intensity, the motion amplitude of the cradle head and the image quantity of the at least two images acquired by the camera.
2. The method of claim 1, wherein controlling the pan-tilt to move in at least one direction and controlling the camera to capture at least two images during the pan-tilt movement comprises:
after the cradle head moves the target amplitude along each direction of the at least one direction, controlling the camera to acquire an image;
wherein the target amplitude is determined by the motion amplitude and the number of images.
3. The method of claim 2, wherein the special effect intensity is proportional to the motion amplitude and the number of images, respectively.
4. The method of claim 1, wherein prior to said controlling movement of said pan and tilt head in at least one direction, said method further comprises:
receiving a first input of a user to a special effect control, wherein the special effect control is used for indicating a special effect area and a special effect feature;
determining a first region in a first preview image and a first special effect feature of an object in the first region in response to the first input;
determining the at least one direction according to the first region and the first special effect feature.
5. The method of claim 1, wherein prior to said controlling movement of said pan and tilt head in said at least one direction, said method further comprises:
receiving a second input of a user on a second preview image;
in response to the second input, determining a second region in the second preview image corresponding to an input position of the second input;
receiving a third input of the user on the second region, the third input comprising at least one sub-input;
in response to the third input, determining an input direction of the at least one sub-input as the at least one direction, each sub-input corresponding to one direction.
6. The method of claim 4, wherein synthesizing the at least two images to output a first image comprises:
acquiring first characteristic information of a region corresponding to a target region in each image of the at least two images, wherein the target region is the first region;
acquiring second characteristic information of regions except a third region in a second image, wherein the second image is at least one image of the at least two images, and the third region is a region corresponding to the target region in the second image;
and carrying out re-projection processing on at least two pieces of second characteristic information and the first characteristic information to obtain and output the first image.
7. The method of claim 5, wherein synthesizing the at least two images to output a first image comprises:
acquiring first characteristic information of a region corresponding to a target region in each image of the at least two images, wherein the target region is the second region;
acquiring second characteristic information of regions except a third region in a second image, wherein the second image is at least one image of the at least two images, and the third region is a region corresponding to the target region in the second image;
and carrying out re-projection processing on at least two pieces of second characteristic information and the first characteristic information to obtain and output the first image.
8. The method according to claim 6 or 7, wherein before the re-projecting the at least two second feature information and the first feature information to obtain and output the first image, the method further comprises:
and performing feature alignment processing on the at least two pieces of second feature information and the first feature information.
9. A shooting control device is characterized by comprising a control module, a processing module and a determining module;
The control module is used for controlling the cradle head to move along at least one direction and controlling the camera to acquire at least two images in the movement process of the cradle head;
The processing module is used for synthesizing the at least two images and outputting a first image, wherein the first image is an image with a special effect;
the motion amplitude of the cradle head and the number of images acquired by the camera are determined according to a special effect intensity, wherein the special effect intensity is the intensity of special effect processing performed on the images acquired by the camera;
the determining module is further configured to determine, according to the special effect intensity, a motion amplitude of the pan-tilt and an image number of the at least two images acquired by the camera.
10. A photographing apparatus comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the photographing control method according to any one of claims 1 to 8.
11. A readable storage medium, wherein a program or instructions are stored thereon, which when executed by a processor, implement the steps of the photographing control method according to any one of claims 1 to 8.
CN202111671019.5A 2021-12-31 2021-12-31 Shooting control method, shooting control device, electronic equipment and medium Active CN114339047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111671019.5A CN114339047B (en) 2021-12-31 2021-12-31 Shooting control method, shooting control device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN114339047A CN114339047A (en) 2022-04-12
CN114339047B true CN114339047B (en) 2024-08-13

Family

ID=81020920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111671019.5A Active CN114339047B (en) 2021-12-31 2021-12-31 Shooting control method, shooting control device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114339047B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112788244A (en) * 2021-02-09 2021-05-11 维沃移动通信(杭州)有限公司 Shooting method, shooting device and electronic equipment
CN113114933A (en) * 2021-03-30 2021-07-13 维沃移动通信有限公司 Image shooting method and device, electronic equipment and readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106095264B (en) * 2016-05-26 2019-10-01 努比亚技术有限公司 Special display effect device and method
CN109194865A (en) * 2018-08-06 2019-01-11 光锐恒宇(北京)科技有限公司 Image generating method, device, intelligent terminal and computer readable storage medium
CN209373441U (en) * 2018-12-29 2019-09-10 深圳市大疆创新科技有限公司 Clouds terrace system, mobile platform and fighting system
CN111010510B (en) * 2019-12-10 2021-11-16 维沃移动通信有限公司 Shooting control method and device and electronic equipment
CN112492214B (en) * 2020-12-07 2022-05-24 维沃移动通信有限公司 Image shooting method and device, electronic equipment and readable storage medium
CN112672050A (en) * 2020-12-24 2021-04-16 维沃移动通信有限公司 Shooting method and device based on holder and electronic equipment
CN112702497B (en) * 2020-12-28 2022-04-26 维沃移动通信有限公司 Shooting method and device



Similar Documents

Publication Publication Date Title
CN106716302B (en) Method, apparatus, and computer-readable medium for displaying image
US7215322B2 (en) Input devices for augmented reality applications
KR102124617B1 (en) Method for composing image and an electronic device thereof
KR20120118583A (en) Apparatus and method for compositing image in a portable terminal
CN114125179B (en) Shooting method and device
US20230209204A1 (en) Display apparatus and camera tracking method
CN112954212B (en) Video generation method, device and equipment
CN115278084B (en) Image processing method, device, electronic equipment and storage medium
CN106412432A (en) Photographing method and mobile terminal
CN112637500A (en) Image processing method and device
CN113329172A (en) Shooting method and device and electronic equipment
CN112784081A (en) Image display method and device and electronic equipment
CN114390206A (en) Shooting method and device and electronic equipment
CN107566724B (en) Panoramic image shooting method and mobile terminal
CN114339047B (en) Shooting control method, shooting control device, electronic equipment and medium
CN114143455B (en) Shooting method and device and electronic equipment
US10939070B1 (en) Systems and methods for generating video images in a centered view mode
CN114615426A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114339051A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN115103113B (en) Image processing method and electronic device
CN117097982B (en) Target detection method and system
CN115278053B (en) Image shooting method and electronic equipment
CN112287155B (en) Picture processing method and device
CN115840552A (en) Display method and device and first electronic equipment
CN114885101A (en) Image generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant