CN113572961A - Shooting processing method and electronic equipment - Google Patents
Shooting processing method and electronic equipment
- Publication number
- CN113572961A (application CN202110837287.3A)
- Authority
- CN
- China
- Prior art keywords
- elements
- photos
- input
- information
- condition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
The application discloses a shooting processing method and electronic equipment, and belongs to the technical field of imaging. The method comprises the following steps: under the condition that a preview image is displayed, identifying elements in the preview image and acquiring parameter information of the elements; receiving a first input of a user; responding to the first input, and generating M first photos according to the parameter information of the elements; wherein the parameter information of the element comprises at least one of: type information and depth information; m is a positive integer greater than 1.
Description
Technical Field
The application belongs to the technical field of imaging, and particularly relates to a shooting processing method and electronic equipment.
Background
At present, with the development of intelligent terminals, their performance keeps improving: complex images can be processed easily, and more and more photos are taken with intelligent terminals. However, once a photo has been shot and generated, the elements in it can no longer be edited. For example, if a photo is shot with a watermark, the watermark can only be removed afterwards with image editing software, and the effect is often poor; or the content that can still be edited is too limited, for example only blurring and part of the background can be adjusted, which cannot meet the personalized requirements of users.
Disclosure of Invention
The embodiments of the application aim to provide a shooting processing method and electronic equipment, which can solve the problem that a photo, once shot and generated, can no longer be edited, or that its editable content is too limited to meet user requirements.
In a first aspect, an embodiment of the present application provides a shooting processing method, including:
under the condition that a preview image is displayed, identifying elements in the preview image and acquiring parameter information of the elements;
receiving a first input of a user;
responding to the first input, and generating M first photos according to the parameter information of the elements;
wherein the parameter information of the element comprises at least one of: type information and depth information; m is a positive integer greater than 1.
In a second aspect, an embodiment of the present application provides a shooting processing apparatus, including:
an identification unit, used for identifying elements in a preview image and acquiring parameter information of the elements under the condition that the preview image is displayed;
a first receiving unit for receiving a first input of a user;
the first processing unit is used for responding to the first input and generating M first photos according to the parameter information of the elements;
wherein the parameter information of the element comprises at least one of: type information and depth information; m is a positive integer greater than 1.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the shooting processing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the shooting processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the shooting processing method according to the first aspect.
In the embodiment of the application, the elements in the preview image are identified, the first input of the user is responded, and the M first photos are generated according to the parameter information of the elements, so that the user can edit the image conveniently and quickly, and the personalized requirements of the user are met.
Drawings
Fig. 1 is a schematic flow chart of a shooting processing method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a shooting processing apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the application may be practiced in sequences other than those illustrated or described herein, and that the terms "first," "second," and the like are generally used herein in a generic sense and do not limit the number of terms, e.g., the first term can be one or more than one. In addition, "and/or" in the specification and claims means at least one of connected objects, a character "/" generally means that a preceding and succeeding related objects are in an "or" relationship.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In the embodiments of the present application, the term "plurality" means two or more, and other terms are similar thereto.
The shooting processing method provided by the embodiments of the present application is described in detail below with reference to the accompanying drawings, through some embodiments and their application scenarios. The execution subject of the shooting processing method provided by the embodiments of the application is an electronic device, and the electronic devices provided by the embodiments of the application include, but are not limited to, mobile phones, tablet computers, computers, wearable devices, and the like.
Fig. 1 is a schematic flow chart of a shooting processing method provided in an embodiment of the present application, and as shown in fig. 1, the method includes:
Step 101: under the condition that a preview image is displayed, identifying elements in the preview image and acquiring parameter information of the elements.
Optionally, in the case where the preview image is displayed on the electronic device, an image recognition sensor recognizes the elements in the preview image and acquires the parameter information of the elements.
The preview image may be obtained as follows: the user clicks the camera application, the electronic device starts the camera preview, and the preview image is acquired and displayed on the electronic device.
Here, an element may be understood as an element constituting the preview image, such as person 1, person 2, a tree, a dog, the sky, and the like.
The parameter information of the element includes at least one of: type information and depth information. For example, the type information may include: main person, background, and sundries; the depth-of-field information may include: close shot, medium shot, and long shot.
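The disclosure does not prescribe a concrete data structure for this parameter information; the following Kotlin sketch is a hypothetical, minimal model of an identified element and its two kinds of parameter information, using the example categories listed above. All names (Element, ElementType, DepthOfField) are illustrative assumptions, not part of the claimed method.

```kotlin
// Hypothetical model of an identified element and its parameter information.
// The category values mirror the examples in the text: type information
// (main person, background, sundries) and depth-of-field information
// (close shot, medium shot, long shot).

enum class ElementType { MAIN_PERSON, BACKGROUND, SUNDRIES }

enum class DepthOfField { CLOSE_SHOT, MEDIUM_SHOT, LONG_SHOT }

data class Element(
    val label: String,        // e.g. "person 1", "tree", "sky"
    val type: ElementType?,   // type information (may be absent)
    val depth: DepthOfField?  // depth-of-field information (may be absent)
)

fun main() {
    // Elements such as those listed for the preview image above.
    val elements = listOf(
        Element("person 1", ElementType.MAIN_PERSON, DepthOfField.CLOSE_SHOT),
        Element("tree", ElementType.BACKGROUND, DepthOfField.MEDIUM_SHOT),
        Element("sky", ElementType.BACKGROUND, DepthOfField.LONG_SHOT)
    )
    elements.forEach { println(it) }
}
```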
Alternatively, the first input is an operation of confirming photographing.
Step 102: in response to the first input, generating M first photos according to the parameter information of the elements.
Optionally, if the first input is an operation of confirming the shooting, then in response to the first input of the user, the M first photos are generated according to the parameter information of the elements acquired in step 101.
Wherein M is a positive integer greater than 1, and the size of M is related to the number of elements in the preview image.
The first photo is editable, and for example, operations such as moving, scaling, and deleting may be performed on the first photo.
In the embodiment of the application, the elements in the preview image are identified, the first input of the user is responded, and the M first photos are generated according to the parameter information of the elements, so that the user can edit the image conveniently and quickly, and the personalized requirements of the user are met.
In one embodiment, the type information or the depth information of the elements in each of the first photographs is matched.
Optionally, the elements are stored in the first photos according to the elements in the preview image and the type information or depth information corresponding to the elements, each first photo may include one or more elements, and the type information or depth information of the elements in each first photo is matched.
For example, an element whose genre information is the main person is saved to the first photograph 1; saving the element with the type information as background and the depth information as long shot to the first picture 2; saving the element with the type information as background and the depth information as medium scene to the first photo 3; saving the element with the type information as background and the depth information as close shot to the first picture 4; the element whose type information is sundries is saved to the first photograph 5. Or, saving the element of which the depth information is a long-range view to the first photo 1; saving the element of which the depth of field information is the middle scene to the first picture 2; the element whose depth information is close-up is saved to the first photograph 3.
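As one possible reading of the two grouping schemes in the example above, the hypothetical Kotlin sketch below assigns elements to first photos either by combined type and depth-of-field rules or by depth alone. The rule functions and names are assumptions for illustration only, not the claimed implementation.

```kotlin
enum class ElementType { MAIN_PERSON, BACKGROUND, SUNDRIES }
enum class DepthOfField { CLOSE_SHOT, MEDIUM_SHOT, LONG_SHOT }
data class Element(val label: String, val type: ElementType, val depth: DepthOfField)

// First grouping scheme from the example: main person -> photo 1,
// background split by depth -> photos 2-4, sundries -> photo 5.
fun photoIndexByTypeAndDepth(e: Element): Int = when {
    e.type == ElementType.MAIN_PERSON -> 1
    e.type == ElementType.BACKGROUND && e.depth == DepthOfField.LONG_SHOT -> 2
    e.type == ElementType.BACKGROUND && e.depth == DepthOfField.MEDIUM_SHOT -> 3
    e.type == ElementType.BACKGROUND && e.depth == DepthOfField.CLOSE_SHOT -> 4
    else -> 5
}

// Second grouping scheme from the example: one first photo per depth plane.
fun photoIndexByDepth(e: Element): Int = when (e.depth) {
    DepthOfField.LONG_SHOT -> 1
    DepthOfField.MEDIUM_SHOT -> 2
    DepthOfField.CLOSE_SHOT -> 3
}

fun main() {
    val elements = listOf(
        Element("person 1", ElementType.MAIN_PERSON, DepthOfField.CLOSE_SHOT),
        Element("tree", ElementType.BACKGROUND, DepthOfField.MEDIUM_SHOT),
        Element("sky", ElementType.BACKGROUND, DepthOfField.LONG_SHOT),
        Element("trash bin", ElementType.SUNDRIES, DepthOfField.CLOSE_SHOT)
    )
    // Each resulting group corresponds to one editable "first photo".
    val firstPhotos = elements.groupBy(::photoIndexByTypeAndDepth)
    firstPhotos.toSortedMap().forEach { (i, elems) ->
        println("first photo $i: ${elems.map { it.label }}")
    }
}
```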
In the embodiment of the application, the type information or the depth information of the elements in each first photo is matched, and the elements in the preview image are classified and displayed in different first photos, so that a user can edit the image conveniently.
In one embodiment, after the generating M first photos according to the parameter information of the element, the method further includes:
synthesizing the N first photos to obtain a target image;
displaying the target image;
wherein N is a positive integer less than or equal to M.
Optionally, the electronic device synthesizes the N first photos according to a second input of the user, so as to obtain a target image.
Alternatively, the second input may be an operation of the user to edit and synthesize at least one of the M first photos.
Illustratively, the second input may be an operation of long-pressing a first photo to delete it and then clicking a composition confirmation button, after which the remaining first photos (those that were not deleted) are synthesized to generate the target image; or it may be an operation of dragging a first photo for a first preset time to move it and then clicking the composition confirmation button, after which the first photos (including the moved one) are synthesized to generate the target image.
It is understood that, in response to the second input, the M first photos are processed in relation to the second input to obtain N first photos, and the N first photos are synthesized to obtain the target image.
Alternatively, the second input may be a composition operation. That is, the user can directly synthesize the M first photos without editing them.
Illustratively, a second input of the user is received, and in response to the second input, the M first photos are synthesized to obtain a target image, and the target image is displayed.
Optionally, the second input is an operation by which the user edits at least one of the M first photos, where the editing may be a long press or a drag. After the user finishes editing, once the first preset time has elapsed, the electronic device automatically synthesizes the edited first photos to obtain the target image, without the user needing to click a composition confirmation button.
Optionally, after the M first photos are generated, the electronic device may screen N photos from the M photos for synthesis, so as to obtain the target image.
For example, after generating M first photos, the electronic device may identify N photos that meet a preset scene or a preset condition from the M photos, and synthesize the N photos to obtain a target image.
Different target images can be obtained according to different selections or scenes; by taking a single picture at a certain place, a plurality of different target images can be obtained, and the elements contained in each target image may be different.
Optionally, the synthesizing the N first photos includes synthesizing the N first photos with the system information.
Alternatively, the system information may be watermark information.
Alternatively, the system information is stored in one system photo, i.e. the target image is synthesized from the N first photos and one system photo, and the system photo is editable.
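A minimal sketch of this step, assuming each first photo is kept as an ordered layer of labelled elements: N selected first photos plus an optional system (watermark) photo are flattened into a target image. The PhotoLayer type and the string-based "rendering" are illustrative stand-ins for real pixel compositing, not the patented implementation.

```kotlin
// Hypothetical layer-compositing sketch: each first photo is treated as a
// layer of labelled elements, the system (watermark) photo is appended last,
// and the target image is the ordered flattening of the selected N layers.

data class PhotoLayer(val name: String, val elements: List<String>, val isSystem: Boolean = false)

fun synthesize(selected: List<PhotoLayer>, systemPhoto: PhotoLayer?): List<String> {
    val layers = if (systemPhoto != null) selected + systemPhoto else selected
    // Flatten in order; a real pipeline would composite pixels here
    // (far layers drawn first, near layers and watermark drawn last).
    return layers.flatMap { layer -> layer.elements.map { "${layer.name}:$it" } }
}

fun main() {
    val firstPhotos = listOf(
        PhotoLayer("photo1", listOf("main person")),
        PhotoLayer("photo2", listOf("sky")),
        PhotoLayer("photo3", listOf("tree"))
    )
    val systemPhoto = PhotoLayer("system", listOf("watermark"), isSystem = true)

    // Synthesize N = 2 of the M = 3 first photos together with the system photo.
    val target = synthesize(firstPhotos.take(2), systemPhoto)
    println(target)   // [photo1:main person, photo2:sky, system:watermark]
}
```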
In the embodiment of the application, the preview image is subjected to image recognition to obtain elements in the preview image, M first photos are generated based on the elements, and then N first photos are synthesized to obtain the target image, so that a user can edit the image conveniently and quickly, and personalized requirements of the user are met.
In one embodiment, the generating M first photos according to the parameter information of the element includes:
under the condition that the number of elements in the preview image is larger than a preset number, generating M first photos according to the type information or the depth information of the elements, wherein the type information or the depth information of the elements in each first photo is the same;
or, in the case that the number of elements in the preview image is less than or equal to the preset number, each element corresponds to one first photo respectively to generate M first photos.
Optionally, when the number of elements in the preview image is very large, generating one first photo per element would produce too many first photos and increase the difficulty of editing the images for the user. Therefore, when the number of elements in the preview image is greater than a preset number, the M first photos are generated according to the type information or depth information of the elements, where the type information or depth information of the elements in each first photo is the same.
For example, if the preview image contains 20 elements, the 20 elements can be identified and elements with the same type information can be stored in the same first photo; for example, all animal elements among the 20 elements are stored in one first photo. Alternatively, elements with the same depth information may be stored in the same first photo; for example, all elements at the foreground depth are stored in one first photo, all elements at the middle depth are stored in another first photo, and all elements at the background depth are stored in a third first photo.
Optionally, in a case that the number of elements in the preview image is less than or equal to the preset number, each element corresponds to one first photo respectively to generate M first photos.
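The branching rule described above can be summarized in a short sketch, assuming a hypothetical presetNumber threshold: above the threshold, elements are grouped by type information (grouping by depth information works the same way); at or below it, each element becomes its own first photo. The function and type names are assumptions for illustration.

```kotlin
enum class ElementType { MAIN_PERSON, BACKGROUND, SUNDRIES }
data class Element(val label: String, val type: ElementType)

// Each returned group corresponds to one generated "first photo".
fun generateFirstPhotos(elements: List<Element>, presetNumber: Int): List<List<Element>> =
    if (elements.size > presetNumber) {
        // Many elements: one first photo per type (depth would be handled the same way).
        elements.groupBy { it.type }.values.toList()
    } else {
        // Few elements: one first photo per element.
        elements.map { listOf(it) }
    }

fun main() {
    val elements = listOf(
        Element("person 1", ElementType.MAIN_PERSON),
        Element("dog", ElementType.SUNDRIES),
        Element("tree", ElementType.BACKGROUND),
        Element("sky", ElementType.BACKGROUND)
    )
    println(generateFirstPhotos(elements, presetNumber = 3).size)  // 3 photos (grouped by type)
    println(generateFirstPhotos(elements, presetNumber = 10).size) // 4 photos (one per element)
}
```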
In the embodiment of the application, the generation rule of the first photo is determined according to the number of the elements in the preview image, and the first photo is generated according to the type information or the depth information of the elements under the condition that the number of the elements in the preview image is larger than the preset number.
In one embodiment, after displaying the target image, the method further includes:
under the condition that the first input meets a preset condition, saving the target image and deleting the M first photos;
or, under the condition that the first input does not meet the preset condition, saving the target image and the M first photos.
Since the first photograph is editable, the target image obtained by synthesizing the N first photographs is also editable.
Alternatively, the preset condition may be a preset specific action, and when the first input matches the specific action, the first input is considered to satisfy the preset condition.
As in the above example, 3 first photos are generated after the first input triggers shooting. If the 3 first photos are all combined, a target image identical to the shooting preview interface can be obtained and displayed. At this time, the 3 first photos are still stored on the electronic device, so that any number of first photos can later be combined through a combining operation to generate different target images.
In this embodiment, if the preset condition is a long press of the shooting button, then when the first input is a long press of the shooting button to shoot, 3 first photos are generated; after the 3 first photos are processed to generate the target image, the 3 first photos can be automatically deleted and the target image saved.
And under the condition that the first input does not meet the preset condition, saving the target image and the M first photos, so that the user can continuously obtain an updated target image based on the M first photos.
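A hypothetical sketch of this post-capture bookkeeping, in which a long press of the shooting button stands in for the preset condition from the example above: when the condition is met, only the target image is kept and the M first photos are discarded; otherwise the first photos are kept alongside the target image for later re-synthesis. The CaptureInput and Storage names are illustrative assumptions.

```kotlin
enum class CaptureInput { TAP, LONG_PRESS }

data class Storage(
    val targetImages: MutableList<String> = mutableListOf(),
    val firstPhotos: MutableList<String> = mutableListOf()
)

fun finishCapture(input: CaptureInput, target: String, generated: List<String>, storage: Storage) {
    storage.targetImages += target                     // the target image is always saved
    if (input == CaptureInput.LONG_PRESS) {
        storage.firstPhotos.removeAll(generated)       // preset condition met: delete the M first photos
    }
    // Otherwise the first photos remain so further target images can be synthesized later.
}

fun main() {
    val storage = Storage()
    val generated = listOf("first_photo_1", "first_photo_2", "first_photo_3")

    // The M first photos exist once they have been generated.
    storage.firstPhotos += generated

    // Case 1: preset condition met (long press) -> keep only the target image.
    finishCapture(CaptureInput.LONG_PRESS, "target_1", generated, storage)
    println(storage)   // targetImages=[target_1], firstPhotos=[]

    // Case 2: preset condition not met -> keep the first photos as well.
    storage.firstPhotos += generated
    finishCapture(CaptureInput.TAP, "target_2", generated, storage)
    println(storage)   // targetImages=[target_1, target_2], firstPhotos kept
}
```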
For example, suppose the target image is composed of 5 first photos: the element on first photo 1 is the main person being shot, the element on first photo 2 is a background person, the element on first photo 3 is a big tree in the background, the element on first photo 4 is the blue sky, and first photo 5 is the watermark information; different first photos are displayed with different number marks. The user can edit the target image according to the desired composition, for example moving the main person of first photo 1 left or right so that it stands beside the big tree, or long-pressing first photo 2 to delete the background person. Since the watermark information is added by the system afterwards, when the watermark occludes the picture and affects the overall composition, its position can be adjusted or it can be deleted. The electronic device performs the corresponding processing on the target image according to a third input of the user, updates the target image, and displays it, where the third input is an operation of the user editing the target image.
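One way to read the five-layer example above is that the target image remains a stack of editable first photos. The hypothetical sketch below applies the edits from the example (moving the main person, deleting the background person, repositioning the watermark) as operations on named layers; re-synthesizing the resulting layers would yield the updated target image. The Layer and Edit types are assumptions for illustration only.

```kotlin
// Hypothetical third-input editing of a layered target image.
data class Layer(val name: String, var offsetX: Int = 0, var offsetY: Int = 0)

sealed class Edit {
    data class Move(val layer: String, val dx: Int, val dy: Int) : Edit()  // e.g. drag
    data class Delete(val layer: String) : Edit()                          // e.g. long press
}

fun applyEdits(layers: List<Layer>, edits: List<Edit>): List<Layer> {
    val result = layers.map { it.copy() }.toMutableList()
    for (edit in edits) when (edit) {
        is Edit.Move -> result.find { it.name == edit.layer }
            ?.apply { offsetX += edit.dx; offsetY += edit.dy }
        is Edit.Delete -> result.removeAll { it.name == edit.layer }
    }
    return result   // re-synthesizing these layers yields the updated target image
}

fun main() {
    val target = listOf(
        Layer("main person"), Layer("background person"),
        Layer("big tree"), Layer("blue sky"), Layer("watermark")
    )
    val edits = listOf(
        Edit.Move("main person", dx = 40, dy = 0),   // move the subject beside the tree
        Edit.Delete("background person"),            // long press to delete
        Edit.Move("watermark", dx = 0, dy = -20)     // adjust the watermark position
    )
    applyEdits(target, edits).forEach(::println)
}
```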
In the embodiment of the application, the preview image is subjected to image recognition to obtain elements in the preview image, a plurality of first photos are generated based on the elements, then the first input of a user is received, the first photos are synthesized in response to the first input to obtain a target image, the target image is displayed, and under the condition that the first input of the user meets a preset condition, the target image is stored, the generated first photos are deleted, so that the user can conveniently edit the image, and the personalized requirements of the user are met.
In the shooting processing method provided by the embodiment of the present application, the execution subject may be a shooting processing apparatus, or a control module in the shooting processing apparatus for executing the shooting processing method. The embodiment of the present application takes a shooting processing apparatus executing a shooting processing method as an example, and describes a shooting processing apparatus provided in the embodiment of the present application.
Fig. 2 is a schematic structural diagram of a shooting processing apparatus according to an embodiment of the present application. As shown in fig. 2, the apparatus includes: an identification unit 210, a first receiving unit 220, a first processing unit 230, wherein,
a recognition unit 210 configured to, when a preview image is displayed, recognize an element in the preview image and acquire parameter information of the element;
a first receiving unit 220 for receiving a first input of a user;
a first processing unit 230, configured to generate, in response to the first input, M first photos according to parameter information of the element;
wherein the parameter information of the element comprises at least one of: type information and depth information; m is a positive integer greater than 1.
In the embodiment of the application, the elements in the preview image are identified, the first input of the user is responded, and the M first photos are generated according to the parameter information of the elements, so that the user can edit the image conveniently and quickly, and the personalized requirements of the user are met.
Optionally, the type information or the depth information of the elements in each of the first photos matches.
In the embodiment of the application, the type information or the depth information of the elements in the first photos are matched, the elements in the preview images are classified and displayed in different first photos, and therefore a user can edit the images conveniently and quickly.
Optionally, the shooting processing apparatus further includes:
the second processing unit is used for synthesizing the N first photos to obtain a target image;
a display unit for displaying the target image;
wherein N is a positive integer less than or equal to M.
In the embodiment of the application, the preview image is subjected to image recognition to obtain elements in the preview image, a plurality of first photos are generated based on the elements, and then the N first photos are synthesized according to the first input of a user to obtain a target image, so that the user can edit the image conveniently and quickly, and the personalized requirements of the user are met.
Optionally, the first processing unit is configured to,
under the condition that the number of elements in the preview image is larger than a preset number, generating M first photos according to the type information or the depth information of the elements, wherein the type information or the depth information of the elements in each first photo is the same;
or, in the case that the number of elements in the preview image is less than or equal to the preset number, each element corresponds to one first photo respectively to generate M first photos.
In the embodiment of the application, the generation rule of the first photo is determined according to the number of the elements in the preview image, and the first photo is generated according to the type information or the depth information of the elements under the condition that the number of the elements in the preview image is larger than the preset number.
Optionally, the shooting processing apparatus further includes:
the third processing unit is used for saving the target image and deleting the M first photos under the condition that the first input meets a preset condition;
or, under the condition that the first input does not meet the preset condition, saving the target image and the M first photos.
In the embodiment of the application, the preview image is subjected to image recognition to obtain elements in the preview image, a plurality of first photos are generated based on the elements, then the first input of a user is received, the first photos are synthesized in response to the first input to obtain a target image, the target image is displayed, and under the condition that the first input of the user meets a preset condition, the target image is stored and the generated first photos are deleted, so that the user can conveniently edit the image, and the personalized requirements of the user are met.
The shooting processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like; the embodiments of the present application are not particularly limited.
The shooting processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The shooting processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 3, an electronic device 300 is further provided in this embodiment of the present application, and includes a processor 301, a memory 302, and a program or an instruction stored in the memory 302 and capable of being executed on the processor 301, where the program or the instruction is executed by the processor 301 to implement each process of the above-mentioned shooting processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, it is not described here again.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 410 is configured to, when a preview image is displayed, identify an element in the preview image and acquire parameter information of the element;
the user input unit 407 is configured to receive a first input of a user;
wherein, the processor 410 is configured to generate M first photos according to the parameter information of the element in response to the first input;
wherein the parameter information of the element comprises at least one of: type information and depth information; m is a positive integer greater than 1.
In the embodiment of the application, the elements in the preview image are identified, the first input of the user is responded, and the M first photos are generated according to the parameter information of the elements, so that the user can edit the image conveniently and quickly, and the personalized requirements of the user are met.
Optionally, the type information or the depth information of the elements in each of the first photos matches.
In the embodiment of the application, the type information or the depth information of the elements in each first photo is matched, and the elements in the preview image are classified and displayed in different first photos, so that a user can edit the image conveniently.
Optionally, the processor 410 is further configured to:
synthesizing the N first photos to obtain a target image;
displaying the target image;
wherein N is a positive integer less than or equal to M.
In the embodiment of the application, the preview image is subjected to image recognition to obtain elements in the preview image, M first photos are generated based on the elements, and then N first photos are synthesized to obtain the target image, so that a user can edit the image conveniently and quickly, and personalized requirements of the user are met.
Optionally, the generating M first photos according to the parameter information of the element includes:
under the condition that the number of elements in the preview image is larger than a preset number, generating M first photos according to the type information or the depth information of the elements, wherein the type information or the depth information of the elements in each first photo is the same;
or, in the case that the number of elements in the preview image is less than or equal to the preset number, each element corresponds to one first photo respectively to generate M first photos.
In the embodiment of the application, the generation rule of the first photo is determined according to the number of the elements in the preview image, and the first photo is generated according to the type information or the depth information of the elements under the condition that the number of the elements in the preview image is larger than the preset number.
Optionally, the processor 410 is further configured to:
under the condition that the first input meets a preset condition, saving the target image and deleting the M first photos;
or, under the condition that the first input does not meet the preset condition, saving the target image and the M first photos.
In the embodiment of the application, the preview image is subjected to image recognition to obtain elements in the preview image, a plurality of first photos are generated based on the elements, then the first input of a user is received, the first photos are synthesized in response to the first input to obtain a target image, the target image is displayed, and under the condition that the first input of the user meets a preset condition, the target image is stored, the generated first photos are deleted, so that the user can conveniently edit the image, and the personalized requirements of the user are met.
It should be understood that in the embodiment of the present application, the input Unit 404 may include a Graphics Processing Unit (GPU) 4041 and a microphone 4042, and the Graphics processor 4041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes a touch panel 4071 and other input devices 4072. A touch panel 4071, also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 409 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 410 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (12)
1. A shooting processing method, characterized by comprising:
under the condition that a preview image is displayed, identifying elements in the preview image and acquiring parameter information of the elements;
receiving a first input of a user;
responding to the first input, and generating M first photos according to the parameter information of the elements;
wherein the parameter information of the element comprises at least one of: type information and depth information; m is a positive integer greater than 1.
2. The photographic processing method of claim 1, wherein the type information or the depth information of the elements in each of the first photographs match.
3. The shooting processing method according to claim 1, wherein after generating M first photos according to the parameter information of the element, the method further comprises:
synthesizing the N first photos to obtain a target image;
displaying the target image;
wherein N is a positive integer less than or equal to M.
4. The shooting processing method according to claim 1, wherein the generating M first photographs according to the parameter information of the element includes:
under the condition that the number of elements in the preview image is larger than a preset number, generating M first photos according to the type information or the depth information of the elements, wherein the type information or the depth information of the elements in each first photo is the same;
or, in the case that the number of elements in the preview image is less than or equal to the preset number, each element corresponds to one first photo respectively to generate M first photos.
5. The shooting processing method according to claim 3, further comprising, after said displaying the target image:
under the condition that the first input meets a preset condition, saving the target image and deleting the M first photos;
or, under the condition that the first input does not meet the preset condition, saving the target image and the M first photos.
6. A shooting processing apparatus characterized by comprising:
an identification unit, used for identifying elements in a preview image and acquiring parameter information of the elements under the condition that the preview image is displayed;
a first receiving unit for receiving a first input of a user;
the first processing unit is used for responding to the first input and generating M first photos according to the parameter information of the elements;
wherein the parameter information of the element comprises at least one of: type information and depth information; m is a positive integer greater than 1.
7. The apparatus according to claim 6, wherein the type information or the depth information of the element in each of the first photographs matches.
8. The shooting processing apparatus according to claim 6, characterized by further comprising:
the second processing unit is used for synthesizing the N first photos to obtain a target image;
a display unit for displaying the target image;
wherein N is a positive integer less than or equal to M.
9. The shooting processing apparatus according to claim 6, wherein the first processing unit is specifically configured to:
under the condition that the number of elements in the preview image is larger than a preset number, generating M first photos according to the type information or the depth information of the elements, wherein the type information or the depth information of the elements in each first photo is the same;
or, in the case that the number of elements in the preview image is less than or equal to the preset number, each element corresponds to one first photo respectively to generate M first photos.
10. The shooting processing apparatus according to claim 8, characterized by further comprising:
the third processing unit is used for saving the target image and deleting the M first photos under the condition that the first input meets a preset condition;
or, under the condition that the first input does not meet the preset condition, saving the target image and the M first photos.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the shooting processing method of any one of claims 1 to 5.
12. A readable storage medium, characterized in that a program or instructions are stored thereon, which when executed by a processor, implement the steps of the photographing processing method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110837287.3A CN113572961A (en) | 2021-07-23 | 2021-07-23 | Shooting processing method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110837287.3A CN113572961A (en) | 2021-07-23 | 2021-07-23 | Shooting processing method and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113572961A true CN113572961A (en) | 2021-10-29 |
Family
ID=78166868
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110837287.3A Pending CN113572961A (en) | 2021-07-23 | 2021-07-23 | Shooting processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113572961A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103903213A (en) * | 2012-12-24 | 2014-07-02 | 联想(北京)有限公司 | Shooting method and electronic device |
JP2015069616A (en) * | 2013-10-01 | 2015-04-13 | コニカミノルタ株式会社 | Preview image generation method and preview image generation program, and preview image generation device |
JP2015142320A (en) * | 2014-01-30 | 2015-08-03 | 株式会社バンダイナムコエンターテインメント | Imaging printing system, server system and program |
CN105744168A (en) * | 2016-03-28 | 2016-07-06 | 联想(北京)有限公司 | Information processing method and electronic device |
CN106878606A (en) * | 2015-12-10 | 2017-06-20 | 北京奇虎科技有限公司 | A kind of image generating method and electronic equipment based on electronic equipment |
CN110418056A (en) * | 2019-07-16 | 2019-11-05 | Oppo广东移动通信有限公司 | A kind of image processing method, device, storage medium and electronic equipment |
CN110650289A (en) * | 2019-09-27 | 2020-01-03 | 努比亚技术有限公司 | Shooting depth of field control method and device and computer readable storage medium |
CN110661971A (en) * | 2019-09-03 | 2020-01-07 | RealMe重庆移动通信有限公司 | Image shooting method and device, storage medium and electronic equipment |
CN111885307A (en) * | 2020-07-30 | 2020-11-03 | 努比亚技术有限公司 | Depth-of-field shooting method and device and computer readable storage medium |
CN112532881A (en) * | 2020-11-26 | 2021-03-19 | 维沃移动通信有限公司 | Image processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20211029 |