CN113225477A - Shooting method and device and camera application - Google Patents
- Publication number
- CN113225477A (application number CN202110384887.9A)
- Authority
- CN
- China
- Prior art keywords
- processing
- image
- interest
- region
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
Abstract
The invention discloses a shooting method, a shooting apparatus, and a camera application. The method comprises: determining a region of interest of an image in response to a user's region selection operation; and processing the region of interest of the captured image according to a first processing method, and processing the area outside the region of interest according to a second processing method. The user can freely select the region of interest, the image is processed according to that region in real time during shooting, and the processed image is obtained directly, with no need for secondary processing, saving time and effort.
Description
Technical Field
The present invention relates to the field of imaging technologies, and in particular, to a shooting method, a shooting device, and a camera application.
Background
Electronic mobile terminals such as digital cameras, smart phones and tablet computers generally provide a photographing function, meeting people's need to take photos anytime and anywhere.
In the prior art, the camera cannot apply personalized processing to images during shooting, so shooting cannot directly produce a personalized image. For example, when taking a full-body photograph, a user may want body slimming, face beautification, background blurring, and similar processing; when travelling, a user may want surrounding tourists filtered out so that the scenery and the subject stand out; when shooting a specific object, the object should be automatically identified and the scene outside it blurred. In the prior art, the captured image must be edited afterwards to meet such personalized requirements, which wastes time and effort, and the result is limited by how well the user has mastered the relevant editing software.
Disclosure of Invention
In view of the above, the present invention has been made to provide a photographing method, apparatus and camera application that overcome or at least partially solve the above problems.
In a first aspect, an embodiment of the present invention provides a shooting method, including:
determining a region of interest of an image in response to a user's region selection operation; and
processing the region of interest of the captured image according to a first processing method, and processing the area outside the region of interest according to a second processing method.
In a second aspect, an embodiment of the present invention provides a shooting apparatus, including:
a determining module, configured to determine a region of interest of the image in response to a user's region selection operation; and
a processing module, configured to process the region of interest of the captured image according to a first processing method, and to process the area outside the region of interest according to a second processing method.
In a third aspect, an embodiment of the present invention provides a camera application, where the camera application includes the above-described photographing apparatus.
The technical solution provided by the embodiments of the invention has at least the following beneficial effects:
(1) According to the shooting method provided by the embodiment of the invention, a region of interest of the image is determined in response to a user's region selection operation; the region of interest of the captured image is then processed according to a first processing method, and the area outside it according to a second processing method. Because the image is processed in real time during shooting, the processed image is obtained directly once shooting completes; the user's processing requirements are met without secondary processing, saving time and effort. The processing result also no longer depends on the user's proficiency with image editing software or on the functions that software provides.
(2) The region of interest is determined from the user's own region selection operation, so the user can freely select it while previewing the image, and the captured image is processed differently inside and outside that region, instead of the camera automatically choosing what to focus on; this better matches the user's processing requirements.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart illustrating a photographing method according to an embodiment of the present invention;
FIG. 2A is a diagram illustrating an exemplary continuous moving track according to an embodiment of the present invention;
FIG. 2B is a diagram illustrating another exemplary continuous moving track according to an embodiment of the present invention;
FIG. 2C is a diagram illustrating another exemplary continuous moving track according to an embodiment of the present invention;
FIG. 3 is a flowchart of a specific implementation of the shooting method according to a second embodiment of the present invention;
FIG. 4 is a flow chart of another embodiment of a photographing method in accordance with the present invention;
FIG. 5 is a diagram of a data transmission mode between the system and the camera;
FIG. 6 is a schematic structural diagram of a shooting apparatus in an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
To solve the prior-art problem that images cannot be personalized during shooting, the embodiments of the invention provide a shooting method, a shooting apparatus and a camera application that let the user freely select a region of interest, process the image according to that region in real time during shooting, and obtain the processed image directly, with no secondary processing required, saving time and effort.
Example one
The embodiment of the invention provides a shooting method, the flow of which is shown in figure 1, and the shooting method comprises the following steps:
step S11: and determining the interest area of the image in response to the area selection operation of the user.
Specifically, the region of interest of the image is determined in response to the last region selection operation by the user during the framing process. Namely, the region selection operation is the latest region selection operation received by the user in the framing process.
In one embodiment, this may include: detecting a set contact event on a touch-sensitive display screen; acquiring the continuous moving track of the contact on the screen and determining a closed curve from that track; and taking the inside of the closed curve as the region of interest.
Referring to fig. 2A, when the continuous moving track is already a closed curve, the coordinates of each node on the curve are identified directly, and the closed curve is thereby determined. Referring to fig. 2B, when the track crosses itself, the closed part of the curve is identified and the closed curve is determined from the coordinates of the nodes of that part, i.e. the AB and AC segments of the track are discarded (keeping the crossing node A). Referring to fig. 2C, when the track is not closed, the track is extrapolated outward from at least one of its end points until it closes, and the coordinates of each node on the resulting curve determine the closed curve.
It can be seen that the closed curve may have any shape or size, determined by the user's own operation, and the shape may be regular or irregular.
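The closed-curve logic of step S11 can be sketched as follows. This is an illustrative Python sketch, not part of the patent: the function and parameter names are assumptions, an open track is simply closed by joining its end points (whereas the embodiment above extrapolates the track outward until it closes), and the self-intersection case of fig. 2B is omitted.

```python
# Illustrative sketch (assumed names) of determining the region of interest
# from a continuous moving track, per step S11.

def close_track(points, tol=10.0):
    """Return a closed polygon (list of (x, y) touch points) from a track.

    If the track's end points are within `tol` pixels of each other, it is
    treated as already closed (fig. 2A); otherwise it is closed by joining
    the end points (a simplification of the extrapolation in fig. 2C).
    """
    if len(points) < 3:
        raise ValueError("need at least 3 points to form a region")
    (x0, y0), (xn, yn) = points[0], points[-1]
    if ((x0 - xn) ** 2 + (y0 - yn) ** 2) ** 0.5 <= tol:
        return points
    return points + [points[0]]

def in_region(point, polygon):
    """Ray-casting point-in-polygon test: True if `point` is inside the ROI."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

The ray-casting test then answers, for any pixel coordinate, whether it falls inside the user-drawn region of interest.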
Step S12: processing the region of interest of the captured image according to a first processing method, and processing the area outside the region of interest according to a second processing method.
In one embodiment, before the captured image is processed, the method further includes determining the processing methods from the shooting mode selected by the user and a preset correspondence between shooting modes and image processing methods, the methods comprising a first processing method for the region of interest of the image and a second processing method for the area outside it; that is, the two areas are processed differently.
For example, if the shooting mode selected by the user is the scenic-area portrait mode, the corresponding image processing comprises portrait refinement inside the region of interest and portrait removal outside it; if the selected mode is the real-object mode, the corresponding preview image processing comprises object refinement inside the region of interest and blurring outside it.
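The preset correspondence between shooting modes and processing methods can be sketched as a simple lookup table; the mode names and method labels below are illustrative assumptions, not identifiers from the patent.

```python
# Illustrative lookup table for the preset correspondence between shooting
# modes and (first, second) processing methods; all names are assumptions.

PROCESSING_BY_MODE = {
    "scenic_portrait": ("portrait_refine", "portrait_remove"),
    "real_object":     ("object_refine",   "blur"),
}

def select_methods(mode):
    """Return (method inside the ROI, method outside the ROI) for a mode."""
    try:
        return PROCESSING_BY_MODE[mode]
    except KeyError:
        raise ValueError(f"no processing configured for mode {mode!r}")
```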
When the selected mode is the scenic-area portrait mode, the first processing method for the region of interest includes portrait refinement, specifically recognizing the faces and bodies inside the region and applying face beautification, body shaping, detail refinement and similar processing; the second processing method, for the area outside the region, includes portrait removal, i.e. editing the portraits out and restoring the original background of the scenic spot behind them. When too many figures must be removed and the background cannot be restored, a prompt window may pop up indicating that there are too many people outside the region of interest and the background cannot be restored perfectly, and suggesting that the user change location or shoot when fewer people are around.
When the selected mode is the real-object mode, object refinement inside the region of interest may be implemented by enhancing details, adjusting filters, increasing picture resolution and improving sharpness; the processing outside the region of interest is blurring, which may specifically be a fog or mosaic effect.
Free selection of the region of interest satisfies more personalized processing requirements. In prior-art methods, the region of interest can only be an automatically identified area, or an area of preset shape and size generated around the position the user taps. For example, when a person and a dog are photographed together at a scenic spot, the prior art often refines only the person and blurs the dog, which is clearly not the result the user wants; with the shooting method of this embodiment, the user determines the region of interest as required, so the person and the dog can be refined together, improving user satisfaction.
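The differential processing of step S12 can be illustrated with a toy sketch: one per-pixel function is applied inside the region of interest and another outside it. The grid-of-grey-values image and the placeholder "methods" are assumptions for illustration; a real implementation would operate on camera frames with neighborhood filters.

```python
# Toy sketch of step S12: apply the first method to pixels inside the
# region of interest and the second method outside it.

def process(image, roi_mask, inside_fn, outside_fn):
    """roi_mask[y][x] is True for pixels inside the region of interest."""
    return [
        [inside_fn(p) if roi_mask[y][x] else outside_fn(p)
         for x, p in enumerate(row)]
        for y, row in enumerate(image)
    ]

# Placeholder methods: a brightness boost inside, a crude darkening outside.
refine = lambda p: min(255, p + 20)
blur = lambda p: p // 2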
In one embodiment, the captured image is an image focused according to the region of interest; alternatively, it need not be focused according to the region of interest. Focusing may be automatic or may follow a user instruction.
According to the shooting method provided by the embodiment of the invention, a region of interest of the image is determined in response to a user's region selection operation; the region of interest of the captured image is then processed according to a first processing method, and the area outside it according to a second processing method. Because the image is processed in real time during shooting, the processed image is obtained directly once shooting completes; the user's processing requirements are met without secondary processing, saving time and effort. The processing result also no longer depends on the user's proficiency with image editing software or on the functions that software provides.
The region of interest is determined from the user's own region selection operation, so the user can freely select it while previewing the image, and the captured image is processed differently inside and outside that region, instead of the camera automatically choosing what to focus on; this better matches the user's processing requirements.
In an embodiment, after the region of interest of the captured image has been processed according to the first processing method and the area outside it according to the second processing method, the processed image is rendered. If a reprocessing instruction is received, the region of interest of the current image is processed according to a third processing method contained in the instruction, and/or the area outside it according to a fourth processing method contained in the instruction, and the reprocessed image is rendered. This repeats until a save instruction is received, whereupon the current image is stored at a specified location.
After seeing the rendered result, if the user is satisfied and taps the save button, the current image is stored, in response to the save instruction, in the local album (the specified location). If not satisfied, the user can choose to reprocess or tap back. Reprocessing applies further adjustments as the user requires: inside the region of interest, body shaping, face shaping, filter changes and the like; outside it, blurring, fogging, removal of unwanted parts and the like; the whole image can also receive basic processing such as rotation, cropping, text and mosaic. The current image is then processed again in response to the user's reprocessing instruction. If the user is still not satisfied with the processed or reprocessed image, tapping the return button retakes the picture in response to the return instruction.
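The save/reprocess/return flow just described can be sketched as a small control loop; all callback names are illustrative assumptions, not identifiers from the patent.

```python
# Illustrative control loop for the rendered result: a save instruction
# stores the image, a reprocess instruction applies further methods (the
# result would then be re-rendered), and a return instruction retakes.

def post_shot_loop(image, next_instruction, apply_methods, save, retake):
    """Handle save / reprocess / return instructions after rendering."""
    while True:
        kind, payload = next_instruction()
        if kind == "save":
            save(image)              # store at the specified location
            return image
        if kind == "reprocess":
            image = apply_methods(image, payload)  # then re-render
        elif kind == "return":
            return retake()          # discard and shoot again
```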
The shooting method of this embodiment comprises a framing process and a shooting process; the steps introduced above are the shooting steps, and a framing step precedes them. During framing, preview images captured at a set interval are acquired in real time; the region of interest determined by the user's current region selection operation is obtained; the region of interest of each preview image is processed according to the first processing method and the area outside it according to the second processing method; and the processed preview image is rendered. If the user finds that the currently rendered image meets his or her processing requirements, a shooting instruction is issued, and the shooting steps are executed in response.
Example two
The second embodiment of the present invention provides a specific implementation flow of a shooting method, which is shown in fig. 3 and includes the following steps:
step S31: and responding to a starting instruction of a user, opening the camera and starting the preview.
The method includes the steps of opening a camera, entering a preview interface, rendering a preview image captured by the camera, and including a plurality of shooting modes which can be selected by a user, such as a scenic region portrait shooting mode and a real object shooting mode, in a 'more' menu bar in a camera module, optionally including other modes, and a specific type of the shooting mode, which is not limited in this embodiment.
Step S32: acquiring the shooting mode selected by the user, and determining the processing methods from that mode and the preset correspondence between shooting modes and image processing methods.
The processing method comprises a first processing method of the interest region of the image and a second processing method outside the interest region.
In response to the selected photographing mode, a specific photographing module is entered.
Step S33: when a set contact event on the touch-sensitive display screen is detected, acquiring the continuous moving track of the contact on the screen and determining a closed curve from it; the inside of the closed curve is taken as the region of interest.
For example, an animated prompt may pop up on first use: the user double-taps the touch-sensitive display screen, keeps the finger on the screen after the second tap, and slides it to draw a closed curve. The continuous moving track is then a closed curve, and the region of interest is determined from it.
Step S34: acquiring, in real time, the preview image captured at the set interval, obtaining the region of interest determined by the user's current region selection operation, processing the region of interest of the preview image according to the first processing method, and processing the area outside it according to the second processing method.
Steps S33 and S34 may run in a loop: each time a set contact event on the touch-sensitive display screen is detected, they are executed again, until a shooting instruction is received.
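The S33/S34 loop can be sketched as follows; the callback names are illustrative assumptions.

```python
# Sketch of the framing loop: each preview frame captured at the set
# interval is processed against the latest region of interest, until a
# shooting instruction arrives.

def framing_loop(capture_frame, get_roi, process_frame, render,
                 shoot_requested):
    while not shoot_requested():
        frame = capture_frame()            # preview frame at the set interval
        roi = get_roi()                    # latest region selection (S33)
        render(process_frame(frame, roi))  # differential processing (S34)
```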
Step S35: in response to the user's shooting instruction, processing the region of interest of the captured image according to the first processing method, and processing the area outside it according to the second processing method.
The region of interest here is the one determined in response to the user's last region selection operation during framing.
Step S36: rendering the processed captured image.
The captured image is the image obtained by shooting. The processed captured image is the current processed image; it may have been processed once, or reprocessed one or more times.
If a save instruction is received, go to step S37; if a reprocessing instruction is received, go to step S38; if a return instruction is received, go to step S39.
Step S37: saving the current captured image to the specified location.
Step S38: processing the region of interest of the current image according to a third processing method contained in the reprocessing instruction, and/or processing the area outside the region of interest according to a fourth processing method contained in the reprocessing instruction.
After the re-processing of the captured image is completed, the process returns to step S36.
Step S39: the preview is reopened.
On receiving the user's return instruction, the currently processed captured image is discarded, the preview is reopened, and the process returns to step S32.
Referring to fig. 4, the above-mentioned shooting method can be specifically implemented by the following steps:
step S41: the Camera service was obtained by Camera Manager.
The Camera Manager acquires the Camera Id list and can acquire the attribute information of the corresponding Camera.
Step S42: the camera is turned on.
The corresponding camera is opened through the CameraManager.
Step S43: and opening the preview.
The preview width and height are configured through an ImageReader, which also listens for capture events; when a photographing action is detected, the picture-saving method is entered.
Step S44: the preview is implemented using a TextureView.
For each preview image, the upper-layer APP passes the coordinates of the closed curve down to the underlying algorithm, which processes the image in real time. A CameraCaptureSession is created, and preview images are repeatedly submitted to it at the set interval through a CaptureRequest.
Step S45: and (6) shooting.
When the user taps to take a picture, a CaptureRequest for still capture is created, the screen orientation is read and the picture orientation set, and the preview is stopped. Control then returns automatically to the ImageReader's picture-saving method, the picture is saved, and the repeating-preview method is resumed.
When the Android device sends a CaptureRequest to the camera device, the camera returns CameraMetadata; the transmission flow is shown in fig. 5. Taking the Android Camera API2 as an example, the Android device and the camera device exchange data through a pipeline: CameraMetadata carries the parameter interaction from the APP to the HAL, the captured image data is passed to the HAL layer for image processing, and the processed image is passed back to the APP layer and displayed for the user to preview. The user then selects one of: (1) editing the image as required, in which case it is processed again; (2) saving the image, in which case the processed image is stored directly in the album; or (3) retaking, returning to the camera preview interface to shoot again.
Based on the inventive concept of the present invention, an embodiment of the present invention further provides a shooting device, which can implement the above shooting method. The device can be arranged in an electronic mobile terminal with a photographing function, and the structure of the device is shown in figure 6, and comprises:
a determining module 61, configured to determine an interest region of the image in response to a region selection operation of a user;
and the processing module 62 is configured to process the interest region of the captured image according to a first processing method, and process the outside of the interest region according to a second processing method.
In an embodiment, the determining module 61 is specifically configured to:
detecting a set contact event with a touch-sensitive display screen; acquiring a continuous moving track of a contact on the touch-sensitive display screen, and determining a closed curve according to the continuous moving track; and determining the inside of the closed curve as an interest region.
In an embodiment, the determining module 61 is specifically configured to:
and determining the interest area of the image in response to the last area selection operation of the user in the framing process.
In one embodiment, the apparatus further comprises a view finding module 63 for:
in the framing process, acquiring in real time the preview image captured at the set interval; obtaining the region of interest determined by the current user's region selection operation; and processing the region of interest of the preview image according to the first processing method, and processing the area outside it according to the second processing method.
In one embodiment, the apparatus further comprises a rendering module 64 and a saving module 65:
a rendering module 64 configured to render the processed captured image;
the processing module 62 is further configured to, if a reprocessing instruction is received, process the interest region of the current image according to a third processing method included in the reprocessing instruction, and/or process the outside of the interest region of the current image according to a fourth processing method included in the reprocessing instruction, and render the reprocessed image;
and the saving module 65 is configured to, when receiving a saving instruction, save the current image to the specified location.
In one embodiment, the determining module 61 is further configured to:
and determining a processing method according to the shooting mode selected by the user and the corresponding relation between the preset shooting mode and the image processing method, wherein the processing method comprises a first processing method for the interest area of the image and a second processing method outside the interest area.
In an embodiment, the determining module 61 is specifically configured to:
if the shooting mode selected by the user is a scenic region portrait shooting mode, determining that the corresponding image processing method comprises portrait thinning processing of an interested region of the image and portrait removing processing outside the interested region; and if the shooting mode selected by the user is the real object shooting mode, determining that the corresponding preview image processing method comprises real object thinning processing of an interest area of the image and blurring processing outside the interest area.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Based on the inventive concept of the present invention, an embodiment of the present invention further provides a camera application, which includes the above-described photographing apparatus.
Unless specifically stated otherwise, terms such as processing, computing, calculating, determining, displaying, or the like, may refer to an action and/or process of one or more processing or computing systems or similar devices that manipulates and transforms data represented as physical (e.g., electronic) quantities within the processing system's registers and memories into other data similarly represented as physical quantities within the processing system's memories, registers or other such information storage, transmission or display devices. Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. Of course, the processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".
Claims (10)
1. A photographing method, characterized by comprising:
determining a region of interest of an image in response to a region selection operation of a user; and
processing the region of interest of a captured image according to a first processing method, and processing the outside of the region of interest according to a second processing method.
2. The method of claim 1, wherein determining the region of interest of the image in response to a region selection operation by the user comprises:
detecting a set contact event on a touch-sensitive display screen;
acquiring a continuous movement track of a contact point on the touch-sensitive display screen, and determining a closed curve according to the continuous movement track; and
determining the inside of the closed curve as the region of interest.
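Outside the claim language, the closed-curve selection described above can be sketched as follows: the continuous movement track of the contact point is treated as a polygon, implicitly closed by joining the last sampled point back to the first, and a standard ray-casting test decides whether a given pixel lies inside the region of interest. The function name and track representation are illustrative assumptions, not the patented implementation.

```python
# A minimal sketch of closed-curve membership testing for the region of
# interest. `track` is the list of (x, y) contact positions sampled from the
# touch screen; it is implicitly closed by joining the last point to the first.

def point_in_closed_curve(point, track):
    """Ray-casting test: is `point` inside the polygon formed by `track`?"""
    x, y = point
    inside = False
    n = len(track)
    for i in range(n):
        x1, y1 = track[i]
        x2, y2 = track[(i + 1) % n]  # wrap around to close the curve
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A square traced by the user: pixels inside it belong to the region of interest.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
```

A real implementation would rasterize this test into a binary mask once per selection rather than testing each pixel on every frame.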
3. The method of claim 1, wherein determining the region of interest of the image in response to a region selection operation by the user comprises:
determining the region of interest of the image in response to the last region selection operation of the user during framing.
4. The method of claim 3, further comprising:
during framing, acquiring, in real time, preview images captured at set intervals;
acquiring the region of interest determined according to the current region selection operation of the user; and
processing the region of interest of the preview image according to the first processing method, and processing the outside of the region of interest according to the second processing method.
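The preview flow of this claim can be sketched as a per-pixel dispatch: for each preview frame grabbed at the set interval, pixels inside the current region-of-interest mask pass through the first processing method and the remaining pixels through the second. All names below are illustrative assumptions, not the patent's implementation.

```python
# A hedged sketch of applying the first processing method inside the ROI and
# the second outside it, per pixel, for one preview frame.

def process_preview(frame, roi_mask, first_method, second_method):
    """frame: 2D list of pixel values; roi_mask: same shape, True inside ROI."""
    out = []
    for row, mask_row in zip(frame, roi_mask):
        out.append([
            first_method(p) if inside else second_method(p)
            for p, inside in zip(row, mask_row)
        ])
    return out

frame = [[10, 20], [30, 40]]
roi_mask = [[True, False], [False, True]]
# e.g. brighten inside the region of interest, darken outside it
result = process_preview(frame, roi_mask, lambda p: p + 5, lambda p: p - 5)
# result == [[15, 15], [25, 45]]
```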
5. The method according to claim 1, wherein, after the processing of the region of interest of the captured image according to the first processing method and the processing of the outside of the region of interest according to the second processing method, the method further comprises:
rendering the processed captured image;
if a reprocessing instruction is received, processing the region of interest of the current image according to a third processing method contained in the reprocessing instruction, and/or processing the outside of the region of interest of the current image according to a fourth processing method contained in the reprocessing instruction, and rendering the reprocessed image; and
upon receiving a storage instruction, storing the current image to a specified location.
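The render / reprocess / store flow of claim 5 can be sketched as a small instruction loop. The instruction objects, field names, and the toy two-value image used here are illustrative assumptions, not the patented implementation.

```python
# A hedged sketch of claim 5's flow: render the processed image, apply any
# reprocessing instructions (third method on the ROI, fourth on the outside),
# re-render after each, and stop when a storage instruction arrives.

def handle_instructions(image, instructions, render):
    """image: toy pair (roi_value, outside_value); render: display callback."""
    roi, outside = image
    render((roi, outside))             # render the initially processed image
    for inst in instructions:
        if inst["type"] == "store":
            return (roi, outside)      # store the current image as-is
        if inst.get("third_method"):   # reprocess the region of interest
            roi = inst["third_method"](roi)
        if inst.get("fourth_method"):  # reprocess the outside
            outside = inst["fourth_method"](outside)
        render((roi, outside))         # render the reprocessed image
    return (roi, outside)

frames = []
saved = handle_instructions(
    (1, 1),
    [{"type": "reprocess", "third_method": lambda v: v + 10},
     {"type": "store"}],
    frames.append,
)
# saved == (11, 1); two frames were rendered before storing
```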
6. The method according to claim 1, wherein the captured image is focused according to the region of interest.
7. The method according to any one of claims 1 to 6, wherein the method further comprises:
determining a processing method according to the shooting mode selected by the user and a preset correspondence between shooting modes and image processing methods, wherein the determined processing method comprises the first processing method for the region of interest of the image and the second processing method for the outside of the region of interest.
8. The method according to claim 7, wherein the determining the processing method according to the shooting mode selected by the user and the preset correspondence between shooting modes and image processing methods specifically comprises:
if the shooting mode selected by the user is a scenic-area portrait shooting mode, determining that the corresponding image processing method comprises portrait thinning processing for the region of interest of the image and portrait removal processing for the outside of the region of interest; and
if the shooting mode selected by the user is a real-object shooting mode, determining that the corresponding image processing method comprises real-object thinning processing for the region of interest of the image and blurring processing for the outside of the region of interest.
9. A photographing apparatus, characterized by comprising:
a determining module, configured to determine a region of interest of an image in response to a region selection operation of a user; and
a processing module, configured to process the region of interest of a captured image according to a first processing method, and process the outside of the region of interest according to a second processing method.
10. A camera application, characterized in that it comprises the photographing apparatus according to claim 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110384887.9A CN113225477A (en) | 2021-04-09 | 2021-04-09 | Shooting method and device and camera application |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113225477A true CN113225477A (en) | 2021-08-06 |
Family
ID=77086857
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110384887.9A Pending CN113225477A (en) | 2021-04-09 | 2021-04-09 | Shooting method and device and camera application |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113225477A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103297699A (en) * | 2013-05-31 | 2013-09-11 | 北京小米科技有限责任公司 | Method and terminal for shooting images |
US8760551B2 (en) * | 2011-03-02 | 2014-06-24 | Canon Kabushiki Kaisha | Systems and methods for image capturing based on user interest |
CN109951635A (en) * | 2019-03-18 | 2019-06-28 | Oppo广东移动通信有限公司 | It takes pictures processing method, device, mobile terminal and storage medium |
WO2020078027A1 (en) * | 2018-10-15 | 2020-04-23 | 华为技术有限公司 | Image processing method, apparatus and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3457683B1 (en) | Dynamic generation of image of a scene based on removal of undesired object present in the scene | |
JP6316968B2 (en) | Interactive image composition | |
CN109040474B (en) | Photo display method, device, terminal and storage medium | |
KR20140098009A (en) | Method and system for creating a context based camera collage | |
CN107231524A (en) | Image pickup method and device, computer installation and computer-readable recording medium | |
US20100118175A1 (en) | Imaging Apparatus For Image Integration | |
US20200344411A1 (en) | Context-aware image filtering | |
CN112887617B (en) | Shooting method and device and electronic equipment | |
CN109948525A (en) | It takes pictures processing method, device, mobile terminal and storage medium | |
CN113706421B (en) | Image processing method and device, electronic equipment and storage medium | |
CN111290659A (en) | Writing board information recording method and system and writing board | |
Chang et al. | Panoramic human structure maintenance based on invariant features of video frames | |
CN113810627A (en) | Video processing method and device and mobile terminal | |
CN113225477A (en) | Shooting method and device and camera application | |
KR102022559B1 (en) | Method and computer program for photographing image without background and taking composite photograph using digital dual-camera | |
CN113822899A (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN115423692A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN114430456A (en) | Image capturing method, image capturing apparatus, and storage medium | |
CN112565586A (en) | Automatic focusing method and device | |
CN112036342A (en) | Document snapshot method, device and computer storage medium | |
KR20210029905A (en) | Method and computer program for remove photo background and taking composite photograph | |
CN113315903A (en) | Image acquisition method and device, electronic equipment and storage medium | |
CN113873160B (en) | Image processing method, device, electronic equipment and computer storage medium | |
CN112804451B (en) | Method and system for photographing by utilizing multiple cameras and mobile device | |
CN113056905A (en) | System and method for taking tele-like images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210806 |