CN111464734A - Method and device for processing image data - Google Patents
- Publication number
- CN111464734A (application CN201910049904.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- information
- preset
- shooting
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/62—Control of parameters via user interfaces
- H04N23/80—Camera processing pipelines; Components thereof
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Abstract
The present disclosure provides a method and apparatus for processing image data. The method is applied to a terminal that comprises a first lens and a second lens, and includes: monitoring for preset trigger information within a display interface of a first image, the first image being a historical image captured by the first lens; when the preset trigger information is detected, acquiring and outputting preset object information, the preset object information being object information of at least one photographed object in the first image and/or a second image, the second image being a historical image captured by the second lens while the first lens captured the first image; acquiring target object information selected by a user from the preset object information; and processing first image data of the first image and second image data of the second image according to the target object information to obtain target image data in which the target photographed object corresponding to the target object information serves as the display center of gravity, so that the target photographed object is highlighted.
Description
Technical Field
The present disclosure relates to the field of computer communication technologies, and in particular, to a method and an apparatus for processing image data.
Background
With the development of terminal technology, image capture has become one of the basic functions of a terminal: by installing a lens, a terminal can take photographs and record video.
After receiving an image capture instruction, the terminal controls the lens to capture an image. However, limited by the photographer's skill, the captured image often fails to highlight the target photographed object and does not reflect the photographer's actual shooting intention, resulting in a poor shooting result.
Disclosure of Invention
In view of the above, the present disclosure provides a method and an apparatus for processing image data. Object information of at least one photographed object in the images captured by two lenses is recommended to the user, and the two images are processed according to the target object information selected by the user to obtain target image data in which the target photographed object corresponding to the target object information serves as the display center of gravity, so that the target photographed object is highlighted in the target image.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for processing image data, which is applied to a terminal, where the terminal includes: a first lens and a second lens, the method comprising:
monitoring preset trigger information in a display interface of a first image, wherein the first image is a historical image shot by a first lens;
when the preset trigger information is monitored, acquiring and outputting preset object information, wherein the preset object information is object information of at least one shot object in the first image and/or the second image, and the second image is a historical image shot by the second lens when the first lens shoots the first image;
acquiring target object information determined by a user according to the preset object information;
and processing first image data of the first image and second image data of the second image according to the target object information to obtain target image data, wherein the target image data takes a target photographed object corresponding to the target object information as the display center of gravity.
Optionally, the acquiring and outputting preset object information includes:
determining at least one shot object according to the first image and the second image;
and acquiring and outputting object information of at least one shot object.
Optionally, the determining at least one of the photographed objects according to the first image and the second image includes:
identifying each photographed object in the first image and the second image;
acquiring shooting information of each shot object;
and matching the shooting information of each shot object with a preset shooting information condition, and determining at least one shot object of which the shooting information is matched with the preset shooting information condition.
Optionally, the shooting information includes: shooting time length; the matching the shooting information of each shot object with a preset shooting information condition, and determining at least one shot object of which the shooting information is matched with the preset shooting information condition, includes:
determining whether the shooting time of each shot object is matched with a preset shooting time condition;
and acquiring at least one shot object with the shooting duration matched with the preset shooting duration condition.
Optionally, the shooting information includes: shooting position information; the matching the shooting information of each shot object with a preset shooting information condition, and determining at least one shot object of which the shooting information is matched with the preset shooting information condition, includes:
determining whether the shooting position information of each shot object is matched with a preset position information condition;
and acquiring at least one shot object of which the shooting position information is matched with the preset position information condition.
Optionally, if the terminal stores a reference image of at least one preset object, the determining at least one photographed object according to the first image and the second image includes:
determining, for each reference image of the preset object, whether the first image and the second image include a reference image of the preset object;
if the first image and/or the second image comprise a reference image of the preset object, determining the preset object as the shot object;
and determining at least one shot object according to the preset object.
Optionally, before the determining at least one of the objects to be photographed according to the first image and the second image, the method further includes:
acquiring a historical image set shot by the terminal in a preset time period;
acquiring at least one preset object of which the shooting information is matched with a second preset shooting information condition in the historical image set;
and acquiring a reference image shot for at least one preset object from the historical image set.
Optionally, if the preset object information is object information of at least two of the photographed objects in the first image and/or the second image, the acquiring and outputting the preset object information includes:
acquiring shooting information of each shot object;
sequencing at least two pieces of shooting information to generate a shooting information sequence;
and sequentially displaying the object information of at least two shot objects according to the shooting information sequence.
Optionally, the sorting at least two pieces of the shooting information to generate a shooting information sequence includes:
when the shooting information is shooting duration, sequencing at least two shooting durations according to the sequence of the shooting durations from long to short to generate a shooting duration sequence; or,
and when the shooting information is the shooting times within the preset duration, sequencing the at least two shooting times according to the sequence of the shooting times from the largest to the smallest to generate a shooting time sequence.
Optionally, the processing, according to the target object information, the first image data of the first image and the second image data of the second image to obtain target image data includes:
processing the first image data and the second image data to obtain composite image data;
modifying the display position data of the target photographed object in the synthesized image data to obtain target image data, wherein the target photographed object in a target image corresponding to the target image data is displayed in an image center area; or,
determining specified image data for displaying the target photographic subject in the composite image data;
and performing blurring processing on other image data except the designated image data in the synthetic image data to obtain the target image data, wherein other photographed objects positioned around the target photographed object in the target image are displayed in a blurred mode.
Optionally, before the monitoring of the preset trigger information in the display interface of the first image, the method further includes:
when the first lens is detected to enter a preset shooting mode, starting the second lens;
acquiring an initial image acquired by the second lens;
determining whether preset image information is included in the initial image;
and if the initial image comprises the preset image information, controlling the second lens to shoot the second image.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for processing image data, applied to a terminal, the terminal including: a first lens and a second lens, the apparatus comprising:
a monitoring module configured to monitor preset trigger information within a display interface of a first image, the first image being a history image captured by the first lens;
the output module is configured to acquire and output preset object information when the preset trigger information is monitored, wherein the preset object information is object information of at least one shot object in the first image and/or the second image, and the second image is a historical image shot by the second lens when the first lens shoots the first image;
the first acquisition module is configured to acquire target object information determined by a user according to the preset object information;
and the processing module is configured to process the first image data of the first image and the second image data of the second image according to the target object information to obtain target image data, wherein the target image data takes a target photographed object corresponding to the target object information as the display center of gravity.
Optionally, the output module includes:
a determination sub-module configured to determine at least one of the photographed objects from the first image and the second image;
and the output sub-module is configured to acquire and output object information of at least one shot object.
Optionally, the determining sub-module includes:
an identifying unit configured to identify each photographed object in the first image and the second image;
an acquisition unit configured to acquire shooting information of each of the objects;
the first determination unit is configured to match the shooting information of each shot object with a preset shooting information condition, and determine at least one shot object of which the shooting information is matched with the preset shooting information condition.
Optionally, the first determining unit includes:
a first determination subunit configured to determine, when the shooting information includes a shooting duration, whether the shooting duration of each of the objects matches a preset shooting duration condition;
the first acquisition subunit is configured to acquire at least one photographed object of which the photographing time length is matched with the preset photographing time length condition.
Optionally, the first determining unit includes:
a second determination subunit configured to determine, when the shooting information includes shooting position information, whether the shooting position information of each of the objects matches a preset position information condition;
a second acquisition subunit configured to acquire at least one of the photographic subjects whose photographing position information matches the preset position information condition.
Optionally, the determining sub-module includes:
a second determining unit, configured to determine, if the terminal stores at least one reference image of a preset object, for each reference image of the preset object, whether the first image and the second image include the reference image of the preset object;
a third determining unit configured to determine the preset object as the photographed object if the first image and/or the second image include a reference image of the preset object;
a fourth determination unit configured to determine at least one of the photographed objects according to the preset object.
Optionally, the apparatus further comprises:
a second acquisition module configured to acquire a set of historical images captured by the terminal within a preset time period before the determination of at least one of the objects to be captured according to the first image and the second image;
the third acquisition module is configured to acquire at least one preset object of which the shooting information is matched with a second preset shooting information condition in the historical image set;
a fourth obtaining module configured to obtain a reference image captured for at least one preset object from the historical image set.
Optionally, the output module includes:
the obtaining sub-module is configured to obtain shooting information of each shot object if the preset object information is object information of at least two shot objects in the first image and/or the second image;
the generation sub-module is configured to sort at least two pieces of shooting information and generate a shooting information sequence;
and the display sub-module is configured to sequentially display the object information of at least two shot objects according to the shooting information sequence.
Optionally, the generating sub-module includes:
the first generation unit is configured to sort at least two shooting time lengths according to the sequence from long to short of the shooting time lengths when the shooting information is the shooting time lengths, and generate a shooting time length sequence; or,
and the second generation unit is configured to sort the at least two shooting times according to the sequence of the shooting times from the largest to the smallest to generate a shooting time sequence when the shooting information is the shooting times within a preset time length.
Optionally, the processing module includes:
a first processing sub-module configured to process the first image data and the second image data to obtain composite image data;
a modification sub-module configured to modify display position data of the target photographed object in the synthesized image data to obtain the target image data, wherein the target photographed object in a target image corresponding to the target image data is displayed in an image center area; or,
a determination sub-module configured to determine specified image data for displaying the target photographic subject in the synthesized image data;
and the second processing submodule is configured to perform fuzzy processing on other image data except the specified image data in the synthesized image data to obtain the target image data, and the other photographed objects around the target photographed object in the target image are displayed in a fuzzy mode.
Optionally, the apparatus further comprises:
the starting module is configured to start the second lens when the first lens is detected to enter a preset shooting mode before preset trigger information is monitored in a display interface of the first image;
a fifth acquisition module configured to acquire an initial image acquired by the second lens;
a determination module configured to determine whether preset image information is included in the initial image;
and the control module is configured to control the second lens to shoot the second image if the initial image comprises the preset image information.
According to a third aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of the first aspect described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an apparatus for processing image data, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
monitoring preset trigger information in a display interface of a first image, wherein the first image is a historical image shot by the first lens;
when the preset trigger information is monitored, acquiring and outputting preset object information, wherein the preset object information is object information of at least one shot object in the first image and/or the second image, and the second image is a historical image shot by the second lens when the first lens shoots the first image;
acquiring target object information determined by a user according to the preset object information;
and processing first image data of the first image and second image data of the second image according to the target object information to obtain target image data, wherein the target image data takes a target photographed object corresponding to the target object information as the display center of gravity.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the method and the device, after the preset trigger information is monitored in the display interface of the first image, the object information of at least one photographed object in the first image and/or the second image is acquired and output, and the first image data and the second image data are processed according to the target object information selected by the user to obtain target image data with the target photographed object as the display center of gravity, so that the target photographed object is highlighted in the target image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a flow diagram illustrating a method of processing image data in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another method of processing image data in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating another method of processing image data in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating another method of processing image data in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating another method of processing image data in accordance with an exemplary embodiment;
FIG. 6 is a schematic illustration of a display interface for a first image, shown in accordance with an exemplary embodiment;
FIG. 7 is a schematic diagram illustrating another method of processing image data in accordance with an exemplary embodiment;
FIG. 8 is a schematic diagram illustrating another method of processing image data in accordance with an exemplary embodiment;
FIG. 9 is a schematic diagram illustrating another method of processing image data in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating an apparatus for processing image data according to an exemplary embodiment;
FIG. 11 is a block diagram illustrating another apparatus for processing image data in accordance with an illustrative embodiment;
FIG. 12 is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment;
FIG. 13 is a block diagram illustrating another apparatus for processing image data in accordance with an exemplary embodiment;
FIG. 14 is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment;
FIG. 15 is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment;
FIG. 16 is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment;
FIG. 17 is a block diagram illustrating another apparatus for processing image data in accordance with an illustrative embodiment;
FIG. 18 is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment;
FIG. 19 is a schematic diagram illustrating an architecture of an apparatus for processing image data in accordance with an exemplary embodiment;
fig. 20 is a schematic structural diagram illustrating another apparatus for processing image data according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to a determination", depending on the context.
As shown in fig. 1, fig. 1 is a flowchart illustrating a method for processing image data according to an exemplary embodiment of the present disclosure, applied to a terminal including: a first lens and a second lens, the method comprising the steps of:
in step 101, preset trigger information is monitored within a display interface of a first image, which is a history image captured by a first lens.
The method for processing image data provided by the embodiment of the disclosure is applied to a terminal, the terminal comprises a first lens and a second lens, and both the first lens and the second lens can be used for image shooting. The first lens and the second lens may be the same type of lens or different types of lenses, for example, the first lens is a non-wide-angle lens, and the second lens is a wide-angle lens; for another example, the first lens and the second lens are both non-wide-angle lenses. In the embodiment of the disclosure, the terminal performs linkage control on the first lens and the second lens, and when the first lens is controlled to shoot images, the second lens is controlled to shoot images according to a preset shooting rule.
The terminal has acquired a first image and a second image when executing this step, wherein the first image is a history image captured by a first lens, the second image is a history image captured by a second lens when the first lens captures the first image, and an association exists between the first image and the second image.
When the terminal displays the first image, preset trigger information is monitored in a display interface of the first image, and if the preset trigger information is monitored, subsequent operation is started.
There are various ways to monitor the preset trigger information. For example, when a preset option is provided on the display interface of the first image, the terminal monitors whether a selection operation for the preset option is received within the display interface. Illustratively, the terminal opens an album and displays a first image in it; a preset option is provided on the display interface of the first image, and the terminal monitors whether a selection operation for that option is received. For another example, the terminal monitors whether voice information containing instruction content is received from the user, the instruction content directing the terminal to start the image data processing function.
In an optional embodiment, before the terminal monitors the preset trigger information in the display interface of the first image, a first image captured by the first lens and a second image captured by the second lens need to be acquired.
For example, referring to fig. 2, which is a flowchart illustrating another method of processing image data, the operation of acquiring the first image and the second image before the terminal monitors the preset trigger information in the display interface of the first image may be implemented through the following steps 105 to 108:
in step 105, when the first lens is detected to enter the preset shooting mode, the second lens is turned on.
The preset photographing mode may be various, such as an on mode, a photographing mode, and the like. When the preset shooting mode is the opening mode, after the first lens is detected to be opened, the second lens is opened; when the preset shooting mode is the shooting mode, after the first lens is detected to start shooting the first image, the second lens is started.
In step 106, an initial image of the second shot acquisition is acquired.
After the terminal turns on the second lens, it acquires an initial image collected by the second lens, the initial image being an image collected by the second lens in the period after the second lens is turned on and before capture of the second image begins.
In step 107, it is determined whether preset image information is included in the initial image.
The preset image information is in various types, for example, a preset kind of a subject (e.g., a person, an animal, a building, etc.), a specified subject (e.g., a specified person, a specified animal, a specified building, etc.), and the like.
And after the terminal acquires the initial image acquired by the second lens, determining whether the initial image comprises preset image information.
In step 108, if the initial image includes the preset image information, the second lens is controlled to capture a second image.
After determining that the initial image currently collected by the second lens includes the preset image information, the terminal controls the second lens to capture the second image. If the terminal determines that the current initial image does not include the preset image information, it acquires the initial images subsequently collected by the second lens, determines whether each subsequent initial image includes the preset image information, and controls the second lens to capture the second image once a subsequent initial image is determined to include the preset image information.
In operation, if the first lens has finished capturing the first image but the second lens has still not collected an initial image that includes the preset image information, the terminal may close the second lens automatically. The terminal may also close the second lens automatically after detecting that the first lens has not been used to capture images within a preset duration, or close it after receiving a preset instruction input by the user. The preset instruction may take various forms, such as an instruction to close the second lens, or an instruction to close the first lens and the second lens at the same time.
Based on the settings from step 105 to step 108, the image shooting control of the second lens is realized, and the terminal functions are enriched.
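The linkage control of steps 105 to 108 can be summarised in a short sketch. The following Python code is a minimal illustration only: the lens interface, the dictionary-shaped frames, and the `contains_preset_info` detector are assumptions introduced here, not part of the disclosure.

```python
import time

def contains_preset_info(frame, preset="face"):
    # Hypothetical detector: in practice this would be a face/object
    # recognition routine applied to the initial images (step 107).
    return preset in frame.get("detected_objects", [])

def run_second_lens(second_lens, timeout_s=30.0):
    """Steps 105-108: turn on the second lens, inspect the initial images
    it collects, and start capturing the second image only once the preset
    image information appears; close the lens again on timeout."""
    second_lens.open()                               # step 105
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        frame = second_lens.next_initial_frame()     # step 106
        if contains_preset_info(frame):              # step 107
            second_lens.start_capture()              # step 108
            return True
        time.sleep(0.05)
    second_lens.close()  # no qualifying initial image: close automatically
    return False
```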
In step 102, when the preset trigger information is monitored, preset object information is acquired and output, the preset object information is object information of at least one photographed object in a first image and/or a second image, and the second image is a history image photographed by a second lens when the first lens photographs the first image.
The terminal acquires a first image and a second image after monitoring preset trigger information in a display interface of the first image, and then acquires object information of at least one shot object from the first image and/or the second image. There are various kinds of object information, such as an object avatar, an object text label, and the like.
In operation, the terminal may acquire at least one photographed object from only the first image; the terminal may acquire at least one photographed object only from the second image; the terminal can acquire at least one shot object from the first image and the second image simultaneously, and in this case, the at least one shot object can be an object which appears in the first image or the second image independently or an object which appears in the first image and the second image simultaneously.
Referring to fig. 3, which is a flowchart illustrating another method of processing image data according to an exemplary embodiment, the operation of acquiring and outputting preset object information may be implemented by steps 1021 to 1024 as follows:
in step 1021, a first image and a second image are acquired.
There are various ways for the terminal to acquire the first image and the second image, for example:
the first acquisition mode is as follows: the second image is a history image shot by the second lens when the first image is shot by the first lens, and the second image is associated with the first image, so that the terminal can establish a corresponding relation between the first image identifier of the first image and the second image identifier of the second image after the first image and the second image are acquired.
After monitoring the preset trigger information in the display interface of the first image, the terminal can acquire the first image identifier of the first image, search the pre-established corresponding relationship between the first image identifier and the second image identifier according to the first image identifier, determine the second image identifier corresponding to the first image identifier, determine the second image according to the second image identifier, and acquire the second image.
The second acquisition mode is as follows: the terminal may store the first image and the second image in a preset folder, and the terminal records a correspondence between a folder identifier of the preset folder and a first image identifier of the first image.
After monitoring the preset trigger information in the display interface of the first image, the terminal can acquire the first image identifier of the first image, search the corresponding relation between the pre-established folder identifier and the first image identifier according to the first image identifier, determine the folder identifier corresponding to the first image identifier, determine the preset folder according to the folder identifier, determine the first image in the preset folder according to the first image identifier, and determine the second image in the preset folder by using an exclusion method.
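Both acquisition modes amount to keeping a mapping from the first image's identifier to the location of its companion image. A dictionary-based sketch follows; the identifier scheme and storage layout are illustrative assumptions.

```python
# First acquisition mode: a direct identifier-to-identifier correspondence
# established when the two images are captured.
pair_index = {}  # first_image_id -> second_image_id

def register_pair(first_id, second_id):
    pair_index[first_id] = second_id

def find_second_image(first_id):
    return pair_index.get(first_id)  # None if no companion was recorded

# Second acquisition mode: both images live in one preset folder, and the
# second image is found by the exclusion method.
folder_index = {}  # first_image_id -> list of image ids in that folder

def find_second_image_by_folder(first_id):
    folder = folder_index.get(first_id, [])
    others = [img for img in folder if img != first_id]
    return others[0] if others else None
```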
In step 1022, at least one subject is determined based on the first image and the second image.
There are various ways for the terminal to determine at least one object to be photographed based on the first image and the second image.
For example, referring to fig. 4, which is a flowchart illustrating another method for processing image data according to an exemplary embodiment, the operation of the terminal determining at least one photographed object according to the first image and the second image may be implemented by: in step 1022-1, each of the photographed objects in the first image and the second image is identified; in step 1022-2, photographing information of each subject is acquired; in step 1022-3, the shooting information of each object is matched with the first preset shooting information condition, and at least one object whose shooting information matches with the preset shooting information condition is determined.
With respect to the above step 1022-1, when the photographic subject is a person, each of the photographic subjects in the first image and the second image may be recognized by a face recognition method.
For step 1022-3, different types of shooting information correspond to different preset shooting information conditions. When the shooting information includes a shooting duration, the first preset shooting information condition includes a preset shooting duration condition, which may include: the shooting duration of the photographed object is greater than a shooting duration threshold, or the shooting duration of the photographed object falls within a preset shooting duration range, and the like. When the shooting information includes shooting position information, the first preset shooting information condition includes a preset shooting position information condition, which may include: the photographed object is located in a designated area of the image, the positional relation between one photographed object and other photographed objects satisfies a preset positional relation, and the like.
In operation, when the shooting information includes the shooting duration, step 1022-3 may be implemented as follows: firstly, determining whether the shooting duration of each photographed object matches the preset shooting duration condition; and secondly, determining at least one photographed object whose shooting duration matches the preset shooting duration condition.
When the photographing information includes the photographing position information, step 1022-3 may be implemented by: firstly, determining whether the shooting position information of each shot object is matched with a preset position information condition; and determining at least one shot object of which the shooting position information is matched with the preset position information condition.
Based on the settings of step 1022-1 to step 1022-3, the terminal is enabled to have a function of determining at least one object to be photographed by determining whether the photographing information matches the preset photographing information condition, and the terminal function is enriched.
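A sketch of the matching in step 1022-3 is given below; the duration threshold, the normalised centre coordinates, and the designated region are assumed parameters chosen for illustration.

```python
def match_by_duration(subjects, min_seconds=10.0):
    """Keep the photographed objects whose shooting duration exceeds a
    threshold (one possible preset shooting duration condition)."""
    return [s for s in subjects if s["duration_s"] > min_seconds]

def match_by_position(subjects, region=(0.25, 0.25, 0.75, 0.75)):
    """Keep the photographed objects whose normalised centre falls inside
    a designated image area (one possible preset position condition)."""
    left, top, right, bottom = region
    return [s for s in subjects
            if left <= s["cx"] <= right and top <= s["cy"] <= bottom]

subjects = [  # illustrative records produced by step 1022-2
    {"name": "A", "duration_s": 42.0, "cx": 0.50, "cy": 0.50},
    {"name": "B", "duration_s": 3.0,  "cx": 0.90, "cy": 0.10},
]
print([s["name"] for s in match_by_duration(subjects)])  # ['A']
print([s["name"] for s in match_by_position(subjects)])  # ['A']
```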
For another example, referring to fig. 5, which is a flowchart illustrating another method for processing image data according to an exemplary embodiment, when the terminal stores a reference image of at least one preset object, the terminal may determine at least one photographed object according to the first image and the second image by: in step 1022-4, for each reference image of the preset object, determining whether the first image and the second image include the reference image of the preset object; in step 1022-5, if the first image and/or the second image includes a reference image of the preset object, determining the preset object as a photographed object; in step 1022-6, at least one photographed object is determined according to the preset object.
For example, the terminal stores in advance a first facial avatar of a first person and a second facial avatar of a second person. The terminal determines whether the first image and the second image include the first facial avatar, and if the first image and/or the second image includes the first facial avatar, determines that the first person is a photographed object; similarly, the terminal determines whether the first image and the second image include the second facial avatar, and if so, determines that the second person is a photographed object.
Based on the settings of step 1022-4-step 1022-6, the terminal is enabled to have a function of determining at least one photographed object by determining whether the first image and the second image include a reference image of a preset object, and the terminal function is enriched.
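One plausible realisation of steps 1022-4 to 1022-6 compares face embeddings against the stored reference images. The embedding representation and the similarity threshold below are assumptions; the disclosure does not prescribe a particular recognition technique.

```python
def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def matches_reference(image_faces, reference, threshold=0.6):
    # image_faces: embeddings of the faces found in one image.
    return any(cosine(f, reference) > threshold for f in image_faces)

def subjects_from_references(first_faces, second_faces, references):
    """A preset object becomes a photographed object if its reference
    appears in the first image and/or the second image."""
    return [name for name, ref in references.items()
            if matches_reference(first_faces, ref)
            or matches_reference(second_faces, ref)]
```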
In an alternative embodiment, before performing step 1022-4-step 1022-6, i.e., before determining at least one photographed object from the first image and the second image, the terminal may further perform the following operations: firstly, acquiring a historical image set shot by a terminal in a preset time period; secondly, acquiring at least one preset object of which the shooting information is matched with a second preset shooting information condition in the historical image set; finally, a reference image shot for at least one preset object is acquired from the historical image set.
The history image set may only include history photos taken by the terminal within a preset time period, may only include history videos taken by the terminal within the preset time period, or may include both the history photos and the history videos taken by the terminal within the preset time period.
There are various kinds of shooting information, such as shooting time, shooting duration, shooting count, shooting frequency, and shooting position information, and different kinds of shooting information correspond to different second preset shooting information conditions. For example, when the shooting information includes a shooting duration, the second preset shooting information condition includes a preset shooting duration condition, which may include: the shooting duration in the most recently captured image is greater than a duration threshold, the total shooting duration within a preset historical period is greater than a duration threshold, and so on. When the shooting information includes a shooting count, the second preset shooting information condition includes a preset shooting count condition, which may include: the total number of shots within a preset historical period is greater than a count threshold, and so on. When the shooting information includes a shooting frequency, the second preset shooting information condition includes a preset shooting frequency condition, which may include: the total shooting frequency within a preset historical period is greater than a frequency threshold, and so on. When the shooting information includes shooting position information, the second preset shooting information condition includes a preset shooting position information condition, which may include: the preset object is located within a preset region of the image, and so on.
The preset object may be a person, an animal, a scene, etc. There are various reference images of the preset object, such as a photograph, an avatar, etc. of the preset object. After acquiring at least one preset object in the historical image set, the terminal may intercept a reference image photographed for the at least one preset object from the historical image set. In operation, the terminal may further receive a reference image of a preset object input by a user.
Based on the setting of the three steps, the terminal has the function of acquiring the reference image shot aiming at least one preset object from the historical image set shot in advance, and the terminal function is enriched.
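The three preparatory steps can be sketched as follows; the shooting-count threshold and the record layout of the historical image set are assumptions made for illustration.

```python
from collections import Counter

def select_preset_objects(history, min_count=5):
    """Keep the objects photographed at least `min_count` times in the
    historical image set (one possible second preset shooting
    information condition)."""
    counts = Counter(obj for image in history for obj in image["objects"])
    return [obj for obj, n in counts.items() if n >= min_count]

def reference_images_for(history, preset_objects):
    """Pick, for each preset object, the first historical image that
    contains it as that object's reference image."""
    refs = {}
    for image in history:
        for obj in image["objects"]:
            if obj in preset_objects and obj not in refs:
                refs[obj] = image["id"]
    return refs
```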
In step 1023, object information of at least one object to be photographed is acquired.
There are various kinds of object information such as facial avatar, photograph, text, and the like.
There are various ways in which the terminal acquires the object information of the subject. For example, when the object information is a facial avatar, the terminal may capture the facial avatar of the object in the first image or the second image by means of screenshot after determining the object, so as to obtain the facial avatar of the object.
For another example, the terminal presets a correspondence between an object identifier of a photographed object and its object information. After the photographed object is determined, the terminal searches the preset correspondence according to the object identifier of that object and determines the object information corresponding to the identifier, thereby obtaining the object information of the photographed object.
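When the object information is a facial avatar, the screenshot-style capture can be sketched with Pillow; the bounding box is assumed to come from the recognition step, and the file path is illustrative.

```python
from PIL import Image

def crop_avatar(image_path, face_box):
    """Cut a recognised subject's facial avatar out of the first or
    second image; face_box is (left, top, right, bottom) in pixels."""
    with Image.open(image_path) as im:
        return im.crop(face_box).copy()  # copy so the crop outlives the file

# avatar = crop_avatar("first_image.jpg", (120, 60, 220, 180))
```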
In step 1024, object information of at least one object to be photographed is output.
The terminal outputs the object information of at least one shot object after acquiring the object information of at least one shot object.
There are various ways in which the terminal outputs the object information of at least one photographed object. For example, the object information of at least one photographed object is displayed on the display interface of the first image. Illustratively, fig. 6 is a schematic view of a display interface of a first image according to an exemplary embodiment; as shown in fig. 6, the terminal displays the facial avatars of a plurality of photographed objects on the display interface of the first image. As another example, the terminal may output the object information of at least one photographed object by voice.
In an alternative embodiment, when the preset object information acquired in step 102 is object information of at least two photographed objects, the operation of acquiring and outputting the preset object information may be implemented as follows (see fig. 7, a flowchart illustrating another method of processing image data according to an exemplary embodiment): in step 1025, shooting information of each of the at least two photographed objects is acquired; in step 1026, the at least two pieces of shooting information are sorted to generate a shooting information sequence; in step 1027, the object information of the at least two photographed objects is displayed in sequence according to the shooting information sequence.
Based on the settings of the steps 1025 to 1027, the terminal has the functions of sequencing the shooting information of at least two shot objects and sequentially displaying the object information of at least two shot objects according to the obtained shooting information sequence, so that the intelligent display of the object information of at least two shot objects is realized, the intelligent degree of the terminal is improved, and the user experience is improved.
For the above step 1026, the shooting information may take various forms, such as a shooting duration, a shooting count within a preset historical period, or a shooting frequency within a preset historical period. When the shooting information is a shooting duration, the terminal may sort the at least two shooting durations from long to short to generate a shooting duration sequence; when the shooting information is a shooting count within a preset duration, the terminal may sort the at least two shooting counts from large to small to generate a shooting count sequence.
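The sorting of step 1026 reduces to ordering the subject records by the relevant field; the field names below are illustrative assumptions.

```python
def display_order(subjects, key="duration_s"):
    """Sort by shooting duration (long to short) or by shooting count
    within a preset period (large to small)."""
    return sorted(subjects, key=lambda s: s[key], reverse=True)

subjects = [
    {"name": "A", "duration_s": 12.0, "count": 3},
    {"name": "B", "duration_s": 48.0, "count": 9},
]
print([s["name"] for s in display_order(subjects)])               # ['B', 'A']
print([s["name"] for s in display_order(subjects, key="count")])  # ['B', 'A']
```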
In step 103, target object information determined by a user according to preset object information is acquired.
After the terminal outputs the preset object information, that is, the object information of at least one photographed object, the user may select target object information from the output object information, thereby designating the target photographed object corresponding to the target object information as the center of gravity of image display, and input the target object information to the terminal.
There are various ways in which a user inputs target object information to the terminal. For example, referring to fig. 6, when the terminal displays the facial avatar of at least one photographed object on the display interface of the first image, the user may input target object information by tapping a target facial avatar; alternatively, an input box is provided on the display interface of the first image, and the user may input target object information by typing it into the input box and triggering a preset option; alternatively, the user may input the target object information by voice.
In step 104, the first image data of the first image and the second image data of the second image are processed according to the target object information to obtain target image data, and the target image data takes a target object corresponding to the target object information as a display center of gravity.
After acquiring target object information determined by a user according to preset object information, the terminal processes first image data of the first image and second image data of the second image according to the target object information to obtain target image data with a target shot object corresponding to the target object information as a display gravity center. When the target image is displayed based on the target image data, the target subject is highlighted in the target image.
In operation, referring to fig. 8, which is a flowchart illustrating another method for processing image data according to an exemplary embodiment, as shown in fig. 8, the terminal processes the first image data of the first image and the second image data of the second image according to the target object information, and the operation of obtaining the target image data may be implemented by the following steps 1041 and 1042:
in step 1041, the first image data and the second image data are processed to obtain composite image data.
The first image data and the second image data are processed to composite the first image and the second image. The resulting composite image has a shooting angle of view larger than that of the first image and that of the second image, so an image with a wider angle-of-view range is obtained. The compositing process itself is prior art and is not described in detail in the embodiments of the present disclosure.
In step 1042, display position data of the target object in the synthesized image data is modified to obtain target image data, where the target object in the target image corresponding to the target image data is displayed in the image center area.
Based on the setting of the step 1041 and the step 1042, the target photographed object is displayed in the image center area of the target image, the target photographed object is highlighted, and the terminal functions are enriched.
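A minimal Pillow sketch of step 1042 follows; it simply cuts the target region out of the composite image and pastes it into the centre, and is not the complete repositioning the disclosure envisages (a real implementation would also fill the vacated region).

```python
from PIL import Image

def center_target(composite, target_box):
    """Move the region displaying the target photographed object so that
    it is shown in the image centre area."""
    target = composite.crop(target_box)
    cx = (composite.width - target.width) // 2
    cy = (composite.height - target.height) // 2
    out = composite.copy()
    out.paste(target, (cx, cy))
    return out
```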
Referring to fig. 9, which is a flowchart illustrating another method for processing image data according to an exemplary embodiment, in fig. 9, the terminal processes the first image data of the first image and the second image data of the second image according to the target object information, and the operation of obtaining the target image data may be implemented by the following steps 1041, 1043 and 1044:
in step 1041, the first image data and the second image data are processed to obtain composite image data.
In step 1043, the designated image data for displaying the target photographed object in the composite image data is determined.
In step 1044, the image data other than the designated image data in the composite image data is blurred to obtain the target image data; in the target image corresponding to the target image data, the other photographed objects located around the target photographed object are displayed blurred.
Based on the settings of the step 1041, the step 1043 and the step 1044, the focusing display of the target shot object in the target image is realized, and the terminal functions are enriched.
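Steps 1043 and 1044 can be sketched with Pillow as blurring the whole composite image and pasting the sharp target region back; the blur radius and the box coordinates are illustrative assumptions.

```python
from PIL import Image, ImageFilter

def blur_around_target(composite, target_box, radius=8):
    """Blur all image data except the designated region displaying the
    target photographed object, so surrounding subjects appear blurred."""
    blurred = composite.filter(ImageFilter.GaussianBlur(radius))
    sharp = composite.crop(target_box)
    blurred.paste(sharp, (target_box[0], target_box[1]))
    return blurred

# target_image = blur_around_target(composite_image, (300, 200, 520, 480))
```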
In the embodiment of the present disclosure, after the preset trigger information is monitored in the display interface of the first image, the object information of at least one photographed object in the first image and/or the second image is acquired and output, and the first image data and the second image data are processed according to the target object information selected by the user to obtain target image data with the target photographed object as the display center of gravity, so that the target image highlights the target photographed object.
The method for processing image data provided by the embodiments of the present disclosure is described with reference to the following scenarios.
Scene one
The mobile phone comprises a non-wide-angle lens and a wide-angle lens. When the mobile phone is used for shooting a picture, if a shutter on the mobile phone is pressed, the non-wide-angle lens and the wide-angle lens shoot the picture at the same time, the mobile phone acquires first picture data through the non-wide-angle lens, acquires second picture data through the wide-angle lens, and stores the first picture data and the second picture data.
When the mobile phone displays the first photo corresponding to the first photo data in the album, a preset option/button is provided on the display interface of the first photo. After detecting selection/pressing of the preset option, the mobile phone starts the wide-angle mode, identifies all persons whose faces appear in the first photo and the second photo, and displays each such person's facial avatar on the display interface of the first photo.
After receiving the selection operation of the user on the displayed target face head portrait, the mobile phone processes the first photo data and the second photo data to obtain synthetic photo data; determining designated photo data for displaying a target person corresponding to the head portrait of the target face in the synthetic photo data; the other photo data except the designated photo data in the synthetic photo data is blurred to obtain target photo data, and other photographed objects around the target person in the target photo displayed according to the target photo data are blurred and displayed.
Scene two
The mobile phone comprises a non-wide-angle lens and a wide-angle lens. When the phone shoots a video, pressing the shutter causes the phone to control the non-wide-angle lens to shoot a first video; the phone obtains first video data through the non-wide-angle lens and stores it. While the non-wide-angle lens is shooting the first video, the phone obtains an initial frame collected by the wide-angle lens and determines whether that frame includes face information; if it does, the phone controls the wide-angle lens to shoot a second video, obtains second video data through the wide-angle lens, and stores it.
When the phone displays the first video corresponding to the first video data in the album, a preset option/button is provided on the first video's display interface. Once the phone detects that the preset option is selected/pressed, it enters wide-angle mode, identifies every person whose face is visible in the first video and the second video, determines whether each person's shooting duration exceeds 60% of the total shooting duration, and displays on the first video's display interface the facial avatars of the persons whose shooting duration does exceed that threshold.
After receiving the user's selection of a displayed target facial avatar, the phone processes the first video data and the second video data to obtain composite video data; determines, within the composite video data, the designated video data that displays the target person corresponding to the target facial avatar; and blurs the video data other than the designated video data to obtain target video data, so that the other photographed objects around the target person are displayed in a blurred manner in the target video.
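Scene two's 60% rule reduces to a simple filter once per-person visible durations are known; the sketch below assumes face tracking has already produced those durations, and the data shape is illustrative.

```python
def faces_to_display(durations: dict[str, float], total: float,
                     ratio: float = 0.6) -> list[str]:
    """Keep only the persons visible for more than `ratio` of the total duration."""
    return [person for person, seconds in durations.items()
            if seconds > ratio * total]

# e.g. faces_to_display({"alice": 50.0, "bob": 20.0}, total=60.0) -> ["alice"]
```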
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series or combination of acts, those skilled in the art will appreciate that the present disclosure is not limited by the order of acts described, as some steps may, in accordance with the present disclosure, occur in other orders or concurrently.
Further, those skilled in the art will also appreciate that the embodiments described in the specification are exemplary, and that the acts and modules involved are not necessarily required by the disclosure.
Corresponding to the foregoing method embodiments, the present disclosure also provides embodiments of an apparatus for processing image data and a corresponding terminal.
Referring to fig. 10, which is a block diagram of an apparatus for processing image data according to an exemplary embodiment, the apparatus is applied to a terminal comprising a first lens and a second lens, and includes:
a monitoring module 21 configured to monitor preset trigger information within a display interface of a first image, the first image being a history image captured by the first lens;
an output module 22, configured to, when the preset trigger information is monitored, acquire and output preset object information, where the preset object information is object information of at least one object to be photographed in the first image and/or a second image, and the second image is a history image photographed by the second lens when the first lens photographs the first image;
a first obtaining module 23 configured to obtain target object information determined by a user according to the preset object information;
and the processing module 24 is configured to process the first image data of the first image and the second image data of the second image according to the target object information to obtain target image data, wherein the target image data takes a target photographed object corresponding to the target object information as a display center of gravity.
Referring to fig. 11, which is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment, on the basis of the embodiment of the apparatus shown in fig. 10, the output module 22 may include:
a determination sub-module 221 configured to determine at least one of the photographic subjects from the first image and the second image;
an output sub-module 222 configured to acquire and output object information of at least one of the objects.
Referring to fig. 12, which is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment, on the basis of the embodiment of the apparatus shown in fig. 11, the determining sub-module 221 may include:
an identifying unit 2211 configured to identify each photographed object in the first image and the second image;
an acquisition unit 2212 configured to acquire shooting information of each of the subjects;
a first determining unit 2213 configured to match the photographing information of each of the objects with a preset photographing information condition, and determine at least one of the objects whose photographing information matches the preset photographing information condition.
In an alternative embodiment, the first determining unit 2213 may include:
a first determination subunit configured to determine, when the shooting information includes a shooting duration, whether the shooting duration of each of the objects matches a preset shooting duration condition;
the first acquisition subunit is configured to acquire at least one photographed object of which the photographing time length is matched with the preset photographing time length condition.
In an alternative embodiment, the first determining unit 2213 may include:
a second determination subunit configured to determine, when the shooting information includes shooting position information, whether the shooting position information of each of the objects matches a preset position information condition;
a second acquisition subunit configured to acquire at least one of the photographic subjects whose photographing position information matches the preset position information condition.
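Both branches of the first determining unit 2213 amount to filtering objects against a predicate over their shooting information; here is one illustrative reading, where the condition callables and the data layout are assumptions rather than patent text.

```python
from typing import Callable

def match_objects(shooting_info: dict[str, dict],
                  condition: Callable[[dict], bool]) -> list[str]:
    """Return the photographed objects whose shooting information satisfies `condition`."""
    return [obj for obj, info in shooting_info.items() if condition(info)]

# duration branch: objects shot for at least 10 seconds (threshold assumed)
long_shots = match_objects(
    {"alice": {"duration": 42.0}, "bob": {"duration": 3.0}},
    lambda info: info["duration"] >= 10.0)  # -> ["alice"]
```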
Referring to fig. 13, which is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment, on the basis of the embodiment of the apparatus shown in fig. 11, the determining sub-module 221 may include:
a second determining unit 2214 configured to determine, if the terminal stores at least one reference image of a preset object, for each reference image of the preset object, whether the first image and the second image include the reference image of the preset object;
a third determining unit 2215 configured to determine the preset object as the photographed object if the first image and/or the second image include a reference image of the preset object;
a fourth determining unit 2216 configured to determine at least one of the photographed objects according to the preset object.
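Units 2214 to 2216 hinge on deciding whether a capture contains a stored reference image; a common way to approximate that, sketched below under the assumption that faces have already been reduced to feature vectors, is a cosine-similarity test with a placeholder threshold.

```python
import numpy as np

def contains_preset_object(reference: np.ndarray,
                           candidates: list[np.ndarray],
                           threshold: float = 0.8) -> bool:
    """True if any candidate feature vector is close enough to the reference."""
    ref_norm = np.linalg.norm(reference)
    for feat in candidates:
        cos = float(np.dot(reference, feat) /
                    (ref_norm * np.linalg.norm(feat) + 1e-12))
        if cos >= threshold:
            return True
    return False
```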
Referring to fig. 14, which is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment, the apparatus may further include, on the basis of the embodiment of the apparatus illustrated in fig. 13:
a second obtaining module 25 configured to obtain a set of historical images taken by the terminal within a preset time period before the determination of at least one of the objects to be taken according to the first image and the second image;
a third obtaining module 26 configured to obtain at least one preset object in the historical image set, where the shooting information matches a second preset shooting information condition;
a fourth obtaining module 27 configured to obtain a reference image captured for at least one preset object from the historical image set.
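Modules 25 to 27 build the reference set from recent history. A minimal sketch, assuming each history entry carries an object label and an image crop, and using an assumed appearance-count threshold for the second preset shooting information condition:

```python
from collections import Counter

def build_reference_set(history: list[dict], min_count: int = 3) -> dict:
    """History items are assumed to look like {"object": name, "crop": image}."""
    counts = Counter(item["object"] for item in history)
    frequent = {name for name, n in counts.items() if n >= min_count}
    refs = {}
    for item in history:  # keep the first crop seen for each frequent object
        if item["object"] in frequent and item["object"] not in refs:
            refs[item["object"]] = item["crop"]
    return refs
```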
Referring to fig. 15, which is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment, on the basis of the embodiment of the apparatus shown in fig. 10, the output module 22 may include:
the obtaining sub-module 223 is configured to obtain shooting information of each of the shot objects if the preset object information is object information of at least two of the shot objects in the first image and/or the second image;
a generation sub-module 224 configured to sort at least two pieces of the shooting information, generating a shooting information sequence;
a display sub-module 225 configured to sequentially display object information of at least two of the objects to be photographed according to the photographing information sequence.
In an optional embodiment, the generating sub-module 224 may include:
a first generation unit configured to, when the shooting information is shooting duration, sort at least two shooting durations in descending order and generate a shooting duration sequence; or,
a second generation unit configured to, when the shooting information is the number of shots within a preset duration, sort at least two shot counts in descending order and generate a shot-count sequence.
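Both generation units perform the same descending sort and differ only in the key; a one-line sketch with assumed names:

```python
def shooting_sequence(info: dict[str, float]) -> list[str]:
    """Order objects by shooting duration (or shot count) from largest to smallest."""
    return sorted(info, key=info.get, reverse=True)

# e.g. shooting_sequence({"alice": 42.0, "bob": 71.5}) -> ["bob", "alice"]
```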
Referring to fig. 16, which is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment, on the basis of the embodiment of the apparatus shown in fig. 10, the processing module 24 may include:
a first processing submodule 241, configured to process the first image data and the second image data to obtain composite image data;
a modification sub-module 242 configured to modify the display position data of the target object in the composite image data to obtain the target image data, wherein the target object in the target image corresponding to the target image data is displayed in the image center area.
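One plausible reading of the modification sub-module 242 is a translation that moves the target's bounding box into the image center area; np.roll below is an illustrative stand-in (it wraps pixels around the edges) for whatever warp or crop the terminal really applies.

```python
import numpy as np

def center_target(composite: np.ndarray,
                  box: tuple[int, int, int, int]) -> np.ndarray:
    """Shift the composite so the target box lands in the image center area."""
    x, y, w, h = box
    img_h, img_w = composite.shape[:2]
    dx = img_w // 2 - (x + w // 2)  # horizontal shift toward the center
    dy = img_h // 2 - (y + h // 2)  # vertical shift toward the center
    return np.roll(composite, shift=(dy, dx), axis=(0, 1))
```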
Referring to fig. 17, which is a block diagram of another apparatus for processing image data according to an exemplary embodiment, on the basis of the embodiment of the apparatus shown in fig. 10, the processing module 24 may include:
a first processing submodule 241, configured to process the first image data and the second image data to obtain composite image data;
a determination sub-module 243 configured to determine specified image data for displaying the target photographic subject in the composite image data;
a second processing sub-module 244 configured to perform blurring processing on the other image data except the designated image data in the synthesized image data to obtain the target image data, in which the other photographed objects around the target photographed object are displayed in a blurred manner.
Referring to fig. 18, which is a block diagram illustrating another apparatus for processing image data according to an exemplary embodiment, the apparatus may further include, on the basis of the embodiment of the apparatus illustrated in fig. 10:
an opening module 28 configured to open the second lens when the first lens is detected to enter a preset shooting mode before the monitoring of preset trigger information in the display interface of the first image;
a fifth acquiring module 29 configured to acquire the initial image acquired by the second lens;
a determining module 210 configured to determine whether preset image information is included in the initial image;
the control module 211 is configured to control the second lens to capture the second image if the initial image includes the preset image information.
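Modules 28 through 211 chain into a simple gate: open the second lens only in the preset shooting mode, and let it record only if its first frame carries the preset image information. The camera and detector interfaces below are hypothetical stand-ins for the terminal's real driver API.

```python
def maybe_start_second_lens(first_lens, second_lens, has_preset_info) -> bool:
    """Open the second lens and start capture only if its initial frame qualifies."""
    if not first_lens.in_preset_mode():   # module 28: wait for the preset mode
        return False
    second_lens.open()
    initial = second_lens.grab_frame()    # module 29: fetch the initial image
    if has_preset_info(initial):          # module 210: e.g. a face-information test
        second_lens.start_capture()       # module 211: shoot the second image
        return True
    return False
```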
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Accordingly, in one aspect, the present disclosure provides an apparatus for processing image data, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to:
monitoring preset trigger information in a display interface of a first image, wherein the first image is a historical image shot by a first lens;
when the preset trigger information is monitored, acquiring and outputting preset object information, wherein the preset object information is object information of at least one shot object in the first image and/or the second image, and the second image is a historical image shot by the second lens when the first lens shoots the first image;
acquiring target object information determined by a user according to the preset object information;
and processing first image data of the first image and second image data of the second image according to the target object information to obtain target image data, wherein the target image data takes a target photographed object corresponding to the target object information as a display gravity center.
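Wired together, the four configured steps form a short pipeline. The sketch below passes the stage behaviours in as callables because the patent specifies behaviour, not an API, so every name here is assumed.

```python
from typing import Callable, Optional

def process_image_data(trigger_seen: bool,
                       detect: Callable[[], list[str]],
                       choose: Callable[[list[str]], str],
                       fuse_and_emphasize: Callable[[str], bytes]) -> Optional[bytes]:
    """Monitor the trigger, output object info, take the user's target,
    and return target image data centered on that target."""
    if not trigger_seen:                  # step 1: preset trigger monitored?
        return None
    objects = detect()                    # step 2: acquire and output object info
    target = choose(objects)              # step 3: user-determined target object
    return fuse_and_emphasize(target)     # step 4: composite + emphasize target
```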
Fig. 19 is a schematic structural diagram illustrating an apparatus 2000 for processing image data according to an exemplary embodiment. For example, the apparatus 2000 may be a user device, which may be embodied as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, a wearable device such as a smart watch, smart glasses, a smart bracelet, a smart running shoe, and the like.
Referring to fig. 19, the apparatus 2000 may include one or more of the following components: a processing component 2002, a memory 2004, a power component 2006, a multimedia component 2008, an audio component 2010, an input/output (I/O) interface 2012, a sensor component 2014, and a communication component 2016.
The processing component 2002 generally controls the overall operation of the device 2000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 2002 may include one or more processors 2020 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 2002 can include one or more modules that facilitate interaction between the processing component 2002 and other components. For example, the processing component 2002 may include a multimedia module to facilitate interaction between the multimedia component 2008 and the processing component 2002.
The memory 2004 is configured to store various types of data to support operation at the device 2000. Examples of such data include instructions for any application or method operating on device 2000, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 2004 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 2006 provides power to the various components of the device 2000. The power supply components 2006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 2000.
The multimedia component 2008 includes a front camera and/or a rear camera. When the device 2000 is in an operating mode, such as a capture mode or a video mode, the front camera and/or the rear camera can receive external multimedia data.
The I/O interface 2012 provides an interface between the processing component 2002 and peripheral interface modules, which can be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 2014 includes one or more sensors for providing various aspects of state assessment for the device 2000. For example, the sensor assembly 2014 may detect the open/closed state of the device 2000, the relative positioning of components such as the display and keypad of the device 2000, a change in position of the device 2000 or of one of its components, the presence or absence of user contact with the device 2000, the orientation or acceleration/deceleration of the device 2000, and a change in its temperature. The sensor assembly 2014 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 2014 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 2016 may include a Near Field Communication (NFC) module to facilitate short-range communication. In one exemplary embodiment, the communication component 2016 may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 2000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, there is also provided a non-transitory computer readable storage medium, such as the memory 2004, comprising instructions that, when executed by the processor 2020 of the apparatus 2000, enable the apparatus 2000 to perform a method of processing image data, the method comprising:
monitoring preset trigger information in a display interface of a first image, wherein the first image is a historical image shot by a first lens;
when the preset trigger information is monitored, acquiring and outputting preset object information, wherein the preset object information is object information of at least one shot object in the first image and/or the second image, and the second image is a historical image shot by the second lens when the first lens shoots the first image;
acquiring target object information determined by a user according to the preset object information;
and processing first image data of the first image and second image data of the second image according to the target object information to obtain target image data, wherein the target image data takes a target photographed object corresponding to the target object information as a display gravity center.
The non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 20 is a schematic structural diagram illustrating an apparatus 2100 for processing image data according to an exemplary embodiment. For example, the apparatus 2100 may be provided as an application server. Referring to fig. 20, the apparatus 2100 includes a processing component 2122, which further includes one or more processors, and memory resources, represented by memory 2116, for storing instructions (e.g., applications) executable by the processing component 2122. The applications stored in the memory 2116 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 2122 is configured to execute the instructions to perform the above-described method of processing image data.
The device 2100 may also include a power component 2126 configured to perform power management for the device 2100, a wired or wireless network interface 2150 configured to connect the device 2100 to a network, and an input/output (I/O) interface 2158. The device 2100 may operate based on an operating system stored in the memory 2116, such as Android, iOS, Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 2116 comprising instructions, executable by the processing component 2122 of the apparatus 2100 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Wherein the instructions in the memory 2116, when executed by the processing component 2122, enable the apparatus 2100 to perform a method of processing image data, comprising:
monitoring preset trigger information in a display interface of a first image, wherein the first image is a historical image shot by a first lens;
when the preset trigger information is monitored, acquiring and outputting preset object information, wherein the preset object information is object information of at least one shot object in the first image and/or the second image, and the second image is a historical image shot by the second lens when the first lens shoots the first image;
acquiring target object information determined by a user according to the preset object information;
and processing first image data of the first image and second image data of the second image according to the target object information to obtain target image data, wherein the target image data takes a target photographed object corresponding to the target object information as a display gravity center.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (24)
1. A method for processing image data, the method being applied to a terminal, the terminal comprising: a first lens and a second lens, the method comprising:
monitoring preset trigger information in a display interface of a first image, wherein the first image is a historical image shot by a first lens;
when the preset trigger information is monitored, acquiring and outputting preset object information, wherein the preset object information is object information of at least one shot object in the first image and/or the second image, and the second image is a historical image shot by the second lens when the first lens shoots the first image;
acquiring target object information determined by a user according to the preset object information;
and processing first image data of the first image and second image data of the second image according to the target object information to obtain target image data, wherein the target image data takes a target photographed object corresponding to the target object information as a display gravity center.
2. The method according to claim 1, wherein the acquiring and outputting preset object information comprises:
determining at least one shot object according to the first image and the second image;
and acquiring and outputting object information of at least one shot object.
3. The method according to claim 2, wherein the determining at least one of the objects from the first image and the second image comprises:
identifying each photographed object in the first image and the second image;
acquiring shooting information of each shot object;
and matching the shooting information of each shot object with a preset shooting information condition, and determining at least one shot object of which the shooting information is matched with the preset shooting information condition.
4. The method of claim 3, wherein the photographing information comprises: shooting time length; the matching the shooting information of each shot object with a preset shooting information condition, and determining at least one shot object of which the shooting information is matched with the preset shooting information condition, includes:
determining whether the shooting time of each shot object is matched with a preset shooting time condition;
and acquiring at least one shot object with the shooting duration matched with the preset shooting duration condition.
5. The method of claim 3, wherein the photographing information comprises: shooting position information; the matching the shooting information of each shot object with a preset shooting information condition, and determining at least one shot object of which the shooting information is matched with the preset shooting information condition, includes:
determining whether the shooting position information of each shot object is matched with a preset position information condition;
and acquiring at least one shot object of which the shooting position information is matched with the preset position information condition.
6. The method according to claim 2, wherein if the terminal stores at least one reference image of a preset object, the determining at least one of the objects according to the first image and the second image comprises:
determining, for each reference image of the preset object, whether the first image and the second image include a reference image of the preset object;
if the first image and/or the second image comprise a reference image of the preset object, determining the preset object as the shot object;
and determining at least one shot object according to the preset object.
7. The method of claim 6, wherein prior to said determining at least one of said objects from said first image and said second image, said method further comprises:
acquiring a historical image set shot by the terminal in a preset time period;
acquiring at least one preset object of which the shooting information is matched with a second preset shooting information condition in the historical image set;
and acquiring a reference image shot for at least one preset object from the historical image set.
8. The method according to claim 1, wherein if the preset object information is object information of at least two of the photographed objects in the first image and/or the second image, the acquiring and outputting the preset object information includes:
acquiring shooting information of each shot object;
sequencing at least two pieces of shooting information to generate a shooting information sequence;
and sequentially displaying the object information of at least two shot objects according to the shooting information sequence.
9. The method of claim 8, wherein the sorting the at least two shot information to generate a shot information sequence comprises:
when the shooting information is shooting duration, sorting at least two shooting durations in descending order to generate a shooting duration sequence; or,
when the shooting information is the number of shots within a preset duration, sorting at least two shot counts in descending order to generate a shot-count sequence.
10. The method according to claim 1, wherein the processing first image data of the first image and second image data of the second image according to the target object information to obtain target image data comprises:
processing the first image data and the second image data to obtain composite image data;
modifying the display position data of the target photographed object in the synthesized image data to obtain target image data, wherein the target photographed object in a target image corresponding to the target image data is displayed in an image center area; or,
determining specified image data for displaying the target photographic subject in the composite image data;
and performing blurring processing on other image data except the designated image data in the synthetic image data to obtain the target image data, wherein other photographed objects positioned around the target photographed object in the target image are displayed in a blurred mode.
11. The method of claim 1, wherein prior to the monitoring for the preset trigger information within the display interface of the first image, the method further comprises:
when the first lens is detected to enter a preset shooting mode, starting the second lens;
acquiring an initial image acquired by the second lens;
determining whether preset image information is included in the initial image;
and if the initial image comprises the preset image information, controlling the second lens to shoot the second image.
12. An apparatus for processing image data, applied to a terminal, the terminal comprising: a first lens and a second lens, the apparatus comprising:
a monitoring module configured to monitor preset trigger information within a display interface of a first image, the first image being a history image captured by the first lens;
the output module is configured to acquire and output preset object information when the preset trigger information is monitored, wherein the preset object information is object information of at least one shot object in the first image and/or the second image, and the second image is a historical image shot by the second lens when the first lens shoots the first image;
the first acquisition module is configured to acquire target object information determined by a user according to the preset object information;
and the processing module is configured to process the first image data of the first image and the second image data of the second image according to the target object information to obtain target image data, and the target image data takes a target photographed object corresponding to the target object information as a display gravity center.
13. The apparatus of claim 12, wherein the output module comprises:
a determination sub-module configured to determine at least one of the photographed objects from the first image and the second image;
and the output sub-module is configured to acquire and output object information of at least one shot object.
14. The apparatus of claim 13, wherein the determining sub-module comprises:
an identifying unit configured to identify each photographed object in the first image and the second image;
an acquisition unit configured to acquire shooting information of each of the objects;
the first determination unit is configured to match the shooting information of each shot object with a preset shooting information condition, and determine at least one shot object of which the shooting information is matched with the preset shooting information condition.
15. The apparatus of claim 14, wherein the first determining unit comprises:
a first determination subunit configured to determine, when the shooting information includes a shooting duration, whether the shooting duration of each of the objects matches a preset shooting duration condition;
the first acquisition subunit is configured to acquire at least one photographed object of which the photographing time length is matched with the preset photographing time length condition.
16. The apparatus of claim 14, wherein the first determining unit comprises:
a second determination subunit configured to determine, when the shooting information includes shooting position information, whether the shooting position information of each of the objects matches a preset position information condition;
a second acquisition subunit configured to acquire at least one of the photographic subjects whose photographing position information matches the preset position information condition.
17. The apparatus of claim 13, wherein the determining sub-module comprises:
a second determining unit, configured to determine, if the terminal stores at least one reference image of a preset object, for each reference image of the preset object, whether the first image and the second image include the reference image of the preset object;
a third determining unit configured to determine the preset object as the photographed object if the first image and/or the second image include a reference image of the preset object;
a fourth determination unit configured to determine at least one of the photographed objects according to the preset object.
18. The apparatus of claim 17, further comprising:
a second acquisition module configured to acquire a set of historical images captured by the terminal within a preset time period before the determination of at least one of the objects to be captured according to the first image and the second image;
the third acquisition module is configured to acquire at least one preset object of which the shooting information is matched with a second preset shooting information condition in the historical image set;
a fourth obtaining module configured to obtain a reference image captured for at least one preset object from the historical image set.
19. The apparatus of claim 12, wherein the output module comprises:
the obtaining sub-module is configured to obtain shooting information of each shot object if the preset object information is object information of at least two shot objects in the first image and/or the second image;
the generation sub-module is configured to sort at least two pieces of shooting information and generate a shooting information sequence;
and the display sub-module is configured to sequentially display the object information of at least two shot objects according to the shooting information sequence.
20. The apparatus of claim 19, wherein the generating sub-module comprises:
a first generation unit configured to, when the shooting information is shooting duration, sort at least two shooting durations in descending order and generate a shooting duration sequence; or,
a second generation unit configured to, when the shooting information is the number of shots within a preset duration, sort at least two shot counts in descending order and generate a shot-count sequence.
21. The apparatus of claim 12, wherein the processing module comprises:
a first processing sub-module configured to process the first image data and the second image data to obtain composite image data;
a modification sub-module configured to modify display position data of the target photographed object in the synthesized image data to obtain the target image data, wherein the target photographed object in a target image corresponding to the target image data is displayed in an image center area; or,
a determination sub-module configured to determine specified image data for displaying the target photographic subject in the synthesized image data;
and the second processing submodule is configured to perform fuzzy processing on other image data except the specified image data in the synthesized image data to obtain the target image data, and the other photographed objects around the target photographed object in the target image are displayed in a fuzzy mode.
22. The apparatus of claim 12, further comprising:
the starting module is configured to start the second lens when the first lens is detected to enter a preset shooting mode before preset trigger information is monitored in a display interface of the first image;
a fifth acquisition module configured to acquire an initial image acquired by the second lens;
a determination module configured to determine whether preset image information is included in the initial image;
and the control module is configured to control the second lens to shoot the second image if the initial image comprises the preset image information.
23. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the steps of the method of any of claims 1 to 11.
24. An apparatus for processing image data, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
monitoring preset trigger information in a display interface of a first image, wherein the first image is a historical image shot by a first lens;
when the preset trigger information is monitored, acquiring and outputting preset object information, wherein the preset object information is object information of at least one shot object in the first image and/or the second image, and the second image is a historical image shot by the second lens when the first lens shoots the first image;
acquiring target object information determined by a user according to the preset object information;
and processing first image data of the first image and second image data of the second image according to the target object information to obtain target image data, wherein the target image data takes a target photographed object corresponding to the target object information as a display gravity center.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910049904.6A CN111464734B (en) | 2019-01-18 | 2019-01-18 | Method and device for processing image data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910049904.6A CN111464734B (en) | 2019-01-18 | 2019-01-18 | Method and device for processing image data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111464734A true CN111464734A (en) | 2020-07-28 |
CN111464734B CN111464734B (en) | 2021-09-21 |
Family
ID=71684956
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910049904.6A Active CN111464734B (en) | 2019-01-18 | 2019-01-18 | Method and device for processing image data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111464734B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101090442A (en) * | 2006-06-13 | 2007-12-19 | 三星电子株式会社 | Method and apparatus for taking images using mobile communication terminal with plurality of camera lenses |
US20170048427A1 (en) * | 2014-05-16 | 2017-02-16 | Lg Electronics Inc. | Mobile terminal and control method therefor |
CN104883497A (en) * | 2015-04-30 | 2015-09-02 | 广东欧珀移动通信有限公司 | Positioning shooting method and mobile terminal |
CN105554364A (en) * | 2015-07-30 | 2016-05-04 | 宇龙计算机通信科技(深圳)有限公司 | Image processing method and terminal |
CN105227838A (en) * | 2015-09-28 | 2016-01-06 | 广东欧珀移动通信有限公司 | A kind of image processing method and mobile terminal |
CN105719239A (en) * | 2016-01-21 | 2016-06-29 | 科盾科技股份有限公司 | Image splicing method and image splicing device |
CN108702445A (en) * | 2017-03-03 | 2018-10-23 | 华为技术有限公司 | A kind of method for displaying image and electronic equipment |
CN109196551A (en) * | 2017-10-31 | 2019-01-11 | 深圳市大疆创新科技有限公司 | Image processing method, equipment and unmanned plane |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114339018A (en) * | 2020-09-30 | 2022-04-12 | 北京小米移动软件有限公司 | Lens switching method and device and storage medium |
CN114339018B (en) * | 2020-09-30 | 2023-08-22 | 北京小米移动软件有限公司 | Method and device for switching lenses and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111464734B (en) | 2021-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3125154B1 (en) | Photo sharing method and device | |
EP3125530B1 (en) | Video recording method and device | |
EP3179408A2 (en) | Picture processing method and apparatus, computer program and recording medium | |
CN106210496B (en) | Photo shooting method and device | |
US7742625B2 (en) | Autonomous camera having exchangable behaviours | |
US20090174805A1 (en) | Digital camera focusing using stored object recognition | |
US20170034325A1 (en) | Image-based communication method and device | |
CN109359458B (en) | Application unlocking method and device and computer readable storage medium | |
US10769743B2 (en) | Method, device and non-transitory storage medium for processing clothes information | |
CN106095465B (en) | Method and device for setting identity image | |
CN111586296B (en) | Image capturing method, image capturing apparatus, and storage medium | |
WO2017000491A1 (en) | Iris image acquisition method and apparatus, and iris recognition device | |
EP3261046A1 (en) | Method and device for image processing | |
CN113364965A (en) | Shooting method and device based on multiple cameras and electronic equipment | |
CN115525140A (en) | Gesture recognition method, gesture recognition apparatus, and storage medium | |
CN108848303A (en) | Shoot reminding method and device | |
CN108737631B (en) | Method and device for rapidly acquiring image | |
CN112004020B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111464734B (en) | Method and device for processing image data | |
CN108027821B (en) | Method and device for processing picture | |
CN110047115B (en) | Star image shooting method and device, computer equipment and storage medium | |
CN110636377A (en) | Video processing method, device, storage medium, terminal and server | |
CN105530439B (en) | Method, apparatus and terminal for capture pictures | |
CN110572582B (en) | Shooting parameter determining method and device and shooting data processing method and device | |
CN112825544A (en) | Picture processing method and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |