CN106951090B - Picture processing method and device - Google Patents
- Publication number
- CN106951090B (application number CN201710196930A)
- Authority
- CN
- China
- Prior art keywords
- picture
- gesture operation
- preset
- matched
- added
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Disclosed are a picture processing method and device, belonging to the field of picture processing. The method includes the following steps: capturing a gesture operation within a preset range while a first picture is displayed; acquiring a first material when the gesture operation matches a first preset gesture operation; and adding the first material to the first picture based on the display position indicated by the first material, then displaying the second picture obtained after the material is added. With the method and device, a simple gesture operation can trigger the electronic device to add material to the displayed picture, so the user does not need to manually select the material to be added; the steps are simple, and picture processing efficiency is improved.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
With the development of picture processing technology, the ways of processing pictures have become increasingly diverse. Adding materials to a picture is one common processing method; because it is highly entertaining, visually appealing, and flexible, it is popular with many users.
While a picture is displayed, the electronic device provides a number of materials that can be added, and the user browses them and manually selects the material to be added. After detecting the user's selection operation, the electronic device adds the selected material to the picture and displays the picture obtained after the material is added.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a picture processing method and apparatus.
According to a first aspect of the embodiments of the present disclosure, there is provided a picture processing method applied to an electronic device, including:
capturing gesture operation within a preset range in the display process of the first picture;
when the gesture operation is matched with a first preset gesture operation, acquiring a first material;
and adding the first material to the first picture based on the display position indicated by the first material, and displaying a second picture obtained after the material is added.
In one possible implementation manner, the capturing a gesture operation within a preset range during the displaying of the first picture includes:
capturing the gesture operation within the preset range through an image shot by a camera module of the electronic device during display of the first picture; or,
capturing the gesture operation within the preset range through detection of an obstacle within the preset range by a distance sensor of the electronic device during display of the first picture.
In one possible implementation manner, when the gesture operation matches a first preset gesture operation, acquiring the first material includes:
when the gesture operation matches the first preset gesture operation, acquiring any material from a preset material library as the first material; or,
when the gesture operation matches the first preset gesture operation, acquiring a material matched with the first preset gesture operation from the preset material library as the first material, where different preset gesture operations match different materials.
In one possible implementation manner, when the gesture operation matches a first preset gesture operation, acquiring the first material includes:
performing picture recognition on the first picture to obtain picture features of the first picture, and acquiring a material matched with the picture features from a preset material library as the first material.
In one possible implementation manner, the picture feature of the first picture comprises at least one of a facial expression feature and a limb action feature.
In one possible implementation manner, after the first material is added to the first picture based on the display position indicated by the first material and a second picture obtained after the material is added is displayed, the method further includes:
when no gesture operation is captured within a preset duration, storing the second picture obtained after the material is added.
In one possible implementation manner, after the first material is added to the first picture based on the display position indicated by the first material and a second picture obtained after the material is added is displayed, the method further includes:
when a gesture operation matching a second preset gesture operation is captured within the preset range within a preset duration, deleting the second picture obtained after the material is added, and displaying the first picture.
In one possible implementation manner, after the first material is added to the first picture based on the display position indicated by the first material and a second picture obtained after the material is added is displayed, the method further includes:
when a gesture operation matching a third preset gesture operation is captured within the preset range within the preset duration, deleting the second picture obtained after the material is added, acquiring a second material from a preset material library, adding the second material to the first picture based on the display position indicated by the second material, and displaying a third picture obtained after the material is added.
According to a second aspect of the embodiments of the present disclosure, there is provided a picture processing apparatus applied to an electronic device, the apparatus including:
the capturing module is used for capturing gesture operation within a preset range in the display process of the first picture;
the acquisition module is used for acquiring a first material when the gesture operation is matched with a first preset gesture operation;
an adding module, configured to add the first material to the first picture based on a display position indicated by the first material;
and the display module is used for displaying the second picture obtained after the material is added.
In a possible implementation manner, the capturing module is configured to capture a gesture operation within the preset range through an image captured by a camera module of the electronic device in a display process of the first picture; or, in the display process of the first picture, capturing gesture operation in the preset range through detection of an obstacle in the preset range by a distance sensor of the electronic device.
In a possible implementation manner, the obtaining module is configured to obtain any material from a preset material library as the first material when the gesture operation matches the first preset gesture operation; or when the gesture operation is matched with the first preset gesture operation, acquiring a material matched with the first preset gesture operation from the preset material library as the first material, wherein different preset gesture operations are matched with different materials.
In one possible implementation manner, the obtaining module includes:
the identification submodule is used for carrying out picture identification on the first picture to obtain the picture characteristics of the first picture;
and the obtaining sub-module is used for obtaining a material matched with the picture characteristics from a preset material library as a first material.
In one possible implementation manner, the picture feature of the first picture comprises at least one of a facial expression feature and a limb action feature.
In one possible implementation, the apparatus further includes:
and the storage module is used for storing the second picture obtained after the material is added when any gesture operation is not captured within the preset time length.
In one possible implementation, the apparatus further includes:
the deleting module is used for deleting the second picture obtained after the material is added when the gesture operation matched with the second preset gesture operation in the preset range is captured in the preset time length;
the display module is used for displaying the first picture.
In one possible implementation, the apparatus further includes:
the deleting module is used for deleting the second picture obtained after the material is added when the gesture operation matched with the third preset gesture operation in the preset range is captured in the preset time length;
the acquisition module is further used for acquiring a second material from a preset material library and adding the second material to the first picture based on a display position indicated by the second material;
and the display module is also used for displaying a third picture obtained after the material is added.
According to a third aspect of the embodiments of the present disclosure, there is provided a picture processing apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
capturing gesture operation within a preset range in the display process of the first picture;
when the gesture operation is matched with a first preset gesture operation, acquiring a first material;
and adding the first material to the first picture based on the display position indicated by the first material, and displaying a second picture obtained after the material is added.
The technical solutions provided by the embodiments of the present disclosure have the following beneficial effects:
With the method and device provided by the embodiments, a gesture operation within a preset range is captured while the first picture is displayed; when the gesture operation matches a first preset gesture operation, a first material is acquired, added to the first picture based on the display position it indicates, and the second picture obtained after the material is added is displayed. A simple gesture operation thus triggers the electronic device to add material to the displayed picture, so the user does not need to manually select the material to be added; the steps are simple, and picture processing efficiency is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a picture processing method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a picture processing method according to an example embodiment.
Fig. 3 is a diagram illustrating material in accordance with an exemplary embodiment.
Fig. 4 is a diagram illustrating material in accordance with an exemplary embodiment.
Fig. 5 is a diagram illustrating material in accordance with an exemplary embodiment.
Fig. 6 is a schematic diagram illustrating a first picture according to an example embodiment.
FIG. 7 is a schematic diagram illustrating a gesture operation in accordance with an exemplary embodiment.
Fig. 8 is a diagram illustrating material in accordance with an exemplary embodiment.
FIG. 9 is a schematic diagram illustrating a gesture operation in accordance with an illustrative embodiment.
Fig. 10 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 12 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 13 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a picture processing method according to an exemplary embodiment, which is used in an electronic device, as shown in fig. 1, and includes the following steps:
in step 101, in the display process of the first picture, gesture operation within a preset range is captured.
In step 102, when the gesture operation matches with a first preset gesture operation, a first material is obtained.
In step 103, the first material is added to the first picture based on the display position indicated by the first material, and a second picture obtained after the material is added is displayed.
In the method provided by this embodiment, a gesture operation within a preset range is captured while the first picture is displayed; when the gesture operation matches a first preset gesture operation, a first material is acquired, added to the first picture based on the display position it indicates, and the second picture obtained after the material is added is displayed. A simple gesture operation thus triggers the electronic device to add material to the displayed picture, so the user does not need to manually select the material to be added; the steps are simple, and picture processing efficiency is improved.
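Steps 101 to 103 can be sketched as a minimal pipeline. This is an illustrative reconstruction under assumed types, not the patent's implementation; the names `Material` and `process_picture` are hypothetical.

```python
# Hypothetical sketch of steps 101-103: a picture is represented as a dict,
# a material as a small record carrying the display position it indicates.
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    position: tuple  # (x, y) display position indicated by the material

def process_picture(first_picture, gesture, preset_gesture, material):
    """Return the second picture with the material added, or the
    unmodified first picture when the gesture does not match."""
    # Step 102: acquire the material only if the captured gesture matches.
    if gesture != preset_gesture:
        return first_picture
    # Step 103: add the material at its indicated display position and
    # return the resulting second picture (the first picture is kept intact).
    second_picture = dict(first_picture)
    second_picture["materials"] = second_picture.get("materials", []) + [
        (material.name, material.position)]
    return second_picture
```

In this sketch a non-matching gesture simply returns the first picture unchanged, whereas the described method keeps capturing gestures until one matches.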
In one possible implementation manner, the capturing a gesture operation within a preset range during the displaying of the first picture includes:
capturing the gesture operation within the preset range through an image shot by a camera module of the electronic device during display of the first picture; or,
capturing the gesture operation within the preset range through detection of an obstacle within the preset range by a distance sensor of the electronic device during display of the first picture.
In one possible implementation manner, when the gesture operation matches the first preset gesture operation, acquiring the first material includes:
when the gesture operation matches the first preset gesture operation, acquiring any material from a preset material library as the first material; or,
when the gesture operation matches the first preset gesture operation, acquiring a material matched with the first preset gesture operation from the preset material library as the first material, where different preset gesture operations match different materials.
In one possible implementation manner, when the gesture operation matches the first preset gesture operation, acquiring the first material includes:
performing picture recognition on the first picture to obtain the picture features of the first picture, and acquiring a material matched with the picture features from a preset material library as the first material.
In one possible implementation manner, the picture features of the first picture comprise at least one of facial expression features and limb action features.
In one possible implementation manner, after the first material is added to the first picture based on the display position indicated by the first material and a second picture obtained after the material is added is displayed, the method further includes:
when no gesture operation is captured within a preset duration, storing the second picture obtained after the material is added.
In one possible implementation manner, after the first material is added to the first picture based on the display position indicated by the first material and a second picture obtained after the material is added is displayed, the method further includes:
when a gesture operation matching a second preset gesture operation is captured within the preset range within a preset duration, deleting the second picture obtained after the material is added, and displaying the first picture.
In one possible implementation manner, after the first material is added to the first picture based on the display position indicated by the first material and a second picture obtained after the material is added is displayed, the method further includes:
when a gesture operation matching a third preset gesture operation is captured within the preset range within the preset duration, deleting the second picture obtained after the material is added, acquiring a second material from a preset material library, adding the second material to the first picture based on the display position indicated by the second material, and displaying a third picture obtained after the material is added.
Fig. 2 is a flowchart illustrating a picture processing method applied to an electronic device according to an exemplary embodiment, the method including:
in step 201, during the display of the first picture, the electronic device captures a gesture operation within a preset range.
The electronic device may be a mobile phone, a tablet computer, a wearable device, or the like, and the first picture may be a picture shot by a camera module of the electronic device or a picture obtained from the electronic device's stored gallery.
For example, the electronic device may call an interface provided by the camera module to start it; the user aims the camera module at the target object to be shot and triggers a shooting operation, and on detecting the shooting operation the electronic device acquires the first picture shot by the camera module. The camera module may include multiple cameras, such as a front camera and a rear camera, and the electronic device may shoot the first picture through any of them.
For another example, the electronic device may call an interface provided by a gallery, display at least one picture stored in the gallery, and select one picture from the pictures by the user, which is the first picture.
In this embodiment, to implement the function of automatically adding material to the first picture, the user may trigger a gesture operation while the first picture is displayed whenever the user wants to add material, and the electronic device then adds material to the first picture based on that gesture operation. The gesture operation may be static or dynamic: a static gesture operation may be a hand posture held by the user, and a dynamic gesture operation may be a movement of the user's palm. The material may be a sticker, a pattern, and so on.
In a first possible implementation manner, the electronic device may be configured with a camera module, and in the display process of the first picture, a gesture operation within a preset range is captured through an image obtained by the camera module of the electronic device.
The preset range may be the maximum range the camera module of the electronic device can shoot; that is, when the distance between the user's palm and the camera module falls within the preset range, the camera module can capture the gesture operation.
If the first picture is the picture currently shot by the camera module, the camera module is still in the shooting state while the electronic device displays the first picture, and the gesture operation can be captured through it directly. Alternatively, if the first picture was selected from the gallery, the electronic device can start the camera module when displaying the first picture and capture the gesture operation through it.
As for how the camera module captures a gesture operation: while in the shooting state, it performs live-view shooting and caches the images obtained. The electronic device then determines whether a gesture feature is present in the cached images; for example, the gesture feature may be a palm with five fingers spread or a scissor-hand gesture. If a cached image contains the gesture feature, the electronic device determines that a gesture operation within the preset range has been captured.
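This check can be sketched minimally, assuming each cached frame already carries pre-computed feature labels (a real implementation would run hand-pose recognition on the raw frames); all names below are illustrative.

```python
# Scan cached live-view frames for a known gesture feature, e.g. an open
# palm with five fingers spread or a scissor-hand gesture.
GESTURE_FEATURES = ("five_fingers_open", "scissor_hand")

def capture_gesture(cached_frames, gesture_features=GESTURE_FEATURES):
    """Return the first recognized gesture feature, or None when no frame
    contains one (i.e. no gesture operation was captured)."""
    for frame in cached_frames:
        for feature in frame.get("features", []):
            if feature in gesture_features:
                return feature
    return None
```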
In a second possible implementation manner, the electronic device may capture the gesture operation within the preset range through detection of an obstacle within the preset range by a distance sensor of the electronic device during display of the first picture.
The preset range may include a maximum distance that can be detected by a distance sensor of the electronic device, that is, when a distance between a palm of a user and the distance sensor falls within the preset range, the distance sensor may capture a gesture operation.
As for how the distance sensor captures a gesture operation: while the electronic device displays the first picture, the distance sensor emits infrared rays. When the rays strike the user's palm (i.e., an obstacle), the palm reflects them; on receiving the reflected rays, the sensor determines that a palm is present within the preset range. Moreover, the sensor can determine the distance between the user's palm and the sensor from the intensity of the reflected rays, and the direction of the palm from their angle, and thereby determine the type, form, and so on of the gesture operation the user makes.
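A hedged sketch of this sensor logic follows. The inverse-square intensity model and the 0.5 m preset range are assumptions made for illustration; the description does not specify either.

```python
import math

def detect_obstacle(reflected, emitted, angle_deg, max_range=0.5):
    """Return (distance, direction) for an obstacle within the preset
    range, or None. Distance is inferred from reflected-ray intensity,
    direction from the reflection angle."""
    if reflected <= 0:
        return None  # no reflected rays: nothing within range
    # Assumed model: intensity falls off with the square of the distance.
    distance = math.sqrt(emitted / reflected)
    if distance > max_range:
        return None
    direction = "center" if angle_deg == 0 else ("left" if angle_deg < 0 else "right")
    return distance, direction
```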
It should be noted that, to guide the user in performing a gesture operation to add material to a picture, the electronic device may display a first operation prompt message while displaying the first picture; the message prompts the user how to operate in order to add material. For example, the first operation prompt message may be: "Swipe your palm upward in the air to add a sticker".
In step 202, when the gesture operation matches with a first preset gesture operation, the electronic device acquires a first material.
To determine whether the user wishes to add material to the first picture, this embodiment sets a first preset gesture operation; when the captured gesture operation matches it, the electronic device determines that the user intends to add material to the first picture. The first preset gesture operation may be, for example, a palm held in the air and swiped downward across the screen of the electronic device, or a thumb swiped to the right across the screen.
The first preset gesture operation may be set on the electronic device by a technician during development, or determined by a setting operation of the user. For example, the electronic device may display a gesture setting interface that prompts the user to enter a gesture operation; the user performs the gesture operation following the prompt, and the electronic device captures it as the first preset gesture operation. Subsequently, whenever the user wishes to add material to a picture, the user triggers the same gesture operation as the one entered, and the electronic device determines that material is to be added to the picture.
The material to be added may be the first material obtained from a preset material library. The preset material library can be pre-stored by the electronic device, or the electronic device can send a preset material library acquisition instruction to the server, and the server sends the preset material library to the electronic device after receiving the preset material library acquisition instruction.
Three possible implementations of how to obtain the first material from the preset material library can be included.
In a first possible implementation manner, when the gesture operation matches the first preset gesture operation, the electronic device acquires any material from a preset material library as the first material.
The electronic device may randomly acquire a material from the preset material library. Alternatively, the materials in the library may be sorted according to a preset rule, and the electronic device acquires the first-ranked material. The preset rule may sort materials by popularity from high to low, by release time from late to early, by size from small to large, or in other ways; the sorting may be performed by the electronic device or by the server, which this embodiment does not limit.
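The ranking described above might look like the following, assuming each material records a popularity score, a release time, and a size (illustrative field names):

```python
def top_material(library, rule="popularity"):
    """Return the first-ranked material under the given preset rule:
    popularity high-to-low, release time late-to-early, or size
    small-to-large."""
    keys = {
        "popularity": lambda m: -m["popularity"],
        "recency": lambda m: -m["added_at"],
        "size": lambda m: m["size"],
    }
    return sorted(library, key=keys[rule])[0]
```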
In a second possible implementation manner, when the gesture operation is matched with the first preset gesture operation, the electronic device acquires a material matched with the first preset gesture operation from a preset material library as a first material.
The electronic device may establish a matching relation between preset gesture operations and materials, where different preset gesture operations match different materials, and acquire the material matched with the first preset gesture operation as the first material based on this relation. For example, "palm swiped upward in the air across the screen" may correspond to the "cat-ear sticker"; then, when the user's palm is swiped upward across the screen, the electronic device retrieves the "cat-ear sticker".
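Such a matching relation is naturally a mapping; the entries below are illustrative placeholders echoing the example above, not materials specified by the description.

```python
# Hypothetical matching relation: different preset gesture operations
# match different materials.
GESTURE_TO_MATERIAL = {
    "palm_swipe_up": "cat-ear sticker",
    "thumb_swipe_right": "love-bubble sticker",
}

def material_for_gesture(gesture):
    """Return the material matched with the preset gesture, or None."""
    return GESTURE_TO_MATERIAL.get(gesture)
```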
In a third possible implementation manner, the electronic device performs picture recognition on the first picture to obtain picture characteristics of the first picture, and obtains a material matched with the picture characteristics from a preset material library as the first material.
The picture features of the first picture include at least one of facial expression features and limb action features. Facial expression features represent the human face when it shows a certain expression, such as an "angry expression", "happy expression", "surprised expression", or "grippy expression". Limb action features represent the limbs and organs of the human body when a certain action is performed; they may be formed by one person alone, such as a "gun gesture", "forehead-supporting action", "cheek-supporting action", or "jump action", or formed by several people together, such as "kissing actions", "hugging actions", "stroking actions", and the like.
For how to obtain the picture features, the electronic device may perform a feature extraction operation on the first picture by using a feature extraction algorithm to obtain the picture features of the first picture. The feature extraction algorithm may be a neural network algorithm, a wavelet transform algorithm, or the like.
As for how to acquire a material matched with the picture features: for the materials in the preset material library, the electronic device may establish a matching relation between picture features and materials; after obtaining the picture features of the first picture, it acquires the matched material according to this relation. For example, referring to Fig. 3, the "gun gesture" may correspond to the "BOOM! sticker"; referring to Fig. 4, the "kissing action" may correspond to the "love-bubble sticker"; referring to Fig. 5, the "angry expression" may correspond to the "magic sticker".
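The feature-to-material relation can be sketched the same way, mirroring the Fig. 3-5 examples; feature extraction itself (neural network or wavelet transform, per the previous paragraph) is out of scope here, and the feature labels are assumptions.

```python
# Hypothetical matching relation between picture features and materials.
FEATURE_TO_MATERIAL = {
    "gun_gesture": "BOOM! sticker",
    "kissing_action": "love-bubble sticker",
    "angry_expression": "magic sticker",
}

def material_for_features(picture_features):
    """Return the material matched by the first recognized picture
    feature, or None when no feature matches."""
    for feature in picture_features:
        if feature in FEATURE_TO_MATERIAL:
            return FEATURE_TO_MATERIAL[feature]
    return None
```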
In step 203, the electronic device adds the first material to the first picture based on the display position indicated by the first material.
A material may include pattern information, size information, and the like; for example, a sticker is determined by its pattern and size and may consist of one pattern or a combination of several patterns. The display position of a material determines where the material is added in the picture, such as the "upper-left corner position" or "center position".
In a possible implementation manner, the electronic device may establish a coordinate system for the first picture, for example, taking the center of the first picture as the origin, the horizontal direction as the X-axis direction, and the direction perpendicular to the X-axis as the Y-axis direction; the display position of the first material may then be represented as a position in the coordinate system, for example, "10 (cm), 20 (cm)". Of course, another position of the first picture may also be used as the origin, and the coordinate system may be established based on other directions, which is not limited in this embodiment.
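As a sketch of such a picture-centered coordinate system (an assumption for illustration, with X pointing right and Y pointing up), a display position can be converted to conventional top-left pixel coordinates as follows:

```python
def to_pixel(point, width, height):
    """Map an (x, y) position in a picture-centered coordinate system
    (origin at the picture center, X rightward, Y upward) to pixel
    coordinates whose origin is the top-left corner of the picture."""
    x, y = point
    return (width / 2 + x, height / 2 - y)
```

The choice of origin and axis directions is exactly the degree of freedom the embodiment leaves open; only the conversion needs to be consistent.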
The electronic device may store the display position indicated by each material, and may determine the display position indicated by the first material when the first material is retrieved.
The display position indicated by the first material may be a fixed position; for example, if the display position of the "heart-shaped bubble sticker" is the central position, the electronic device adds the "heart-shaped bubble sticker" to the central position of the first picture. Alternatively, the display position indicated by the first material may be determined by the picture features of the first picture; for example, if the display position of the "cat-ear sticker" is the human-ear position, the electronic device may, upon acquiring the first picture, determine the human-ear position in the first picture and add the "cat-ear sticker" at that position.
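The two kinds of display position can be sketched in one resolution routine; the dictionary keys ("position", "anchor") and the landmark names are assumptions for illustration, not the patent's data model:

```python
def resolve_position(material, landmarks, picture_center):
    """Return where to draw a material: fixed-position materials carry
    their own coordinates; feature-determined materials name an anchor
    (e.g. the human-ear position for a cat-ear sticker) that is looked
    up among the landmarks detected in the picture."""
    if material.get("position") is not None:   # fixed position, e.g. center
        return material["position"]
    anchor = material.get("anchor")            # e.g. "ears"
    if anchor in landmarks:
        return landmarks[anchor]
    return picture_center                      # fallback when nothing matches
```

Falling back to the picture center when no landmark is found is one plausible policy; the embodiment does not prescribe one.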
In step 204, the electronic device displays a second picture obtained after the material is added.
After the first material is added to the first picture, the electronic device obtains the second picture to which the first material has been added and displays it, so that the user can preview the second picture and see the effect of the first picture after the material is added.
In a possible implementation manner, the electronic device may adjust the display position, size, direction, added characters, and the like of the first material in the second picture according to the setting operation of the user.
For example, a user may click a first material on the electronic device and drag the first material, and the electronic device may display the first material at a position dragged by the user based on a user dragging operation, thereby implementing a function of moving the first material to a position desired by the user.
For another example, the electronic device may display four arrows around the first material, when the user drags any one of the four arrows outward, the electronic device may enlarge the first material by a ratio corresponding to the dragging operation, and when the user drags any one of the four arrows inward, the electronic device may reduce the first material by a ratio corresponding to the dragging operation, thereby implementing a function of enlarging or reducing the first material to a size desired by the user.
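A minimal sketch of the drag-to-resize behavior, assuming the drag distance is reported in pixels and the scaling ratio grows linearly with it (the `pixels_per_unit` constant is an assumption, not from the patent):

```python
def scale_material(size, drag_delta, pixels_per_unit=100.0):
    """Dragging an arrow outward (positive delta, in pixels) enlarges the
    material; dragging inward (negative delta) shrinks it. The ratio
    changes by 1.0 per pixels_per_unit of drag, clamped so the material
    never collapses to nothing."""
    w, h = size
    ratio = max(0.1, 1.0 + drag_delta / pixels_per_unit)
    return (w * ratio, h * ratio)
```

Scaling both dimensions by the same ratio preserves the sticker's aspect ratio, which matches the enlarge/reduce behavior described above.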
For another example, the electronic device may display a rotation arrow at a certain position of the first material, and when the user clicks the rotation arrow, and rotates the rotation arrow on the electronic device in a clockwise direction or a counterclockwise direction, the electronic device may rotate the first material to a corresponding angle, thereby implementing a function of rotating the first material to an angle desired by the user.
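The rotation behavior reduces to adding the dragged angle and normalizing, as in this sketch (the sign convention, clockwise drags being positive, is an assumption):

```python
def rotate_material(current_angle, delta_degrees):
    """Rotate a material by delta_degrees (clockwise positive,
    counterclockwise negative) and normalize into [0, 360)."""
    return (current_angle + delta_degrees) % 360
```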
For another example, the electronic device may display a text input box at a certain position of the first material, and when the user triggers an input operation in the text input box, the electronic device may add text input by the user to the first material, thereby implementing a function of adding text that the user wishes to add to the first material.
Through the process of steps 201 to 203, the electronic device can be triggered to add material to the displayed picture by a simple gesture operation, without the user manually selecting the material to be added; the steps are simple, and the picture processing efficiency is improved.
In other embodiments provided by the present disclosure, the electronic device may further process the picture to which the material has been added, so that the picture processing mode is expanded, and the flexibility is improved. For example, after executing step 204, the electronic device may further execute the following step 205, and implement a function of automatically storing a picture to which the material has been added.
In step 205, when no gesture operation is captured within the preset time period, the electronic device stores a second picture obtained after the material is added.
Within a preset time period after the second picture is displayed, if the electronic device does not capture any gesture operation, the user can be considered to be satisfied with the second picture, and therefore the electronic device stores the second picture. In the subsequent process, when the user wants to view the second picture, the electronic device can call out the stored second picture, and the second picture is provided for the user.
As for how to determine that the display duration of the second picture reaches the preset duration, the electronic device may configure a timer having a timing function. After the first material is added and the second picture is displayed, the electronic device may call an interface provided by the timer to start it; the timer takes the moment the second picture starts to be displayed as the time origin and counts the elapsed duration, and when the duration reaches the preset duration without any gesture operation having been captured, the second picture can be stored.
That is, after the second picture is displayed, if the user does not trigger any gesture operation within the preset duration, the electronic device automatically stores the second picture without the user manually triggering a storing operation; the steps are simple, the picture storing efficiency is improved, and the operation-free automatic storage provides great convenience for the user.
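The timer logic of step 205 can be sketched as follows; the function names and the polling structure are assumptions, with `time.monotonic` standing in for the timer interface mentioned above:

```python
import time

def wait_and_store(capture_gesture, store, preset_duration=3.0,
                   clock=time.monotonic):
    """Take the moment the second picture is displayed as the time origin,
    poll for gesture operations, and store the picture once the preset
    duration elapses with no gesture captured. A captured gesture is
    returned so the caller can dispatch it (cancel, replace, ...)."""
    start = clock()
    while clock() - start < preset_duration:
        gesture = capture_gesture()
        if gesture is not None:
            return gesture
    store()
    return None
```

Injecting `clock` and `capture_gesture` as parameters keeps the sketch testable; a real device would wire these to its timer and gesture-capture components.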
It should be noted that step 205 is an optional step; in practical applications, the electronic device may not execute step 205, but instead add another material again after executing steps 201 to 204.
In another embodiment, when the gesture operation matched with the second preset gesture operation in the preset range is captured within the preset time length, the second picture obtained after the material is added is deleted, and the first picture is displayed.
In consideration of the fact that the user may wish to cancel the added material when the user is not satisfied with the picture of the added material, the present embodiment provides a second preset gesture operation for deleting the picture of the added material. Then, within a preset time period after the second picture is displayed, when a gesture operation matching the second preset gesture operation is captured, it is considered that the user does not wish to store the second picture but wishes to resume displaying the first picture, and therefore the second picture is deleted and the first picture is displayed.
For example, when the first preset gesture operation is a hovering palm swiped downward across the screen of the electronic device, the second preset gesture operation may be a hovering palm swiped upward across the screen; when the first preset gesture operation is a thumb swiped rightward across the screen, the second preset gesture operation may be a thumb swiped in the opposite direction across the screen. The setting process of the second preset gesture operation may be similar to that of the first preset gesture operation, and is not described herein again.
In order to improve the operation success rate, the electronic device may further display a second operation prompting message while displaying the second picture, where the second operation prompting message is used to guide the user in performing the gesture operation that cancels the added material. The second operation prompting message may specifically be: "Hover your palm and swipe upward across the screen to cancel the sticker."
In this embodiment, after the second picture is displayed, if the user triggers a gesture operation matched with the second preset gesture operation within a preset time period, the electronic device may automatically delete the second picture and automatically display the first picture without manually triggering the operation of deleting the second picture by the user, the steps are simple, and the picture processing efficiency is improved.
In another embodiment, when a gesture operation matched with a third preset gesture operation in the preset range is captured within a preset time length, a second picture obtained after the material is added is deleted, a second material is obtained from a preset material library, the second material is added to the first picture based on a display position indicated by the second material, and the third picture obtained after the material is added is displayed.
Considering that the user may be dissatisfied with the first material and wants to replace the added material, a third preset gesture operation is set in the embodiment, and the third preset gesture operation is used for replacing the material. Within a preset time length after the second picture is displayed, when the gesture operation matched with the third preset gesture operation is captured, the user is considered to want to replace the added material, so that the electronic equipment deletes the second picture, obtains the second material from the preset material library, and adds the second material to the first picture.
For at least one material in the preset material library except the first material, the electronic device may acquire any one material from the at least one material as the second material, or acquire a material matched with the third preset gesture operation from the at least one material as the second material, or acquire a material matched with the picture feature of the first picture from the at least one material as the second material.
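The three acquisition strategies for the second material can be sketched as one selection routine; the dictionary keys ("name", "gesture", "feature") and the order in which the strategies are tried are assumptions for illustration:

```python
import random

def pick_second_material(library, first_name, gesture=None,
                         picture_features=()):
    """Try the embodiment's three strategies in a plausible order:
    (1) a material matched to the third preset gesture operation,
    (2) a material matched to the picture features of the first picture,
    (3) any remaining material other than the first."""
    candidates = [m for m in library if m["name"] != first_name]
    if gesture is not None:
        for m in candidates:
            if m.get("gesture") == gesture:
                return m
    for m in candidates:
        if m.get("feature") in picture_features:
            return m
    return random.choice(candidates) if candidates else None
```

The embodiment presents the three strategies as alternatives, so a real implementation might use only one of them rather than chaining them.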
In this embodiment, when the electronic device displays the picture with the added material, each time a gesture operation matching a certain preset gesture operation in the preset range is captured within the preset duration, the material added to the current first picture is switched; once no gesture operation is captured within the preset duration, it may be determined that the user is satisfied with the picture with the currently added material, and the picture with the currently added material is stored.
In this embodiment, after the second picture is displayed, if the user triggers a gesture operation matched with the third preset gesture operation within a preset time period, the electronic device can automatically switch the first material in the first picture to the second material without manually selecting the material to be replaced by the user, the steps are simple, and the picture processing efficiency is improved.
Referring to fig. 6 to 9, in an exemplary scenario provided by the present disclosure, after a user takes a selfie with a mobile phone to obtain the first picture shown in fig. 6, the user may hover the palm and swipe rightward across the phone screen as the gesture operation shown in fig. 7, and the phone adds the "cat-ear sticker" shown in fig. 8 to the selfie. If the user is unsatisfied with the "cat-ear sticker", the user may hover the palm and swipe leftward across the phone screen as the gesture operation shown in fig. 9, and the phone cancels the added "cat-ear sticker" and redisplays the first picture. In a selfie scenario, because the gesture operation is a hovering gesture, the user can quickly add material to a picture without touching the screen of the electronic device or clicking any on-screen button, and can then quickly proceed to the next shot.
Fig. 10 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment. The apparatus is applied to an electronic device and, as illustrated in fig. 10, includes a capturing module 1001, an obtaining module 1002, an adding module 1003, and a display module 1004.
The capturing module 1001 is used for capturing gesture operation within a preset range in the display process of the first picture;
the obtaining module 1002 is configured to obtain a first material when the gesture operation matches a first preset gesture operation;
an adding module 1003, configured to add the first material to the first picture based on the display position indicated by the first material;
and the display module 1004 is configured to display the second picture obtained after the material is added.
According to the device provided by the embodiment, in the display process of the first picture, the gesture operation in the preset range is captured, when the gesture operation is matched with the first preset gesture operation, the first material is obtained, the first material is added to the first picture based on the display position indicated by the first material, and the second picture obtained after the material is added is displayed. According to the method and the device, the electronic equipment can be triggered to add the materials to the displayed picture through simple gesture operation, the user does not need to manually select the materials to be added, the steps are simple, and the picture processing efficiency is improved.
In a possible implementation manner, the capturing module 1001 is configured to capture a gesture operation within the preset range through an image captured by a camera module of the electronic device in the display process of the first picture; or, in the display process of the first picture, to capture the gesture operation within the preset range through the detection of obstacles in the preset range by a distance sensor of the electronic device.
In a possible implementation manner, the obtaining module 1002 is configured to obtain any material from a preset material library as the first material when the gesture operation matches the first preset gesture operation; or when the gesture operation is matched with the first preset gesture operation, acquiring a material matched with the first preset gesture operation from the preset material library as the first material, wherein different preset gesture operations are matched with different materials.
In one possible implementation, the obtaining module 1002 includes:
the identification submodule is used for carrying out picture identification on the first picture to obtain the picture characteristics of the first picture;
and the obtaining sub-module is used for obtaining the material matched with the picture characteristics from a preset material library as a first material.
In one possible implementation manner, the picture features of the first picture comprise at least one of facial expression features and limb action features.
In one possible implementation, referring to fig. 11, the apparatus further includes:
the storage module 1005 is configured to store the second picture obtained after the material is added when no gesture operation is captured within a preset time period.
In one possible implementation, referring to fig. 12, the apparatus further includes:
a deleting module 1006, configured to delete the second picture obtained after the material is added when a gesture operation matching the second preset gesture operation within the preset range is captured within a preset duration;
the display module 1004 is further configured to display the first picture.
In one possible implementation, the apparatus further includes:
a deleting module 1006, configured to delete the second picture obtained after the material is added when a gesture operation matching the third preset gesture operation in the preset range is captured within a preset duration;
the obtaining module 1002 is further configured to obtain a second material from a preset material library, and add the second material to the first picture based on a display position indicated by the second material;
the display module 1004 is further configured to display a third picture obtained after the material is added.
Fig. 13 is a block diagram illustrating a picture processing apparatus 1300 according to an exemplary embodiment. For example, the apparatus 1300 may be a mobile phone, a computer, a digital broadcaster, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 13, the apparatus 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316.
The processing component 1302 generally controls overall operation of the device 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1302 may include one or more processors 1320 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1302 can include one or more modules that facilitate interaction between the processing component 1302 and other components. For example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operations at the apparatus 1300. Examples of such data include instructions for any application or method operating on device 1300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1304 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 1308 includes a screen providing an output interface between the apparatus 1300 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1308 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 1300 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 1300 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1304 or transmitted via the communication component 1316. In some embodiments, the audio component 1310 also includes a speaker for outputting audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1314 includes one or more sensors for providing various aspects of state assessment for the apparatus 1300. For example, the sensor assembly 1314 may detect the open/closed state of the apparatus 1300 and the relative positioning of components, such as the display and keypad of the apparatus 1300. The sensor assembly 1314 may also detect a change in the position of the apparatus 1300 or a component of the apparatus 1300, the presence or absence of user contact with the apparatus 1300, the orientation or acceleration/deceleration of the apparatus 1300, and a change in the temperature of the apparatus 1300. The sensor assembly 1314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate communications between the apparatus 1300 and other devices in a wired or wireless manner. The apparatus 1300 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1316 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1316 also includes a Near Field Communications (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1304 comprising instructions, executable by the processor 1320 of the apparatus 1300 to perform the method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium, instructions in which, when executed by a processor of an electronic device, enable the electronic device to perform the method of the above embodiments, the method comprising:
capturing gesture operation within a preset range in the display process of the first picture;
when the gesture operation is matched with a first preset gesture operation, acquiring a first material;
and adding the first material to the first picture based on the display position indicated by the first material, and displaying a second picture obtained after the material is added.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (13)
1. A picture processing method is applied to an electronic device, and the method comprises the following steps:
in the display process of a first picture, displaying a first operation prompt message, wherein the first operation prompt message is used for prompting a user of an operation mode of adding materials to the picture, capturing gesture operation within a preset range, and the gesture operation is used for adding the materials;
when the gesture operation is matched with a first preset gesture operation, acquiring a first material;
adding the first material to the first picture based on the display position indicated by the first material, and displaying a second picture obtained after the material is added;
when any gesture operation is not captured within a preset time length, storing a second picture obtained after the material is added;
and when capturing gesture operation matched with second preset gesture operation in the preset range within preset time, deleting the second picture obtained after the material is added, and displaying the first picture.
2. The method according to claim 1, wherein capturing gesture operations within a preset range during the displaying of the first picture comprises:
capturing gesture operation within the preset range through an image shot by a camera module of the electronic device in the display process of the first picture; or,
and capturing gesture operation in the preset range by detecting obstacles in the preset range through a distance sensor of the electronic equipment in the display process of the first picture.
3. The method according to claim 1, wherein when the gesture operation matches a first preset gesture operation, acquiring a first material comprises:
when the gesture operation is matched with the first preset gesture operation, acquiring any material from a preset material library as the first material; or,
and when the gesture operation is matched with the first preset gesture operation, acquiring a material matched with the first preset gesture operation from the preset material library as the first material, wherein different preset gesture operations are matched with different materials.
4. The method according to claim 1, wherein when the gesture operation matches a first preset gesture operation, acquiring a first material comprises:
and carrying out picture identification on the first picture to obtain picture characteristics of the first picture, and acquiring a material matched with the picture characteristics from a preset material library as a first material.
5. The method according to claim 4, wherein the picture features of the first picture comprise at least one of facial expression features and limb movement features.
6. The method of claim 1, wherein after adding the first material to the first picture based on the display position indicated by the first material and displaying a second picture obtained after adding the material, the method further comprises:
when the gesture operation matched with a third preset gesture operation in the preset range is captured within the preset duration, deleting the second picture obtained after the material is added, obtaining a second material from a preset material library, adding the second material to the first picture based on the display position indicated by the second material, and displaying the third picture obtained after the material is added.
7. A picture processing device applied to an electronic device, the device comprising:
the device comprises a capturing module, a processing module and a display module, wherein the capturing module is used for displaying a first operation prompt message in the display process of a first picture, the first operation prompt message is used for prompting a user to add a material to the picture in an operation mode, capturing gesture operation in a preset range, and the gesture operation is used for adding the material;
the acquisition module is used for acquiring a first material when the gesture operation is matched with a first preset gesture operation;
an adding module, configured to add the first material to the first picture based on a display position indicated by the first material;
the display module is used for displaying a second picture obtained after the material is added;
the storage module is used for storing a second picture obtained after the material is added when any gesture operation is not captured within a preset time length;
the deleting module is used for deleting the second picture obtained after the material is added when the gesture operation matched with the second preset gesture operation in the preset range is captured in the preset time length;
the display module is further used for displaying the first picture.
8. The apparatus according to claim 7, wherein the capturing module is configured to capture the gesture operation within the preset range through an image captured by a camera module of the electronic device during the display of the first picture; or, in the display process of the first picture, capturing gesture operation in the preset range through detection of an obstacle in the preset range by a distance sensor of the electronic device.
9. The device according to claim 7, wherein the obtaining module is configured to obtain any one of the materials from a preset material library as the first material when the gesture operation matches the first preset gesture operation; or when the gesture operation is matched with the first preset gesture operation, acquiring a material matched with the first preset gesture operation from the preset material library as the first material, wherein different preset gesture operations are matched with different materials.
10. The apparatus of claim 7, wherein the obtaining module comprises:
the identification submodule is used for carrying out picture identification on the first picture to obtain the picture characteristics of the first picture;
and the obtaining sub-module is used for obtaining a material matched with the picture characteristics from a preset material library as a first material.
11. The apparatus of claim 10, wherein the picture features of the first picture comprise at least one of facial expression features and limb movement features.
12. The apparatus of claim 7, further comprising:
the deleting module is used for deleting the second picture obtained after the material is added when the gesture operation matched with the third preset gesture operation in the preset range is captured in the preset time length;
the acquisition module is further used for acquiring a second material from a preset material library and adding the second material to the first picture based on a display position indicated by the second material;
and the display module is also used for displaying a third picture obtained after the material is added.
13. A picture processing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
during display of a first picture, displaying a first operation prompt message, wherein the first operation prompt message prompts a user about the operation mode for adding a material to the picture, and capturing a gesture operation within a preset range, the gesture operation being used to add the material;
when the gesture operation matches a first preset gesture operation, acquiring a first material;
adding the first material to the first picture based on a display position indicated by the first material, and displaying a second picture obtained after the material is added;
when no gesture operation is captured within a preset duration, storing the second picture obtained after the material is added; and
when a gesture operation matched with a second preset gesture operation is captured within the preset range within the preset duration, deleting the second picture obtained after the material is added, and displaying the first picture.
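The processor flow in claim 13 amounts to a small state machine: composite on the first preset gesture, save if the timeout elapses with no further gesture, revert on the second preset gesture. A minimal sketch where the gesture names, the 3-second timeout, and the event format are all illustrative assumptions:

```python
# Sketch of claim 13's save-or-revert state machine (hypothetical names).
FIRST_PRESET, SECOND_PRESET = "swipe_left", "swipe_down"
TIMEOUT_S = 3.0  # assumed "preset duration"

def process(first_picture, gestures):
    """gestures: sequence of (seconds_since_composite, gesture_name)."""
    second_picture = None
    for t, g in gestures:
        if second_picture is None and g == FIRST_PRESET:
            second_picture = first_picture + "+material"  # add first material
        elif second_picture is not None and g == SECOND_PRESET and t <= TIMEOUT_S:
            return ("deleted", first_picture)  # revert to the first picture
    if second_picture is not None:
        return ("saved", second_picture)  # no gesture within the timeout
    return ("idle", first_picture)

print(process("pic1", [(0.0, "swipe_left")]))
print(process("pic1", [(0.0, "swipe_left"), (2.0, "swipe_down")]))
```

The "save by silence" branch is the notable design choice: persistence is the default outcome of inaction, and only an explicit second gesture discards the composite.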
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710196930.2A CN106951090B (en) | 2017-03-29 | 2017-03-29 | Picture processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710196930.2A CN106951090B (en) | 2017-03-29 | 2017-03-29 | Picture processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106951090A CN106951090A (en) | 2017-07-14 |
CN106951090B true CN106951090B (en) | 2021-03-30 |
Family
ID=59475334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710196930.2A Active CN106951090B (en) | 2017-03-29 | 2017-03-29 | Picture processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106951090B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111656318A (en) * | 2017-11-09 | 2020-09-11 | 深圳传音通讯有限公司 | Facial expression adding method and facial expression adding device based on photographing function |
CN108345387A (en) * | 2018-03-14 | 2018-07-31 | 百度在线网络技术(北京)有限公司 | Method and apparatus for output information |
CN108628976A (en) * | 2018-04-25 | 2018-10-09 | 咪咕动漫有限公司 | A kind of material methods of exhibiting, terminal and computer storage media |
CN110298283B (en) * | 2019-06-21 | 2022-04-12 | 北京百度网讯科技有限公司 | Image material matching method, device, equipment and storage medium |
CN114257775B (en) * | 2020-09-25 | 2023-04-07 | 荣耀终端有限公司 | Video special effect adding method and device and terminal equipment |
CN113518026B (en) * | 2021-03-25 | 2023-06-06 | 维沃移动通信有限公司 | Message processing method and device and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101382836A (en) * | 2008-09-05 | 2009-03-11 | 浙江大学 | Electronic painting creative method based on multi-medium user interaction |
CN106155542A (en) * | 2015-04-07 | 2016-11-23 | 腾讯科技(深圳)有限公司 | Image processing method and device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100776801B1 (en) * | 2006-07-19 | 2007-11-19 | 한국전자통신연구원 | Gesture recognition method and system in picture process system |
CN104168417B (en) * | 2014-05-20 | 2019-09-13 | 腾讯科技(深圳)有限公司 | Image processing method and device |
CN105320929A (en) * | 2015-05-21 | 2016-02-10 | 维沃移动通信有限公司 | Synchronous beautification method for photographing and photographing apparatus thereof |
CN105827900A (en) * | 2016-03-31 | 2016-08-03 | 纳恩博(北京)科技有限公司 | Data processing method and electronic device |
- 2017-03-29: Application CN201710196930.2A filed in CN; granted as patent CN106951090B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101382836A (en) * | 2008-09-05 | 2009-03-11 | 浙江大学 | Electronic painting creative method based on multi-medium user interaction |
CN106155542A (en) * | 2015-04-07 | 2016-11-23 | 腾讯科技(深圳)有限公司 | Image processing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN106951090A (en) | 2017-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106951090B (en) | Picture processing method and device | |
KR101821750B1 (en) | Picture processing method and device | |
EP3182716A1 (en) | Method and device for video display | |
CN107015648B (en) | Picture processing method and device | |
JP2017532922A (en) | Image photographing method and apparatus | |
US11539888B2 (en) | Method and apparatus for processing video data | |
CN111159449B (en) | Image display method and electronic equipment | |
CN106210495A (en) | Image capturing method and device | |
CN108122195B (en) | Picture processing method and device | |
CN111586296B (en) | Image capturing method, image capturing apparatus, and storage medium | |
CN107426489A (en) | Processing method, device and terminal during shooting image | |
US20230097879A1 (en) | Method and apparatus for producing special effect, electronic device and storage medium | |
CN108986803B (en) | Scene control method and device, electronic equipment and readable storage medium | |
CN104216525A (en) | Method and device for mode control of camera application | |
CN111612876A (en) | Expression generation method and device and storage medium | |
WO2022262211A1 (en) | Content processing method and apparatus | |
CN109145878B (en) | Image extraction method and device | |
CN112130719B (en) | Page display method, device and system, electronic equipment and storage medium | |
CN112004020B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
EP3799415A2 (en) | Method and device for processing videos, and medium | |
CN111340690A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN107832377B (en) | Image information display method, device and system, and storage medium | |
CN107239490B (en) | Method and device for naming face image and computer readable storage medium | |
CN114079724B (en) | Taking-off snapshot method, device and storage medium | |
CN117412169A (en) | Focus tracking method, apparatus, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||