CN113271379A - Image processing method and device and electronic equipment


Info

Publication number
CN113271379A
Authority
CN
China
Prior art keywords
input
target
multimedia files
multimedia
electronic device
Prior art date
Legal status
Granted
Application number
CN202110450100.4A
Other languages
Chinese (zh)
Other versions
CN113271379B (en)
Inventor
孙鑫
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110450100.4A
Publication of CN113271379A
Application granted
Publication of CN113271379B
Legal status: Active

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, and an electronic device, and belongs to the technical field of image processing. The method comprises: receiving a first input from a user while a target image is displayed; in response to the first input, determining M multimedia files from all multimedia files saved by the electronic device based on a target input parameter of the first input, where M is a positive integer; and executing a target operation on the M multimedia files. The M multimedia files are multimedia files whose geographic locations are within a target geographic location range and which include a target object, where the target object is an object associated with the first input. The target input parameter includes at least one of: fingerprint feature information, an input mode, and an input trajectory.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and device and electronic equipment.
Background
As an electronic device is used over a longer period of time, more and more files accumulate on it, especially photos and videos. The user therefore needs to clean up unimportant files frequently.
In the related art, when processing (for example, deleting, transmitting, or editing) images (photos and videos) saved in an electronic device, a user can process the images through an application program with a file processing function installed on the device, or the user can manually select the images to be processed in an album application and trigger the electronic device to process them through multiple inputs.
However, in the above methods, the user needs to select and confirm files multiple times before the electronic device is triggered to process the corresponding images. This interaction is neither intelligent nor convenient and does not give the user an efficient way to trigger the electronic device to process images quickly. As a result, the user's operation is cumbersome and time-consuming, and the efficiency with which the electronic device processes images is low.
Disclosure of Invention
An embodiment of the present application provides an image processing method and apparatus, and an electronic device, which can solve the problem that the efficiency of processing an image by the electronic device is low.
In a first aspect, an embodiment of the present application provides an image processing method, including: receiving a first input from a user while a target image is displayed; in response to the first input, determining M multimedia files from all multimedia files saved by the electronic device based on a target input parameter of the first input, where M is a positive integer; and executing a target operation on the M multimedia files. The M multimedia files are multimedia files whose geographic locations are within a target geographic location range and which include a target object, where the target object is an object associated with the first input. The target input parameter includes at least one of: fingerprint feature information, an input mode, and an input trajectory.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including a receiving module, a determining module, and an executing module. The receiving module is configured to receive a first input from a user while a target image is displayed. The determining module is configured to, in response to the first input received by the receiving module, determine M multimedia files from all multimedia files saved by the electronic device based on a target input parameter of the first input, where M is a positive integer. The executing module is configured to execute a target operation on the M multimedia files. The M multimedia files are multimedia files whose geographic locations are within a target geographic location range and which include a target object, where the target object is an object associated with the first input. The target input parameter includes at least one of: fingerprint feature information, an input mode, and an input trajectory.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In this embodiment, when the electronic device displays the target image, the user may perform a first input to trigger the electronic device to determine, based on the target input parameter of that input, M multimedia files stored on the electronic device whose geographic locations are within a target geographic location range and which include the target object, and to perform a target operation on the M multimedia files. In other words, while the target image is displayed, the first input triggers the electronic device to acquire the fingerprint feature information, input mode, and input trajectory of the input, determine the target object and the target geographic location range from these target input parameters, and then select, from all stored multimedia files, the M multimedia files whose geographic locations fall within the target geographic location range and which include the target object, on which the target operation is performed. This simplifies the user's operation and improves the efficiency and flexibility with which the electronic device processes images.
Drawings
Fig. 1 is a schematic diagram of an image processing method according to an embodiment of the present application;
fig. 2 is one of schematic diagrams of an example of an interface of a mobile phone according to an embodiment of the present disclosure;
fig. 3 is a second schematic diagram of an image processing method according to an embodiment of the present application;
fig. 4 is a third schematic diagram of an image processing method according to an embodiment of the present application;
fig. 5 is a second schematic diagram of an example of an interface of a mobile phone according to an embodiment of the present disclosure;
FIG. 6 is a fourth schematic diagram of an image processing method according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a second schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a third schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 11 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used may be interchanged where appropriate, so that embodiments of the application can be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like generally denote a class of objects and do not limit their number; for example, the first object may be one object or more than one object. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The image processing method provided by the embodiment of the present application is applied to scenes for processing multimedia files (such as pictures, motion pictures, videos, and the like) stored in electronic devices, and a specific application scene may be determined according to actual use requirements, which is not specifically limited in the present application.
Taking deleting pictures stored in the electronic device as an example, assume the user needs to trigger the electronic device to delete certain pictures (for example, pictures of user 1). When the user (the owner of the electronic device) has the electronic device display a picture a of user 1 (who is not the owner), where picture a was taken in city b, the user performs a first input in the fingerprint input area of the electronic device. The electronic device can then obtain the input parameters corresponding to that input (namely the fingerprint feature information, input mode, and input trajectory), determine a face image of user 1 and a geographic location range (for example, the northern geographic location range of city b) according to these input parameters, and, from all multimedia files (picture files, photo files, video files, etc.) stored in the electronic device, acquire the multimedia files that include the face image of user 1 and whose geographic locations are within the northern geographic location range of city b, so that these multimedia files can be deleted.
For example, if the user clicks the fingerprint input area with the index finger (i.e., the first fingerprint feature information), the electronic device may acquire and delete a plurality of multimedia files including the face image of the user 1 and having geographic locations located in the city b; or, the user performs click input in the fingerprint input area through the index finger and performs sliding input clockwise, so that the electronic device can acquire and delete a plurality of multimedia files which comprise the face image of the user 1 and have geographic positions within the geographic position range of the south of the city b; or, the user performs click input in the fingerprint input area by the index finger and performs slide input in the counterclockwise direction, so that the electronic device may acquire and delete the plurality of multimedia files including the face image of the user 1 and having the geographic location in the northern geographic location range of the city b.
Therefore, in the embodiment of the application, the user does not need to select and delete the multimedia files needing to be deleted one by one through a file management interface. Therefore, the operation of the user can be simplified, and the efficiency and the flexibility of cleaning the image by the electronic equipment are improved.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
An embodiment of the present application provides an image processing method, and fig. 1 shows a flowchart of the image processing method provided in the embodiment of the present application, which may be applied to an electronic device. As shown in fig. 1, the image processing method provided in the embodiment of the present application may include steps 201 to 203 described below.
In step 201, in the case of displaying the target image, the electronic device receives a first input of a user.
In the embodiment of the application, when the electronic device displays the target image, a user can perform a first input, so that the electronic device can acquire the fingerprint feature information, the input mode and the input track of the first input, and determine, according to the target input parameters, M multimedia files of which the geographic positions are within the range of the target geographic position and which include the target object from all multimedia files stored in the electronic device, thereby performing target operation on the M multimedia files.
Optionally, in this embodiment of the application, the first input is an input of a user in the fingerprint input area, where the first input includes at least one of: fingerprint input, press input, click input, slide input, etc.
It should be noted that, in fingerprint identification technology, a correspondence is established between a user and a piece of fingerprint feature information, so that the corresponding face image (i.e., the identity information of the verified user) can be determined by comparing the fingerprint feature information input by the user with the pre-stored correspondence. Because each user's skin ridges (i.e., fingerprints) have unique patterns, break points, and cross points, fingerprint identification relies on this uniqueness and stability.
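The patent does not prescribe any data structures for this lookup; the following Python sketch only illustrates the idea of resolving a fingerprint input to its associated face image via a pre-stored correspondence, with all names and the feature representation being assumptions:

```python
# Illustrative sketch only; the patent defines no data structures or APIs.
# The feature representation (a tuple of minutiae descriptors) is an assumption.

from typing import Optional, Tuple

# Pre-stored correspondence: fingerprint feature information -> associated face image id.
FINGERPRINT_TO_FACE = {
    ("index-finger-minutiae",): "face_user_1",
    ("middle-finger-minutiae",): "face_user_2",
}

def resolve_target_face(input_features: Tuple[str, ...]) -> Optional[str]:
    """Return the face image id associated with the input fingerprint features."""
    # A real device would use a similarity score; exact lookup keeps the sketch simple.
    return FINGERPRINT_TO_FACE.get(input_features)

# Example: an index-finger input resolves to user 1's face image.
assert resolve_target_face(("index-finger-minutiae",)) == "face_user_1"
```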
Optionally, in this embodiment of the application, the target image includes a plurality of objects, and the plurality of objects at least include a target object. The plurality of objects included in the target image may be any one of: the present application is not limited to human face images, animal images, plant images, object images, and the like, and in the following embodiments, the present application takes an example in which an object is a human face image and a target object is a target human face image as an example for illustrative explanation.
Optionally, in this embodiment of the application, the target object may be understood as the region of the target image in which the target object is displayed on the screen.
Optionally, in this embodiment of the application, a floating control is included in the interface for displaying the target image, and before the user performs the first input, the user may input the floating control to trigger the electronic device to be in the "multimedia file management mode", and recognize the target image through an object recognition algorithm (e.g., a face recognition algorithm), so as to mark and display all the face images in the target image.
In this embodiment of the application, the target image may be any one of: a picture, a photograph, a frame of a moving picture, a frame of a video, etc. That is, the multimedia file corresponding to the target image may be any one of the following: picture files, photo files, motion picture files, video files, and the like.
The electronic device is taken as a mobile phone for illustration. As shown in fig. 2 (a), the mobile phone displays a target image 10, where the target image includes a first facial image 11 and a second facial image 12, and a user may input a file management control 13 displayed in suspension on a screen to trigger the mobile phone to be in a multimedia file management mode; as shown in fig. 2 (B), the mobile phone starts a face recognition function according to the user input to the file management control 13, and displays (shown by a dashed line frame in the figure) the first face image 11 and the second face image 12 in the target image 10 in a marked manner.
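As a rough illustration of the mode toggle and face marking described above, the sketch below uses a stand-in detection routine; the patent does not specify the recognition algorithm or any API, so every name here is hypothetical:

```python
# Hypothetical sketch of the "multimedia file management mode" toggle; detect_faces is a
# stand-in for whatever object/face recognition algorithm the device actually uses.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FaceMark:
    face_id: str
    box: Tuple[int, int, int, int]  # x, y, width, height of the marked region

def detect_faces(image) -> List[FaceMark]:
    """Stand-in detector returning fixed marks for illustration only."""
    return [FaceMark("face_1", (40, 30, 120, 120)), FaceMark("face_2", (220, 35, 110, 115))]

@dataclass
class ImageViewer:
    management_mode: bool = False
    marks: List[FaceMark] = field(default_factory=list)

    def on_floating_control_tapped(self, displayed_image) -> None:
        # Tapping the floating control toggles the mode; entering it marks all faces.
        self.management_mode = not self.management_mode
        self.marks = detect_faces(displayed_image) if self.management_mode else []
```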
Step 202, the electronic device responds to the first input, and determines M multimedia files from all multimedia files saved by the electronic device based on the target input parameter of the first input.
Wherein M is a positive integer.
In an embodiment of the present application, the M multimedia files are multimedia files whose geographic locations are within a target geographic location range and which include a target object, where the target object is an object associated with the first input. The target input parameter includes at least one of: fingerprint feature information, an input mode, and an input trajectory.
Optionally, in this embodiment of the application, the electronic device may determine, according to the fingerprint feature information in the target input parameter, that the M multimedia files are multimedia files including the target object, that is, the fingerprint feature information in the first input of the user is associated with the target object.
Optionally, in this embodiment of the application, the electronic device may obtain M multimedia files from all multimedia files stored in the electronic device based on the target feature information of the target object.
Optionally, in this embodiment of the application, the electronic device may determine, according to an input manner in the target input parameter, whether the M multimedia files are multimedia files including a first object (i.e., a first face image). That is, when the input modes are different, the M multimedia files may be: a multimedia file including the target object and the first object, or a multimedia file including the target object but not the first object, etc.
Optionally, in this embodiment of the application, the electronic device may determine, according to the input trajectory in the target input parameter, that the M multimedia files are multimedia files whose geographic positions are within the target geographic position range. The target geographical position range is associated with the input track, and when the input track is different, the target geographical position range is different, and the corresponding M multimedia files are also different.
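Putting these three determinations together, a minimal sketch of the selection in step 202 might look as follows, assuming each stored multimedia file carries its saved geographic location and a list of recognized objects (both assumptions, since the patent does not fix a storage format):

```python
# A sketch of the selection in step 202, assuming each stored multimedia file carries
# the geographic location recorded when it was saved and a list of recognized objects.

from dataclasses import dataclass, field
from typing import List, Tuple

GeoRange = Tuple[Tuple[float, float], Tuple[float, float]]  # ((lat_min, lat_max), (lon_min, lon_max))

@dataclass
class MediaFile:
    path: str
    geo: Tuple[float, float]                          # (latitude, longitude) at save time
    objects: List[str] = field(default_factory=list)  # e.g. recognized face image ids

def in_range(geo: Tuple[float, float], geo_range: GeoRange) -> bool:
    (lat_min, lat_max), (lon_min, lon_max) = geo_range
    return lat_min <= geo[0] <= lat_max and lon_min <= geo[1] <= lon_max

def select_m_files(all_files: List[MediaFile], target_object: str,
                   target_range: GeoRange) -> List[MediaFile]:
    """Keep files whose geographic location is in the target range and that include
    the target object associated with the first input."""
    return [f for f in all_files
            if in_range(f.geo, target_range) and target_object in f.objects]
```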
Step 203, the electronic device executes target operation on the M multimedia files.
In the embodiment of the application, after the electronic device determines the M multimedia files according to the target input parameter of the user's first input, the electronic device may directly perform the target operation on the M multimedia files, or it may display a prompt message for confirming the target operation to be performed and then perform the target operation on the M multimedia files according to a further input of the user.
Optionally, in this embodiment of the present application, the target operation may be any one of the following: a delete operation, a file transfer operation, a file edit operation, and the like. In the embodiments of the present application, the delete operation is taken as the example of the target operation.
In the embodiment of the application, fingerprint identification allows the user to determine and process the corresponding multimedia files more conveniently: the user's input is associated with the geographic location of the multimedia files to be processed, and the multimedia files to be processed within the target geographic location range are determined according to the input mode.
The embodiment of the application provides an image processing method. When the electronic device displays the target image, the user can perform a first input to trigger the electronic device to determine, based on the target input parameter of that input, the M multimedia files stored on the device whose geographic locations are within the target geographic location range and which include the target object, and to perform the target operation on them. Specifically, the first input triggers the electronic device to acquire the fingerprint feature information, input mode, and input trajectory of the input, determine the target object and the target geographic location range from these target input parameters, and then select the M qualifying multimedia files from all stored multimedia files. This simplifies the user's operation and improves the efficiency and flexibility with which the electronic device processes images.
Optionally, in this embodiment of the present application, as shown in fig. 3 in combination with fig. 1, before "determining M multimedia files from all multimedia files saved by the electronic device based on the first input target input parameter" in step 202, the image processing method provided in this embodiment of the present application may further include step 301 described below, and "determining M multimedia files from all multimedia files saved by the electronic device based on the first input target input parameter" in step 202 may specifically be implemented by step 202a described below.
Step 301, the electronic device, in response to the first input, acquires a target geographic position when the target image is stored, and a plurality of geographic position ranges corresponding to the target geographic position.
Optionally, in this embodiment of the application, after the user inputs the first input, the electronic device may further obtain, according to the input of the user, the target geographic position when the target image is stored from the electronic device, and determine, according to the target geographic position, a geographic position range around the target geographic position, so that the geographic position range around the target geographic position is divided into a plurality of geographic position ranges in a preset manner.
Optionally, in this embodiment of the application, the electronic device performs grouping processing on multimedia files including the target face image, which are stored in the electronic device, according to a plurality of geographic position ranges, so as to obtain a plurality of sets of multimedia files. One geographic location range corresponds to a group of multimedia file sets, and the group of multimedia file sets comprises at least one multimedia file.
It can be understood that the electronic device correspondingly groups the multimedia files including the target face image to obtain a plurality of groups of multimedia file sets on the basis of a plurality of geographical position ranges according to the geographical position corresponding to each multimedia file in the M multimedia files.
It should be noted that, when the electronic device saves each multimedia file, the geographical location of the electronic device when the electronic device saves the multimedia file may be correspondingly recorded, so that the electronic device may directly obtain the geographical location corresponding to each multimedia file. Therefore, the geographic position range of each multimedia file is determined according to the geographic position corresponding to the multimedia file.
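A sketch of this grouping, reusing the MediaFile record and in_range helper from the earlier sketch; how the ranges around the target location are named and represented is an assumption:

```python
# A sketch of the grouping in step 301, reusing MediaFile and in_range from the
# earlier sketch; the named bounding boxes around the target location are assumptions.

from collections import defaultdict
from typing import Dict, List

def group_by_geo_range(files: List["MediaFile"],
                       geo_ranges: Dict[str, "GeoRange"]) -> Dict[str, List["MediaFile"]]:
    """geo_ranges maps a range name (e.g. 'south', 'north') to its bounding box.
    Each file is placed into the first range whose box contains its location."""
    groups: Dict[str, List["MediaFile"]] = defaultdict(list)
    for f in files:
        for name, box in geo_ranges.items():
            if in_range(f.geo, box):
                groups[name].append(f)
                break
    return groups
```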
Step 202a, the electronic device determines the target geographical position range based on the first input target input parameter, and determines M multimedia files from all multimedia files stored in the electronic device according to the target geographical position range and the target object.
Optionally, in this embodiment of the application, the electronic device may determine, according to the input trajectory of the first input (i.e., the target input parameter), a target geographic location range from the multiple geographic location ranges, so as to determine a set of multimedia file sets (i.e., M multimedia files) that includes the target object and correspond to the target geographic location range.
Optionally, in this embodiment of the application, a geographical location range corresponding to different input tracks is preset in the electronic device, and when the input tracks of the first input are different, the target geographical location ranges are different. For example, when the input trajectory is a clockwise sliding, the target geographic location range may be a south location range of the target geographic location; when the input trajectory is a counterclockwise swipe, the target geographic location range may be a north location range of the target geographic location.
For example, the electronic device may divide a geographic location range around the target geographic location into the following geographic location ranges: a geographic location range in which the target geographic location is located, a south location range of the target geographic location, a north location range of the target geographic location, a west location range of the target geographic location, and an east location range of the target geographic location.
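The mapping from input trajectory to range could be expressed as a simple lookup table; the click/clockwise/counterclockwise associations follow the examples in the text, while the range keys are hypothetical:

```python
# A lookup table expressing the trajectory-to-range association described above; the
# click/clockwise/counterclockwise mapping follows the text, the keys are hypothetical.

TRAJECTORY_TO_RANGE = {
    "click": "same_location",
    "clockwise_slide": "south",
    "counterclockwise_slide": "north",
}

def target_range_name(trajectory: str) -> str:
    """Fall back to the target image's own location when the trajectory is unknown."""
    return TRAJECTORY_TO_RANGE.get(trajectory, "same_location")
```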
In the embodiment of the application, the electronic device may determine a plurality of geographical position ranges corresponding to the target geographical position according to the target geographical position corresponding to the target image, so that the target geographical position range may be determined from the plurality of geographical position ranges according to a target input parameter input by a user, and the M multimedia files including the target object may be determined from all the multimedia files stored in the electronic device according to the target geographical position range and the target object.
Optionally, in this embodiment of the present application, the target input parameters include: fingerprint characteristic information, input mode and input track. The step 202 "determining M multimedia files from all multimedia files saved in the electronic device based on the first input target input parameter" may be specifically implemented by the following step 202b or step 202 c.
Step 202b, the electronic device responds to the first input, and under the condition that the fingerprint feature information is matched with the target fingerprint feature information and the input mode is the first input mode, the M multimedia files are determined according to the input track of the first input.
In an embodiment of the present application, the target fingerprint feature information is associated with a target object, the target geographic location range is associated with a first input trajectory, and when the first input trajectory is different, the target geographic location range is different.
For example, if the user clicks the fingerprint input area with the index finger, the electronic device may determine that all multimedia files that include the person a (i.e., the face image corresponding to the fingerprint feature information of the index finger) and have the same target geographic location as the currently displayed target image need to be deleted; or, the user clicks the fingerprint input area with the index finger and performs sliding input clockwise, the electronic device may determine that all multimedia files including the person a and having geographic positions within the south position range of the target geographic position of the currently displayed target image need to be deleted; or, the user clicks the fingerprint input area with the index finger and performs a sliding input in the counterclockwise direction, the electronic device may determine that all multimedia files including the person a and having a geographical location within the north position range of the target geographical location of the currently displayed target image need to be deleted.
Specifically, in the electronic device, 10 individual photographs of a person a, 15 combined photographs of the person a and the person B, and 20 combined photographs of persons other than the person a and the person B are stored, and of these 45 photographs, 8 are cities that are the same as the geographical position of the currently displayed target image, 12 are cities that are south of the geographical position of the currently displayed target image, and 25 are cities that are north of the geographical position of the currently displayed target image. If the user clicks the fingerprint input area with the index finger, the user represents that all the photos, i.e., 8 photos, containing the person a and having the same geographical position as the currently displayed target image are to be deleted; if the user clicks the fingerprint input area by the index finger and performs sliding input in the clockwise direction within 2s, deleting 12 photos; if the user clicks the fingerprint input area with the index finger and performs a slide input in the counterclockwise direction within 2s, 25 photos are deleted.
Step 202c, the electronic device responds to the first input, and under the condition that the fingerprint feature information is matched with the target fingerprint feature information and the input mode is the second input mode, the M multimedia files are determined according to the input track of the first input.
In this embodiment of the application, the target geographic location range is associated with an input track of a first input, and the images corresponding to the M multimedia files do not include a first object.
For example, if the user clicks the fingerprint input region with the index finger, the electronic device may determine that all multimedia files including person a but not including person B and having the same target geographical location as the currently displayed target image need to be deleted; or, the user clicks the fingerprint input area with the index finger and performs sliding input clockwise, the electronic device may determine that all multimedia files that include the person a but not the person B and whose geographic positions are within the south position range of the target geographic position of the currently displayed target image need to be deleted; alternatively, the user clicks the fingerprint input region with the index finger and performs a sliding input in the counterclockwise direction, the electronic device may determine that all multimedia files including the person a but not including the person B and having geographic positions within the range of the north position of the target geographic position of the currently displayed target image need to be deleted.
Specifically, the electronic device stores 10 individual photos of person A (1 taken in the same city as the geographic location of the currently displayed target image, 5 in cities south of it, and 4 in cities north of it), 15 group photos of person A and person B (6 in the same city, 5 in cities to the south, and 4 in cities to the north), and 20 photos of persons other than person A and person B (1 in the same city, 2 in cities to the south, and 17 in cities to the north). Of these 45 photos, 30 include person A but not person B (2 in the same city as the currently displayed target image, 7 in cities to the south, and 21 in cities to the north).
If the user clicks the fingerprint input area by the index finger, it means that all the photos, i.e., 2 photos, containing person a but not person B and having the same geographical position as the currently displayed target image are to be deleted; if the user clicks the fingerprint input area by the index finger and performs sliding input in the clockwise direction within 2s, all photos of a city which contains the person A but does not contain the person B and is south of the geographic position of the currently displayed target image, namely 7 photos, are deleted; if the user clicks the fingerprint input area with the index finger and performs a slide input counterclockwise within 2s, it represents that all the photos of the city including person a but not including person B and north of the geographical position of the currently displayed target image, i.e., 21 photos, are to be deleted.
Optionally, in this embodiment of the application, the user may perform the above operations with the middle finger to indicate that photos containing person B are to be deleted; one person corresponds to one piece of fingerprint feature information (i.e., one finger).
In the embodiment of the application, fingerprint identification is applied to the function of deleting multimedia files (such as photos). Different face images are associated with different fingerprints, so that when certain multimedia files need to be deleted, the corresponding fingerprint verification can trigger the electronic device to delete the multimedia files containing the corresponding face image. Meanwhile, the input parameters of the user's input are associated with the target geographic location information of the currently displayed image, and the input mode is associated with the range of multimedia files to be deleted. In addition, to improve the controllability of the deletion operation, the user can see all the photos to be deleted and can shield the photos that should not be deleted. This improves the intelligence and convenience of photo cleaning and better helps the user free up storage space.
In the embodiment of the application, when the fingerprint feature information input by the user matches the target fingerprint feature information and the input mode is the corresponding input mode, the electronic device can determine the M multimedia files according to the input trajectory, and different input modes determine whether the M multimedia files include the first object. The multimedia files to be processed can therefore be determined accurately from the user's input, which improves the accuracy with which the electronic device processes multimedia files.
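A sketch of how the two input modes of steps 202b and 202c could differ, under the same assumed record layout as the earlier sketches (the geographic filtering by input trajectory is omitted here):

```python
# A sketch of the mode distinction in steps 202b/202c, under the record layout of the
# earlier sketches; geographic filtering by the input trajectory is omitted here.

from typing import List

def select_by_mode(files: List["MediaFile"], target_object: str,
                   first_object: str, input_mode: str) -> List["MediaFile"]:
    if input_mode == "first":
        # First input mode: every file containing the target object qualifies.
        return [f for f in files if target_object in f.objects]
    if input_mode == "second":
        # Second input mode: files containing the target object but not the first object.
        return [f for f in files
                if target_object in f.objects and first_object not in f.objects]
    raise ValueError(f"unrecognized input mode: {input_mode}")
```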
Optionally, in this embodiment of the present application, as shown in fig. 4 in combination with fig. 1, before step 203 described above, the image processing method provided in this embodiment of the present application may further include step 401 described below.
Step 401, the electronic device divides the M multimedia files into N types and displays the M multimedia files according to the types.
In the embodiment of the present application, N is a positive integer; the categories of the M multimedia files include at least one of the following categories: a multimedia file including only the target object, a multimedia file including the target object and the first object, a multimedia file including the target object and other objects; the other object is an object other than the target object and the first object.
Optionally, in this embodiment of the application, the electronic device may classify the M multimedia files according to a face image included in the multimedia file, so as to obtain multimedia files of multiple categories.
It should be noted that, when the electronic device directly deletes the multimedia files corresponding to a certain face image, the user cannot know which multimedia files are deleted together and cannot judge by browsing whether some of them should really be deleted. Therefore, by classifying the M multimedia files and displaying them by category, the user can check and screen the multimedia files to be deleted, delete them selectively, and shield the multimedia files that do not need to be deleted.
In the embodiment of the application, after the electronic device determines M multimedia files from all multimedia files stored in the electronic device, the electronic device may display the multimedia files including the target face image in a classified manner.
Illustratively, in conjunction with fig. 2, as shown in fig. 5 (a) and 5 (B), the mobile phone displays M photos including the first face image 11 in the mobile phone interface in three categories, the first category being a single photo including only the first face image 11, the second category being a group photo including the first face image 11 and a person a (e.g., the mobile phone owner), and the third category being a group photo including the first face image 11 and other persons. Therefore, the user can visually check the content of the photo to be deleted and select and adjust the photo.
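The three-way display could be produced by a classification like the following sketch, again under the assumed record layout; a photo containing both the first object and other persons is grouped with the first object here, which is a simplification:

```python
# A sketch of the three-way classification used for display; photos containing both the
# first object and other persons are grouped with the first object, a simplification.

from typing import Dict, List

def classify(m_files: List["MediaFile"], target_object: str,
             first_object: str) -> Dict[str, List["MediaFile"]]:
    groups: Dict[str, List["MediaFile"]] = {
        "only_target": [], "target_and_first": [], "target_and_others": []
    }
    for f in m_files:
        others = [o for o in f.objects if o != target_object]
        if not others:
            groups["only_target"].append(f)
        elif first_object in others:
            groups["target_and_first"].append(f)
        else:
            groups["target_and_others"].append(f)
    return groups
```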
Optionally, in this embodiment, with reference to fig. 4, as shown in fig. 6, after the step 401, the image processing method provided in this embodiment may further include the following step 402 and step 403, and the step 203 may be specifically implemented by the following step 203 a.
Step 402, the electronic device receives a second input of the target multimedia file from the user.
Optionally, in this embodiment of the application, after the electronic device displays the multimedia files including the target face image in a classified manner, the user may perform a selection input on the displayed multimedia files to trigger the electronic device to mark and display the target multimedia files, so that the target multimedia files may not be deleted.
And 403, the electronic equipment responds to the second input and marks and displays the target multimedia file.
In this embodiment of the application, if there is a multimedia file that the user considers should not be deleted (for example, a second multimedia file among the multimedia files that include only the target face image), the user may perform a selection input on that multimedia file, and the electronic device marks it so that it is not deleted in the subsequent deletion operation.
For example, if the user selects a second multimedia file among the multimedia files including only the target face image and then clicks the fingerprint input region with the index finger, the electronic device may determine that a multimedia file including the person a but not the second multimedia file and having the same target geographical location as the currently displayed target image needs to be deleted.
Step 203a, the electronic device deletes the M multimedia files excluding the target multimedia file from the electronic device.
In the embodiment of the application, the user can trigger the electronic equipment to shield the multimedia files which do not need to be deleted through input, so that further distinction and limitation are carried out on the cleaning range.
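A sketch of step 203a under the same assumptions, where the files marked by the second input are shielded and only the remaining ones are deleted (deletion is only simulated here):

```python
# A sketch of step 203a: files marked by the second input are shielded, and only the
# remaining files among the M multimedia files are deleted (deletion is simulated).

from typing import List

def delete_unmarked(m_files: List["MediaFile"],
                    shielded: List["MediaFile"]) -> List["MediaFile"]:
    shielded_paths = {f.path for f in shielded}
    to_delete = [f for f in m_files if f.path not in shielded_paths]
    for f in to_delete:
        print(f"deleting {f.path}")  # a real device would remove the file from storage
    return to_delete
```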
Optionally, in this embodiment of the application, after the electronic device deletes the multimedia files, the user may input the floating control again to trigger the electronic device to exit the "multimedia file management mode" and turn off the face recognition function.
It should be noted that, for the M multimedia files described in the present application, the specific value of M is determined by the specific situation; although the expression "M multimedia files" is used throughout, the value of M may differ from case to case.
In the embodiment of the application, after the electronic device displays the M multimedia files by category, the user can visually check the specific content of each multimedia file. The user can then perform a selection input on a target multimedia file among the M multimedia files to trigger the electronic device to mark and display it, so that the target multimedia file is not deleted when the electronic device deletes the other multimedia files. This improves the flexibility with which the electronic device deletes multimedia files.
Optionally, in this embodiment of the present application, before step 201 described above, the image processing method provided in this embodiment of the present application may further include step 501 and step 502 described below.
Step 501, the electronic device receives a third input from the user.
Step 502, the electronic device determines, in response to the third input, at least one of: the corresponding relation of fingerprint characteristic information and an object, the corresponding relation of an input track and a geographical position range, and the corresponding relation of an input mode and a multimedia file.
Optionally, in this embodiment of the application, the user may preset the correspondence between different pieces of fingerprint feature information and different face images, so that the corresponding face image can be determined when the electronic device receives a fingerprint input from the user.
Optionally, in this embodiment of the application, the user may preset the correspondence between different input trajectories and different geographic location ranges, so that the corresponding geographic location range can be determined when the electronic device receives the user's input.
Optionally, in this embodiment of the application, the user may preset the correspondence between different input modes and different classes of multimedia files, so that the corresponding class of multimedia files can be determined when the electronic device receives the user's input.
In the embodiment of the application, the corresponding relation between the fingerprint and the face image is set in advance. For example, the fingerprint feature information of the thumb corresponds to a photograph of person a, the fingerprint feature information of the index finger corresponds to a photograph of person B, the fingerprint feature information of the middle finger corresponds to a photograph of person C, and so on.
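These preset correspondences could be held in a simple configuration structure, as in the sketch below; all keys and values are illustrative and follow the examples in the text, not a format defined by the patent:

```python
# Illustrative configuration for the correspondences set up in steps 501 and 502; the
# keys and values follow the examples in the text but are not a format the patent fixes.

PRESETS = {
    "fingerprint_to_object": {
        "thumb": "person_A",
        "index_finger": "person_B",
        "middle_finger": "person_C",
    },
    "trajectory_to_geo_range": {
        "click": "same_location",
        "clockwise_slide": "south",
        "counterclockwise_slide": "north",
    },
    "input_mode_to_file_class": {
        "first": "files containing the target object",
        "second": "files containing the target object but not the first object",
    },
}
```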
In the embodiment of the application, the user presets the correspondence between fingerprint feature information and objects, the correspondence between input trajectories and geographic location ranges, and the correspondence between input modes and classes of multimedia files. When the user needs to trigger the electronic device to process certain multimedia files, a fingerprint input can therefore directly trigger the electronic device to determine the corresponding multimedia files, which improves the efficiency with which the electronic device processes multimedia files.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described by taking an image processing apparatus executing the image processing method as an example.
Fig. 7 shows a schematic diagram of a possible structure of an image processing apparatus according to an embodiment of the present application. As shown in fig. 7, the image processing apparatus 70 may include: a receiving module 71, a determining module 72 and an executing module 73.
Wherein, the receiving module 71 is configured to receive a first input from a user while the target image is displayed. The determining module 72 is configured to, in response to the first input received by the receiving module 71, determine M multimedia files from all multimedia files saved by the electronic device based on a target input parameter of the first input, where M is a positive integer. The executing module 73 is configured to execute a target operation on the M multimedia files. The M multimedia files are multimedia files whose geographic locations are within a target geographic location range and which include a target object, where the target object is an object associated with the first input. The target input parameter includes at least one of: fingerprint feature information, an input mode, and an input trajectory.
In a possible implementation manner, with reference to fig. 7, as shown in fig. 8, the image processing apparatus 70 provided in the embodiment of the present application may further include: an acquisition module 74. The obtaining module 74 is configured to obtain a target geographic location when the target image is stored and a plurality of geographic location ranges corresponding to the target geographic location before the determining module 72 determines M multimedia files from all multimedia files stored in the electronic device based on the first input target input parameter. The determining module 72 is specifically configured to determine a target geographic location range based on the first input target input parameter, and determine M multimedia files from all multimedia files stored in the electronic device according to the target geographic location range and the target object.
In one possible implementation, the target input parameters include: fingerprint characteristic information, input mode and input track. The determining module 72 is specifically configured to determine M multimedia files according to an input trajectory of a first input when the fingerprint feature information matches the target fingerprint feature information and the input mode is the first input mode; the target fingerprint characteristic information is associated with the target object, the target geographic position range is associated with the input track of the first input, and the target geographic position range is different when the input tracks of the first input are different. Or, the determining module 72 is specifically configured to determine M multimedia files according to the input trajectory of the first input, when the fingerprint feature information matches the target fingerprint feature information and the input mode is the second input mode; the target geographic position range is associated with the input track of the first input, and the images corresponding to the M multimedia files do not comprise the first object.
In a possible implementation manner, with reference to fig. 7, as shown in fig. 9, the image processing apparatus 70 provided in the embodiment of the present application may further include: a classification module 75 and a display module 76. The classifying module 75 is configured to classify the M multimedia files into N classes before the executing module 73 performs the target operation on the M multimedia files. And a display module 76, configured to display the M multimedia files according to categories, where N is a positive integer. Wherein the categories of the M multimedia files include at least one of the following categories: a multimedia file including only the target object, a multimedia file including the target object and the first object, a multimedia file including the target object and other objects; the other object is an object other than the target object and the first object.
In a possible implementation manner, the receiving module 71 is further configured to receive a second input of the target multimedia file from the user after the displaying module 76 displays the M multimedia files according to the categories. A display module 76, further configured to mark a display target multimedia file in response to the second input received by the receiving module 71; the executing module 73 is specifically configured to delete the M multimedia files that do not include the target multimedia file from the electronic device.
In a possible implementation manner, the receiving module 71 is further configured to receive a third input from the user before receiving the first input from the user when the target image is displayed. A determining module 72, further configured to determine, in response to the third input received by the receiving module 71, at least one of: the corresponding relation of fingerprint characteristic information and an object, the corresponding relation of an input track and a geographical position range, and the corresponding relation of an input mode and a multimedia file.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the image processing apparatus in the above method embodiments, and for avoiding repetition, detailed descriptions are not repeated here.
The embodiment of the application provides an image processing apparatus. While the electronic device displays the target image, the user's first input triggers the electronic device to acquire the fingerprint feature information, input mode, and input trajectory of that input and to determine the target object and the target geographic location range from these target input parameters, so that the electronic device can select, from all stored multimedia files, the M multimedia files whose geographic locations are within the target geographic location range and which include the target object, and perform the target operation on them. This simplifies the user's operation and improves the efficiency and flexibility with which the electronic device processes images.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
Optionally, as shown in fig. 10, an electronic device M00 is further provided in an embodiment of the present application, and includes a processor M01, a memory M02, and a program or an instruction stored in the memory M02 and executable on the processor M01, where the program or the instruction when executed by the processor M01 implements each process of the foregoing embodiment of the image processing method, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 110 through a power management system, so that charging, discharging, and power consumption are managed through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described again here.
The user input unit 107 is configured to receive a first input from a user when the target image is displayed.
The processor 110 is configured to determine, in response to the first input, M multimedia files from all multimedia files saved by the electronic device based on a target input parameter of the first input.
The memory 109 is configured to perform a target operation on the M multimedia files.
An embodiment of the application provides an electronic device. When the electronic device displays a target image, a user can perform a first input to trigger the electronic device to acquire the fingerprint feature information, input mode, and input track corresponding to the first input, and to determine a target object and a target geographic position range from these target input parameters. The electronic device can then determine, from all of its saved multimedia files, the M multimedia files whose geographic positions are within the target geographic position range and which include the target object, and perform a target operation on the M multimedia files. This simplifies the user's operations and improves the efficiency and flexibility with which the electronic device processes images.
Optionally, the processor 110 is further configured to obtain a target geographic position when the target image is stored, and a plurality of geographic position ranges corresponding to the target geographic position.
The processor 110 is specifically configured to determine a target geographic location range based on the target input parameter of the first input, and determine M multimedia files from all multimedia files saved by the electronic device according to the target geographic location range and the target object.
In this embodiment of the application, the electronic device may determine a plurality of geographic position ranges corresponding to the target geographic position associated with the target image. The target geographic position range can then be determined from the plurality of geographic position ranges according to the target input parameter input by the user, and the M multimedia files including the target object can be determined from all the multimedia files saved by the electronic device according to the target geographic position range and the target object.
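A minimal sketch of this range selection, assuming three nested candidate ranges around the position saved with the target image and a few hypothetical track names:

    data class GeoPoint(val lat: Double, val lon: Double)
    data class GeoRange(val center: GeoPoint, val radiusMeters: Double)

    // Candidate ranges around the target geographic position, e.g. roughly street / district / city scale.
    fun candidateRanges(target: GeoPoint): List<GeoRange> =
        listOf(500.0, 5_000.0, 50_000.0).map { radius -> GeoRange(target, radius) }

    // Different input tracks select different ranges; the track names are placeholders.
    fun selectTargetRange(track: String, target: GeoPoint): GeoRange? {
        val candidates = candidateRanges(target)
        return when (track) {
            "circle" -> candidates[0]
            "check" -> candidates[1]
            "zigzag" -> candidates[2]
            else -> null
        }
    }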
The processor 110 is specifically configured to determine the M multimedia files according to the input track of the first input when the fingerprint feature information matches the target fingerprint feature information and the input mode is the first input mode; or to determine the M multimedia files according to the input track of the first input when the fingerprint feature information matches the target fingerprint feature information and the input mode is the second input mode.
In this embodiment of the application, the electronic device determines the M multimedia files according to the input track when the fingerprint feature information input by the user matches the target fingerprint feature information and the input mode is a corresponding input mode, and the input mode determines whether the M multimedia files may include the first object. The multimedia files to be processed can therefore be determined accurately from the user's input, which improves the accuracy with which the electronic device processes multimedia files.
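The two branches could look like the sketch below; the mode names "press" and "slide", the object labels, and the pre-computed inTargetRange flag are all assumptions made for illustration:

    data class MediaFile(val path: String, val objects: Set<String>, val inTargetRange: Boolean)

    fun selectFiles(
        files: List<MediaFile>,
        fingerprintMatches: Boolean,
        inputMode: String, // "press" = first input mode, "slide" = second input mode (hypothetical labels)
        targetObject: String,
        firstObject: String
    ): List<MediaFile> {
        if (!fingerprintMatches) return emptyList()
        // Base set: inside the target geographic position range and containing the target object.
        val base = files.filter { it.inTargetRange && targetObject in it.objects }
        return when (inputMode) {
            "press" -> base // first input mode: keep every matching file
            "slide" -> base.filter { firstObject !in it.objects } // second mode: drop files showing the first object
            else -> emptyList()
        }
    }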
The processor 110 is further configured to classify the M multimedia files into N classes.
The display unit 106 is configured to display the M multimedia files by category.
The user input unit 107 is further configured to receive a second input of the target multimedia file from the user.
The display unit 106 is further configured to mark and display the target multimedia file in response to the second input.
The memory 109 is specifically configured to delete, from the electronic device, the M multimedia files that do not include the target multimedia file.
In this embodiment of the application, after the electronic device displays the M multimedia files by category, the user can visually check the specific content of each multimedia file. The user can then select a target multimedia file among the M multimedia files to trigger the electronic device to mark and display it, so that the target multimedia file is not deleted when the electronic device deletes the multimedia files. This improves the flexibility with which the electronic device deletes multimedia files.
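A hedged sketch of this classify-mark-delete flow; the category labels mirror the description above, while the function names and data shapes are illustrative only:

    data class MediaFile(val path: String, val objects: Set<String>)

    // Split the M files into the categories named in the description.
    fun classify(files: List<MediaFile>, targetObject: String, firstObject: String): Map<String, List<MediaFile>> =
        files.groupBy { file ->
            when {
                file.objects == setOf(targetObject) -> "only the target object"
                firstObject in file.objects -> "target object and first object"
                else -> "target object and other objects"
            }
        }

    // Files the user marked via the second input are kept; the remaining M files are deleted.
    fun filesToDelete(mFiles: List<MediaFile>, marked: Set<MediaFile>): List<MediaFile> =
        mFiles.filterNot { it in marked }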
The user input unit 107 is further configured to receive a third input from the user.
The processor 110 is further configured to determine, in response to the third input, at least one of: a correspondence between fingerprint feature information and an object, a correspondence between an input track and a geographic position range, and a correspondence between an input mode and a multimedia file.
In this embodiment of the application, the user presets the correspondence between fingerprint feature information and an object, the correspondence between an input track and a geographic position range, and the correspondence between an input mode and a class of multimedia files. When the user needs to trigger the electronic device to process certain multimedia files, the user can then directly trigger the electronic device to determine the corresponding multimedia files through a fingerprint input, which improves the efficiency with which the electronic device processes multimedia files.
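One possible shape for such a settings store is sketched below; the class, its fields, and the example bindings are assumptions for illustration only:

    // Hypothetical store for the three correspondences established by the third input.
    class InputSettings {
        val fingerprintToObject = mutableMapOf<String, String>() // fingerprint feature information -> object
        val trackToRangeMeters = mutableMapOf<String, Double>()  // input track -> geographic range radius
        val modeToFileClass = mutableMapOf<String, String>()     // input mode -> class of multimedia files

        fun bindFingerprint(fingerprint: String, obj: String) { fingerprintToObject[fingerprint] = obj }
        fun bindTrack(track: String, radiusMeters: Double) { trackToRangeMeters[track] = radiusMeters }
        fun bindMode(mode: String, fileClass: String) { modeToFileClass[mode] = fileClass }
    }

    // Example third input: register a thumb print for "Alice", a circular track for a 500 m range,
    // and a long-press mode for files that exclude the first object.
    fun exampleThirdInput(): InputSettings = InputSettings().apply {
        bindFingerprint("thumbprint-1", "Alice")
        bindTrack("circle", 500.0)
        bindMode("long-press", "exclude-first-object")
    }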
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
receiving a first input of a user in a case where a target image is displayed;
in response to the first input, determining M multimedia files from all multimedia files saved by an electronic device based on target input parameters of the first input, wherein M is a positive integer;
executing a target operation on the M multimedia files;
wherein the M multimedia files are multimedia files whose geographic positions are within a target geographic position range and which comprise a target object, the target object being an object associated with the first input; and the target input parameters include at least one of: fingerprint characteristic information, an input mode, and an input track.
2. The method of claim 1, wherein before determining M multimedia files from all multimedia files saved by the electronic device based on the target input parameter of the first input, the method further comprises:
acquiring a target geographic position when the target image is stored and a plurality of geographic position ranges corresponding to the target geographic position;
the determining, based on the target input parameter of the first input, M multimedia files from all multimedia files saved by the electronic device includes:
determining the target geographic position range based on the target input parameter of the first input, and determining the M multimedia files from all multimedia files saved by the electronic device according to the target geographic position range and the target object.
3. The method of claim 1, wherein the target input parameters comprise: fingerprint characteristic information, an input mode and an input track;
the determining, based on the target input parameter of the first input, M multimedia files from all multimedia files saved by the electronic device includes:
under the condition that the fingerprint feature information is matched with target fingerprint feature information and the input mode is a first input mode, determining the M multimedia files according to the input track of the first input; wherein the target fingerprint feature information is associated with the target object, the target geographic location range is associated with the input trajectory of the first input, and the target geographic location range is different when the input trajectory of the first input is different;
or,
under the condition that the fingerprint feature information is matched with target fingerprint feature information and the input mode is a second input mode, determining the M multimedia files according to the input track of the first input; wherein the target geographic location range is associated with the input track of the first input, and the images corresponding to the M multimedia files do not include the first object.
4. The method of claim 1, wherein prior to performing the target operation on the M multimedia files, the method further comprises:
dividing the M multimedia files into N classes, and displaying the M multimedia files according to the classes, wherein N is a positive integer;
wherein the categories of the M multimedia files include at least one of the following categories: a multimedia file including only the target object, a multimedia file including the target object and a first object, a multimedia file including the target object and other objects; the other objects are objects other than the target object and the first object.
5. The method of claim 4, wherein after said displaying said M multimedia files by category, said method further comprises:
receiving a second input of the target multimedia file from the user;
in response to the second input, marking display of the target multimedia file;
the performing target operations on the M multimedia files comprises:
deleting, from the electronic device, the M multimedia files that do not include the target multimedia file.
6. The method of claim 4, wherein prior to receiving the first input from the user while the target image is displayed, the method further comprises:
receiving a third input of the user;
in response to the third input, determining at least one of: the corresponding relation of fingerprint characteristic information and an object, the corresponding relation of an input track and a geographical position range, and the corresponding relation of an input mode and a multimedia file.
7. An image processing apparatus characterized by comprising: the device comprises a receiving module, a determining module and an executing module;
the receiving module is used for receiving a first input of a user under the condition that the target image is displayed;
the determining module is used for determining, in response to the first input received by the receiving module, M multimedia files from all multimedia files saved by an electronic device based on a target input parameter of the first input, wherein M is a positive integer;
the execution module is used for executing a target operation on the M multimedia files;
wherein the M multimedia files are multimedia files whose geographic positions are within a target geographic position range and which comprise a target object, the target object being an object associated with the first input; and the target input parameters include at least one of: fingerprint characteristic information, an input mode, and an input track.
8. The image processing apparatus according to claim 7, characterized by further comprising: an acquisition module;
the acquisition module is used for acquiring a target geographic position when the target image is stored and a plurality of geographic position ranges corresponding to the target geographic position, before the determining module determines M multimedia files from all multimedia files saved by the electronic device based on the target input parameter of the first input;
the determining module is specifically configured to determine the target geographic position range based on the target input parameter of the first input, and determine the M multimedia files from all multimedia files saved by the electronic device according to the target geographic position range and the target object.
9. The image processing apparatus according to claim 7, wherein the target input parameter includes: fingerprint characteristic information, an input mode and an input track;
the determining module is specifically configured to determine the M multimedia files according to the input trajectory of the first input when the fingerprint feature information matches target fingerprint feature information and the input mode is a first input mode; wherein the target fingerprint feature information is associated with the target object, the target geographic location range is associated with the input trajectory of the first input, and the target geographic location range is different when the input trajectory of the first input is different;
or,
the determining module is specifically configured to determine the M multimedia files according to the input trajectory of the first input when the fingerprint feature information matches target fingerprint feature information and the input mode is a second input mode; wherein the target geographic location range is associated with the input trajectory of the first input, and the images corresponding to the M multimedia files do not include the first object.
10. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 6.
CN202110450100.4A 2021-04-25 2021-04-25 Image processing method and device and electronic equipment Active CN113271379B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110450100.4A CN113271379B (en) 2021-04-25 2021-04-25 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113271379A true CN113271379A (en) 2021-08-17
CN113271379B CN113271379B (en) 2023-07-14

Family

ID=77229294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110450100.4A Active CN113271379B (en) 2021-04-25 2021-04-25 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113271379B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113271377A (en) * 2021-04-25 2021-08-17 维沃移动通信有限公司 Image processing method, image processing apparatus, electronic device, and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110704377A (en) * 2019-09-24 2020-01-17 珠海格力电器股份有限公司 Multimedia file processing method and device, processor and electronic device
CN112486385A (en) * 2020-11-30 2021-03-12 维沃移动通信有限公司 File sharing method and device, electronic equipment and readable storage medium
CN112698775A (en) * 2020-12-30 2021-04-23 维沃移动通信(杭州)有限公司 Image display method and device and electronic equipment

Also Published As

Publication number Publication date
CN113271379B (en) 2023-07-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant