CN113271377A - Image processing method, image processing apparatus, electronic device, and medium - Google Patents

Image processing method, image processing apparatus, electronic device, and medium

Info

Publication number
CN113271377A
Authority
CN
China
Prior art keywords
target
multimedia files
input
multimedia
geographic position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110448677.1A
Other languages
Chinese (zh)
Other versions
CN113271377B (en)
Inventor
孙鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110448677.1A priority Critical patent/CN113271377B/en
Publication of CN113271377A publication Critical patent/CN113271377A/en
Application granted granted Critical
Publication of CN113271377B publication Critical patent/CN113271377B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a medium, and belongs to the technical field of image processing. The method includes: in a case where a target image including at least one first object is displayed, receiving a first input from a user on a target object among the at least one first object; in response to the first input, obtaining M multimedia files from all multimedia files stored in the electronic device, where the M multimedia files are multimedia files that include the target object, and M is a positive integer; receiving a second input from the user; and in response to the second input, determining a target geographic position range corresponding to the second input and performing a target operation on N multimedia files, where N is a positive integer. The N multimedia files are the files, among the M multimedia files, whose geographic positions are within the target geographic position range.

Description

Image processing method, image processing apparatus, electronic device, and medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a medium.
Background
As an electronic device is used over time, more and more files accumulate in it, and its available internal storage space shrinks, affecting its normal use. To keep the device usable for longer (without lagging) or to improve its fluency, the junk files and unimportant data files generated on the device need to be processed (for example, deleted or reduced in size).
In the related art, a user can process images to be cleaned up through an application with a file-processing function installed on the electronic device, or can manually select the images to be cleaned up in an album application and trigger the electronic device to process them through multiple inputs.
However, both approaches require the user to perform multiple input steps to trigger the electronic device to process the corresponding images. The interaction is neither intelligent nor convenient enough, and offers the user no efficient way to quickly trigger image processing and free up storage space. The user's operations are therefore cumbersome and time-consuming, and the electronic device processes images inefficiently.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a medium, which can solve the problem of low efficiency of processing an image by an electronic device.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including: in a case where a target image including at least one first object is displayed, receiving a first input from a user on a target object among the at least one first object; in response to the first input, obtaining M multimedia files from all multimedia files stored in the electronic device, where the M multimedia files are multimedia files that include the target object, and M is a positive integer; receiving a second input from the user; and in response to the second input, determining a target geographic position range corresponding to the second input and performing a target operation on N multimedia files, where N is a positive integer, and the N multimedia files are the files, among the M multimedia files, whose geographic positions are within the target geographic position range.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including a receiving module, an obtaining module, a determining module, and an executing module. The receiving module is configured to receive a first input from a user on a target object among at least one first object while a target image including the at least one first object is displayed. The obtaining module is configured to, in response to the first input received by the receiving module, obtain M multimedia files from all multimedia files stored in the electronic device, where the M multimedia files are multimedia files that include the target object, and M is a positive integer. The receiving module is further configured to receive a second input from the user. The determining module is configured to determine, in response to the second input received by the receiving module, a target geographic position range corresponding to the second input. The executing module is configured to perform a target operation on N multimedia files, where N is a positive integer, and the N multimedia files are the files, among the M multimedia files, whose geographic positions are within the target geographic position range.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, when a target image displayed by an electronic device includes at least one first object, a user can perform a first input on a target object among the displayed at least one first object, triggering the electronic device to obtain, from all stored multimedia files, M multimedia files that include the target object. A target geographic position range corresponding to a second input from the user can then be determined, and a target operation performed on the N multimedia files, among the M, whose geographic positions are within that range. Because the user can act directly on an object in the displayed target image, and then make a selection input on a geographic region, the electronic device determines and processes the N matching files without the user having to select the files to be processed one by one through a file management interface and input on them to trigger the corresponding operations. This simplifies the user's operations and improves the file-processing efficiency of the electronic device.
Drawings
Fig. 1 is a schematic diagram of an image processing method according to an embodiment of the present application;
fig. 2 is one of schematic diagrams of an example of an interface of a mobile phone according to an embodiment of the present disclosure;
fig. 3 is a second schematic diagram of an example of an interface of a mobile phone according to an embodiment of the present disclosure;
fig. 4 is a second schematic diagram of an image processing method according to an embodiment of the present application;
fig. 5 is a third schematic diagram of an image processing method according to an embodiment of the present application;
fig. 6 is a third schematic diagram of an example of an interface of a mobile phone according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a second schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a third schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 11 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that a person of ordinary skill in the art can derive from the embodiments given herein fall within the scope of the present disclosure.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish between similar elements, not necessarily to describe a particular sequence or chronological order. It should be understood that data so labeled are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Moreover, "first", "second", and the like do not limit quantity; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The image processing method provided by the embodiments of the present application applies to scenarios in which multimedia files (such as pictures, animated images, and videos) stored on an electronic device are processed. The specific application scenario may be determined according to actual usage requirements and is not specifically limited in the present application.
Take deleting pictures stored on the electronic device as an example. Suppose the user (the holder of the electronic device) wants to trigger the device to delete certain pictures of user 1 (who is not the holder). When the device displays a picture a of user 1 (taken in city b), the user can perform a first input on the face image of user 1 shown in picture a. Based on this first input, the device obtains the facial feature information of the corresponding face image and, using that information, obtains from all stored multimedia files (picture files, video files, and so on) the multimedia files that include the face image of user 1. The device also divides the surrounding geography into several geographic position ranges based on city b, where picture a was taken (for example, the range in which city b is located, and the ranges to its north, south, east, and west). Through a second input, the user can then select one of these ranges (for example, the range north of city b), triggering the device to delete those of the obtained multimedia files whose geographic positions are within that range (that is, the files that were saved while the device was located in the range north of city b).
Therefore, in the embodiments of the present application, the user does not need to select the multimedia files to be processed one by one through a file management interface and input on them to trigger the electronic device to perform the corresponding operations. The user's operations are simplified, and the file-processing efficiency of the electronic device is improved.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
An embodiment of the present application provides an image processing method, and fig. 1 shows a flowchart of the image processing method provided in the embodiment of the present application, which may be applied to an electronic device. As shown in fig. 1, the image processing method provided in the embodiment of the present application may include steps 201 to 204 described below.
Step 201, in the case of displaying a target image including at least one first object, receiving a first input of a user to a target object of the at least one first object.
In the embodiment of the application, when the electronic device displays a target image including at least one first object, a user may perform a first input on a target object among the at least one first object, so that the electronic device obtains M multimedia files including the target object from all multimedia files stored on it. The user can then perform a second input to trigger the electronic device to determine a target geographic position range corresponding to the second input and perform a target operation on the N multimedia files, among the M, whose geographic positions are within that range.
Optionally, in this embodiment of the present application, the first object may be any one of a face image, an animal image, a plant image, an object image, and the like, which is not limited in the present application. In the following embodiments, the case where the first object is a face image and the target object is a target face image is taken as an example for illustration.
Optionally, in this embodiment of the application, before the electronic device receives the first input from the user on a target face image (i.e., the target object) among the at least one face image (i.e., the at least one first object) — that is, while the electronic device displays the target image — the user may trigger the electronic device into a "multimedia file management mode" through an input (e.g., an input on a hover control). At this point, a face recognition function (an object recognition function) is turned on, and all face images are marked and displayed in the target image.
In this embodiment of the application, the target image may be any one of: a picture, a photograph, a frame of a moving picture, a frame of a video, etc. That is, the multimedia file corresponding to the target image may be any one of the following: picture files, photo files, motion picture files, video files, and the like.
Optionally, in this embodiment of the application, a face image may be understood as the region of the target image in which a person's face is displayed. The first input may be understood as the user's input on the region corresponding to the target face image, and may be any one of a click input, a long-press input, a tap input, and the like.
Step 202, the electronic device responds to the first input, and obtains M multimedia files from all multimedia files saved by the electronic device.
In this embodiment of the application, the M multimedia files are multimedia files including a target object, and the electronic device may obtain the M multimedia files from all multimedia files stored in the electronic device based on target feature information of the target object, where M is a positive integer.
Optionally, in this embodiment of the application, the electronic device may acquire an image of a target object (a target face image), and perform recognition by using an object recognition technology (a face recognition technology) to obtain target feature information of the target object.
For example, in the embodiment of the present application, the target facial feature information may be obtained through face recognition technology, a biometric technology that identifies a person based on the person's facial feature information. Face recognition typically uses a camera to capture images or video streams containing faces, automatically detects and tracks the faces in the images, and then recognizes the detected faces.
Optionally, in this embodiment of the application, the M multimedia files are multimedia files including a target object (i.e., target feature information).
For example, in this embodiment of the application, the electronic device may analyze all multimedia files stored on it against the target facial feature information obtained through face recognition, determine which multimedia files include the target face image, and obtain those files (i.e., the M multimedia files).
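As a minimal sketch of this matching step: compare a feature vector for the target face against a per-file face feature vector and keep the files that score above a similarity threshold. The embedding representation, the cosine similarity measure, and the 0.6 threshold are all assumptions for illustration; the patent does not prescribe any particular face recognition algorithm.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_files(target_embedding, file_embeddings, threshold=0.6):
    """Return the ids of files whose face embedding is close enough to the
    target's to count as containing the target face (the M files)."""
    return [fid for fid, emb in file_embeddings.items()
            if cosine(target_embedding, emb) >= threshold]

# Toy example: file "a.jpg" has an embedding close to the target face,
# file "b.mp4" does not.
target = [1.0, 0.0, 0.0]
embeddings = {"a.jpg": [0.9, 0.1, 0.0], "b.mp4": [0.0, 1.0, 0.0]}
print(match_files(target, embeddings))  # ['a.jpg']
```

In practice the per-file embeddings would come from running a face detector and feature extractor over each stored picture or video frame; precomputing them once per file keeps this lookup fast.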
The electronic device is taken as a mobile phone for illustration. As shown in (A) of fig. 2, the mobile phone displays a target image 10 that includes a first face image 11 and a second face image 12, and the user may input on a file management control 13 displayed floating on the screen to trigger the mobile phone into the multimedia file management mode. As shown in (B) of fig. 2, the mobile phone starts the face recognition function according to the user's input on the file management control 13, and marks and displays the first face image 11 and the second face image 12 in the target image 10 (shown by dashed boxes in the figure).
Step 203, the electronic device receives a second input of the user.
Optionally, in this embodiment of the application, before the electronic device receives the second input, it may obtain the target geographic position of the target image (i.e., the position of the electronic device when the target image was saved) and, based on that position, display a plurality of geographic position ranges over the area where the target face image is located.
Optionally, in this embodiment of the application, the user may perform a selection input (i.e., a second input) on a plurality of geographic position ranges displayed in the region where the target face image is located.
It can be understood that, after the face recognition function is turned on, the user can tap a person in the target image (i.e., the target face image) to trigger the electronic device to process the multimedia files containing that person. The electronic device may then display the geographic position of the target image and segment the picture area where the person's face image is located into nine regions; tapping one of the regions triggers processing of those multimedia files, among the files containing the person, whose geographic positions match the selected region.
And step 204, the electronic equipment responds to the second input, determines a target geographic position range corresponding to the second input, and executes target operation on the N multimedia files.
In the embodiment of the present application, N is a positive integer. Wherein, the N multimedia files are: and in the M multimedia files, the geographic position of the multimedia file is within the target geographic position range.
Optionally, in this embodiment of the present application, the target operation may be any one of the following: a delete operation, a file transfer operation, a file edit operation, and the like are described, and in the embodiment of the present application, a target operation is taken as an example of the delete operation.
Optionally, in this embodiment of the application, in a case where only the target object is included in the target image, each of the N multimedia files is a multimedia file that only includes the target object.
Optionally, in this embodiment of the application, when the target image includes at least two first objects and the target object is one first object, each of the N multimedia files is a multimedia file that includes at least the target object.
Optionally, in this embodiment of the application, when at least two first objects are included in the target image and the target object is a plurality of first objects, each of the N multimedia files is a multimedia file including the target object.
Optionally, in this embodiment of the application, the electronic device determines the target geographic position range according to a selection input of the user to the multiple geographic position ranges, and retrieves, from the M multimedia files stored in the electronic device, N multimedia files whose geographic positions are within the target geographic position range.
It should be noted that, when the electronic device saves each multimedia file, the geographical location of the electronic device when the electronic device saves the multimedia file may be correspondingly recorded, so that the electronic device may directly obtain the geographical location corresponding to each multimedia file.
Optionally, in this embodiment of the application, while the user is inputting on the area corresponding to the target geographic position range among the plurality of geographic position ranges, the electronic device may highlight the tapped area on the screen, so that the user knows more clearly which range of multimedia files will be processed and can control them more accurately.
For example, in conjunction with (B) of fig. 2, as shown in (A) of fig. 3, the mobile phone may divide the display area where the first face image 11 is located into nine areas indicating nine geographic position ranges. The central area indicates the range containing the geographic position at which the multimedia file corresponding to the target image 10 was saved (i.e., the target geographic position). Following the convention of north at the top, south at the bottom, west on the left, and east on the right: the area directly above the central area is the region north of the target geographic position, the area directly below it is the region to the south, the area to its left is the region to the west, the area to its right is the region to the east, and the upper-left, upper-right, lower-left, and lower-right corner areas are the regions to the northwest, northeast, southwest, and southeast of the target geographic position, respectively.
As another example, as shown in (B) of fig. 3, when the user performs a selection input on the region at the lower right of the central region among the nine regions (i.e., the region southeast of the target geographic position), the mobile phone may display a prompt message indicating that the selected region is the southeast region.
Specifically, when the user's selection input is on the northeast area, the electronic device may delete those of the M multimedia files whose geographic positions are in the northeast region; when the selection input is on the south area, it may delete those whose geographic positions are in the south region; and when the selection input is on the central area, it may delete those whose geographic positions are in the region of the target geographic position itself.
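The nine-region compass mapping described above can be sketched as follows. The half-degree cell size `delta` and the plain (lat, lon) representation are illustrative assumptions, since the embodiment does not specify how large each geographic position range is.

```python
# Nine compass regions laid out north-at-top, west-at-left, matching the
# 3x3 grid drawn over the face image area in the embodiment.
DIRECTIONS = [["northwest", "north",  "northeast"],
              ["west",      "center", "east"],
              ["southwest", "south",  "southeast"]]

def region_of(target, point, delta=0.5):
    """Classify `point` into one of the nine regions around `target`.

    Both arguments are (lat, lon) pairs; a point within delta/2 degrees
    of the target on an axis counts as centered on that axis.
    """
    (tlat, tlon), (lat, lon) = target, point
    row = 0 if lat - tlat > delta / 2 else (2 if tlat - lat > delta / 2 else 1)
    col = 2 if lon - tlon > delta / 2 else (0 if tlon - lon > delta / 2 else 1)
    return DIRECTIONS[row][col]

city_b = (30.0, 120.0)  # hypothetical position where the target image was saved
print(region_of(city_b, (30.0, 120.1)))  # center
print(region_of(city_b, (29.0, 121.0)))  # southeast
```

A tap on one of the nine on-screen areas would then select the files whose recorded save position maps to the same region name.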
The embodiments of the present application provide an image processing method. When a target image displayed by an electronic device includes at least one first object, a user can perform a first input on a target object among the displayed at least one first object, triggering the electronic device to obtain, from all stored multimedia files, M multimedia files that include the target object. A target geographic position range corresponding to a second input from the user can then be determined, and a target operation performed on the N multimedia files, among the M, whose geographic positions are within that range. Because the user can act directly on an object in the displayed target image, and then make a selection input on a geographic region, the electronic device determines and processes the N matching files without the user having to select the files to be processed one by one through a file management interface and input on them to trigger the corresponding operations. This simplifies the user's operations and improves the file-processing efficiency of the electronic device.
Optionally, in this embodiment of the application, step 203 may be implemented by step 203a described below, and the part of step 204 in which "the electronic device determines, in response to the second input, the target geographic position range corresponding to the second input" may be implemented by steps 204a and 204b described below.
Step 203a, the electronic device receives user input of a target area of the target image.
In this embodiment of the present application, the target image includes Q regions, where a target region is a region of the Q regions, and Q is a positive integer.
And step 204a, the electronic equipment responds to the second input, and determines a target geographic position range corresponding to the target area according to the target area.
Optionally, in this embodiment of the application, each of the Q regions included in the target image corresponds to one geographic position range, so that the electronic device may determine the target geographic position range according to an input of the user to the target region.
And step 204b, the electronic equipment determines N multimedia files with geographic positions within the target geographic position range from the M multimedia files.
In this embodiment of the application, each of the Q regions corresponds to a geographic position range, and the target geographic position range is a geographic position range determined by a target geographic position of the target image.
Optionally, in this embodiment of the application, the electronic device may determine a plurality of geographic position ranges according to the target geographic position of the target image, and associate each of the plurality of geographic position ranges with one of the Q regions.
In the embodiment of the application, the electronic device can determine the target geographic position range corresponding to the target area according to the user's input on the target area of the target image, and can thus determine, from the M multimedia files, the N multimedia files whose geographic positions are within that range and process them, improving the efficiency with which the electronic device determines and processes files.
Optionally, in this embodiment, with reference to fig. 1, as shown in fig. 4, before step 203, the image processing method provided in this embodiment may further include step 301 to step 303 described below, and step 203 described above may be specifically implemented by step 203b described below.
Step 301, the electronic device determines Q geographical position ranges according to the target geographical position of the target image.
In this embodiment of the present application, the target geographic position is a geographic position when the target image is stored, and Q is a positive integer.
Optionally, in this embodiment of the application, the electronic device may divide the geographic area around the target geographic position into Q geographic position ranges according to the target geographic position of the target image, and establish a correspondence between the Q geographic position ranges and the plurality of areas (i.e., the nine regions) obtained by segmenting the picture area where the target face image is located.
Step 302, the electronic device groups the M multimedia files according to the Q geographical location ranges to obtain Q groups of multimedia file sets.
In this embodiment of the present application, one geographic location range of the Q geographic location ranges corresponds to one group of multimedia file sets of the Q groups of multimedia file sets, and the group of multimedia file sets includes at least one multimedia file.
Optionally, in this embodiment of the application, the electronic device groups the M multimedia files into the Q groups of multimedia file sets based on the Q geographic position ranges, according to the geographic position corresponding to each of the M multimedia files.
Step 303, the electronic device associates Q sets of multimedia files with Q geographical location ranges, respectively.
In this embodiment of the present application, one multimedia file set in the Q multimedia file sets corresponds to one geographic location range of the Q geographic location ranges, and one geographic location range corresponds to one multimedia file set.
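Steps 301 to 303 above can be sketched as follows; a minimal illustration assuming each geographic position range is a latitude/longitude rectangle and every file record carries the coordinates at which it was saved (all names, rectangles, and coordinates are hypothetical):

```python
from collections import defaultdict

# Q geographic position ranges, each a (lat_interval, lon_interval) pair
q_ranges = [
    ((30.0, 32.0), (120.0, 122.0)),   # range 0
    ((39.0, 41.0), (115.0, 117.0)),   # range 1
]

# the M multimedia files containing the target object, as
# (path, latitude, longitude) records
m_files = [
    ("a.jpg", 31.2, 121.5),
    ("b.jpg", 39.9, 116.4),
    ("c.mp4", 31.1, 121.4),
]

def group_by_range(files, ranges):
    """Steps 301-303: group the M files into Q multimedia file sets,
    one set associated with each geographic position range."""
    sets = defaultdict(list)
    for path, lat, lon in files:
        for i, ((lat0, lat1), (lon0, lon1)) in enumerate(ranges):
            if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
                sets[i].append(path)
                break   # a file belongs to exactly one set
    return dict(sets)

file_sets = group_by_range(m_files, q_ranges)
```

Each key of `file_sets` corresponds to one geographic position range, and its value is the associated group of multimedia files.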
Step 203b, the electronic device receives the user's input on a target area in the Q areas of the target image.
In the embodiment of the present application, one of the Q regions corresponds to one of the Q geographic position ranges.
Wherein, the N multimedia files are: the multimedia files in the target multimedia file set, and the geographic position range corresponding to the target multimedia file set is the geographic position range within which the N multimedia files were saved.
Optionally, in this embodiment of the application, when the user performs a selection input on the target area in the Q areas of the target image, the electronic device may determine the multimedia files corresponding to the selected target area (that is, the group of multimedia files to be processed) according to the correspondence between the Q groups of multimedia file sets and the Q geographic position ranges, and the association between each of the Q areas of the target image and one of the Q geographic position ranges.
In this embodiment of the application, the electronic device may divide the map into Q geographic position ranges according to a target geographic position when the target image is saved, perform grouping processing on M multimedia files including the target object according to the Q geographic position ranges to obtain Q groups of multimedia file sets corresponding to the Q geographic position ranges, associate the Q groups of multimedia file sets with Q regions of the target image, and indicate the Q groups of multimedia file sets through the Q regions of the target image. Therefore, a user can input a target area in the Q areas of the target image to trigger the electronic equipment to determine N multimedia files in a set of multimedia files corresponding to the target area and process the N multimedia files, so that the electronic equipment can flexibly determine the multimedia files to be processed according to the displayed image, and the flexibility of processing the files by the electronic equipment is improved.
Optionally, in this embodiment of the application, with reference to fig. 4, as shown in fig. 5, the step 301 may be specifically implemented by a step 301a described below.
Step 301a, the electronic device divides a target area range corresponding to the target geographic position into Q geographic position ranges.
In the embodiment of the application, the target area range is a preset geographic area range, and it differs when the input parameters of the second input differ; the number Q of geographic position ranges is the same regardless of the input parameters, but the number of multimedia files corresponding to each geographic position range differs.
Optionally, in this embodiment of the application, the target area range is a geographical area range centered on the target geographical location, and the electronic device may determine the size of the target area range according to a second input parameter of the user.
Specifically, when the target geographic location is the geographic location of the city a, the target area range may be an area range corresponding to a province to which the city a belongs; or, the target area range may be an area range corresponding to a country to which the city a belongs; alternatively, the target area range may be an area range of the entire world.
It should be noted that, when processing a multimedia file (for example, deleting it), the storage significance of the file may differ depending on whether its corresponding geographic location is a domestic or a foreign area.
Illustratively, when the user needs to trigger the electronic device to acquire multimedia files within different target area ranges, the user may trigger the electronic device, through different input parameters, to determine the corresponding target area range for the target geographic position. For example, suppose the target geographic position of the target image is city a (a city belonging to country A). If the parameter of the user's input on the northwest area of the target face image is a first parameter, the electronic device is triggered to acquire the multimedia files that are within the range of country A and saved to the northwest of city a; in this case, even if city b lies to the northwest of city a, the multimedia files saved in city b are not deleted, because city b does not belong to country A. If the parameter of the user's input on the northwest area of the target face image is a second parameter, the electronic device is triggered to acquire all multimedia files saved to the northwest of city a, and may then delete all of them (including the multimedia files saved in city b).
In the embodiment of the application, the electronic equipment can determine different target area ranges corresponding to the target geographic position according to different input parameters input by a user, and the target area ranges are divided into a plurality of geographic position ranges, so that when the input parameters input by the user are different, the number of multimedia files corresponding to each geographic position range is different, and therefore the files to be processed in the electronic equipment can be flexibly determined.
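The parameter-dependent selection described above can be sketched as follows; a minimal illustration assuming each file record already carries a precomputed direction relative to city a and a country code (all field names and data are hypothetical):

```python
def select_files(files, direction, country_filter=None):
    """The second-input parameter controls the target area range:
    with a country filter (first parameter), only files saved inside
    that country are candidates; without it (second parameter),
    every file in the given direction is a candidate."""
    candidates = [f for f in files if f["direction"] == direction]
    if country_filter is not None:
        candidates = [f for f in candidates
                      if f["country"] == country_filter]
    return candidates

files = [
    {"path": "x.jpg", "direction": "northwest", "country": "A"},
    {"path": "y.jpg", "direction": "northwest", "country": "B"},  # city b
    {"path": "z.jpg", "direction": "southeast", "country": "A"},
]

# first parameter: restricted to country A, so city b's file is spared
kept = select_files(files, "northwest", country_filter="A")
# second parameter: no restriction, so both northwest files are selected
all_nw = select_files(files, "northwest")
```

With the first parameter only `x.jpg` is selected; with the second parameter both northwest files are selected, matching the two cases in the example above.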
Optionally, in this embodiment of the present application, before the "performing the target operation on the N multimedia files" in the step 204, the image processing method provided in this embodiment of the present application may further include a step 401 and a step 402 described below, and the "performing the target operation on the N multimedia files" in the step 204 may be specifically implemented by a step 403 described below.
Step 401, the electronic device displays at least one first geographic location identifier corresponding to the target geographic location range and the number of multimedia files corresponding to each first geographic location identifier.
Optionally, in this embodiment of the application, when the electronic device determines a target geographic location range corresponding to an input of a user, the electronic device may classify N multimedia files included in the target geographic location range according to a specific geographic location identifier (e.g., a city area), and correspondingly display the number of the multimedia files corresponding to each geographic location identifier.
For example, referring to fig. 3 (a), as shown in fig. 6, when the target area range is a smaller area range (e.g., a national area), after the user's input on the southeast area of the first face image 11, the mobile phone may display the multimedia files containing the first face image 11 whose geographic locations are in the southeast area, together with the number of multimedia files corresponding to each geographic location. For example, if the target geographic position of the target image is city a, the mobile phone may display the multimedia files located in the southeast area of city a as: city b: 10 files; city c: 20 files.
It can be understood that, assuming the user's input is on the southwest area of the first face image 11, the mobile phone may display the multimedia files containing the first face image 11 whose geographic locations are in the southwest area of city a, together with the number of multimedia files corresponding to each geographic location, that is, display: city d: 10 files; city e: 15 files. In this way, the user can learn the distribution of the multimedia files. After the numbers are displayed, the user can selectively delete multimedia files: a selection input on city b triggers the mobile phone to delete the 10 multimedia files of city b, and a selection input on city c triggers the mobile phone to delete the 20 multimedia files of city c.
It should be noted that the above-described city may be a provincial city, a prefecture-level city, a county-level city, or a city of another level, and may be determined according to usage requirements in an actual usage scenario, which is not limited in the present application. When judging the position direction, a longitude and latitude deviation within 5% may still be judged as being to the north, south, east, or west of a certain city.
Step 402, the electronic device receives a third input of the target geographic location identifier in the at least one first geographic location identifier from the user.
And step 403, the electronic equipment responds to the third input and executes target operation on the N multimedia files.
In this embodiment, the N multimedia files are multimedia files corresponding to the target geographic location identifier.
Optionally, in this embodiment of the application, when the electronic device displays at least one first geographic location identifier and the number of multimedia files corresponding to each first geographic location identifier, a user may trigger the electronic device to delete only the multimedia files corresponding to one or more first geographic location identifiers through a selection input.
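Steps 401 to 403 above can be sketched as follows, assuming each of the N files is tagged with a city-level geographic location identifier (all names and data are hypothetical):

```python
from collections import Counter

# the N multimedia files, each tagged with a first geographic
# location identifier (here, a city name); data is illustrative
n_files = [
    ("1.jpg", "city b"), ("2.jpg", "city b"),
    ("3.mp4", "city c"),
]

def files_per_identifier(files):
    """Step 401: count the multimedia files per geographic
    location identifier, for display to the user."""
    return Counter(city for _path, city in files)

def delete_by_identifier(files, target_city):
    """Step 403: after the third input selects one identifier,
    remove the files corresponding to that identifier."""
    return [f for f in files if f[1] != target_city]

counts = files_per_identifier(n_files)
remaining = delete_by_identifier(n_files, "city b")
```

Here the display would read "city b: 2 files; city c: 1 file", and selecting "city b" removes exactly its two files.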
Optionally, in this embodiment of the application, after the electronic device processes the N multimedia files, the user may input the floating control again to trigger the electronic device to exit the "multimedia file management mode" and close the face recognition function.
In the embodiment of the present application, a face recognition technology is applied to the multimedia file processing function: clicking a certain face image in an image represents that the multimedia files related to that person in the electronic device are to be processed, and the clicked area is associated with the geographic location of the images to be deleted. For example, the whole picture of the selected image is divided into nine areas according to the rule of "up is north, down is south, left is west, right is east"; clicking a certain area of the selected image deletes, among the multimedia files related to the person, those whose geographic positions match the selected area. For example, after clicking the northeast area of the image, the electronic device may display which cities are in the northeast area and the number of multimedia files corresponding to each city, and the user may then selectively delete the multimedia files that are related to the person and correspond to the geographic location of the selected area. This improves the intelligence, convenience, and controllability of multimedia file processing, and better helps the user free the storage space of the electronic device.
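The nine-area division under the "up is north, down is south, left is west, right is east" rule can be sketched as follows; a minimal illustration in which the even 3x3 grid and the pixel coordinates are assumptions:

```python
def tapped_region(x, y, width, height):
    """Map a tap at pixel (x, y) on the selected image to one of the
    nine areas divided by the 'up is north, down is south, left is
    west, right is east' rule; returns e.g. 'northeast' or 'center'."""
    col = min(int(3 * x / width), 2)    # 0 = west, 1 = middle, 2 = east
    row = min(int(3 * y / height), 2)   # 0 = north, 1 = middle, 2 = south
    ns = ["north", "", "south"][row]
    ew = ["west", "", "east"][col]
    return (ns + ew) or "center"

# a tap near the top-right corner of a 300x300 image
region = tapped_region(280, 20, 300, 300)   # "northeast"
```

The returned direction name can then be matched against the geographic position ranges described above.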
In the embodiment of the application, the electronic device may trigger the electronic device to process the multimedia files corresponding to part of the geographic location identifiers in the at least one first geographic location identifier by displaying the at least one first geographic location identifier corresponding to the target geographic location range and the number of the multimedia files corresponding to each first geographic location identifier through selection input by a user. Therefore, the user can clearly know the distribution situation of the multimedia files and flexibly select the multimedia files to be processed.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described by taking the image processing apparatus executing the image processing method as an example.
Fig. 7 shows a schematic diagram of a possible structure of an image processing apparatus according to an embodiment of the present application. As shown in fig. 7, the image processing apparatus 70 may include: a receiving module 71, an obtaining module 72, a determining module 73 and an executing module 74.
Wherein, the receiving module 71 is configured to receive a first input of a user to a target object of the at least one first object in a case that a target image including the at least one first object is displayed. An obtaining module 72, configured to, in response to the first input received by the receiving module 71, obtain M multimedia files from all multimedia files saved by the electronic device, where M multimedia files are multimedia files including a target object, and M is a positive integer. The receiving module 71 is further configured to receive a second input from the user. A determining module 73, configured to determine, in response to the second input received by the receiving module 71, a target geographic location range corresponding to the second input. And the execution module 74 is configured to execute a target operation on N multimedia files, where N is a positive integer. Wherein, the N multimedia files are: and in the M multimedia files, the geographic position of the multimedia file is within the target geographic position range.
In a possible implementation manner, the receiving module 71 is specifically configured to receive an input of a target area of a target image by a user, where the target image includes Q areas, the target area is an area of the Q areas, and Q is a positive integer. A determining module 73, specifically configured to determine, in response to the second input received by the receiving module 71, a target geographic location range corresponding to the target area according to the target area; and determining N multimedia files with geographic positions within the target geographic position range from the M multimedia files. Each of the Q regions corresponds to a geographic position range, and the target geographic position range is determined by the target geographic position of the target image.
In a possible implementation manner, the determining module 73 is further configured to determine Q geographic position ranges according to the target geographic position of the target image before the receiving module 71 receives the second input of the user, where the target geographic position is the geographic position when the target image is stored. With reference to fig. 7, as shown in fig. 8, the image processing apparatus 70 according to the embodiment of the present application may further include: a processing module 75. The processing module 75 is configured to perform grouping processing on the M multimedia files according to the Q geographic position ranges to obtain Q groups of multimedia file sets, where a group of multimedia file sets includes at least one multimedia file. The processing module 75 is further configured to associate Q sets of multimedia files with Q geographic location ranges, respectively, where one geographic location range corresponds to one set of multimedia file sets. The receiving module 71 is specifically configured to receive an input of a user to a target area in Q areas of the target image, where one area in the Q areas corresponds to one geographic position range in the Q geographic position ranges. Wherein the N multimedia files are: and the geographic position range corresponding to the target multimedia file set is the geographic position range when the N multimedia files are stored.
In a possible implementation manner, the determining module 73 is specifically configured to divide a target area range corresponding to the target geographic location into Q geographic location ranges. The target area range is a preset geographic area range; when the input parameters of the second input are different, the target area range is different.
In a possible implementation manner, with reference to fig. 8 and as shown in fig. 9, the image processing apparatus 70 provided in the embodiment of the present application may further include: a display module 76. The display module 76 is configured to display at least one first geographic location identifier corresponding to the target geographic location range and the number of multimedia files corresponding to each first geographic location identifier before the execution module 74 performs the target operation on the N multimedia files. The receiving module 71 is further configured to receive a third input of the target geographic location identifier in the at least one first geographic location identifier from the user. The executing module 74 is specifically configured to, in response to the third input received by the receiving module 71, execute a target operation on N multimedia files, where the N multimedia files identify corresponding multimedia files for the target geographic location.
In one possible implementation, in a case where only the target object is included in the target image, each of the N multimedia files is a multimedia file including only the target object; under the condition that the target image comprises at least two first objects and the target object is one first object, each multimedia file in the N multimedia files is a multimedia file at least comprising the target object; in a case where at least two first objects are included in the target image and the target object is a plurality of first objects, each of the N multimedia files is a multimedia file including the target object.
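The three containment cases above can be sketched as a filter over sets of recognized objects; a minimal illustration in which all object labels are hypothetical:

```python
def file_matches(file_objects, image_objects, target_objects):
    """Decide whether a multimedia file belongs to the N files:
    - the target image holds only the target object: the file must
      contain only that object;
    - otherwise (one or more targets chosen from at least two first
      objects): the file must contain at least the target objects."""
    if len(image_objects) == 1 and image_objects == target_objects:
        return file_objects == target_objects
    return target_objects <= file_objects   # subset test

# image shows persons A and B; the user selected A
keep = file_matches({"A", "C"}, {"A", "B"}, {"A"})   # file contains A
drop = file_matches({"C"}, {"A", "B"}, {"A"})        # file lacks A
# image shows only A: files with extra objects are excluded
only = file_matches({"A", "C"}, {"A"}, {"A"})
```

In the first case the file is kept, in the other two it is excluded, matching the three conditions above.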
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the image processing apparatus in the above method embodiments, and for avoiding repetition, detailed descriptions are not repeated here.
The embodiment of the application provides an image processing apparatus. When the target image displayed by the electronic device includes at least one first object, the user can directly input the target object to trigger the electronic device to acquire, according to the target object, the M multimedia files including the target object from all multimedia files stored by the electronic device; the user can then perform a selection input to trigger the electronic device to determine, from the M multimedia files, the N multimedia files whose geographic positions are within the target geographic position range corresponding to the selection input, and to perform the target operation on the N multimedia files. The user does not need to select the multimedia files to be processed one by one through a file management interface and input them to trigger the electronic device to execute the corresponding operations, which simplifies the user's operations and improves the file processing efficiency of the electronic device.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
Optionally, as shown in fig. 10, an electronic device M00 is further provided in an embodiment of the present application, and includes a processor M01, a memory M02, and a program or an instruction stored in the memory M02 and executable on the processor M01, where the program or the instruction when executed by the processor M01 implements each process of the foregoing embodiment of the image processing method, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not repeated here.
Wherein, the user input unit 107 is configured to receive a first input of a target object of the at least one first object from a user in a case where a target image including the at least one first object is displayed.
The processor 110 is configured to, in response to the first input, obtain M multimedia files from all multimedia files saved by the electronic device, where the M multimedia files are multimedia files including a target object, and M is a positive integer.
The user input unit 107 is further configured to receive a second input from the user.
The processor 110 is further configured to determine, in response to a second input, a target geographic location range corresponding to the second input, and perform a target operation on N multimedia files, where N is a positive integer. Wherein, the N multimedia files are: and in the M multimedia files, the geographic position of the multimedia file is within the target geographic position range.
The embodiment of the application provides an electronic device. When the electronic device displays a target image including at least one first object, the user can directly input the target object in the target image to trigger the electronic device to acquire, according to the target object, the M multimedia files including the target object from all multimedia files stored by the electronic device; the user can then perform a selection input to trigger the electronic device to determine, from the M multimedia files, the N multimedia files whose geographic positions are within the target geographic position range corresponding to the selection input, and to perform the target operation on the N multimedia files. The user does not need to select the multimedia files to be processed one by one through a file management interface and input them to trigger the electronic device to execute the corresponding operations, which simplifies the user's operations and improves the file processing efficiency of the electronic device.
Optionally, the user input unit 107 is specifically configured to receive an input of a target area of a target image by a user, where the target image includes Q areas, the target area is an area of the Q areas, and Q is a positive integer.
A processor 110, specifically configured to determine, in response to the second input, a target geographic location range corresponding to the target area according to the target area; determining N multimedia files with geographic positions within the range of the target geographic position from the M multimedia files; each of the Q regions corresponds to a geographic position range, and the target geographic position range is determined by the target geographic position of the target image.
In the embodiment of the application, the electronic device can determine, according to the user's input on the target area of the target image, the target geographic position range corresponding to the target area, so that the N multimedia files whose geographic positions are within the target geographic position range can be determined from the M multimedia files and processed, which improves the efficiency with which the electronic device determines and processes files.
The processor 110 is further configured to determine Q geographic position ranges according to a target geographic position of the target image, where the target geographic position is a geographic position when the target image is stored. And according to the Q geographic position ranges, grouping the M multimedia files to obtain Q groups of multimedia file sets, wherein one group of multimedia file sets comprises at least one multimedia file. And associating the Q groups of multimedia file sets with Q geographical position ranges respectively, wherein one geographical position range corresponds to one group of multimedia file sets.
The user input unit 107 is specifically configured to receive a user input on a target area in Q areas of the target image, where one area in the Q areas corresponds to one geographic position range in the Q geographic position ranges. Wherein the N multimedia files are: and the geographic position range corresponding to the target multimedia file set is the geographic position range when the N multimedia files are stored.
In this embodiment of the application, the electronic device may divide the map into Q geographic position ranges according to a target geographic position when the target image is saved, perform grouping processing on M multimedia files including the target object according to the Q geographic position ranges to obtain Q groups of multimedia file sets corresponding to the Q geographic position ranges, associate the Q groups of multimedia file sets with Q regions of the target image, and indicate the Q groups of multimedia file sets through the Q regions of the target image. Therefore, a user can input a target area in the Q areas of the target image to trigger the electronic equipment to determine N multimedia files in a set of multimedia files corresponding to the target area and process the N multimedia files, so that the electronic equipment can flexibly determine the multimedia files to be processed according to the displayed image, and the flexibility of processing the files by the electronic equipment is improved.
The processor 110 is specifically configured to divide a target area range corresponding to a target geographic location into Q geographic location ranges. The target area range is a preset geographic area range; when the input parameters of the second input are different, the target area range is different.
In the embodiment of the application, the electronic equipment can determine different target area ranges corresponding to the target geographic position according to different input parameters input by a user, and the target area ranges are divided into a plurality of geographic position ranges, so that when the input parameters input by the user are different, the number of multimedia files corresponding to each geographic position range is different, and therefore the files to be processed in the electronic equipment can be flexibly determined.
The display unit 106 is configured to display at least one first geographic location identifier corresponding to the target geographic location range and the number of multimedia files corresponding to each first geographic location identifier.
The user input unit 107 is further configured to receive a third input of the target geographic location identifier from the at least one first geographic location identifier.
The processor 110 is specifically configured to, in response to the third input, perform a target operation on N multimedia files, where the N multimedia files identify corresponding multimedia files for the target geographic location.
In the embodiment of the application, the electronic device may trigger the electronic device to process the multimedia files corresponding to part of the geographic location identifiers in the at least one first geographic location identifier by displaying the at least one first geographic location identifier corresponding to the target geographic location range and the number of the multimedia files corresponding to each first geographic location identifier through selection input by a user. Therefore, the user can clearly know the distribution situation of the multimedia files and flexibly select the multimedia files to be processed.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the processes of the above image processing method embodiment and achieve the same technical effects; to avoid repetition, the details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip including a processor and a communication interface coupled to the processor, where the processor is configured to execute a program or instructions to implement the processes of the above image processing method embodiment and achieve the same technical effects; to avoid repetition, the details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. It should further be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; depending on the functions involved, the functions may be performed substantially simultaneously or in reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly also by hardware alone, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present application may be embodied in the form of a computer software product, stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), which includes instructions for causing a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present application.
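As a software-side illustration of the method running through these embodiments, the two-stage filter (first obtain the M files containing the target object, then keep the N files whose saved geographic position lies within the target geographic position range) could be sketched as follows. The record fields (`objects`, `lat`, `lon`) and the rectangular range model are assumptions made for illustration only, not the disclosed implementation.

```python
def filter_files(all_files, target_object, geo_range):
    """Two-stage filter: keep the M files that contain the target object,
    then keep the N files saved inside the target geographic range.
    geo_range is modeled here as a (lat_min, lat_max, lon_min, lon_max) box."""
    lat_min, lat_max, lon_min, lon_max = geo_range
    m_files = [f for f in all_files if target_object in f["objects"]]
    n_files = [f for f in m_files
               if lat_min <= f["lat"] <= lat_max
               and lon_min <= f["lon"] <= lon_max]
    return m_files, n_files

library = [
    {"name": "a.jpg", "objects": {"dog"}, "lat": 31.2, "lon": 121.5},
    {"name": "b.jpg", "objects": {"dog"}, "lat": 39.9, "lon": 116.4},
    {"name": "c.mp4", "objects": {"cat"}, "lat": 31.2, "lon": 121.5},
]
m, n = filter_files(library, "dog", (30.0, 32.0, 120.0, 123.0))
```

Here `m` plays the role of the M multimedia files containing the target object, and `n` the N files on which the target operation would be performed.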
While the embodiments of the present application have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are illustrative rather than restrictive; those skilled in the art may make various changes without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. An image processing method, characterized in that the method comprises:
receiving a first input of a user to a target object of at least one first object in a case where a target image including the at least one first object is displayed;
in response to the first input, acquiring M multimedia files from all multimedia files saved by an electronic device, wherein the M multimedia files are multimedia files comprising the target object, and M is a positive integer;
receiving a second input of the user;
in response to the second input, determining a target geographic position range corresponding to the second input, and performing a target operation on N multimedia files, wherein N is a positive integer;
wherein the N multimedia files are: the multimedia files, among the M multimedia files, whose geographic positions are within the target geographic position range.
2. The method of claim 1, wherein receiving a second input from the user comprises:
receiving an input of the user on a target area of the target image, wherein the target image comprises Q areas, the target area is one of the Q areas, and Q is a positive integer;
the determining, in response to the second input, a target geographic location range corresponding to the second input includes:
in response to the second input, determining the target geographic location range corresponding to the target area according to the target area;
determining the N multimedia files with geographic positions within the target geographic position range from the M multimedia files;
each of the Q regions corresponds to a geographic position range, and the target geographic position range is determined by a target geographic position of the target image.
3. The method of claim 1, wherein prior to receiving the second input from the user, the method further comprises:
determining Q geographical position ranges according to the target geographical position of the target image, wherein the target geographical position is the geographical position when the target image is stored;
grouping the M multimedia files according to the Q geographic position ranges to obtain Q groups of multimedia file sets, wherein one group of multimedia file sets comprises at least one multimedia file;
respectively associating the Q groups of multimedia file sets with the Q geographical position ranges, wherein one geographical position range corresponds to one group of multimedia file sets;
the receiving of the second input of the user comprises:
receiving user input of a target area in Q areas of the target image, wherein one area in the Q areas corresponds to one geographical position range in the Q geographical position ranges;
wherein the N multimedia files are: the multimedia files in a target multimedia file set corresponding to the target area, and the geographic position range corresponding to the target multimedia file set is the geographic position range within which the N multimedia files were saved.
4. The method of claim 3, wherein determining Q geographic location ranges from the target geographic location of the target image comprises:
dividing a target area range corresponding to the target geographic position into the Q geographic position ranges;
the target area range is a preset geographic area range; when the input parameters of the second input are different, the target area ranges are different.
5. The method of claim 3, wherein prior to performing the target operation on the N multimedia files, the method further comprises:
displaying at least one first geographical position identification corresponding to the target geographical position range and the number of multimedia files corresponding to each first geographical position identification;
receiving a third input of a target geographic position identification in the at least one first geographic position identification from a user;
the performing target operations on the N multimedia files comprises:
in response to the third input, performing a target operation on the N multimedia files, wherein the N multimedia files are the multimedia files corresponding to the target geographic position identification.
6. The method according to claim 1, wherein in a case where only the target object is included in the target image, each of the N multimedia files is a multimedia file including only the target object;
under the condition that the target image comprises at least two first objects and the target object is one first object, each multimedia file in the N multimedia files is a multimedia file at least comprising the target object;
and under the condition that the target image comprises at least two first objects and the target object comprises a plurality of the first objects, each multimedia file in the N multimedia files is a multimedia file including the target object.
7. An image processing apparatus characterized by comprising: the device comprises a receiving module, an obtaining module, a determining module and an executing module;
the receiving module is used for receiving a first input of a user to a target object in at least one first object under the condition that a target image comprising the at least one first object is displayed;
the obtaining module is configured to obtain M multimedia files from all multimedia files saved by the electronic device in response to the first input received by the receiving module, where the M multimedia files are multimedia files including the target object, and M is a positive integer;
the receiving module is further used for receiving a second input of the user;
the determining module is configured to determine, in response to the second input received by the receiving module, a target geographic location range corresponding to the second input;
the execution module is used for executing target operation on N multimedia files, wherein N is a positive integer;
wherein the N multimedia files are: the multimedia files, among the M multimedia files, whose geographic positions are within the target geographic position range.
8. The image processing apparatus according to claim 7, wherein the receiving module is specifically configured to receive an input of a target area of the target image from a user, the target image includes Q areas, the target area is an area of the Q areas, and Q is a positive integer;
the determining module is specifically configured to determine, in response to the second input received by the receiving module, the target geographic location range corresponding to the target area according to the target area; determining the N multimedia files with geographic positions within the target geographic position range from the M multimedia files;
each of the Q regions corresponds to a geographic position range, and the target geographic position range is determined by a target geographic position of the target image.
9. The image processing apparatus according to claim 7, wherein the determining module is further configured to determine Q geographic location ranges according to a target geographic location of the target image before the receiving module receives the second input from the user, where the target geographic location is a geographic location where the target image is saved;
the image processing apparatus further includes: a processing module;
the processing module is used for grouping the M multimedia files according to the Q geographic position ranges to obtain Q groups of multimedia file sets, and each group of multimedia file set comprises at least one multimedia file;
the processing module is further configured to associate the Q sets of multimedia file sets with the Q geographic position ranges, respectively, where one geographic position range corresponds to one set of multimedia file set;
the receiving module is specifically configured to receive an input of a user to a target area in Q areas of the target image, where one area of the Q areas corresponds to one geographic position range of the Q geographic position ranges;
wherein the N multimedia files are: the multimedia files in a target multimedia file set corresponding to the target area, and the geographic position range corresponding to the target multimedia file set is the geographic position range within which the N multimedia files were saved.
10. The image processing apparatus according to claim 9, wherein the determining module is specifically configured to divide a target area range corresponding to the target geographic location into the Q geographic location ranges;
the target area range is a preset geographic area range; when the input parameters of the second input are different, the target area ranges are different.
11. The image processing apparatus according to claim 9, characterized by further comprising: a display module;
the display module is used for displaying at least one first geographical location identifier corresponding to the target geographical location range and the number of the multimedia files corresponding to each first geographical location identifier before the execution module executes target operation on the N multimedia files;
the receiving module is further configured to receive a third input of the target geographic location identifier in the at least one first geographic location identifier from the user;
the executing module is specifically configured to execute a target operation on the N multimedia files in response to the third input received by the receiving module, where the N multimedia files are multimedia files corresponding to the target geographic location identifier.
12. The image processing apparatus according to claim 7, wherein in a case where only the target object is included in the target image, each of the N multimedia files is a multimedia file including only the target object;
under the condition that the target image comprises at least two first objects and the target object is one first object, each multimedia file in the N multimedia files is a multimedia file at least comprising the target object;
and under the condition that the target image comprises at least two first objects and the target object comprises a plurality of the first objects, each multimedia file in the N multimedia files is a multimedia file including the target object.
13. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 6.
14. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 6.
CN202110448677.1A 2021-04-25 2021-04-25 Image processing method, device, electronic equipment and medium Active CN113271377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110448677.1A CN113271377B (en) 2021-04-25 2021-04-25 Image processing method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110448677.1A CN113271377B (en) 2021-04-25 2021-04-25 Image processing method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN113271377A true CN113271377A (en) 2021-08-17
CN113271377B CN113271377B (en) 2023-12-22

Family

ID=77229407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110448677.1A Active CN113271377B (en) 2021-04-25 2021-04-25 Image processing method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN113271377B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110955788A (en) * 2019-11-28 2020-04-03 维沃移动通信有限公司 Information display method and electronic equipment
CN111917979A (en) * 2020-07-27 2020-11-10 维沃移动通信有限公司 Multimedia file output method and device, electronic equipment and readable storage medium
CN111966842A (en) * 2020-08-28 2020-11-20 维沃移动通信有限公司 Multimedia file pushing method, device and equipment
CN112486385A (en) * 2020-11-30 2021-03-12 维沃移动通信有限公司 File sharing method and device, electronic equipment and readable storage medium
CN113271378A (en) * 2021-04-25 2021-08-17 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN113271379A (en) * 2021-04-25 2021-08-17 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN113282768A (en) * 2021-04-25 2021-08-20 维沃移动通信有限公司 Multimedia file processing method and device and electronic equipment


Also Published As

Publication number Publication date
CN113271377B (en) 2023-12-22

Similar Documents

Publication Publication Date Title
EP3125135B1 (en) Picture processing method and device
US10942616B2 (en) Multimedia resource management method and apparatus, and storage medium
KR20230095895A (en) Method and apparatus for processing metadata
CN111612873B (en) GIF picture generation method and device and electronic equipment
CN107330858B (en) Picture processing method and device, electronic equipment and storage medium
CN111866392B (en) Shooting prompting method and device, storage medium and electronic equipment
WO2020048392A1 (en) Application virus detection method, apparatus, computer device, and storage medium
CN112083854A (en) Application program running method and device
CN112911147A (en) Display control method, display control device and electronic equipment
CN113010738A (en) Video processing method and device, electronic equipment and readable storage medium
CN112734661A (en) Image processing method and device
CN113271378B (en) Image processing method and device and electronic equipment
CN113271379B (en) Image processing method and device and electronic equipment
CN108052506B (en) Natural language processing method, device, storage medium and electronic equipment
CN113271377B (en) Image processing method, device, electronic equipment and medium
CN112702258B (en) Chat message sharing method and device and electronic equipment
CN111796736B (en) Application sharing method and device and electronic equipment
CN113473012A (en) Virtualization processing method and device and electronic equipment
CN114489414A (en) File processing method and device
CN113542599A (en) Image shooting method and device
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
CN113835582B (en) Terminal equipment, information display method and storage medium
CN113873081B (en) Method and device for sending associated image and electronic equipment
CN112887481B (en) Image processing method and device
CN117389667A (en) Display method and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant