CN107330859B - Image processing method and device, storage medium and terminal

Image processing method and device, storage medium and terminal

Info

Publication number
CN107330859B
CN107330859B
Authority
CN
China
Prior art keywords
image
processing
instruction
images
target image
Prior art date
Legal status
Active
Application number
CN201710526120.9A
Other languages
Chinese (zh)
Other versions
CN107330859A (en)
Inventor
梁昆
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710526120.9A priority Critical patent/CN107330859B/en
Publication of CN107330859A publication Critical patent/CN107330859A/en
Application granted granted Critical
Publication of CN107330859B publication Critical patent/CN107330859B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image processing method comprising: receiving a processing instruction for an initial image; generating a selection instruction according to the processing instruction; matching the initial image against a plurality of images according to the selection instruction to determine a target image from the plurality of images; and processing the initial image and the target image according to preset parameters. When the processing instruction for the initial image is received, the selection instruction is generated from it, a target image corresponding to the initial image is determined by similarity, and the initial image and the target image are processed in batch based on the preset parameters carried by the processing instruction. This avoids the cumbersome workflow of processing multiple images one by one and the pixel loss caused by combining multiple images into a collage before processing, thereby improving image processing efficiency. Embodiments of the invention also provide an image processing apparatus, a storage medium, and a terminal.

Description

Image processing method and device, storage medium and terminal
Technical Field
The present invention relates to the field of terminal technologies, and in particular, to an image processing method and apparatus, a storage medium, and a terminal.
Background
With the continuous development and popularization of terminals, devices such as tablets and mobile phones have become deeply embedded in everyday life. Mobile phones generally provide a photographing function, and thanks to steadily improving camera resolution and their portability, taking photos with a mobile phone has become mainstream.
As mobile phone photography has become mainstream, photo beautification software is increasingly widely used. A user can adjust and beautify photos with such software, but only one photo can be beautified at a time. To beautify several photos together, the user must first combine them into a photo collage and then apply the beautification uniformly; this process is cumbersome, and the resolution of each photo is reduced after the collage is composed.
Disclosure of Invention
Embodiments of the invention provide an image processing method and apparatus, a storage medium, and a terminal, which can improve image processing efficiency.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
receiving a processing instruction for an initial image, wherein the processing instruction carries preset parameters;
generating a selection instruction according to the processing instruction, wherein the selection instruction indicates selection of a target image;
matching the initial image with a plurality of images respectively according to the selection instruction to determine a target image from the plurality of images, wherein the similarity between the target image and the initial image is greater than a preset threshold value;
and processing the initial image and the target image according to the preset parameters.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the receiving module is used for receiving a processing instruction for the initial image, and the processing instruction carries preset parameters;
the generating module is used for generating a selection instruction according to the processing instruction, wherein the selection instruction indicates selection of a target image;
the matching module is used for respectively matching the initial image with a plurality of images according to the selection instruction so as to determine a target image from the plurality of images, wherein the similarity between the target image and the initial image is greater than a preset threshold value;
and the processing module is used for processing the initial image and the target image according to the preset parameters.
In a third aspect, the present invention provides a storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any of the image processing methods provided by the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a terminal, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute any image processing method provided by the embodiment of the invention.
According to the image processing method and apparatus, the storage medium, and the terminal provided herein, when a processing instruction for an initial image is received, a selection instruction is generated from the processing instruction, a target image corresponding to the initial image is determined by similarity, and the initial image and the target image are processed in batch based on the preset parameters carried by the processing instruction. This avoids the cumbersome workflow of processing multiple images individually and the pixel loss caused by combining them into a collage, thereby improving image processing efficiency.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
Fig. 1 is a schematic view of a scene of image processing according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
Fig. 3 is another schematic flow chart of the image processing method according to the embodiment of the present invention.
Fig. 4 is a schematic view of another scene of image processing according to an embodiment of the present invention.
Fig. 5 is a schematic view of another scene of image processing according to an embodiment of the present invention.
Fig. 6 is a block diagram of an image processing apparatus according to an embodiment of the present invention.
Fig. 7 is a schematic block diagram of an image processing apparatus according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
The term "module" as used herein may be considered a software object executing on the computing system. The various components, modules, engines, and services described herein may be viewed as objects implemented on the computing system. The apparatus and method described herein can be implemented in software, but can also be implemented in hardware, and are within the scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic view of a scene of image processing according to an embodiment of the present invention. The user adjusts the initial image, which generates a processing instruction containing adjustment parameters for brightness, contrast, color temperature, and saturation. The terminal generates a selection instruction according to the processing instruction and matches the initial image against a plurality of images to determine a target image from them, where the similarity between the target image and the initial image is greater than a preset threshold. The initial image and the target image are then processed based on the adjustment parameters.
In the present embodiment, description will be made from the viewpoint of an image processing apparatus, which may be integrated in a terminal such as a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or the like.
Referring to fig. 2, fig. 2 is a flowchart illustrating an image processing method according to an embodiment of the invention. The process may include:
in step S101, a processing instruction for an initial image is received.
The processing instruction carries preset parameters. The preset parameters are beautification parameters for the initial image, such as adjustments to its brightness/contrast, color levels, natural saturation, and hue balance.
In an embodiment, when a user beautifies an image by adjusting a preset parameter, the terminal generates, and then receives, a corresponding processing instruction carrying the preset parameters adjusted by the user.
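By way of a purely illustrative sketch, and not as the claimed implementation, the preset parameters carried by a processing instruction could be modeled as a small data structure in Python; the field names and default values below are assumptions introduced only for illustration.

    from dataclasses import dataclass

    @dataclass
    class ProcessingInstruction:
        """Hypothetical container for the preset (beautification) parameters
        carried by a processing instruction; field names are illustrative."""
        brightness: float = 1.0   # 1.0 means unchanged, values > 1.0 brighten
        contrast: float = 1.0     # 1.0 means unchanged
        saturation: float = 1.0   # "natural saturation" adjustment factor
        image_path: str = ""      # path of the initial image being edited

    # Example: the user brightened the photo and slightly boosted saturation.
    instruction = ProcessingInstruction(brightness=1.2, saturation=1.1,
                                        image_path="initial.jpg")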
In step S102, a selection instruction is generated in accordance with the processing instruction.
It should be noted that existing image beautification methods can beautify only one image at a time. If several images need to be beautified, they must first be combined into a collage and then beautified uniformly; this process is cumbersome, and the resolution of each image is reduced after the collage is composed.
For this reason, after the terminal receives the processing instruction, it generates a corresponding selection instruction that indicates selection of a target image.
In step S103, the initial image is matched with the plurality of images according to the selection instruction, respectively, to determine the target image from the plurality of images.
It should be noted that a terminal typically stores a large amount of the user's image data. When beautifying an image, a user often wants to beautify several images at once, but it takes a long time to locate the corresponding target images among this large amount of image data.
For this reason, the initial image is matched against the plurality of images in the terminal gallery according to the selection instruction, so as to determine from them a target image whose similarity to the initial image is greater than a preset threshold.
In one embodiment, matching the initial image against the plurality of images according to the selection instruction to determine the target image may include acquiring shooting-position information of the initial image and determining, according to that information, target images taken at the same position.
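A minimal sketch of position-based matching, assuming the images are JPEG files whose EXIF metadata can be read with the Pillow library; the helper names and the exact-equality comparison are assumptions, since this embodiment does not specify how shooting positions are compared.

    from PIL import Image

    GPS_IFD_TAG = 0x8825  # standard EXIF pointer to the GPS sub-IFD

    def shooting_position(path):
        """Return the GPS EXIF sub-IFD of an image as a dict (empty if absent)."""
        exif = Image.open(path).getexif()
        return dict(exif.get_ifd(GPS_IFD_TAG))

    def same_shooting_position(path_a, path_b):
        """Crude position match: both images carry identical, non-empty GPS data."""
        pos_a, pos_b = shooting_position(path_a), shooting_position(path_b)
        return bool(pos_a) and pos_a == pos_b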
In an embodiment, matching the initial image with the plurality of images according to the selection instruction to determine the target image from the plurality of images may further include:
(1) Image features of the initial image are analyzed based on the selection instruction to extract image elements.
Analyzing the image features of the initial image may include color-feature analysis, texture-feature analysis, shape-feature analysis, and the like. After such analysis, image elements of the initial image can be extracted, which may include color-feature information, texture-feature information, shape-feature information, and so on; a color-feature sketch follows this list.
(2) The plurality of images are matched according to the image elements to obtain the target image.
Similarity matching is performed on all images in the terminal image library based on the image elements of the initial image to obtain the corresponding target image.
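One way to realize the color-feature part of this analysis, shown here only as a sketch, is a normalized per-channel histogram; the patent does not prescribe a particular descriptor, so the histogram choice, the bin count, and the fixed resize below are assumptions.

    from PIL import Image
    import numpy as np

    def color_feature(path, bins=16):
        """Illustrative 'image element': a normalized per-channel RGB histogram."""
        img = Image.open(path).convert("RGB").resize((256, 256))
        pixels = np.asarray(img).reshape(-1, 3)
        hists = [np.histogram(pixels[:, c], bins=bins, range=(0, 255))[0]
                 for c in range(3)]
        feature = np.concatenate(hists).astype(np.float64)
        return feature / feature.sum()  # normalize so images of any size compare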
In an embodiment, matching the plurality of images according to the image elements to obtain the target image may include:
(1) Similarity matching is performed on the plurality of images according to the image elements to obtain a plurality of similarity values.
The image elements of the initial image are matched one-to-one against the image elements of each image in the terminal image library to obtain a plurality of similarity values. The higher the similarity value, the closer the pixel structures of the initial image and the candidate image; the lower the similarity value, the greater the difference between their pixel structures. A matching sketch follows this list.
(2) An image whose similarity value exceeds a preset threshold is determined to be the target image.
When the similarity between the initial image and an image in the terminal image library exceeds the preset threshold, their pixel structures are sufficiently similar and that image is determined to be a target image; when the similarity does not exceed the preset threshold, the pixel structures differ too much and that image is skipped.
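Continuing the sketch above, the similarity value between two image elements could be computed with cosine similarity and compared against the preset threshold; the threshold value 0.7 is borrowed from the worked example later in this description, and the function names are assumptions rather than a mandated implementation.

    import numpy as np

    def cosine_similarity(a, b):
        """Similarity in [0, 1] for the non-negative histogram features above."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def select_target_images(initial_feature, gallery_features, threshold=0.7):
        """Keep gallery images whose similarity to the initial image exceeds the
        preset threshold; the remaining images are skipped."""
        return [path for path, feature in gallery_features.items()
                if cosine_similarity(initial_feature, feature) > threshold]

With the color_feature helper from the previous sketch, gallery_features could simply be built as {path: color_feature(path) for path in gallery_paths}.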
In a possible implementation, the terminal may automatically record picture parameters such as the shooting location and shooting time of each picture over a period of time, and may use a learning algorithm to classify pictures taken by the user at the same location and at similar times, for example grouping pictures shot at the same location within the same week. When the user beautifies a picture, all pictures in the same class as that picture can be extracted quickly, so that the user can rapidly batch-process multiple pictures.
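The learning algorithm is left open by this implementation; the toy grouping below merely buckets photos by a recorded location name and ISO calendar week to illustrate the idea, and the record layout is an assumption.

    from collections import defaultdict
    from datetime import datetime

    def classify_photos(records):
        """records: iterable of (path, location_name, shot_time_iso) tuples.
        Groups photos taken at the same place within the same calendar week."""
        groups = defaultdict(list)
        for path, location, shot_time in records:
            year, week = datetime.fromisoformat(shot_time).isocalendar()[:2]
            groups[(location, year, week)].append(path)
        return groups

    # Example: the two Dongguan photos from the same week form one class.
    photos = [("a.jpg", "Dongguan", "2017-06-25T10:00:00"),
              ("b.jpg", "Dongguan", "2017-06-27T18:30:00"),
              ("c.jpg", "Beijing",  "2017-06-26T09:00:00")]
    print(classify_photos(photos))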
In step S104, the initial image and the target image are processed according to preset parameters.
The initial image and the target images whose similarity to it exceeds the preset threshold are processed according to the preset parameters carried by the processing instruction, achieving intelligent, rapid batch processing of the images.
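As a sketch of this batch step, Pillow's ImageEnhance module can apply the same brightness, contrast, and saturation factors to the initial image and every matched target image; the factor values and output naming are illustrative assumptions rather than the concrete implementation of the embodiment.

    from PIL import Image, ImageEnhance

    def batch_process(paths, brightness=1.0, contrast=1.0, saturation=1.0):
        """Apply identical preset parameters to every image and save the results."""
        for path in paths:
            img = Image.open(path).convert("RGB")
            img = ImageEnhance.Brightness(img).enhance(brightness)
            img = ImageEnhance.Contrast(img).enhance(contrast)
            img = ImageEnhance.Color(img).enhance(saturation)  # saturation control
            img.save(path.rsplit(".", 1)[0] + "_processed.jpg")

    # The initial image plus the matched target images, processed in one call.
    batch_process(["initial.jpg", "target_2.jpg", "target_3.jpg"],
                  brightness=1.2, saturation=1.1)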
In an embodiment, before processing the initial image and the target image according to preset parameters, the method may further include:
(1) a modification instruction for the target image is received.
The modification instruction indicates deletion of a target image and/or addition of a new target image.
(2) The target images are modified according to the modification instruction.
In this way, the user can add to or delete from the target images obtained by image-element matching, which improves the flexibility of image processing.
In an embodiment, after the initial image and the target image have been processed according to the preset parameters, the initial image and the target image may be restored to their pre-processing state when a cancel instruction is received.
As can be seen from the above, in the image processing method provided by this embodiment, when a processing instruction for an initial image is received, a selection instruction is generated from it, a target image corresponding to the initial image is determined by similarity, and the initial image and the target image are batch-processed based on the preset parameters carried by the processing instruction. This avoids the cumbersome workflow of processing multiple images individually and the pixel loss caused by collage-based processing, thereby improving image processing efficiency.
The method described in the above embodiments is further illustrated in detail by way of example.
Referring to fig. 3, fig. 3 is another flow chart illustrating an image processing method according to an embodiment of the invention.
The process may include:
in step S201, a processing instruction for an initial image is received.
The processing instruction carries preset parameters. The preset parameters are beautification parameters for the initial image, such as adjustments to its brightness/contrast, color levels, natural saturation, and hue balance.
For example, when a user beautifies a photo on a mobile phone by adjusting its brightness, color levels, natural saturation, and so on, the phone may automatically record the adjustment parameters and generate a processing instruction.
In step S202, a selection instruction is generated in accordance with the processing instruction.
After the terminal receives the processing instruction, a selection instruction is correspondingly generated; the selection instruction indicates selection of a target image.
In one embodiment, after receiving the processing instruction, the terminal generates prompt information asking the user whether to perform a batch-processing operation. When the user selects yes, a selection instruction is generated from the processing instruction to select the corresponding target images; when the user selects no, only the initial image is adjusted based on the preset parameters.
In step S203, image elements are extracted from the initial image based on the selection instruction.
The image elements comprise one or more of color feature information, texture feature information, and shape feature information in a target region of the initial image.
Further, analyzing the image feature information of the initial image may include color-feature analysis, texture-feature analysis, shape-feature analysis, and the like. After such analysis, image elements of the initial image can be extracted, which may include color-feature information, texture-feature information, shape-feature information, and so on.
For example, the mobile phone analyzes the image feature information of the initial photo based on the selection instruction. If the photo is a portrait, the texture features and face-shape features of the person are analyzed to obtain person image elements; if it is a landscape, the color and texture features of the scenery are analyzed to obtain scenery image elements.
In step S204, the plurality of images are matched according to the image elements to determine a target image from the plurality of images.
Similarity matching is performed on all images in the terminal image library based on the image elements of the initial image to obtain the corresponding target images.
In one embodiment, matching the plurality of images according to the image elements to determine the target image from the plurality of images may include:
(1) Similarity matching is performed on the plurality of images according to the image elements to obtain a plurality of similarity values.
The image elements of the initial image are matched one-to-one against the image elements of each image in the terminal image library to obtain a plurality of similarity values. The higher the similarity value, the closer the pixel structures of the initial image and the candidate image; the lower the similarity value, the greater the difference between their pixel structures.
For example, the mobile phone matches the image elements of the initial image one-to-one against those of three images in its gallery and obtains similarity values of 0.2, 0.5, and 0.8. The closer a similarity value is to 1, the more similar the image elements of the two images.
(2) An image whose similarity value exceeds a preset threshold is determined to be the target image.
When the similarity between the initial image and an image in the terminal image library exceeds the preset threshold, their pixel structures are sufficiently similar and that image is determined to be a target image; when the similarity does not exceed the preset threshold, the pixel structures differ too much and that image is skipped.
For example, with a preset threshold of 0.7, the similarity values 0.2, 0.5, and 0.8 are compared against 0.7, and the picture whose similarity value exceeds 0.7 is determined to be the target image.
In step S205, a modification instruction for the target image is received.
The modification instruction indicates deletion of a target image and/or addition of a new target image.
For example, after the target photo is determined on the mobile phone, a modification button may be generated, and a modification instruction for the target image may be received through the modification button.
In step S206, the target image is modified correspondingly according to the modification instruction.
In this way, the user can add to or delete from the matched target images, which improves the flexibility of image processing.
For example, when the user clicks the modify button, the user may manually add or delete the target photo.
In step S207, the initial image and the target image are processed according to preset parameters.
After the corresponding target images are determined, the terminal batch-processes the initial image and the determined target images together according to the preset parameters carried by the processing instruction, which improves the efficiency with which the terminal processes multiple images at once.
In step S208, when a cancel instruction is received, the processed initial image and target image are restored to their pre-processing state.
After the user performs the batch processing of the images, the effect of the image processing may be unsatisfactory.
For this reason, when the terminal receives a cancel instruction, the initial image and the target image that were processed with the preset parameters are restored to their pre-processing versions.
In an embodiment, when an undo instruction is received, the undo may be applied to only some of the processed images, i.e. the initial image or particular target images. This increases the flexibility of the undo operation.
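A minimal sketch of this undo behaviour, under the assumption that an untouched copy of each image is saved before processing: the backups can then be restored in full, or only for a chosen subset of images, when a cancel instruction arrives.

    import shutil

    def backup_before_processing(paths):
        """Copy each image aside so the batch processing can be undone later."""
        return {path: shutil.copyfile(path, path + ".bak") for path in paths}

    def undo_processing(backups, selected=None):
        """Restore every backed-up image, or only the selected subset."""
        for original, backup in backups.items():
            if selected is None or original in selected:
                shutil.copyfile(backup, original)

    backups = backup_before_processing(["initial.jpg", "target_2.jpg"])
    # ... batch processing runs here ...
    undo_processing(backups, selected=["target_2.jpg"])  # undo a single image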
To better describe this embodiment, the image-processing scenes shown in fig. 4 and fig. 5 are used for a detailed description. Prior-art beautification can handle only one image at a time; to beautify several images, they must first be combined into a collage and then beautified uniformly, which is cumbersome and reduces the resolution of each image.
Based on this, as shown in fig. 4, when the terminal detects that the user is beautifying the initial image, it displays a prompt asking whether to perform a batch-processing operation after the adjustment is completed. When the user selects yes, a processing instruction is generated that contains the preset parameters used to beautify the initial image; a selection instruction is then generated from the processing instruction, the initial image is similarity-matched against a plurality of images stored in the terminal, and a target image whose similarity to the initial image exceeds a preset threshold is determined from them.
As shown in fig. 5, the terminal similarity-matches the initial image against the plurality of stored images, determines image 2 and image 3 to be the target images, and batch-processes image 2 and image 3 according to the preset parameters in the processing instruction.
As can be seen from the above, in the image processing method provided by this embodiment, when a processing instruction for an initial image is received, a selection instruction is generated from it, image features of the initial image are analyzed based on the selection instruction to extract image elements, similarity matching is performed on multiple images according to the image elements, images whose similarity values exceed a preset threshold are determined to be target images, and the initial image and the target images are batch-processed based on the preset parameters of the processing instruction. This avoids the cumbersome workflow of processing multiple images individually and the pixel loss caused by collage-based processing, thereby improving image processing efficiency.
Referring to fig. 6, fig. 6 is a block diagram of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus 300 includes: a receiving module 31, a generating module 32, a matching module 33, and a processing module 34.
The receiving module 31 is configured to receive a processing instruction for the initial image, where the processing instruction carries a preset parameter.
The preset parameters received by the receiving module 31 are beautification parameters for the initial image, such as adjustments to its brightness/contrast, color levels, natural saturation, and hue balance.
The generating module 32 is configured to generate a selection instruction according to the processing instruction, where the selection instruction indicates selection of a target image.
In one embodiment, after the processing instruction is received, the generating module 32 generates prompt information asking the user whether to perform a batch-processing operation. When the user selects yes, a selection instruction is generated from the processing instruction to select the corresponding target images; when the user selects no, only the initial image is adjusted based on the preset parameters.
The matching module 33 is configured to match the initial image with a plurality of images according to the selection instruction, so as to determine a target image from the plurality of images, where a similarity between the target image and the initial image is greater than a preset threshold.
In an embodiment, the matching module 33 may be configured to obtain shooting position information of an initial image, and determine a target image with the same shooting position information according to the shooting position information.
The processing module 34 is configured to process the initial image and the target image according to the preset parameter.
Before the processing module 34 processes the images, the method may further include:
(1) a modification instruction for the target image is received.
The modification instruction indicates deletion of a target image and/or addition of a new target image.
(2) The target images are modified according to the modification instruction.
In this way, the user can add to or delete from the target images obtained by image-element matching, which improves the flexibility of image processing.
In an embodiment, after the processing module 34 has processed the images, the method may further include restoring the processed initial image and target image to their pre-processing state when an undo instruction is received.
Referring to fig. 7, fig. 7 is a schematic block diagram of an image processing apparatus according to an embodiment of the present invention, in which the matching module 33 of the image processing apparatus 300 further includes an extraction sub-module 331 and a matching sub-module 332.
The extracting sub-module 331 is configured to extract image elements from the initial image based on the selection instruction, where the image elements include one or more of color feature information, texture feature information, and shape feature information in a target region of the initial image. The matching sub-module 332 is configured to match the multiple images according to the image elements to determine a target image from the multiple images.
The extraction sub-module 331 may analyze the image feature information of the initial image, including color-feature analysis, texture-feature analysis, and shape-feature analysis. After such analysis, image elements of the initial image can be extracted, which may include color-feature information, texture-feature information, shape-feature information, and so on.
In an embodiment, the matching sub-module 332 is configured to perform similarity matching on the plurality of images according to the image elements to obtain a plurality of similarity values, and to determine an image whose similarity value exceeds a preset threshold to be the target image.
When the similarity between the initial image and an image in the terminal image library exceeds the preset threshold, their pixel structures are sufficiently similar and that image is determined to be a target image; when the similarity does not exceed the preset threshold, the pixel structures differ too much and that image is skipped.
As can be seen from the above, in the image processing apparatus provided by this embodiment, when a processing instruction for an initial image is received, a selection instruction is generated from it, image features of the initial image are analyzed based on the selection instruction to extract image elements, similarity matching is performed on multiple images according to the image elements, images whose similarity values exceed a preset threshold are determined to be target images, and the initial image and the target images are batch-processed based on the preset parameters of the processing instruction. This avoids the cumbersome workflow of processing multiple images individually and the pixel loss caused by collage-based processing, thereby improving image processing efficiency.
An embodiment of the present invention further provides a terminal, as shown in fig. 8, the terminal 400 may include a memory 401 having one or more computer-readable storage media, a sensor 402, an input unit 403, a display 404, a processor 405 having one or more processing cores, and other components. Those skilled in the art will appreciate that the terminal structure shown in fig. 8 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The memory 401 may be used to store applications and data. The memory 401 stores applications containing executable code. The application programs may constitute various functional modules. The processor 405 executes various functional applications and data processing by running the application programs stored in the memory 401. Further, the memory 401 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 401 may also include a memory controller to provide the processor 405 and the input unit 403 with access to the memory 401.
The terminal may also include at least one sensor 402, such as a light sensor, motion sensor, and other sensors. The light sensor may include an ambient light sensor that adjusts the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that turns off the display panel and/or the backlight when the terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
The input unit 403 may be used to receive input numbers, character information, or user characteristic information, such as a fingerprint, and generate a keyboard, mouse, joystick, optical, or trackball signal input related to user setting and function control. In an embodiment, the input unit 403 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 405, and can receive and execute commands sent by the processor 405. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 403 may include other input devices in addition to the touch-sensitive surface. Other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), fingerprint recognition module, trackball, mouse, joystick, and the like.
The display screen 404 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display screen 404 may include a display panel. Alternatively, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 405 to determine the type of touch event, and then the processor 405 provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 8 the touch sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch sensitive surface may be integrated with the display panel to implement input and output functions.
The processor 405 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing an application program stored in the memory 401 and calling data stored in the memory 401, thereby performing overall monitoring of the terminal. Optionally, processor 405 may include one or more processing cores; the processor 405 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application program, and the like.
Although not shown in fig. 8, the terminal may further include a camera, a bluetooth module, a power supply, and the like, which are not described in detail herein.
In this embodiment, the processor 405 in the terminal loads the executable code corresponding to the process of one or more application programs into the memory 401 according to the following instructions, and the processor 405 runs the application programs stored in the memory 401, thereby implementing various functions:
a processing instruction for the initial image is received by the processor 405, and the processing instruction carries preset parameters.
A selection instruction indicating a selection target image is generated by the processor 405 according to the processing instruction.
The initial image is matched with a plurality of images respectively by the processor 405 according to the selection instruction so as to determine a target image from the plurality of images, wherein the similarity between the target image and the initial image is greater than a preset threshold value.
The initial image and the target image are processed by the processor 405 according to the preset parameters.
The processor 405, when performing matching of the initial image with the plurality of images according to the selection instruction to determine the target image from the plurality of images, may include: extracting image elements from the initial image based on the selection instruction, the image elements including one or more of color feature information, texture feature information, and shape feature information in a target region of the initial image; and matching the plurality of images according to the image element so as to determine the target image from the plurality of images.
The processor 405, when performing matching of the plurality of images according to the image element to determine the target image from the plurality of images, may include: respectively carrying out similarity matching on a plurality of images according to the image elements to obtain a plurality of similarity values; and determining the image with the similarity value exceeding a preset threshold value as the target image.
Before the processor 405 performs the processing on the initial image and the target image according to the preset parameters, the processing may further include: receiving a modification instruction for the target image, wherein the modification instruction is used for indicating that the target image is subjected to deletion operation and/or adding a new target image; and correspondingly modifying the target image according to the modification instruction.
After the processor 405 performs the processing on the initial image and the target image according to the preset parameters, the method may further include: and when receiving the cancel instruction, adjusting the initial image and the target image after the processing into the initial image and the target image before the processing.
Since the terminal can execute any image processing method provided by the embodiment of the present invention, the beneficial effects that can be achieved by any image processing method provided by the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiment and will not be described herein again.
In an embodiment, the above units may be implemented as independent entities, or may be combined arbitrarily and implemented as the same entity or several entities, and the implementation of the above units may refer to the foregoing method embodiments, and is not described herein again.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the image processing method, and are not described herein again.
The terminal provided in the embodiments of the present invention is, for example, a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or the like, and the terminal, the image processing apparatus, the storage medium, and the image processing method described above belong to the same concept.
It should be noted that, for the image processing method of the present invention, it can be understood by those skilled in the art that all or part of the processes of implementing the image processing method of the embodiments of the present invention can be completed by controlling the relevant hardware through a computer program, where the computer program can be stored in a computer readable storage medium, such as a memory of a terminal, and executed by at least one processor in the terminal, and during the execution, the processes of the embodiments of the image processing method can be included. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the image processing apparatus according to the embodiment of the present invention, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented as a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing describes an image processing method, an image processing apparatus, a storage medium, and a terminal in detail, and a specific example is applied to illustrate the principles and embodiments of the present invention, and the description of the foregoing embodiments is only used to help understand the method and the core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (9)

1. An image processing method applied to a terminal is characterized by comprising the following steps:
receiving a processing instruction for an initial image, wherein the processing instruction carries preset parameters;
generating a selection instruction according to the processing instruction, the selection instruction indicating selection of a target image, wherein the generating of the selection instruction according to the processing instruction comprises: when the processing instruction is received, generating prompt information, wherein the prompt information is used for prompting a user whether to perform a batch processing operation, and, when the user selects yes, generating the selection instruction according to the processing instruction;
respectively matching the initial image with a plurality of images according to the selection instruction, wherein image elements are extracted from the initial image based on the selection instruction and the plurality of images are matched according to the image elements, so as to determine a target image from the plurality of images, the similarity between the target image and the initial image being greater than a preset threshold value; or classifying images taken by the user at the same place and at similar shooting times by using a learning algorithm, and determining all images in the same class as the initial image to be the target images;
performing batch processing on the initial image and the target image according to the preset parameters; and
when a cancel instruction is received, all or some of the processed initial image and target image may be selected to be restored to their pre-processing state.
2. The image processing method of claim 1, wherein the image element includes one or more of color feature information, texture feature information, and shape feature information in a target region of the initial image.
3. The image processing method of claim 2, wherein said matching the plurality of images according to the image elements to determine a target image from the plurality of images comprises:
respectively carrying out similarity matching on the multiple images according to the image elements to obtain multiple similarity values;
and determining the image with the similarity value exceeding a preset threshold value as the target image.
4. The image processing method according to claim 3, wherein before processing the initial image and the target image according to the preset parameters, the method further comprises:
receiving a modification instruction of the target image, wherein the modification instruction is used for indicating that the target image is subjected to deletion operation and/or adding a new target image;
and correspondingly modifying the target image according to the modification instruction.
5. An image processing apparatus characterized by comprising:
the receiving module is used for receiving a processing instruction for the initial image, and the processing instruction carries preset parameters;
a generating module, configured to generate a selection instruction according to the processing instruction, wherein the selection instruction indicates selection of a target image, and the generating of the selection instruction according to the processing instruction comprises: when the processing instruction is received, generating prompt information, wherein the prompt information is used for prompting a user whether to perform a batch processing operation, and, when the user selects yes, generating the selection instruction according to the processing instruction;
the matching module is used for matching the initial image with a plurality of images according to the selection instruction, wherein image elements are extracted from the initial image based on the selection instruction and the plurality of images are matched according to the image elements, so as to determine a target image from the plurality of images, the similarity between the target image and the initial image being greater than a preset threshold; or for classifying images taken by the user at the same place and at similar shooting times by using a learning algorithm and determining all images in the same class as the initial image to be the target images;
the processing module is used for performing batch processing on the initial image and the target image according to the preset parameters;
wherein, when a cancel instruction is received, all or some of the processed initial image and target image may be selected to be restored to their pre-processing state.
6. The image processing apparatus of claim 5, wherein the matching module comprises:
an extraction sub-module for extracting image elements from the initial image based on the selection instruction, the image elements including one or more of color feature information, texture feature information, and shape feature information in a target region of the initial image.
7. The image processing apparatus of claim 6, wherein the matching module is to:
respectively carrying out similarity matching on the multiple images according to the image elements to obtain multiple similarity values;
and determining the image with the similarity value exceeding a preset threshold value as the target image.
8. A storage medium on which a computer program is stored, wherein the program, when executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 5.
9. A terminal, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the image processing method according to any one of claims 1 to 4.
CN201710526120.9A 2017-06-30 2017-06-30 Image processing method and device, storage medium and terminal Active CN107330859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710526120.9A CN107330859B (en) 2017-06-30 2017-06-30 Image processing method and device, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710526120.9A CN107330859B (en) 2017-06-30 2017-06-30 Image processing method and device, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN107330859A CN107330859A (en) 2017-11-07
CN107330859B true CN107330859B (en) 2021-06-15

Family

ID=60198603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710526120.9A Active CN107330859B (en) 2017-06-30 2017-06-30 Image processing method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN107330859B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862653B (en) * 2017-11-30 2021-08-17 Oppo广东移动通信有限公司 Image display method, image display device, storage medium and electronic equipment
CN108198144A (en) * 2017-12-28 2018-06-22 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108647097B (en) * 2018-05-16 2021-04-13 Oppo广东移动通信有限公司 Text image processing method and device, storage medium and terminal
CN109034150B (en) * 2018-06-15 2021-09-21 北京小米移动软件有限公司 Image processing method and device
CN109102865A (en) * 2018-09-29 2018-12-28 联想(北京)有限公司 A kind of image processing method and device, equipment, storage medium
CN110727810B (en) * 2019-10-15 2023-05-02 联想(北京)有限公司 Image processing method, device, electronic equipment and storage medium
CN112488134A (en) * 2020-12-20 2021-03-12 广东白云学院 Big data image processing method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110112A (en) * 2009-12-28 2011-06-29 新奥特(北京)视频技术有限公司 Image sequence batch processing method and device
CN102375987B (en) * 2010-08-17 2014-04-02 国基电子(上海)有限公司 Image processing device and image feature vector extracting and image matching method
CN105069426A (en) * 2015-07-31 2015-11-18 小米科技有限责任公司 Similar picture determining method and apparatus
CN106210522A (en) * 2016-07-15 2016-12-07 广东欧珀移动通信有限公司 A kind of image processing method, device and mobile terminal
CN106372068A (en) * 2015-07-20 2017-02-01 中兴通讯股份有限公司 Method and device for image search, and terminal
CN106657793A (en) * 2017-01-11 2017-05-10 维沃移动通信有限公司 Image processing method and mobile terminal
CN106844381A (en) * 2015-12-04 2017-06-13 富士通株式会社 Image processing apparatus and method
CN106844492A (en) * 2016-12-24 2017-06-13 深圳云天励飞技术有限公司 A kind of method of recognition of face, client, server and system

Also Published As

Publication number Publication date
CN107330859A (en) 2017-11-07

Similar Documents

Publication Publication Date Title
CN107330859B (en) Image processing method and device, storage medium and terminal
CN112567425B (en) Electronic device for adjusting image including plurality of objects and control method thereof
CN108492363B (en) Augmented reality-based combination method and device, storage medium and electronic equipment
CN108701439B (en) Image display optimization method and device
KR102560689B1 (en) Method and apparatus for displaying an ar object
US9706108B2 (en) Information processing apparatus and associated methodology for determining imaging modes
CN107395871B (en) Method and device for opening application, storage medium and terminal
KR20130106833A (en) Use camera to augment input for portable electronic device
CN110246110B (en) Image evaluation method, device and storage medium
CN107748615B (en) Screen control method and device, storage medium and electronic equipment
TW201604719A (en) Method and apparatus of controlling a smart device
CN111159449B (en) Image display method and electronic equipment
CN110442521B (en) Control unit detection method and device
CN111506758A (en) Method and device for determining article name, computer equipment and storage medium
CN110290426B (en) Method, device and equipment for displaying resources and storage medium
CN111127595A (en) Image processing method and electronic device
CN109246474B (en) Video file editing method and mobile terminal
CN108683845A (en) Image processing method, device, storage medium and mobile terminal
CN111353946B (en) Image restoration method, device, equipment and storage medium
CN111105474A (en) Font drawing method and device, computer equipment and computer readable storage medium
KR20150079387A (en) Illuminating a Virtual Environment With Camera Light Data
CN112235650A (en) Video processing method, device, terminal and storage medium
US11907290B2 (en) Electronic device and control method thereof
CN112416172A (en) Electronic equipment control method and device and electronic equipment
CN111344735B (en) Picture editing method, mobile terminal and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
GR01 Patent grant