CN109034150B - Image processing method and device

Image processing method and device

Info

Publication number
CN109034150B
CN109034150B (application CN201810620868.XA)
Authority
CN
China
Prior art keywords
editing operation
image
editing
position information
determining
Prior art date
Legal status
Active
Application number
CN201810620868.XA
Other languages
Chinese (zh)
Other versions
CN109034150A (en)
Inventor
杨松
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201810620868.XA
Publication of CN109034150A
Application granted
Publication of CN109034150B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/40: Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to an image processing method and device. The method includes: acquiring an image set comprising a plurality of images; when an editing operation on any image in the image set is detected, acquiring the object and the type of the editing operation; locating, in each image of the image set, the same object as the object of the editing operation; and performing the same type of editing on each located object. By propagating an edit made to one object across every image in the set, the method enables batch editing of objects in multiple images, reduces the number of user interactions, and improves editing efficiency.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
For privacy protection, a user may process photos before sharing them, for example by adding mosaics to certain areas (license plates, pedestrians, faces, etc.). In the related art, after taking multiple pictures of the same scene, the user must process the pictures one by one, which is time-consuming.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method and apparatus.
According to a first aspect of the embodiments of the present disclosure, there is provided an image processing method, including: acquiring an image set, the image set including a plurality of images; when an editing operation on any image in the image set is detected, acquiring the object and the type of the editing operation; acquiring, in each image of the image set, the same object as the object of the editing operation; and performing the type of editing on each object that is the same as the object of the editing operation.
In a possible implementation, acquiring the same object as the object of the editing operation in each image of the image set includes: performing target detection on each image in the image set to determine the position information and category of each object in the image; for each object determined in each image, extracting a feature vector of the object according to the position information of the object; for the objects in each image that belong to the same category as the object of the editing operation, determining the similarity between the feature vector of each such object and the feature vector of the object of the editing operation; and determining, among those objects, each object whose similarity is greater than a feature threshold to be the same object as the object of the editing operation.
In a possible implementation, acquiring the object of the editing operation includes: acquiring the position information of the editing operation; and, for each object in the image corresponding to the editing operation, determining the degree of coincidence between the editing operation and the object according to the position information of the editing operation and the position information of the object, and, if the degree of coincidence is greater than a coincidence threshold, determining the object to be the object of the editing operation.
In a possible implementation, performing the type of editing on the same object as the object of the editing operation includes: for each object that is the same as the object of the editing operation, determining the area where the object is located according to the position information of the object; and performing the type of editing on the area where each such object is located.
In a possible implementation, the method further includes: in the image where the object of the editing operation is located, performing the type of editing on the area corresponding to the position information of the object of the editing operation.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, including: a set acquisition module configured to acquire an image set, the image set including a plurality of images; a first object acquisition module configured to, when an editing operation on any image in the image set is detected, acquire the object and the type of the editing operation; a second object acquisition module configured to acquire, in each image of the image set, the same object as the object of the editing operation; and a first object editing module configured to perform the type of editing on each object that is the same as the object of the editing operation.
In a possible implementation, the second object acquisition module includes: a target detection sub-module configured to perform target detection on each image in the image set and determine the position information and category of each object in the image; a feature extraction sub-module configured to extract, for each object determined in each image, a feature vector of the object according to the position information of the object; a similarity determination sub-module configured to determine, for the objects in each image belonging to the same category as the object of the editing operation, the similarity between the feature vector of each such object and the feature vector of the object of the editing operation; and a first object determination sub-module configured to determine, among those objects, each object whose similarity is greater than a feature threshold to be the same object as the object of the editing operation.
In a possible implementation, the first object acquisition module includes: a position acquisition sub-module configured to acquire the position information of the editing operation; a coincidence determination sub-module configured to determine, for each object in the image corresponding to the editing operation, the degree of coincidence between the editing operation and the object according to the position information of the editing operation and the position information of the object; and a second object determination sub-module configured to determine an object to be the object of the editing operation when the degree of coincidence between the editing operation and the object is greater than a coincidence threshold.
In a possible implementation, the first object editing module includes: an area determination sub-module configured to determine, for each object that is the same as the object of the editing operation, the area where the object is located according to the position information of the object; and an area editing sub-module configured to perform the type of editing on the area where each such object is located.
In a possible implementation, the apparatus further includes a second object editing module configured to perform, in the image where the object of the editing operation is located, the type of editing on the area corresponding to the position information of the object of the editing operation.
According to a third aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions which, when executed by a processor, enable the processor to perform the above-described method.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: when an editing operation on any image in an image set is detected, the same type of editing can be performed on the same object in each image of the set according to the object and type of the editing operation. Objects in multiple images can thus be edited in batch through an editing operation on a single image, reducing the number of user interactions and improving editing efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure.
Fig. 3 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure.
Fig. 4 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure.
Fig. 5 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure.
Fig. 6 is a block diagram illustrating an image processing apparatus according to an embodiment of the present disclosure.
Fig. 7 is a block diagram illustrating an image processing apparatus according to an embodiment of the present disclosure.
Fig. 8 is a block diagram illustrating an apparatus for image processing according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment. The method can be used in a terminal such as a mobile phone, a tablet, or a computer. Referring to fig. 1, the image processing method may include the following steps.
in step S11, an image set is acquired, the image set including a plurality of images.
The image set includes a plurality of images. In the embodiments of the present disclosure, the image set may be formed when the user selects images to be shared in batch (for example, shared or displayed on a social network site), or may be formed by the terminal automatically grouping images according to image source, shooting time, shooting location, and the like. The present disclosure does not limit the images that the image set includes.
In step S12, when an editing operation for any one of the images in the image set is detected, the object and type of the editing operation are acquired.
The editing operation on the image may be a single tap, a double tap, or a slide operation; the present disclosure is not limited in this respect. The type of the editing operation may be mosaic, graffiti, fill, and the like. The object of the editing operation may be a license plate, a pedestrian, a face, and the like. In an example, when a user applies a mosaic to a license plate B in an image A, the terminal may determine that the object of the editing operation is the license plate B and the type of the editing operation is mosaic.
In step S13, the same object as the object of the editing operation is acquired in each image of the image set.
The images of the image set may include the same object or correspond to the same scene, for example, all containing the same person or the same item. A single image may also contain multiple identical objects, for example several cups of the same design or several identical benches.
In step S14, the type of editing is performed on the same object as the object of the editing operation.
After the terminal detects an editing operation on an object in one image, it can perform the corresponding editing on the same object in each image of the image set, including the image on which the operation was performed.
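To make the flow of steps S11 to S14 concrete, the following Python sketch replays one edit across an image set. It is illustrative only: the disclosure does not prescribe a particular detector, feature extractor, similarity measure, or editor, so these are injected as callables (sketches of each appear in the sections below), and the 0.8 feature threshold is just one of the example values discussed later.

    # Illustrative sketch of steps S11-S14; not the disclosed implementation.
    # detect(image) -> iterable of (box, class); embed(image, box) -> vector;
    # similarity(u, v) -> float; apply_edit(image, box, edit_type) -> None.
    def propagate_edit(image_set, edit_image, edit_box, edit_cls, edit_type,
                       detect, embed, similarity, apply_edit,
                       feature_threshold=0.8):
        """Replay an edit made to one object onto every matching object."""
        edit_vec = embed(edit_image, edit_box)         # feature of edited object
        for image in image_set:                        # step S13: scan the set
            for box, cls in detect(image):             # objects found in image
                if cls != edit_cls:                    # same-category objects only
                    continue
                if similarity(embed(image, box), edit_vec) > feature_threshold:
                    apply_edit(image, box, edit_type)  # step S14: same edit type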
According to the embodiments of the present disclosure, when an editing operation on any image in the image set is detected, the same type of editing can be performed on the same object in each image of the set according to the object and type of the editing operation. A single editing operation on one image thus automatically edits the corresponding objects in multiple images, reducing the number of user interactions and improving editing efficiency.
Fig. 2 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure. Referring to fig. 2, the step S13 of acquiring the same object as the object of the editing operation in each image of the image set may include steps S131 to S134.
In step S131, for each image in the image set, target detection is performed on the image, and position information and a category of each object in the image are determined.
Image classification determines the classes of the objects contained in an image. Target detection additionally locates the objects on top of classification; the location is commonly represented by the object's bounding box.
The terminal may preset the target classes to which objects may belong. After performing target detection on an image, the terminal obtains the position of each object in the image and the probability that the object belongs to each target class. For each object in the image, the terminal may take the target class with the maximum probability as the class of the object.
Through target detection, the position information of the bounding box r of each object in the image can be obtained, where r = (x, y, w, h): x and y are the coordinates of a vertex of the bounding box r, and w and h are its width and height, respectively. In the embodiments of the present disclosure, the position information of an object may be represented by the position information of its bounding box r.
In a possible implementation, the terminal may perform target detection on the image with a method such as Faster R-CNN (Faster Regions with Convolutional Neural Network features) or SSD (Single Shot MultiBox Detector). In the embodiments of the present disclosure, target detection may also be performed by other methods, which is not limited by the present disclosure.
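As one concrete possibility, and purely as an assumption of this sketch (the disclosure names Faster R-CNN and SSD but no specific library), the detection step could use torchvision's pretrained Faster R-CNN; the 0.5 score threshold is likewise an illustrative choice.

    # Sketch of step S131 with torchvision's pretrained Faster R-CNN.
    import torch
    import torchvision

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_objects(image_chw, score_threshold=0.5):
        """image_chw: float tensor (3, H, W) scaled to [0, 1].
        Returns ((x, y, w, h), class_id) pairs, converting torchvision's
        (x1, y1, x2, y2) boxes to the (x, y, w, h) form used above."""
        with torch.no_grad():
            out = model([image_chw])[0]  # dict of 'boxes', 'labels', 'scores'
        results = []
        for (x1, y1, x2, y2), label, score in zip(out["boxes"].tolist(),
                                                  out["labels"].tolist(),
                                                  out["scores"].tolist()):
            if score > score_threshold:  # keep confident detections only
                results.append(((x1, y1, x2 - x1, y2 - y1), int(label)))
        return results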
In step S132, for each object specified in each image, a feature vector of the object is extracted from the position information of the object.
The terminal can extract a local image corresponding to the object from the image in which the object is located according to the position information of the object. The terminal may perform feature extraction on the local image through a CNN (Convolutional Neural Network) to obtain a feature vector of the object.
For each image in the image set, the terminal may perform a CNN (Convolutional Neural Network) convolution operation on the image to obtain a feature map F_c of the image. For each object determined in the image, the terminal maps the bounding box r of the object onto the feature map F_c, obtaining the feature region of r on F_c as r_c = (x_c, y_c, w_c, h_c) = (s_c*x, s_c*y, s_c*w, s_c*h), where s_c is the scale factor from the image size to the feature-map size. The terminal can then perform a pooling operation on the feature region r_c corresponding to each object's bounding box r, mapping it to a fixed-length feature vector f_c.
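The mapping from r to r_c and the pooling to a fixed-length f_c can be expressed, for illustration, with torchvision's roi_align, whose spatial_scale argument plays exactly the role of s_c; the (1, C, H_c, W_c) feature-map layout and the 7x7 pooled size are assumptions of this sketch.

    # Sketch of step S132: map bounding box r onto feature map F_c and pool
    # the feature region r_c to a fixed-length vector f_c.
    import torch
    from torchvision.ops import roi_align

    def box_to_feature_vector(feature_map, box_xywh, s_c):
        """feature_map: tensor (1, C, H_c, W_c), the CNN feature map F_c.
        box_xywh: bounding box r = (x, y, w, h) in image-pixel coordinates.
        s_c: scale factor from image size to feature-map size."""
        x, y, w, h = box_xywh
        # roi_align takes (batch_index, x1, y1, x2, y2) boxes in image
        # coordinates and rescales them internally by spatial_scale = s_c,
        # which is the r -> r_c mapping described above.
        roi = torch.tensor([[0.0, x, y, x + w, y + h]])
        pooled = roi_align(feature_map, roi, output_size=(7, 7), spatial_scale=s_c)
        return pooled.flatten(start_dim=1).squeeze(0)  # fixed-length f_c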
In step S133, for an object belonging to the same category as the object of the editing operation in each image, a similarity between the feature vector of each object and the feature vector of the object of the editing operation is determined.
Objects belonging to the same category may be the same object or may be different objects. For example, the faces may be the faces of the same person or the faces of different persons. The terminal may determine whether two objects of the same category are the same object or not according to the similarity between the feature vectors of the two objects.
In a possible implementation, the terminal may use the cosine of the angle between two feature vectors as their similarity. The cosine value ranges from -1 to 1; the closer the value is to 1, the smaller the angle between the two feature vectors, the closer their directions, and the higher their similarity.
In step S134, of the objects belonging to the same category as the object of the editing operation in each image, the object whose similarity is greater than the feature threshold is determined as the same object as the object of the editing operation.
When the similarity of two feature vectors is greater than the feature threshold, the objects corresponding to the two feature vectors are taken to be the same object. Therefore, when the similarity between the feature vector of an object and the feature vector of the object of the editing operation is greater than the feature threshold, the terminal may determine that the object is the same object as the object of the editing operation. The feature threshold may be set as needed; for example, when the cosine value is used as the similarity, the feature threshold may be set to 0.8, 0.7, or the like, which is not limited in this disclosure.
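A self-contained sketch of the similarity test of steps S133 and S134, using the cosine similarity and the example feature threshold of 0.8 given above:

    # Sketch of steps S133-S134: cosine similarity plus the threshold test.
    import numpy as np

    def cosine_similarity(a, b):
        """Cosine of the angle between two feature vectors, in [-1, 1]."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def is_same_object(vec, edit_vec, feature_threshold=0.8):
        """True when the object matches the edited object closely enough."""
        return cosine_similarity(vec, edit_vec) > feature_threshold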
In the embodiment of the present disclosure, the terminal may determine the same object as the object of the editing operation according to the similarity between the feature vectors, thereby implementing batch editing of the objects.
Fig. 3 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure. Referring to fig. 3, when an editing operation for any one of the images in the image set is detected in step S12, acquiring the object and type of the editing operation may include steps S121 to S123.
In step S121, when an editing operation for any one of the images in the image set is detected, position information and a type of the editing operation are acquired.
In step S122, for each object in the image corresponding to the editing operation, a degree of coincidence between the editing operation and the object is determined according to the position information of the editing operation and the position information of the object.
In step S123, for each object in the image corresponding to the editing operation, if the degree of coincidence between the editing operation and the object is greater than a coincidence threshold value, the object is determined to be the object of the editing operation.
The position information of the editing operation may indicate the position of the editing operation in the image, and the terminal may determine the editing region from it, for example from the trajectory of a slide operation. In a possible implementation, the editing region may be a rectangular region. As in step S131, the terminal may determine the bounding box r of an object from the object's position information; the bounding box corresponds to a rectangular region, which is the region where the object is located.
The terminal may take the IoU (Intersection over Union) value of the editing region and the region where the object is located as the degree of coincidence between the editing operation and the object. The IoU value of two regions is the ratio of the area of their intersection to the area of their union.
When the degree of coincidence between the editing operation and an object is greater than the coincidence threshold, the editing operation and the object highly coincide, and the editing operation can be regarded as directed at that object. The coincidence threshold may be set as needed; for example, it may be set to 0.6, 0.7, or the like, which is not limited in this disclosure.
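The coincidence test of steps S122 and S123 reduces to an IoU computation over (x, y, w, h) boxes. A sketch, using the example coincidence threshold of 0.6 mentioned above:

    # Sketch of steps S122-S123: IoU of the editing region and an object box.
    def iou(box_a, box_b):
        """Intersection over union of two (x, y, w, h) boxes."""
        ax, ay, aw, ah = box_a
        bx, by, bw, bh = box_b
        ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
        iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    def is_edit_target(edit_region, object_box, coincidence_threshold=0.6):
        """The object is the target of the edit when the overlap is high."""
        return iou(edit_region, object_box) > coincidence_threshold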
In this way, the object of the editing operation can be determined from the position information of the editing operation and the position information of each object, and the objects identical to it can then be found for batch editing.
Fig. 4 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure. Referring to fig. 4, the step S14 of performing the type of editing on the same object as the object of the editing operation may include steps S141 and S142.
In step S141, for each object identical to the object of the editing operation, the area where the object is located is determined according to the position information of the object.
In step S142, the types of editing are performed on the regions where the same objects as the objects of the editing operation are located, respectively.
To edit the object of the editing operation, the terminal must determine the area to be edited. In the embodiments of the present disclosure, the terminal may determine the area where an object is located from the object's position information, so the terminal can edit the object by performing the type of editing on the area corresponding to that position information.
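As an example of the editing step itself, the mosaic type named earlier can be applied to the region given by an object's position information. Pixelation by downscaling and re-upscaling is a common way to mosaic; this OpenCV sketch and its block size of 10 are illustrative choices, not the disclosed implementation.

    # Sketch of steps S141-S142: mosaic the (x, y, w, h) region of an object.
    import cv2

    def mosaic_region(image, box_xywh, block=10):
        """Pixelate the region of a BGR image in place and return the image."""
        x, y, w, h = (int(v) for v in box_xywh)
        roi = image[y:y + h, x:x + w]
        small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                           interpolation=cv2.INTER_LINEAR)  # downscale
        image[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                             interpolation=cv2.INTER_NEAREST)  # upscale
        return image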
Fig. 5 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure. Referring to fig. 5, the image processing method may further include step S15.
In step S15, the type of editing is performed on the area corresponding to the position information of the editing operation target in the image where the editing operation target is located.
After the terminal determines the object of the editing operation, it can automatically edit the whole area where the object is located. The user can therefore edit an object with only a few operations on it, which reduces the number of user interactions even when editing a single object and improves editing efficiency. For example, when applying a mosaic to a face, the user may slide over the face a few times without covering all of it, and the terminal will mosaic the entire area where the face is located.
Fig. 6 is a block diagram illustrating an image processing apparatus according to an embodiment of the present disclosure. The apparatus may be for a terminal. Referring to fig. 6, the image processing apparatus 60 includes a set acquisition module 61, a first object acquisition module 62, a second object acquisition module 63, and a first object editing module 64.
The set acquisition module 61 is configured to acquire a set of images, the set of images comprising a plurality of images;
the first object acquisition module 62 is configured to, when an editing operation for any one image in the image set is detected, acquire an object and a type of the editing operation;
the second object obtaining module 63 is configured to obtain the same object as the object of the editing operation in each image of the image set;
the first object editing module 64 is configured to perform the type of editing on the same object as the object of the editing operation.
Fig. 7 is a block diagram illustrating an image processing apparatus according to an embodiment of the present disclosure. Referring to fig. 7, in one possible implementation, the second object acquisition module 63 includes a target detection sub-module 631, a feature extraction sub-module 632, a similarity determination sub-module 633, and a first object determination sub-module 634.
The target detection sub-module 631 is configured to perform target detection on each image in the set of images, determine position information and a category of each object in the image;
the feature extraction submodule 632 is configured to extract, for each object determined in each image, a feature vector of the object according to the position information of the object;
the similarity determination sub-module 633 is configured to determine, for objects in each image that belong to the same category as the object of the editing operation, a similarity between a feature vector of each object and a feature vector of the object of the editing operation;
the object first determination sub-module 634 is configured to determine, among the objects belonging to the same category as the object of the editing operation in each image, the object whose similarity is greater than a feature threshold as the same object as the object of the editing operation.
In one possible implementation, the first object obtaining module 62 includes a position obtaining sub-module 621, a coincidence determination sub-module 622, and a second object determination sub-module 623.
The position obtaining sub-module 621 is configured to obtain position information of the editing operation;
the contact ratio determining submodule 622 is configured to determine, for each object in the image corresponding to the editing operation, a contact ratio of the editing operation and the object according to the position information of the editing operation and the position information of the object;
the second object determination sub-module 623 is configured to determine, for each object in the image corresponding to the editing operation, that the object is an object of the editing operation when a degree of coincidence of the editing operation with the object is greater than a coincidence threshold.
In one possible implementation, the first object editing module 64 includes an area determination sub-module 641 and an area editing sub-module 642.
The area determination submodule 641 is configured to determine, for each object that is the same as the object of the editing operation, an area in which the object is located according to the position information of the object.
The area editing sub-module 642 is configured to perform the type of editing on the area where each object identical to the object of the editing operation is located.
In one possible implementation, the apparatus 60 further includes a second object editing module 65.
The second object editing module 65 is configured to perform the type of editing on an area corresponding to the position information of the object of the editing operation in the image where the object of the editing operation is located.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
According to the embodiments of the present disclosure, when an editing operation on any image in an image set is detected, the same type of editing can be performed on the same object in each image of the set according to the object and type of the editing operation. Objects in multiple images can thus be edited in batch through an editing operation on a single image, reducing the number of user interactions and improving editing efficiency.
Fig. 8 is a block diagram illustrating an apparatus 800 for image processing according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, images, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800. The sensor assembly 814 may also detect a change in the position of the device 800 or of one of its components, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. An image processing method, characterized in that the method comprises:
acquiring an image set, wherein the image set comprises a plurality of images;
when an editing operation on any image in the image set is detected, acquiring the object and the type of the editing operation;
performing target detection on each image in the image set, and determining the position information and category of each object in the image;
for each object determined in each image, extracting a feature vector of the object according to the position information of the object;
for the objects in each image that belong to the same category as the object of the editing operation, determining the similarity between the feature vector of each object and the feature vector of the object of the editing operation;
determining, from among the objects in each image that belong to the same category as the object of the editing operation, each object whose similarity is greater than a feature threshold to be the same object as the object of the editing operation;
performing the type of editing on each object that is the same as the object of the editing operation; and
in the image where the object of the editing operation is located, performing the type of editing on the area corresponding to the position information of the object of the editing operation.
2. The method of claim 1, wherein acquiring the object of the editing operation comprises:
acquiring the position information of the editing operation;
for each object in the image corresponding to the editing operation:
determining the degree of coincidence between the editing operation and the object according to the position information of the editing operation and the position information of the object;
and if the degree of coincidence is greater than a coincidence threshold, determining the object to be the object of the editing operation.
3. The method of claim 1, wherein said performing the type of editing on the same object as the object of the editing operation comprises:
for each object which is the same as the object of the editing operation, determining the area where the object is located according to the position information of the object;
and performing the type of editing on the area where each such object is located, respectively.
4. An image processing apparatus, characterized in that the apparatus comprises:
a set acquisition module, configured to acquire an image set, the image set comprising a plurality of images;
a first object acquisition module, configured to acquire the object and the type of an editing operation when the editing operation on any image in the image set is detected;
a second object acquisition module, configured to acquire, in each image of the image set, the same object as the object of the editing operation, the second object acquisition module comprising a target detection sub-module, a feature extraction sub-module, a similarity determination sub-module, and a first object determination sub-module;
the target detection sub-module being configured to perform target detection on each image in the image set and determine the position information and category of each object in the image;
the feature extraction sub-module being configured to extract, for each object determined in each image, a feature vector of the object according to the position information of the object;
the similarity determination sub-module being configured to determine, for the objects in each image that belong to the same category as the object of the editing operation, the similarity between the feature vector of each object and the feature vector of the object of the editing operation;
the first object determination sub-module being configured to determine, from among the objects in each image that belong to the same category as the object of the editing operation, each object whose similarity is greater than a feature threshold to be the same object as the object of the editing operation;
a first object editing module, configured to perform the type of editing on each object that is the same as the object of the editing operation; and
a second object editing module, configured to perform, in the image where the object of the editing operation is located, the type of editing on the area corresponding to the position information of the object of the editing operation.
5. The apparatus of claim 4, wherein the first object acquisition module comprises:
a position acquisition sub-module, configured to acquire the position information of the editing operation;
a coincidence determination sub-module, configured to determine, for each object in the image corresponding to the editing operation, the degree of coincidence between the editing operation and the object according to the position information of the editing operation and the position information of the object;
and a second object determination sub-module, configured to determine, for each object in the image corresponding to the editing operation, the object to be the object of the editing operation when the degree of coincidence between the editing operation and the object is greater than a coincidence threshold.
6. The apparatus of claim 4, wherein the first object editing module comprises:
an area determination sub-module, configured to determine, for each object that is the same as the object of the editing operation, the area where the object is located according to the position information of the object;
and an area editing sub-module, configured to perform the type of editing on the area where each such object is located.
7. An image processing apparatus characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any of claims 1 to 3.
8. A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor, enable the processor to perform the method of any of claims 1-3.
CN201810620868.XA 2018-06-15 2018-06-15 Image processing method and device Active CN109034150B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810620868.XA (CN109034150B) | 2018-06-15 | 2018-06-15 | Image processing method and device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810620868.XA (CN109034150B) | 2018-06-15 | 2018-06-15 | Image processing method and device

Publications (2)

Publication Number Publication Date
CN109034150A CN109034150A (en) 2018-12-18
CN109034150B true CN109034150B (en) 2021-09-21

Family

ID=64609851

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810620868.XA (CN109034150B, Active) | Image processing method and device | 2018-06-15 | 2018-06-15

Country Status (1)

Country Link
CN (1) CN109034150B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288679A (en) * 2019-06-30 2019-09-27 于峰 The processing method of image, apparatus and system
CN110489533A (en) * 2019-07-09 2019-11-22 深圳追一科技有限公司 Interactive method and relevant device
CN111325656B (en) * 2020-03-02 2023-08-15 Oppo广东移动通信有限公司 Image processing method, image processing device and terminal equipment
CN115908642A (en) * 2021-09-30 2023-04-04 北京字跳网络技术有限公司 Image editing method and device
CN114661214A (en) * 2022-02-18 2022-06-24 北京达佳互联信息技术有限公司 Image display method, device and storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5634111B2 (en) * 2010-04-28 2014-12-03 キヤノン株式会社 Video editing apparatus, video editing method and program
US9693108B2 (en) * 2012-06-12 2017-06-27 Electronics And Telecommunications Research Institute Method and system for displaying user selectable picture

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886549A (en) * 2012-12-21 2014-06-25 北京齐尔布莱特科技有限公司 Method and apparatus for automatic mosaic processing of license plate in picture
CN104378542A (en) * 2013-08-14 2015-02-25 腾讯科技(深圳)有限公司 Media content processing method and device and terminal device
CN104794131A (en) * 2014-01-21 2015-07-22 腾讯科技(深圳)有限公司 File bulk-editing method and device
CN105677325A (en) * 2015-12-29 2016-06-15 努比亚技术有限公司 Mobile terminal and image processing method
CN106485166A (en) * 2016-10-20 2017-03-08 广州三星通信技术研究有限公司 Screenshotss method and apparatus for electric terminal
CN107330859A (en) * 2017-06-30 2017-11-07 广东欧珀移动通信有限公司 A kind of image processing method, device, storage medium and terminal
CN108154099A (en) * 2017-12-20 2018-06-12 北京奇艺世纪科技有限公司 A kind of character recognition method, device and electronic equipment
CN108132749A (en) * 2017-12-21 2018-06-08 维沃移动通信有限公司 A kind of image edit method and mobile terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Object detection from images using convolutional neural networks; Olavi Stenroos; Aalto University School of Science, Master's Programme in Computer; 2017-06-28; pp. 1-75 *
Action scripts and batch processing in image editing; Huang Chunhua et al.; Journal of Shanghai Institute of Technology (Natural Science); 2012-09-30; Vol. 12, No. 3; pp. 220-223, 239 *

Also Published As

Publication number Publication date
CN109034150A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN106651955B (en) Method and device for positioning target object in picture
CN109034150B (en) Image processing method and device
CN106557768B (en) Method and device for recognizing characters in picture
TWI702544B (en) Method, electronic device for image processing and computer readable storage medium thereof
US10452890B2 (en) Fingerprint template input method, device and medium
WO2021031609A1 (en) Living body detection method and device, electronic apparatus and storage medium
RU2577188C1 (en) Method, apparatus and device for image segmentation
CN108010060B (en) Target detection method and device
CN107944447B (en) Image classification method and device
US20170154206A1 (en) Image processing method and apparatus
US20170032219A1 (en) Methods and devices for picture processing
WO2017031901A1 (en) Human-face recognition method and apparatus, and terminal
RU2664003C2 (en) Method and device for determining associate users
CN107944367B (en) Face key point detection method and device
CN106557759B (en) Signpost information acquisition method and device
CN107563994B (en) Image significance detection method and device
CN105631803B (en) The method and apparatus of filter processing
CN106485567B (en) Article recommendation method and device
CN107025441B (en) Skin color detection method and device
CN108009563B (en) Image processing method and device and terminal
US20220222831A1 (en) Method for processing images and electronic device therefor
CN112927122A (en) Watermark removing method, device and storage medium
CN111523346A (en) Image recognition method and device, electronic equipment and storage medium
CN111652107B (en) Object counting method and device, electronic equipment and storage medium
CN112200040A (en) Occlusion image detection method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant