CN109147007B - Label loading method, label loading device, terminal and computer readable storage medium - Google Patents


Info

Publication number
CN109147007B
CN109147007B (application CN201810867377.5A)
Authority
CN
China
Prior art keywords
sticker
target object
sub
image
processed
Prior art date
Legal status
Active
Application number
CN201810867377.5A
Other languages
Chinese (zh)
Other versions
CN109147007A (en)
Inventor
郭雄伟
Current Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd filed Critical Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN201810867377.5A priority Critical patent/CN109147007B/en
Publication of CN109147007A publication Critical patent/CN109147007A/en
Application granted granted Critical
Publication of CN109147007B publication Critical patent/CN109147007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application belongs to the technical field of image processing, and in particular relates to a sticker loading method, apparatus, terminal and computer-readable storage medium. The sticker loading method comprises the following steps: acquiring an image to be processed; detecting a target object contained in the image to be processed and one or more sub-target objects attached to the target object; extracting feature information of each sub-target object, and generating a sticker query instruction corresponding to each sub-target object according to the feature information; obtaining the sticker corresponding to each sub-target object according to the sticker query instruction, and loading the sticker onto the corresponding sub-target object in the image to be processed. The method enhances the expressive effect of stickers.

Description

Label loading method, label loading device, terminal and computer readable storage medium
Technical Field
The application belongs to the technical field of image processing, and in particular relates to a sticker loading method, a sticker loading device, a terminal, and a computer-readable storage medium.
Background
With the development of photographing technology, people can beautify faces, apply filters, or add stickers to images through various photographing applications.
Adding a sticker to an image is generally realized by compositing pre-made, fixed materials into the image. However, this approach may cause the same sticker to be used in different images, producing a uniform, homogenized sticker effect and reducing the expressive effect of the sticker.
Disclosure of Invention
The embodiments of the present application provide a sticker loading method, a sticker loading device, a terminal, and a computer-readable storage medium, which can solve the technical problem that using the same sticker in different images reduces the expressive effect of the sticker.
A first aspect of an embodiment of the present application provides a method for loading a sticker, including:
acquiring an image to be processed;
detecting a target object contained in the image to be processed and one or more sub-target objects attached to the target object;
extracting characteristic information of each sub-target object, and generating a sticker inquiry instruction corresponding to each sub-target object according to the characteristic information;
and obtaining the sticker corresponding to each sub-target object according to the sticker query instruction, and loading the sticker onto the corresponding sub-target object in the image to be processed.
A second aspect of an embodiment of the present application provides a sticker loading apparatus, including:
an acquisition unit configured to acquire an image to be processed;
the detection unit is used for detecting a target object contained in the image to be processed and one or more sub-target objects attached to the target object;
the generation unit is used for extracting the characteristic information of each sub-target object and generating a sticker inquiry instruction corresponding to each sub-target object according to the characteristic information;
and the loading unit is used for acquiring the sticker corresponding to each sub-target object according to the sticker inquiry instruction and loading the sticker onto the corresponding sub-target object in the image to be processed.
A third aspect of the embodiments of the present application provides a terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the above method.
In the embodiments of the present application, the target object contained in the image to be processed and one or more sub-target objects attached to the target object are detected, feature information of each sub-target object is extracted, and a sticker query instruction carrying the feature information is generated for each sub-target object. The sticker corresponding to each sub-target object is then queried according to the sticker query instruction and loaded onto the corresponding sub-target object in the image to be processed, completing the sticker processing of the one or more sub-target objects in the image. Because the sticker added to each sub-target object is obtained by querying with the feature information of that sub-target object rather than being prepared in advance, the situation in which the same sticker is used in different images and weakens the expressive effect of the sticker can be avoided.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of the scope; other related drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 is a schematic implementation flow chart of a sticker loading method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a specific implementation of step 103 of a sticker loading method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a decal loading interface provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a sticker loading interface loaded with price stickers and purchase link stickers provided by an embodiment of the present application;
fig. 5 is a schematic flowchart of a specific implementation of step 104 of the sticker loading method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a decal loading interface loaded with decal marks provided by an embodiment of the present application;
fig. 7 is a schematic structural diagram of a sticker loading apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description and are not to be construed as indicating or implying relative importance.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to a determination", or "in response to detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting the [described condition or event]", or "in response to detecting the [described condition or event]".
In order to illustrate the above technical solution of the present application, the following description will be made by specific examples.
Existing methods of adding stickers to an image composite pre-made, fixed materials into the image. For example, during photographing preview, the sticker selected by the user is obtained and placed at the corresponding position of the picture captured by the camera; or, after photographing is completed, the sticker selected by the user is placed at the corresponding position of the photograph. However, such materials are generally presented independently of the image content, or rely at most on face detection; the materials are relatively fixed, and a personalized sticker cannot be produced according to the attributes of the object to which the sticker is added.
For example, when the photographed target object is a human body, such sticker processing generally performs at most face detection and then adds a pre-made sticker selected by the user, such as a "hat", "glasses" or "blush" sticker, at the corresponding position of the face. It cannot further detect articles attached to the person, such as clothes, hats, shoes, ornaments or mobile phones, and cannot provide stickers that reflect the attributes of those articles. As a result, the same sticker may be used in different images, producing a homogenized sticker effect, reducing the expressive effect of the sticker, and giving people no knowledge content related to the articles from the sticker.
In the embodiments of the present application, the target object contained in the image to be processed and one or more sub-target objects attached to the target object are detected, feature information of each sub-target object is extracted, and a sticker query instruction carrying the feature information is generated for each sub-target object. The sticker corresponding to each sub-target object is then queried according to the sticker query instruction and loaded onto the corresponding sub-target object in the image to be processed, completing the sticker processing of the one or more sub-target objects in the image. Because the sticker added to each sub-target object is obtained by querying with the feature information of that sub-target object rather than prepared in advance, the situation in which the same sticker is used in different images and weakens the expressive effect of the sticker can be avoided, and the expressive effect of the sticker is enhanced: users do not see identical, homogenized stickers, different stickers are loaded according to different sub-target objects, and the stickers of multiple sub-target objects can be loaded simultaneously.
Fig. 1 shows a schematic implementation flow chart of a sticker loading method according to an embodiment of the present application. The method is applied to a terminal, may be executed by a sticker loading device configured on the terminal, and is applicable to situations where the expressive effect of stickers needs to be improved. The method includes steps 101 to 104.
Step 101, obtaining an image to be processed.
The terminal includes terminal devices equipped with a sticker loading device, such as a smart phone, a tablet computer, or a learning machine. Applications such as a photographing application, a browser, or WeChat may be installed on the terminal device.
In some embodiments of the present application, acquiring the image to be processed may be acquiring a preview frame image during camera preview. When the photographing application is in the preview state, the camera collects frame images generated from external light signals: each time the camera acquires the data output by an external light signal, one frame of data is produced. After the user starts the photographing application on the terminal, the terminal enters preview mode, and the terminal acquires and displays the frame data collected by the camera to obtain the preview frame image.
In some embodiments of the present application, acquiring the image to be processed may also be obtaining an image captured by a photographing application, or an image received by another application program, for example, an image sent by another WeChat contact and received by the WeChat application, or an image downloaded from the Internet through a browser application. The image to be processed may be a photo image or a video image; the source and form of the image to be processed are not limited here.
Step 102, detecting a target object contained in the image to be processed and one or more sub-target objects attached to the target object.
In the embodiments of the present application, the objects to be subjected to sticker processing in the image to be processed, namely the sub-target objects, are obtained by detecting the target object contained in the image to be processed and the one or more sub-target objects attached to the target object.
That is, one of the roles of identifying the target object in the image to be processed is to set the acquisition range of the sub-target objects, that is, the range of sticker-processing objects, so that when multiple sub-target objects are loaded with stickers at a time, the processing is more targeted and the computation of the sticker processing is reduced.
In the embodiments of the present application, the one or more objects requiring sticker processing in the image to be processed, namely the one or more sub-target objects, are obtained by detecting one or more target objects contained in the image and the one or more sub-target objects attached to each target object. This solves the technical problem that existing sticker objects are limited: only a single target object, such as a face or clothes, can be subjected to sticker processing, and multiple sub-target objects cannot be processed.
The target object refers to the object occupying the main body of the image to be processed. For example, when the image to be processed is a person image, the target object is a human body; when the image to be processed is an interior decoration image, the target objects are the main furnishing objects such as cabinets, sofas, walls, and tables. It should be noted that there may be one or more target objects in the image to be processed; for example, when the image contains multiple people, the target objects may be multiple human bodies.
The sub-target object refers to an object attached to the target object and required to be loaded with a sticker.
In some embodiments of the present application, the target object in the image to be processed and the sub-target objects attached to the target object may be detected by a target detection algorithm. Common target detection algorithms include the local binary pattern (Local Binary Pattern, LBP) algorithm, histogram-of-oriented-gradients features combined with a support vector machine model, and convolutional neural network (Convolutional Neural Network, CNN) models. Compared with other target detection algorithms, a convolutional neural network model can detect the target object more accurately and quickly, so a trained convolutional neural network model can be used to detect the target object in the image to be processed and the one or more sub-target objects attached to it.
Before the trained convolutional neural network model is used to detect the target object in the image to be processed, the trained model needs to be obtained. It is trained on sample images and the detection result corresponding to each sample image, where the detection result indicates all target objects contained in the sample image and the one or more sub-target objects attached to each target object.
Optionally, the training step of the convolutional neural network model may include: acquiring a sample image and the detection result corresponding to the sample image; detecting the sample image with the convolutional neural network model, and adjusting the parameters of the model according to the detection result until the adjusted model detects all target objects in the sample image, or detects the target objects and the one or more sub-target objects attached to them, with an accuracy greater than a preset value; the adjusted model is then taken as the trained convolutional neural network model. The parameters of the convolutional neural network model may include the weights, biases, and regression-function coefficients of each convolutional layer, and may further include the learning rate, the number of iterations, the number of neurons in each layer, and the like.
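The stopping rule described above (adjust parameters until accuracy exceeds a preset value) can be sketched independently of any particular network. The toy one-parameter model below is purely illustrative and stands in for a CNN; its `update` method is a crude stand-in for a gradient step, and all names and data are invented:

```python
class ThresholdModel:
    """Toy stand-in for a detector: predicts class 1 when x > t."""
    def __init__(self, t=0.0):
        self.t = t

    def predict(self, x):
        return 1 if x > self.t else 0

    def update(self, samples, labels):
        # Crude parameter adjustment: move the threshold to the midpoint
        # between the two classes (stand-in for one gradient step).
        pos = [s for s, y in zip(samples, labels) if y == 1]
        neg = [s for s, y in zip(samples, labels) if y == 0]
        self.t = (min(pos) + max(neg)) / 2

def train(model, samples, labels, target_accuracy=0.95, max_epochs=100):
    """Adjust the model until its detection accuracy on the samples
    exceeds the preset value, as in the training step above."""
    for _ in range(max_epochs):
        acc = sum(model.predict(s) == y
                  for s, y in zip(samples, labels)) / len(samples)
        if acc > target_accuracy:
            break
        model.update(samples, labels)
    return model

model = train(ThresholdModel(t=-10.0), [1, 2, 8, 9], [0, 0, 1, 1])
```

After one update the threshold settles between the two classes and the accuracy check terminates the loop.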
It should be noted that, the foregoing method for detecting a target object is merely illustrative, and is not meant to limit the scope of the present application, and other methods for detecting a target object are equally applicable to the present application, which is not listed here.
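As a minimal illustration (not the patent's actual implementation), the detection step can be sketched as a function that takes an image and a detector and returns the target object together with its attached sub-target objects. The `Detection` structure, the "centre inside the target's box" attachment rule, and the stub detector output are all hypothetical; in practice the detector would be a trained CNN:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Detection:
    label: str                     # e.g. "person", "hat", "shoes"
    box: tuple                     # (x, y, w, h) bounding box
    children: List["Detection"] = field(default_factory=list)

def detect_objects(image, detector: Callable) -> List[Detection]:
    """Run the detector once, then attach each sub-target object to the
    target object whose bounding box contains its centre."""
    detections = detector(image)
    targets = [d for d in detections if d.label == "person"]
    subs = [d for d in detections if d.label != "person"]
    for sub in subs:
        cx = sub.box[0] + sub.box[2] / 2
        cy = sub.box[1] + sub.box[3] / 2
        for t in targets:
            x, y, w, h = t.box
            if x <= cx <= x + w and y <= cy <= y + h:
                t.children.append(sub)
                break
    return targets

# Stub detector standing in for a trained CNN; its output is invented.
def stub_detector(image):
    return [Detection("person", (0, 0, 100, 200)),
            Detection("hat", (30, 0, 40, 30)),
            Detection("shoes", (20, 180, 60, 20))]

targets = detect_objects(None, stub_detector)
```

Here both sub-target objects fall inside the person's box, so they become its children.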
And 103, extracting the characteristic information of each sub-target object, and generating a sticker inquiry instruction corresponding to each sub-target object according to the characteristic information.
After each sub-target object is detected, the feature information of each sub-target object needs to be extracted, so that a sticker query instruction corresponding to each sub-target object can be generated according to the feature information, and the sticker corresponding to each sub-target object can then be obtained according to the sticker query instruction.
The feature information may be geometric feature information of a sub-target object, an LBP feature extracted using the LBP algorithm, or a SIFT feature extracted using the scale-invariant feature transform (SIFT) algorithm. The feature information distinguishes a sub-target object from other sub-target objects, so that a sticker query instruction corresponding to each sub-target object is generated according to the feature information; the sticker query instruction is used to query the sticker corresponding to each sub-target object.
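One of the feature types mentioned above, the basic 8-neighbour LBP code, is simple enough to sketch directly. This is a simplification of real LBP implementations (which also offer circular neighbourhoods and rotation-invariant variants); the histogram at the end is the kind of vector that could serve as the feature information in a sticker query instruction:

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """Basic LBP code of a 3x3 patch: compare the 8 neighbours to the
    centre pixel, clockwise from the top-left, and pack the comparison
    results into one byte (bit set where neighbour >= centre)."""
    center = patch[1, 1]
    # Clockwise order: TL, T, TR, R, BR, B, BL, L.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= center)

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """Normalized 256-bin histogram of LBP codes over an image region."""
    h, w = gray.shape
    codes = [lbp_code(gray[r - 1:r + 2, c - 1:c + 2])
             for r in range(1, h - 1) for c in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(len(codes), 1)
```

Two regions with different textures produce different histograms, which is what lets the query distinguish one sub-target object from another.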
And 104, obtaining the sticker corresponding to each sub-target object according to the sticker inquiry instruction, and loading the sticker to the corresponding sub-target object in the image to be processed.
In the embodiment of the present application, obtaining the sticker corresponding to each sub-target object according to the sticker query instruction may mean querying a local database according to the sticker query instruction, or querying a third-party server according to the sticker query instruction, and then loading the obtained sticker onto the corresponding sub-target object in the image to be processed.
Optionally, obtaining the sticker corresponding to each sub-target object according to the sticker query instruction includes: sending the sticker query instruction to a server; and receiving the sticker corresponding to each sub-target object that the server queried according to the sticker query instruction.
For example, a sticker query instruction carrying the feature information of a sub-target object is sent to a server. After receiving the sticker query instruction, the server uses the carried feature information to search a pre-stored correspondence list between feature information and sticker information, and thereby obtains the sticker corresponding to the sub-target object with that feature information.
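The query round-trip above can be sketched with both sides in one place. The correspondence table, its field names, and the URL are hypothetical stand-ins for whatever the server actually stores; a real client would send the request over the network rather than call the handler directly:

```python
import json

# Hypothetical server-side correspondence table mapping a feature key
# to sticker information.
STICKER_TABLE = {
    "red-sneaker": {"brand": "ExampleBrand", "price": "299",
                    "purchase_link": "https://example.com/item/1"},
}

def handle_query(request_json: str) -> str:
    """Server side: look the feature key up in the correspondence table."""
    req = json.loads(request_json)
    return json.dumps({"sub_target_id": req["sub_target_id"],
                       "sticker": STICKER_TABLE.get(req["feature_key"])})

def query_sticker(sub_target_id: int, feature_key: str) -> dict:
    """Client side: build the sticker query instruction and 'send' it
    (a direct call stands in for the network round-trip)."""
    request = json.dumps({"sub_target_id": sub_target_id,
                          "feature_key": feature_key})
    return json.loads(handle_query(request))

result = query_sticker(0, "red-sneaker")
```

A key with no entry in the table simply yields an empty sticker, which the client can skip when loading.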
In the embodiments of the present application, the target object contained in the image to be processed and one or more sub-target objects attached to the target object are detected, feature information of each sub-target object is extracted, and a sticker query instruction carrying the feature information is generated for each sub-target object. The sticker corresponding to each sub-target object is then queried according to the sticker query instruction and loaded onto the corresponding sub-target object in the image to be processed, completing the sticker processing of the one or more sub-target objects in the image. Because the sticker added to each sub-target object is obtained by querying with the feature information of that sub-target object rather than being prepared in advance, the situation in which the same sticker is used in different images and weakens the expressive effect of the sticker can be avoided.
Optionally, as shown in Fig. 2, generating a sticker query instruction corresponding to each sub-target object according to the feature information in step 103 may include steps 201 to 202.
Step 201, acquiring the type of sticker selected by the user; the types of stickers include price stickers, brand stickers, purchase link stickers, and theme description stickers of the target object.
Step 202, generating a sticker query instruction corresponding to each sub-target object according to the feature information and the sticker type.
That is, in addition to the feature information corresponding to the sub-target object, the sticker query instruction also carries the type of sticker selected by the user.
For example, as shown in Fig. 3, when the image to be processed is a person image, detecting the target object contained in the image to be processed and one or more sub-target objects attached to the target object includes: detecting a target human body contained in the image to be processed and one or more sub-target objects attached to the target human body. The sub-target objects may be clothes, hats, shoes, trousers, backpacks, glasses, necklaces, earrings, rings, bracelets, mobile phones, watches, and the like attached to the person.
Because a target object has many attached sub-target objects, and each sub-target object has multiple items of attribute information, when the person image is subjected to sticker processing, the type of sticker selected by the user on the sticker loading interface 31 must first be acquired, and then a sticker query instruction corresponding to each sub-target object is generated according to the sticker type and the feature information of the sub-target object.
For example, as shown in Fig. 4, if the sticker types selected by the user in the sticker loading interface are the price sticker 41 and the purchase link sticker 42, a sticker query instruction carrying the feature information corresponding to each sub-target object, the price sticker type, and the purchase link sticker type is generated. When the price and the purchase link corresponding to a sub-target object are found, they are loaded onto the corresponding sub-target object in the person image in the form of a price sticker 41' and a purchase link sticker 42'.
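Steps 201 to 202 can be sketched as follows. The field names and the set of allowed type strings are hypothetical; the point is only that the instruction carries the selected sticker types alongside the feature information:

```python
def make_query_instruction(feature_info: dict, selected_types: list) -> dict:
    """Combine a sub-target object's feature information with the sticker
    types the user selected on the sticker loading interface."""
    allowed = {"price", "brand", "purchase_link", "theme_description"}
    unknown = set(selected_types) - allowed
    if unknown:
        raise ValueError(f"unknown sticker types: {sorted(unknown)}")
    return {"features": feature_info,
            "sticker_types": list(selected_types)}

instr = make_query_instruction({"color": "red", "shape": "sneaker"},
                               ["price", "purchase_link"])
```

The server can then restrict its lookup to the requested types, so only price and purchase-link stickers come back for this sub-target object.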
Optionally, as shown in Fig. 5, loading the sticker onto the corresponding sub-target object in the image to be processed in step 104 includes steps 501 to 502.
Step 501, if the number of stickers is greater than a preset threshold, loading the sticker marks corresponding to the stickers onto the corresponding sub-target objects in the image to be processed.
Step 502, when a sticker mark triggering instruction is received, loading the sticker corresponding to the sticker mark triggering instruction onto the corresponding sub-target object in the image to be processed; the sticker mark triggering instruction is triggered by the user clicking the sticker mark.
The preset threshold may be a value set by a user, or may be a value set at factory setting.
In the embodiment of the present application, when the number of stickers is greater than the preset threshold, the sticker marks corresponding to the stickers, rather than the stickers themselves, are loaded onto the corresponding sub-target objects in the image to be processed, so that the image is not blocked by stickers. Only when the user clicks a sticker mark is the sticker corresponding to the sticker mark triggering instruction loaded onto the corresponding sub-target object, which makes it easier for the user to view the image to be processed.
As shown in Fig. 6, the sticker marks may be dots or small circles of different colors, used to indicate different types of stickers, for example, a sticker mark 61 indicating a price sticker and a sticker mark 62 indicating a purchase link sticker.
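Steps 501 to 502 amount to a small piece of display logic. The sketch below uses a hypothetical overlay structure: when the sticker count exceeds the threshold only compact marks are planned, and a clicked mark is replaced by its full sticker:

```python
def plan_overlay(stickers: list, threshold: int) -> list:
    """Decide what to draw: full stickers when few, marks when many."""
    kind = "mark" if len(stickers) > threshold else "sticker"
    return [{"kind": kind, "sticker_id": s["id"]} for s in stickers]

def on_mark_clicked(overlay: list, sticker_id: int) -> list:
    """Handle a sticker mark triggering instruction: expand the clicked
    mark into its full sticker, leaving the other marks unchanged."""
    return [{"kind": "sticker", "sticker_id": sticker_id}
            if item["sticker_id"] == sticker_id else item
            for item in overlay]

stickers = [{"id": i} for i in range(4)]
overlay = plan_overlay(stickers, threshold=3)     # 4 > 3: marks only
overlay = on_mark_clicked(overlay, sticker_id=2)  # expand one sticker
```

With a higher threshold (or fewer stickers) `plan_overlay` would return full stickers directly and no marks would be needed.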
Optionally, acquiring the image to be processed includes: acquiring a preview frame image during camera preview. Correspondingly, detecting the target object contained in the image to be processed and the one or more sub-target objects attached to the target object includes: during camera preview, performing structured light detection on the object to be photographed, combined with depth-of-field detection, to obtain the target object among the objects to be photographed and the one or more sub-target objects attached to it, thereby obtaining the target object contained in the preview frame image and the one or more sub-target objects attached to the target object.
In the embodiment of the present application, a preview frame image is obtained while the camera is in shooting preview. During the preview, structured light detection is performed on the object to be photographed and combined with depth-of-field detection to obtain the target object and the one or more sub-target objects attached to it, so that the sub-target objects in the preview frame image can be subjected to sticker loading.
For example, when a user encounters a passer-by with a distinctive clothing collocation, the user can take that person as the object of a photographing preview and obtain information such as the brand, theme description, and price of the clothing the person is wearing; the information is loaded onto the preview frame image in the form of stickers for the user's reference, so that the user can capture an appealing clothing collocation in real time.
Structured light detection includes detection methods such as the moire fringe method and the stereoscopic vision method. Specifically, the moire fringe method uses two sets of gratings, a main grating and a reference grating; the main grating on the contour surface is observed through the reference grating, and the contour surface shape of the object is calculated from the fringe pattern. This method has the characteristic of high measurement accuracy. The stereoscopic vision method uses imaging devices to acquire two images of the measured object from different positions and obtains three-dimensional geometric information of the object by calculating the positional deviation between corresponding points of the images; it is not easily affected by factors such as the surface properties of the object (for example, material color) or background light.
In the embodiment of the application, structured light is used to detect the target object among the objects to be shot and the one or more sub-target objects attached to it, and this is combined with depth-of-field detection to obtain the target object and its attached sub-target objects. Objects not attached to the target object can thus be effectively excluded, making the detection result for the sub-target objects more accurate.
For example, an object that is at the same depth of field as the target object and whose region intersects with the target object is taken as a sub-target object.
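A minimal sketch of this filtering rule, assuming hypothetical detection results that carry a bounding box and a per-object depth estimate (the `Detection` class, the tolerance value, and the example objects are all illustrative assumptions, not from the patent):

```python
from dataclasses import dataclass

# Hypothetical detection result: each detected object has a bounding box
# and an average depth estimated from the structured-light depth map.
@dataclass
class Detection:
    box: tuple        # (x1, y1, x2, y2) in pixels
    depth: float      # average depth in meters

def boxes_intersect(a, b):
    """True if two (x1, y1, x2, y2) boxes overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def select_sub_targets(target, candidates, depth_tol=0.3):
    """Keep candidates at roughly the same depth as the target whose
    bounding boxes intersect the target's box."""
    return [c for c in candidates
            if abs(c.depth - target.depth) <= depth_tol
            and boxes_intersect(c.box, target.box)]

person = Detection(box=(100, 50, 300, 600), depth=2.0)
coat   = Detection(box=(120, 150, 280, 400), depth=2.1)  # worn by the person
lamp   = Detection(box=(400, 0, 450, 600), depth=2.0)    # same depth, not touching
bag    = Detection(box=(150, 350, 250, 500), depth=5.0)  # overlapping box, far behind

subs = select_sub_targets(person, [coat, lamp, bag])
print([s.box for s in subs])  # only the coat qualifies
```

The two-condition test mirrors the rule in the text: the lamp is rejected for not intersecting, and the background bag for failing the depth check.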
Optionally, after the sticker is loaded onto the corresponding sub-target object in the image to be processed, the method further includes: saving the image to be processed, on which sticker loading has been completed, as an image file in the Exif file format.
Here, Exif (Exchangeable Image File Format) is an image file format in which, when the image data is stored, photograph shooting information such as the aperture, shutter speed, white balance, ISO, focal length, and date and time, as well as information such as the camera brand and model, color coding, sound recorded at the time of shooting, GPS (global positioning system) data, and a thumbnail, can be inserted into the data header.
In the embodiment of the application, the image on which sticker loading has been completed is saved as an image file in the Exif file format, so that the sticker information in that image is also saved. For example, the purchase link contained in a purchase-link sticker is saved, so that the user can later link to the corresponding purchase web page by clicking on the purchase-link sticker in the image.
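One way such sticker information could be embedded in the image metadata is to reuse the Exif UserComment field (tag 0x9286), whose payload begins with an 8-byte character-code prefix. The sketch below only shows encoding and decoding that payload as JSON; writing the tag into an actual JPEG would be done with an Exif library such as piexif or Pillow. The sticker contents and URL are illustrative, not from the patent.

```python
import json

EXIF_TAG_USER_COMMENT = 0x9286  # Exif "UserComment" tag

def encode_sticker_comment(stickers):
    """Serialize sticker metadata to an Exif UserComment payload.
    The field starts with an 8-byte character-code prefix; 'ASCII\\0\\0\\0'
    marks the remainder as ASCII text."""
    payload = json.dumps(stickers, separators=(",", ":"))
    return b"ASCII\x00\x00\x00" + payload.encode("ascii")

def decode_sticker_comment(raw):
    """Recover the sticker metadata from a UserComment payload."""
    assert raw[:8] == b"ASCII\x00\x00\x00"
    return json.loads(raw[8:].decode("ascii"))

stickers = [{"object": "coat", "type": "purchase_link",
             "url": "https://example.com/item/123"}]
raw = encode_sticker_comment(stickers)
print(decode_sticker_comment(raw)[0]["url"])  # https://example.com/item/123
```

On tap, the viewer would decode this field and open the stored URL, which is what makes the purchase link survive the round trip to disk.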
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combined actions; however, those skilled in the art will understand that the present application is not limited by the described order of actions, since some steps may be performed in another order or simultaneously.
Fig. 7 shows a schematic structural diagram of a sticker loading apparatus 700 according to an embodiment of the present application, including an acquisition unit 701, a detection unit 702, a generation unit 703, and a loading unit 704.
an acquiring unit 701, configured to acquire an image to be processed;
a detection unit 702, configured to detect a target object contained in the image to be processed and one or more sub-target objects attached to the target object;
a generating unit 703, configured to extract feature information of each sub-target object and generate a sticker query instruction corresponding to each sub-target object according to the feature information;
and a loading unit 704, configured to obtain the sticker corresponding to each sub-target object according to the sticker query instruction and load the sticker onto the corresponding sub-target object in the image to be processed.
In some embodiments of the present application, the generating unit 703 is specifically configured to: acquire the sticker type selected by the user, where the sticker types include price stickers, brand stickers, purchase link stickers, and theme description stickers of target objects; and generate a sticker query instruction corresponding to each sub-target object according to the feature information and the sticker type.
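One possible shape for such a sticker query instruction, assuming the feature information is a small attribute dictionary (the type names, field names, and example values are illustrative assumptions, not defined by the patent):

```python
# Hypothetical sketch: build a sticker query instruction from the extracted
# feature information and the user-selected sticker types.
STICKER_TYPES = {"price", "brand", "purchase_link", "theme_description"}

def build_query(sub_target_id, features, selected_types):
    unknown = set(selected_types) - STICKER_TYPES
    if unknown:
        raise ValueError(f"unsupported sticker types: {unknown}")
    return {
        "sub_target": sub_target_id,
        "features": features,        # e.g. attributes or a descriptor vector
        "sticker_types": sorted(selected_types),
    }

query = build_query("coat-1",
                    {"category": "coat", "color": "navy"},
                    ["brand", "price"])
print(query["sticker_types"])  # ['brand', 'price']
```

Carrying the selected types in the instruction lets the server answer with only the requested sticker kinds instead of everything it knows about the object.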
In some embodiments of the present application, the loading unit 704 is specifically configured to: if the number of stickers is greater than a preset threshold, load the sticker marks corresponding to the stickers onto the corresponding sub-target object in the image to be processed; and, when a sticker mark triggering instruction is received, load the sticker corresponding to the sticker mark triggering instruction onto the corresponding sub-target object in the image to be processed, where the sticker mark triggering instruction is triggered by the user clicking on a sticker mark.
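A sketch of this threshold behavior, with an assumed preset threshold of 3 and illustrative sticker records (none of these values come from the patent): when many stickers apply to one sub-target object, compact sticker marks are shown first, and a full sticker is loaded only when its mark is tapped.

```python
MARK_THRESHOLD = 3  # assumed preset threshold

def plan_loading(stickers, threshold=MARK_THRESHOLD):
    """Return ('stickers', ...) to load full stickers directly, or
    ('marks', ...) to load tappable sticker marks instead."""
    if len(stickers) > threshold:
        return ("marks", [s["type"] for s in stickers])
    return ("stickers", stickers)

def on_mark_tapped(stickers, mark_index):
    """Triggered by the user clicking a sticker mark: load that sticker."""
    return stickers[mark_index]

few  = [{"type": "price", "text": "$120"}]
many = [{"type": t, "text": t} for t in
        ("price", "brand", "purchase_link", "theme_description")]

print(plan_loading(few)[0])             # stickers
print(plan_loading(many)[0])            # marks
print(on_mark_tapped(many, 1)["type"])  # brand
```

Deferring the full sticker until a mark is tapped keeps the preview frame readable when an object matches many stickers at once.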
In some embodiments of the present application, the acquiring unit 701 is specifically configured to acquire a preview frame image during a preview process of a camera.
Correspondingly, the detection unit 702 is specifically configured to perform structured light detection on the object to be shot during the camera preview and, in combination with depth-of-field detection, obtain the target object in the object to be shot and the one or more sub-target objects attached to the target object, so as to obtain the target object contained in the preview frame image and the one or more sub-target objects attached to it.
Accordingly, the detection unit 702 is further specifically configured to detect a target human body contained in the image to be processed and one or more sub-target objects attached to the target human body.
Optionally, the generating unit 703 is specifically configured to send the sticker query instruction to a server and to receive the sticker corresponding to each sub-target object queried by the server according to the sticker query instruction.
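The server-side lookup against a pre-stored correspondence list of feature information and sticker information might be sketched as follows; the lookup key, list contents, and brand/price values are all illustrative assumptions rather than anything specified by the patent:

```python
# Hypothetical server side: a pre-stored correspondence list mapping
# feature information to sticker information, consulted for each query.
CORRESPONDENCE_LIST = {
    ("coat", "navy"): {"brand": "ExampleBrand", "price": "$120",
                       "purchase_link": "https://example.com/item/123"},
    ("hat", "red"):   {"brand": "OtherBrand", "price": "$25"},
}

def handle_query(query):
    """Look up the sub-target's features and return the requested stickers."""
    key = (query["features"]["category"], query["features"]["color"])
    info = CORRESPONDENCE_LIST.get(key)
    if info is None:
        return []  # no sticker found for this sub-target object
    return [{"type": t, "content": info[t]}
            for t in query["sticker_types"] if t in info]

query = {"features": {"category": "coat", "color": "navy"},
         "sticker_types": ["brand", "price"]}
stickers = handle_query(query)
print([s["content"] for s in stickers])  # ['ExampleBrand', '$120']
```

In practice the key would be a learned feature match rather than an exact tuple lookup, but the control flow — receive instruction, search the correspondence list, return the matching stickers — is the same.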
Optionally, the above-mentioned sticker loading device further includes a saving unit, specifically configured to save the image to be processed, on which the sticker loading is completed, as an image file in an Exif file format after loading the sticker onto a corresponding sub-target object in the above-mentioned image to be processed.
It should be noted that, for convenience and brevity of description, the specific working process of the above-described sticker loading apparatus 700 may refer to the corresponding process of the method described in fig. 1 to 6, and will not be repeated here.
As shown in fig. 8, the present application provides a terminal for implementing the above sticker loading method. The terminal may be a mobile terminal such as a smartphone, a tablet computer, a personal computer (PC), or a learning machine, and includes: one or more processors 81 (only one shown in fig. 8), a memory 82, one or more input devices 83 (only one shown in fig. 8), and one or more output devices 84 (only one shown in fig. 8). The processor 81, the memory 82, the input device 83, and the output device 84 are connected by a bus 85.
It should be appreciated that, in embodiments of the present application, the processor 81 may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 83 may include a virtual keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of a fingerprint), a microphone, etc., and the output device 84 may include a display, a speaker, etc.
Memory 82 may include read only memory and random access memory and provides instructions and data to processor 81. Some or all of the memory 82 may also include non-volatile random access memory. For example, the memory 82 may also store information of the device type.
The memory 82 stores a computer program that is executable on the processor 81, for example, a program of a sticker loading method. The steps of the embodiment of the sticker loading method described above, such as steps 101 to 104 shown in fig. 1, are implemented when the processor 81 executes the computer program. Alternatively, the processor 81 may implement the functions of the modules/units in the above-described apparatus embodiments when executing the computer program, for example, the functions of the units 701 to 704 shown in fig. 7.
The computer program may be divided into one or more modules/units, which are stored in the memory 82 and executed by the processor 81 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution process of the computer program in the terminal. For example, the computer program may be divided into an acquisition unit, a detection unit, a generation unit, and a loading unit, each unit functioning as follows: the acquisition unit is configured to acquire an image to be processed; the detection unit is configured to detect a target object contained in the image to be processed and one or more sub-target objects attached to the target object; the generation unit is configured to extract feature information of each sub-target object and generate a sticker query instruction corresponding to each sub-target object according to the feature information; and the loading unit is configured to obtain the sticker corresponding to each sub-target object according to the sticker query instruction and load the sticker onto the corresponding sub-target object in the image to be processed.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or illustrated in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other manners. For example, the apparatus/terminal embodiments described above are merely illustrative, e.g., the division of the modules or units described above is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of each method embodiment. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. A sticker loading method, comprising:
acquiring an image to be processed;
detecting a target object contained in the image to be processed and one or more sub-target objects attached to the target object; the sub-target object is an object attached to the target object and needing to be loaded with a sticker;
extracting feature information of each sub-target object, and generating a sticker query instruction corresponding to each sub-target object according to the feature information;
obtaining the sticker corresponding to each sub-target object according to the sticker query instruction, and loading the sticker onto the corresponding sub-target object in the image to be processed;
the obtaining the sticker corresponding to each sub-target object according to the sticker query instruction includes:
sending the sticker query instruction to a server;
receiving the sticker corresponding to each sub-target object queried by the server according to the sticker query instruction; after receiving the sticker query instruction, the server searches a correspondence list of feature information and sticker information stored in the server in advance, using the feature information of the sub-target object carried in the sticker query instruction, and obtains the sticker corresponding to the sub-target object having that feature information.
2. The sticker loading method according to claim 1, wherein the generating a sticker query instruction corresponding to each sub-target object according to the feature information includes:
acquiring a type of the sticker selected by a user; the sticker types comprise price stickers, brand stickers, purchasing link stickers and theme description stickers of target objects;
and generating a sticker query instruction corresponding to each sub-target object according to the feature information and the sticker type.
3. The sticker loading method according to claim 1 or 2, wherein the loading the sticker onto the corresponding sub-target object in the image to be processed comprises:
if the number of the stickers is larger than a preset threshold, loading the sticker marks corresponding to the stickers onto the corresponding sub-target objects in the image to be processed;
when receiving a sticker mark triggering instruction, loading a sticker corresponding to the sticker mark triggering instruction onto a corresponding sub-target object in the image to be processed; the sticker mark triggering instruction is triggered by a user clicking on the sticker mark.
4. The sticker loading method according to claim 3, wherein the acquiring an image to be processed comprises:
acquiring a preview frame image in the process of previewing by a camera;
correspondingly, the detecting the target object contained in the image to be processed and one or more sub-target objects attached to the target object includes:
in the process of previewing by a camera, performing structured light detection on an object to be shot, and obtaining, in combination with depth-of-field detection, the target object in the object to be shot and one or more sub-target objects attached to the target object, so as to obtain the target object contained in the preview frame image and the one or more sub-target objects attached to the target object.
5. The sticker loading method according to claim 1, wherein the detecting a target object contained in the image to be processed, and one or more sub-target objects attached to the target object, includes:
and detecting a target human body contained in the image to be processed and one or more sub-target objects attached to the target human body.
6. The sticker loading method according to claim 1, wherein after the sticker is loaded onto the corresponding sub-target object in the image to be processed, the method comprises:
saving the image to be processed, on which sticker loading has been completed, as an image file in the Exif file format.
7. A sticker loading apparatus, comprising:
an acquisition unit configured to acquire an image to be processed;
the detection unit is used for detecting a target object contained in the image to be processed and one or more sub-target objects attached to the target object; the sub-target object is an object attached to the target object and needing to be loaded with a sticker;
the generation unit is used for extracting the feature information of each sub-target object and generating a sticker query instruction corresponding to each sub-target object according to the feature information;
the loading unit is used for obtaining the sticker corresponding to each sub-target object according to the sticker query instruction and loading the sticker onto the corresponding sub-target object in the image to be processed;
the obtaining the sticker corresponding to each sub-target object according to the sticker query instruction includes:
sending the sticker query instruction to a server;
receiving the sticker corresponding to each sub-target object queried by the server according to the sticker query instruction; after receiving the sticker query instruction, the server searches a correspondence list of feature information and sticker information stored in the server in advance, using the feature information of the sub-target object carried in the sticker query instruction, and obtains the sticker corresponding to the sub-target object having that feature information.
8. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 6.
CN201810867377.5A 2018-08-01 2018-08-01 Label loading method, label loading device, terminal and computer readable storage medium Active CN109147007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810867377.5A CN109147007B (en) 2018-08-01 2018-08-01 Label loading method, label loading device, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109147007A CN109147007A (en) 2019-01-04
CN109147007B true CN109147007B (en) 2023-09-01

Family

ID=64798765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810867377.5A Active CN109147007B (en) 2018-08-01 2018-08-01 Label loading method, label loading device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109147007B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132859A (en) * 2019-06-25 2020-12-25 北京字节跳动网络技术有限公司 Sticker generation method, apparatus, medium, and electronic device
CN112463268A (en) * 2019-09-06 2021-03-09 北京字节跳动网络技术有限公司 Application data processing method, device, equipment and storage medium
CN111260600B (en) * 2020-01-21 2023-08-22 维沃移动通信有限公司 Image processing method, electronic equipment and medium
CN113473246B (en) * 2020-03-30 2023-09-01 阿里巴巴集团控股有限公司 Method and device for publishing media file and electronic equipment
CN112001872B (en) 2020-08-26 2021-09-14 北京字节跳动网络技术有限公司 Information display method, device and storage medium
CN114598939B (en) * 2022-04-25 2023-07-14 镁佳(北京)科技有限公司 Video watermark adding method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218854A (en) * 2013-04-01 2013-07-24 成都理想境界科技有限公司 Method for realizing component marking during augmented reality process and augmented reality system
CN105095362A (en) * 2015-06-25 2015-11-25 深圳码隆科技有限公司 Image display method and device based on target object
CN105678686A (en) * 2015-12-30 2016-06-15 北京金山安全软件有限公司 Picture processing method and device
CN106777329A (en) * 2017-01-11 2017-05-31 维沃移动通信有限公司 The processing method and mobile terminal of a kind of image information
CN107169135A (en) * 2017-06-12 2017-09-15 广州市动景计算机科技有限公司 Image processing method, device and electronic equipment
CN107343211A (en) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 Method of video image processing, device and terminal device
CN107993131A (en) * 2017-12-27 2018-05-04 广东欧珀移动通信有限公司 Wear to take and recommend method, apparatus, server and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868219B (en) * 2015-01-23 2019-09-17 阿里巴巴集团控股有限公司 A kind of information issuing method and device

Also Published As

Publication number Publication date
CN109147007A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN109147007B (en) Label loading method, label loading device, terminal and computer readable storage medium
CN108764091B (en) Living body detection method and apparatus, electronic device, and storage medium
WO2019134560A1 (en) Method for constructing matching model, clothing recommendation method and device, medium, and terminal
CN108495050B (en) Photographing method, photographing device, terminal and computer-readable storage medium
CN105447047B (en) It establishes template database of taking pictures, the method and device for recommendation information of taking pictures is provided
JP6392114B2 (en) Virtual try-on system
CN110059661A (en) Action identification method, man-machine interaction method, device and storage medium
EP3608838A1 (en) Device and method for identifying items according to attributes of an avatar
CN111649690A (en) Handheld 3D information acquisition equipment and method
CN107920211A (en) A kind of photographic method, terminal and computer-readable recording medium
CN108037823B (en) Information recommendation method, Intelligent mirror and computer readable storage medium
WO2014103441A1 (en) Server device and photographing device
KR20170134256A (en) Method and apparatus for correcting face shape
JP5439787B2 (en) Camera device
KR20170034428A (en) Use of camera metadata for recommendations
KR101085762B1 (en) Apparatus and method for displaying shape of wearing jewelry using augmented reality
JP2016208554A (en) Method and apparatus for color balance correction
CN108810406B (en) Portrait light effect processing method, device, terminal and computer readable storage medium
JP2018084890A (en) Information processing unit, information processing method, and program
KR20140071693A (en) Method for measuring foot size using autofocus
CN108876936B (en) Virtual display method and device, electronic equipment and computer readable storage medium
CN108200335A (en) Photographic method, terminal and computer readable storage medium based on dual camera
CN109360222A (en) Image partition method, device and storage medium
JP4090926B2 (en) Image storage method, registered image retrieval method and system, registered image processing method, and program for executing these methods
CN103856708B (en) The method and photographic device of auto-focusing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant