Data processing method and terminal (CN110168599B)

Info

Publication number
CN110168599B
CN110168599B
Authority
CN
China
Prior art keywords
point cloud
cloud information
sticker
terminal
portrait
Legal status
Active
Application number
CN201780083034.3A
Other languages
Chinese (zh)
Other versions
CN110168599A
Inventor
吴清亮 (Wu Qingliang)
陈绍君 (Chen Shaojun)
陈晓晨 (Chen Xiaochen)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN110168599A
Application granted
Publication of CN110168599B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a data processing method for improving the shooting experience of a user. The method comprises the following steps: a terminal obtains first point cloud information of a standard portrait, where the first point cloud information is used to indicate coordinates of the standard portrait on a display screen of the terminal; the terminal obtains second point cloud information of a sticker, where the second point cloud information is at least part of the first point cloud information and is used to indicate the position of the sticker relative to the standard portrait, the second point cloud information is manually defined, and the sticker is a pattern that can be displayed on the display screen; when the terminal detects a portrait in a shooting preview state, the terminal generates third point cloud information, where the third point cloud information is used to indicate coordinates of the portrait on the display screen in the shooting preview state; and when the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal displays the sticker according to the second point cloud information. The embodiment of the application further provides a terminal for improving the shooting experience of a user.

Description

Data processing method and terminal
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a data processing method and a terminal.
Background
Nowadays, with the continuous progress of science and technology, terminals with photographing functions are increasingly popular. To satisfy personalized photographing demands and make photographing more engaging, a user can select a personalized sticker during photographing and finally obtain a photo formed by combining a human face with the sticker.
In the prior art, after the picture design of a sticker is completed, the initial display position of the sticker on the terminal is defined arbitrarily, using the display resolution of the terminal as a basis. When the terminal detects a human face during shooting, the user can select the sticker, and the terminal then stitches the sticker and the human face together.
However, because the initial display position of the sticker on the terminal is defined arbitrarily, it often differs considerably from the actual display position of the sticker relative to the human face in the actual scene. The terminal therefore often needs to spend much time adjusting the position of the sticker, resulting in poor user experience.
Disclosure of Invention
The embodiment of the application provides a data processing method and a terminal, which are used for improving user experience.
In view of this, a first aspect of the embodiments of the present application provides a data processing method, including:
the terminal acquires first point cloud information of a standard portrait. Point cloud information is a set of vectors in a three-dimensional coordinate system; these vectors are usually expressed in the form of X, Y, and Z three-dimensional coordinates and are generally used mainly to represent the shape of the outer surface of an object. It can be understood that the first point cloud information can indicate the shape and the position of the standard portrait displayed on the terminal;
the terminal can also obtain second point cloud information of a sticker based on the obtained first point cloud information. A sticker is a picture that can be added to the terminal interface for display when the user uses the shooting function. The second point cloud information is at least part of the information in the first point cloud information; it can be understood that the second point cloud information can indicate the position of the sticker on the terminal relative to the standard portrait;
further, when the terminal detects a portrait in the shooting preview state, the terminal generates third point cloud information. The third point cloud information can indicate the shape and display position, on the terminal, of the portrait detected by the terminal in real time in the shooting preview state.
Next, when the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal displays the sticker according to the second point cloud information. It can be understood that the terminal can judge, according to a certain rule, whether a portrait is currently detected, that is, whether the third point cloud information matches the first point cloud information. If they match, indicating that the display shape and display position of the current portrait are close to those of the standard portrait, the terminal can determine that a portrait is detected, and the sticker selected by the user can be displayed on the terminal according to the second point cloud information.
In the embodiment of the application, the terminal may obtain first point cloud information of the standard portrait and second point cloud information of the sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When the terminal detects a portrait, third point cloud information of the portrait may be generated. Further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal may display the sticker according to the second point cloud information. Because the second point cloud information of the sticker is manually formulated according to the content of the sticker (for example, if the content of the sticker is a pair of glasses, the sticker is placed near the eyes of the standard portrait, and if the content of the sticker is a hat, the sticker is placed near the top of the head of the standard portrait), when the third point cloud information matches the first point cloud information, the error between the initial display position of the sticker given by the second point cloud information and the actual display position of the sticker relative to the portrait in the actual scene is small. This saves the time the terminal subsequently spends adjusting the position of the sticker and improves the user experience.
With reference to the first aspect of the embodiment of the present application, in a first implementation manner of the first aspect of the embodiment of the present application, the method further includes:
the terminal selects a target SDK from a Software Development Kit (SDK) set, where all the SDKs in the SDK set have a uniform SDK interface. It can be understood that the SDKs in the SDK set include sticker stitching algorithms, portrait beautifying algorithms, filter shooting algorithms, portrait detection algorithms, and other shooting algorithms developed by third parties, and the camera APP can realize the above shooting functions by calling a third-party SDK.
With reference to the first implementation manner of the first aspect of the embodiment of the present application, in a second implementation manner of the first aspect of the embodiment of the present application, the selecting, by the terminal, the target SDK from the SDK set includes:
the terminal can select the target SDK from the integrated SDK set in an intelligent selection mode. Specifically, each SDK has corresponding SDK parameters, including a parameter for the user skin-beautifying grade, a parameter for the preview algorithm processing rate, and a parameter for the sticker following rate. These three parameters can be substituted into the following formula to obtain a calculation result; the calculation result with the maximum value is selected from the calculation result set, and the SDK corresponding to that calculation result is determined to be the target SDK.
With reference to the second implementation manner of the first aspect of the embodiment of the present application, in a third implementation manner of the first aspect of the embodiment of the present application, the formula includes:
R = α × B + γ × P + β × F;
where R is the calculation result, B is the parameter for the user skin-beautifying grade, P is the parameter for the preview algorithm processing rate, F is the parameter for the sticker following rate, and α, β, and γ are preset weight coefficients.
It should be noted that the preset weight coefficients may differ in different shooting modes. For example, if the requirement for skin beautifying is higher in the photo mode, the value of α may be increased accordingly; if the requirement for the sticker following rate is higher in the video recording mode, the value of β may be increased accordingly. The values of α, β, and γ are not limited here.
With reference to any one of the first implementation manner of the first aspect of the embodiment of the present application to the third implementation manner of the first aspect of the embodiment of the present application, in a fourth implementation manner of the first aspect of the embodiment of the present application, the unified SDK interface definition includes:
downloading and registering the sticker, initializing a shooting interface, previewing the sticker, processing shooting data and ending the process.
With reference to the first aspect of the embodiment of the present application, in a fifth implementation manner of the first aspect of the embodiment of the present application, the method further includes:
the terminal selects a target shooting algorithm from a shooting algorithm set. It can be understood that the terminal can also integrate multiple shooting algorithms directly, that is, the developer of a shooting algorithm discloses the algorithm itself. In this case, it is no longer necessary to invoke an SDK; shooting functions such as portrait beautifying, portrait detection, filter shooting, and sticker stitching can be realized by directly selecting a shooting algorithm. The target shooting algorithm can be selected intelligently by the terminal, or selected by the user according to preference; the specific manner is not limited here.
With reference to the fifth implementation manner of the first aspect of the embodiment of the present application, in a sixth implementation manner of the first aspect of the embodiment of the present application, the target shooting algorithm includes:
a portrait beautifying algorithm, a portrait detection algorithm, a filter shooting algorithm, and/or a sticker stitching algorithm.
A second aspect of the embodiments of the present application provides a terminal, including:
the terminal comprises a first acquisition unit, a second acquisition unit and a display unit, wherein the first acquisition unit is used for acquiring first point cloud information of a standard portrait, and the first point cloud information is used for indicating a coordinate of the standard portrait on a display screen of the terminal;
the second acquisition unit is used for acquiring second point cloud information of the sticker, wherein the second point cloud information is at least part of the first point cloud information, the second point cloud information is used for indicating the position of the sticker relative to a standard portrait, the second point cloud information is artificially formulated, and the sticker is a pattern which can be displayed in the display screen;
the generation unit is used for generating third point cloud information when the terminal detects the portrait in the shooting preview state, wherein the third point cloud information is used for indicating the coordinates of the portrait on the display screen in the shooting preview state;
and the display unit is used for displaying the paster according to the second point cloud information when the third point cloud information is matched with the first point cloud information and the user selects the paster.
With reference to the second aspect of the embodiment of the present application, in a first implementation manner of the second aspect of the embodiment of the present application, the terminal further includes:
a first selection unit, configured to select a target SDK from an SDK set, where all the SDKs in the SDK set have a uniform SDK interface, and the target SDK is used to realize the stitching of the portrait and the sticker.
With reference to the first implementation manner of the second aspect of the embodiment of the present application, in the second implementation manner of the second aspect of the embodiment of the present application, the first selecting unit includes:
the acquisition module is used for acquiring an SDK parameter set corresponding to the SDK set, wherein the SDK parameter set comprises a parameter set of the skin-beautifying grade of a user, a parameter set of the processing rate of a preview algorithm and a parameter set of the following rate of the sticker;
a calculation module, configured to substitute the SDK parameter set into a formula to obtain a calculation result set;
a first determining module, configured to determine a target calculation result from the calculation result set, where the target calculation result is the calculation result with the largest value in the calculation result set;
and the second determining module is used for determining the target SDK corresponding to the target calculation result.
With reference to the second aspect of the embodiment of the present application, in a third implementation manner of the second aspect of the embodiment of the present application, the terminal further includes:
a second selection unit, configured to select a target shooting algorithm from a shooting algorithm set, where the target shooting algorithm is used to realize the stitching of the portrait and the sticker.
A third aspect of the embodiments of the present application provides a terminal, including:
the system comprises a processor, a memory, a bus and an input/output interface;
the memory stores program codes;
when the processor calls the program code in the memory, the following operations are executed:
acquiring first point cloud information of a standard portrait, where the first point cloud information is used to indicate coordinates of the standard portrait on a display screen of the terminal;
acquiring second point cloud information of a sticker, where the second point cloud information is at least part of the first point cloud information and is used to indicate the position of the sticker relative to the standard portrait, the second point cloud information is manually formulated, and the sticker is a pattern that can be displayed on the display screen;
when the terminal detects a portrait in the shooting preview state, generating third point cloud information, where the third point cloud information is used to indicate coordinates of the portrait on the display screen in the shooting preview state; and
when the third point cloud information matches the first point cloud information and the user selects the sticker, displaying the sticker according to the second point cloud information.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, where instructions are stored, and when the instructions are executed on a computer, the computer is caused to execute the flow in the data processing method according to the first aspect.
A fifth aspect of embodiments of the present application provides a computer program product, which when run on a computer, causes the computer to execute the flow in the data processing method of the first aspect.
It can be learned from the above technical solutions that the embodiments of the application have the following advantages:
According to the technical solutions, the terminal can obtain first point cloud information of a standard portrait and second point cloud information of a sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When the terminal detects a portrait, third point cloud information of the portrait can be generated. Further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal can display the sticker according to the second point cloud information. The second point cloud information of the sticker is manually determined according to the content of the sticker: for example, if the content of the sticker is a pair of glasses, the sticker is located near the eyes of the standard portrait, and if the content of the sticker is a hat, the sticker is located near the top of the standard portrait. Therefore, when the third point cloud information matches the first point cloud information, the error between the initial display position of the sticker given by the second point cloud information and the actual display position of the sticker relative to the portrait in the actual scene is small, the time for the terminal to subsequently adjust the position of the sticker is saved, and the user experience is improved.
Drawings
Fig. 1 is a schematic view of an application scenario of a data processing method in an embodiment of the present application;
FIG. 2 is a diagram illustrating an embodiment of a data processing method according to an embodiment of the present application;
FIG. 3(a) is a schematic diagram of a standard portrait displayed on a terminal in the embodiment of the present application;
FIG. 3(b) is a schematic diagram of a terminal display of a sticker in an embodiment of the present application;
FIG. 3(c) is a schematic view of a scenario of stitching a portrait with a sticker in an embodiment of the present application;
FIG. 3(d) is a schematic view of another scenario of stitching a portrait with a sticker in an embodiment of the present application;
FIG. 3(e) is a schematic view of another scenario of stitching a portrait with a sticker in an embodiment of the present application;
FIG. 3(f) is a schematic view of another scenario of stitching a portrait with a sticker in an embodiment of the present application;
FIG. 4 is a diagram illustrating another embodiment of a data processing method according to the present application;
fig. 5 is a schematic view of another application scenario of the data processing method in the embodiment of the present application;
FIG. 6 is a schematic view of a scenario in which a user operates a terminal to download a sticker in an embodiment of the present application;
FIG. 7 is a diagram of an embodiment of a terminal in an embodiment of the application;
fig. 8 is a schematic diagram of another embodiment of the terminal in the embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal in an embodiment of the present application.
Detailed Description
The embodiments of the application provide a data processing method and a terminal, which are used to predefine the initial display position of a sticker on the terminal. When a user selects a sticker for shooting, the error between the initial display position of the sticker and its final actual display position is small, the time for the terminal to subsequently adjust the position of the sticker is saved, and the user experience is improved.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
This embodiment can be applied to the application scenario shown in fig. 1. The terminal can communicate with a third-party server that stores sticker materials designed by the third party. When the camera application (APP) in the terminal starts, the third-party SDK integrated in the camera APP can download the third-party sticker materials from the third-party server and register them with the local sticker management module of the camera APP for the user to select. Furthermore, the camera can send captured image data to the camera APP; combined with the sticker material selected by the user, the third-party SDK performs portrait recognition and stitches the sticker with the portrait, finally realizing shooting preview and picture saving.
It can be understood that, in this embodiment, the third-party SDK includes a sticker stitching algorithm, a portrait beautifying algorithm, a filter shooting algorithm, a portrait detection algorithm, and other shooting algorithms developed by the third party, and the camera APP can realize the above shooting functions by calling the third-party SDK.
It should be noted that the camera APP runs on an application processor of the terminal, and the application processor may exchange data with the camera through its own external interface. It can be understood that the format of the image data output by the camera may be a YUV format or another format, for example, an RGB format; this is not limited here.
In addition, the terminal in this embodiment of the application may be a mobile phone, a tablet computer, a wearable device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or other user equipment having a shooting function, which is not limited in this embodiment of the application.
Referring to fig. 2, the data processing method in the embodiment of the present application is described in detail below. An embodiment of the data processing method includes:
201. The terminal obtains first point cloud information of the standard portrait.
In this embodiment, the terminal may obtain first point cloud information of the standard portrait. Point cloud information is a set of vectors in a three-dimensional coordinate system; these vectors are usually expressed in terms of X, Y, and Z three-dimensional coordinates and are generally used mainly to represent the shape of the outer surface of an object. Most point cloud information is generated by 3D scanning devices, such as laser radar (2D/3D), stereo cameras, and time-of-flight cameras, which measure information about a large number of points on the surface of an object in an automated manner and then output the point cloud information in some data file format.
The standard portrait is a manually defined three-dimensional portrait model. It can be understood that the first point cloud information of the standard portrait may be used to represent the outer surface shape of the standard portrait and the display position of the standard portrait on the terminal. The standard portrait may be a model of a standard head, or a model including a standard head and a standard body; this is not limited here.
It should be understood that fig. 3(a) is only an example; in practical applications, the standard portrait does not necessarily need to be displayed on the terminal interface according to the first point cloud information. The first point cloud information of the standard portrait is represented in the form of three-dimensional coordinates, as shown in Table 1 below, which lists part of the three-dimensional coordinates in the first point cloud information of the standard portrait. In practical applications, the number and values of the coordinates may vary and are not limited here.
TABLE 1
X coordinate Y coordinate Z coordinate
-3.560004 58.369194 0.284571
-2.719511 58.207195 1.034474
-1.569996 58.049370 1.726545
-4.797817 58.374855 -0.640903
-0.730695 57.884747 2.066551
-0.000000 57.821045 2.139235
-0.658006 57.523785 2.290249
-1.431864 57.719910 2.049603
0.000220 57.451229 2.333996
-2.444017 60.435867 0.135746
-2.659698 60.948441 -0.729706
-4.007832 58.662563 -0.737348
0.000000 66.146385 4.642503
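To make the representation concrete, the following is a minimal Java sketch of point cloud information as a set of X/Y/Z coordinates, populated with a few values from Table 1. The class and method names are illustrative assumptions, not part of the claimed method.

```java
// Minimal sketch (illustrative only): point cloud information as a set of
// X/Y/Z coordinates. The class names are assumptions; the patent does not
// prescribe a concrete data structure.
import java.util.ArrayList;
import java.util.List;

final class Point3D {
    final double x, y, z;
    Point3D(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

final class PointCloud {
    private final List<Point3D> points = new ArrayList<>();
    void add(Point3D p) { points.add(p); }
    List<Point3D> points() { return points; }
    int size() { return points.size(); }
}

final class StandardPortrait {
    // First point cloud information of the standard portrait, using a few
    // of the coordinates listed in Table 1.
    static PointCloud firstPointCloud() {
        PointCloud pc = new PointCloud();
        pc.add(new Point3D(-3.560004, 58.369194, 0.284571));
        pc.add(new Point3D(-2.719511, 58.207195, 1.034474));
        pc.add(new Point3D(-1.569996, 58.049370, 1.726545));
        return pc;
    }
}
```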
202. The terminal acquires second point cloud information of the sticker.
In this embodiment, the terminal obtains second point cloud information of the sticker, where the second point cloud information is at least part of the information in the first point cloud information. It can be understood that after the picture design of the sticker is completed, an appropriate set of three-dimensional coordinates needs to be selected from the first point cloud information as the second point cloud information, that is, the position of the sticker relative to the standard portrait is defined. As shown in fig. 3(b), if the sticker picture is a pair of sunglasses, the three-dimensional coordinates near the eyes of the standard portrait can be selected from the first point cloud information as the second point cloud information; the figure shows the display position of the sticker on the terminal according to the second point cloud information.
It can be understood that the second point cloud information of the sticker is manually formulated according to the actual content of the sticker. For example, if the sticker is a hat, the three-dimensional coordinates near the top of the standard portrait in the first point cloud information can be selected as the second point cloud information. In practical applications, different stickers may therefore correspond to the same point cloud information or to different point cloud information; this is not limited here.
It should be noted that a complete set of sticker material includes the sticker picture and the second point cloud information of the sticker. The sticker picture may be a bitmap file (bmp) or another format, for example, a portable network graphics file (png); this is not limited here.
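As a rough illustration of this material structure, the following Java sketch pairs a sticker picture with its second point cloud information, reusing the PointCloud type from the sketch above; the field and type names are assumptions.

```java
// Sketch of a complete set of sticker material: a sticker picture plus the
// second point cloud information, which is a subset of the first point
// cloud information (e.g. coordinates near the eyes for sunglasses).
// Reuses PointCloud from the earlier sketch; names are assumptions.
final class StickerMaterial {
    final String pictureFile;          // e.g. "sunglasses.png" or a .bmp file
    final PointCloud secondPointCloud; // subset of the standard portrait's first point cloud

    StickerMaterial(String pictureFile, PointCloud secondPointCloud) {
        this.pictureFile = pictureFile;
        this.secondPointCloud = secondPointCloud;
    }
}
```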
203. When the terminal detects a portrait in the shooting preview state, the terminal generates third point cloud information.
In this embodiment, the user opens the camera APP to enter the shooting preview state; the camera collects image information, and a preview image is displayed on the display screen of the terminal. The terminal can detect the images collected by the camera in real time. When a portrait appears in the shooting range of the camera, the terminal generates third point cloud information of the portrait. It can be understood that the third point cloud information is also represented in the form of three-dimensional coordinates and is used to represent the actual display shape and display position, on the terminal, of the portrait currently collected by the camera in the shooting preview state. The generation of the third point cloud information is realized by the portrait detection algorithm of the third-party SDK, and the third point cloud information changes in real time as the portrait collected by the camera moves.
204. When the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal displays the sticker according to the second point cloud information.
In this embodiment, when the third point cloud information matches the first point cloud information, the user may select a sticker on the interface of the terminal, and the terminal displays the sticker according to the second point cloud information, so that the previewed portrait and the sticker are displayed simultaneously on the display screen of the terminal, realizing the stitching of the portrait and the sticker. It can be understood that the display of the sticker is realized by the sticker stitching algorithm of the third-party SDK. As shown in fig. 3(c), when the terminal determines that the third point cloud information matches the first point cloud information, the user may select a sticker from the sticker material list on the interface of the terminal, and the terminal displays the sticker on its interface according to the second point cloud information of the sticker. It can be understood that, in practical applications, the third point cloud information of an actual portrait often does not completely coincide with the first point cloud information of the standard portrait. Therefore, depending on how the portrait appears on the display screen, the terminal can zoom or shift the sticker through the sticker stitching algorithm to achieve the stitching of the sticker and the portrait. Specific display scenarios of the stitching of the portrait and the sticker are described below.
It should be noted that the terminal may determine whether the third point cloud information matches the first point cloud information according to the coincidence ratio between them. Specifically, the terminal compares the three-dimensional coordinate set of the third point cloud information with that of the first point cloud information against a preset threshold, for example, 90%: when the number of three-dimensional coordinates coincident between the third point cloud information and the first point cloud information reaches 90% of the total number of three-dimensional coordinates in the first point cloud information, the terminal determines that the third point cloud information matches the first point cloud information. It can be understood that the threshold may also be another value, for example, 95%; this is not limited here. A minimal sketch of this rule is given below.
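The sketch assumes a simple per-coordinate tolerance for "coincident", which the patent does not specify, and reuses the Point3D and PointCloud types from the earlier sketch.

```java
// Sketch of the matching rule: the third point cloud matches the first when
// the number of coincident coordinates reaches a threshold fraction (e.g.
// 90%) of the first point cloud's total. EPS is an assumed tolerance.
final class PointCloudMatcher {
    static final double THRESHOLD = 0.90; // could also be, e.g., 95%
    static final double EPS = 1e-6;       // assumed tolerance for "coincident"

    static boolean matches(PointCloud third, PointCloud first) {
        int coincident = 0;
        for (Point3D p : first.points()) {
            for (Point3D q : third.points()) {
                if (Math.abs(p.x - q.x) < EPS
                        && Math.abs(p.y - q.y) < EPS
                        && Math.abs(p.z - q.z) < EPS) {
                    coincident++;
                    break; // count each coordinate of the first cloud at most once
                }
            }
        }
        return coincident >= THRESHOLD * first.size();
    }
}
```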
Based on the above manner of determining whether the third point cloud information matches the first point cloud information, the terminal may also determine a match in several other scenarios, which are described separately below:
Optionally, as shown in fig. 3(d), when the distance between the camera and the portrait increases, the shape of the portrait displayed on the terminal interface shrinks proportionally, and the third point cloud information may no longer match the first point cloud information directly. However, if the current third point cloud information matches the point cloud information obtained by scaling the first point cloud information by a certain ratio, the terminal may also determine that the third point cloud information matches the first point cloud information. The scaling ratio may be, for example, 30% or 20%, and is not limited here. It can be understood that the sticker stitching algorithm also scales the second point cloud information of the sticker by the same ratio as the first point cloud information, and the terminal displays the sticker according to the scaled second point cloud information to realize the stitching of the sticker and the portrait. The case in which the distance between the camera and the portrait decreases is handled in the same way and is not described again here.
Optionally, as shown in fig. 3(e), when the position of the portrait photographed by the terminal shows an obvious displacement relative to the standard portrait, if the current third point cloud information matches the point cloud information obtained after the first point cloud information undergoes the corresponding displacement, or matches the point cloud information obtained after the first point cloud information is both scaled and correspondingly displaced, it may also be determined that the third point cloud information matches the first point cloud information. In this case, the sticker stitching algorithm applies the corresponding change of the first point cloud information to the second point cloud information of the sticker, and the terminal displays the sticker according to the changed second point cloud information to realize the stitching of the sticker and the portrait, as shown in the sketch after this paragraph.
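The scaled and displaced matching just described might look as follows in Java; the uniform scale factor and translation vector are illustrative assumptions, and the same transform is then applied to the sticker's second point cloud for display. PointCloud, Point3D, and PointCloudMatcher are reused from the earlier sketches.

```java
// Sketch of matching under scaling (fig. 3(d)) and displacement (fig. 3(e)):
// transform the first point cloud, match against the third, and on success
// apply the same transform to the sticker's second point cloud for display.
// The parameters s, dx, dy, dz are illustrative assumptions.
final class PointCloudTransform {
    static PointCloud scaleAndTranslate(PointCloud in, double s,
                                        double dx, double dy, double dz) {
        PointCloud out = new PointCloud();
        for (Point3D p : in.points()) {
            out.add(new Point3D(p.x * s + dx, p.y * s + dy, p.z * s + dz));
        }
        return out;
    }

    static boolean matchesTransformed(PointCloud third, PointCloud first,
                                      double s, double dx, double dy, double dz) {
        return PointCloudMatcher.matches(third,
                scaleAndTranslate(first, s, dx, dy, dz));
    }
}
```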
Optionally, as shown in fig. 3(f), the terminal captures two portraits simultaneously, portrait a and portrait b, and generates corresponding point cloud information a and point cloud information b. The terminal then separately determines whether point cloud information a matches the first point cloud information and whether point cloud information b matches the first point cloud information; the specific determination method is similar to that described for fig. 3(c), 3(d), or 3(e) and is not repeated here. When both point cloud information a and point cloud information b match the first point cloud information, the terminal displays the sticker at the corresponding positions of portrait a and portrait b. It can be understood that if only point cloud information a matches the first point cloud information, the terminal displays the sticker only at the corresponding position of portrait a, and if only point cloud information b matches, the sticker is displayed only at the corresponding position of portrait b. In addition, the number of portraits photographed by the terminal at the same time is not limited to the two illustrated here and depends on the actual application scenario; this is not limited here.
According to the technical solutions, the terminal can obtain first point cloud information of a standard portrait and second point cloud information of a sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When the terminal detects a portrait, third point cloud information of the portrait can be generated. Further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal can display the sticker according to the second point cloud information. The second point cloud information of the sticker is manually determined according to the content of the sticker: for example, if the content of the sticker is a pair of glasses, the sticker is located near the eyes of the standard portrait, and if the content of the sticker is a hat, the sticker is located near the top of the standard portrait. Therefore, when the third point cloud information matches the first point cloud information, the error between the initial display position of the sticker given by the second point cloud information and the actual display position of the sticker relative to the portrait in the actual scene is small, the time for the terminal to subsequently adjust the position of the sticker is saved, and the user experience is improved.
The manner of defining the display of the sticker on the terminal is described above. The data processing method of the application is described in detail below with reference to a specific shooting flow of the terminal in practical applications.
Referring to fig. 4, another embodiment of the data processing method according to the embodiment of the present application includes:
401. the camera APP is started.
In this embodiment, the user operates the terminal to open the camera APP; correspondingly, the camera of the terminal is turned on, and the camera sends the collected image data to the camera APP.
402. A target SDK is selected from the set of SDKs.
In this embodiment, the camera APP may integrate multiple SDKs that include sticker shooting algorithms, and the SDKs integrated by the camera APP all have a uniform SDK interface. It can be understood that the developers providing these SDKs need to develop the corresponding shooting algorithms according to a uniform rule and package them as SDKs.
It should be noted that the unified SDK interface may include the procedures of downloading and registering a sticker, initializing the shooting interface, previewing the sticker, processing shooting data, and ending the process. It should be understood that the SDK interface described here is only an example; the specific SDK interface in practical applications may vary and is not limited here.
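As a sketch of what such a uniform interface could look like in Java, covering the five stages named above; all method names, parameters, and return types are assumptions, since the patent only names the stages.

```java
// Hypothetical uniform SDK interface covering the five stages named above.
// Method names, parameters, and return types are assumptions.
interface ShootingSdk {
    void downloadAndRegisterSticker(String stickerId); // download & register a sticker
    void initShootingInterface();                      // initialize the shooting interface
    void previewSticker(String stickerId);             // sticker preview
    byte[] processShootingData(byte[] frame);          // process shooting data (e.g. a YUV frame)
    void endProcess();                                 // end the process
}
```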
It should be noted that the sticker materials used by all the SDKs integrated in the camera APP are defined according to the first point cloud information of the standard portrait; that is, the sticker materials provided by a third party need to be defined according to the first point cloud information, and correspondingly, the developer needs to develop the sticker stitching algorithm on the basis of this sticker definition, in a manner similar to that described in the embodiment shown in fig. 2, which is not repeated here.
The terminal can select the target SDK from the SDK set integrated by the camera APP in an intelligent selection mode. Specifically, each SDK has corresponding SDK parameters, including a parameter for the user skin-beautifying grade, a parameter for the preview algorithm processing rate, and a parameter for the sticker following rate. These three parameters can be substituted into the following formula to obtain a calculation result; the calculation result with the maximum value is selected from the calculation result set, and the SDK corresponding to that calculation result is determined to be the target SDK.
The formula is R = α × B + γ × P + β × F, where R is the calculation result, B is the parameter for the user skin-beautifying grade, P is the parameter for the preview algorithm processing rate, F is the parameter for the sticker following rate, and α, β, and γ are preset weight coefficients. It should be noted that the preset weight coefficients may differ in different shooting modes. For example, if the requirement for skin beautifying is higher in the photo mode, the value of α may be correspondingly increased; if the requirement for the sticker following rate is higher in the video recording mode, the value of β may be correspondingly increased. The specific values of α, β, and γ are not limited here, and the terminal may set and adjust them as needed through a corresponding built-in algorithm.
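A minimal sketch of this selection step, computing R = α × B + γ × P + β × F for each SDK and returning the one with the largest result; the packaging of the parameters into a map is an assumption for illustration.

```java
// Sketch of intelligent SDK selection: evaluate R = α·B + γ·P + β·F per SDK
// and pick the maximum. The map layout {B, P, F} is an assumption.
import java.util.Map;

final class SdkSelector {
    static String selectTargetSdk(Map<String, double[]> sdkParams, // SDK name -> {B, P, F}
                                  double alpha, double beta, double gamma) {
        String target = null;
        double best = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, double[]> e : sdkParams.entrySet()) {
            double[] v = e.getValue();
            double r = alpha * v[0] + gamma * v[1] + beta * v[2]; // R = α×B + γ×P + β×F
            if (r > best) { best = r; target = e.getKey(); }
        }
        return target;
    }
}
```

For example, a caller in video recording mode could pass a larger β so that SDKs with a better sticker following rate score higher.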
It should be noted that, in this embodiment, the camera APP may integrate multiple shooting algorithms in addition to multiple SDKs, that is, shooting algorithms developed and disclosed by their developers. In such a scenario, the camera APP no longer needs to invoke an SDK and can realize shooting functions such as portrait beautifying, portrait detection, filter shooting, and sticker stitching by directly selecting a shooting algorithm. The shooting algorithm can be selected intelligently by the terminal, or selected by the user according to preference; this is not limited here.
403. Judge whether the sticker selected by the user has been downloaded; if so, execute step 406, and if not, execute step 404.
In this embodiment, the terminal needs to determine whether the sticker selected by the user has been downloaded and registered with the sticker management module in the camera APP.
404. The sticker is downloaded from the server.
In this embodiment, if the sticker selected by the user has not been downloaded, the camera APP downloads it by accessing the server. It can be understood that the server may be the third-party server corresponding to the third-party SDK shown in fig. 1; for example, if the camera APP currently uses the SDK developed by company A, the camera APP may download the sticker from company A's private server.
It should be noted that, in this embodiment, the server may also be a public server. As shown in fig. 5, in addition to downloading stickers from the third-party server, the camera APP may download stickers designed by other third parties from the public server. Because the SDKs integrated by the camera APP all have a uniform interface and all use sticker materials defined according to the first point cloud information of the standard portrait, as long as the third-party sticker materials on the public server are also defined according to the first point cloud information of the standard portrait, whichever SDK the camera APP currently uses can present a sticker from the public server on the interface of the terminal.
For a specific scenario in practical applications, refer to fig. 6. For example, the camera APP currently uses the SDK developed by company A, and sticker A, sticker B, and sticker C in the sticker material list are all stickers designed and published by company A. The current SDK can download new stickers from company A's private server and update the sticker material list according to the user's needs. The user can also choose to download sticker materials published by company A, company B, or company C from the public server by clicking the "+" button on the interface, and add those materials to the sticker material list of the camera APP for use.
405. The sticker is registered to the sticker management module.
In this embodiment, the camera APP registers the sticker downloaded from the server with the local sticker management module for storage, so that the user does not need to download previously used sticker materials again the next time the camera APP is opened for shooting.
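Steps 403 to 405 could be sketched as follows, with all types and method names hypothetical; StickerMaterial reuses the sketch from the embodiment above.

```java
// Sketch of steps 403-405: check whether the selected sticker is already
// registered; if not, download it from the server and register it with the
// local sticker management module. All names are hypothetical.
import java.util.HashMap;
import java.util.Map;

interface StickerServer {
    StickerMaterial download(String stickerId); // private or public server
}

final class StickerManagementModule {
    private final Map<String, StickerMaterial> registered = new HashMap<>();

    void ensureSticker(String stickerId, StickerServer server) {
        if (!registered.containsKey(stickerId)) {                   // step 403
            StickerMaterial material = server.download(stickerId);  // step 404
            registered.put(stickerId, material);                    // step 405: register locally
        }
    }
}
```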
406. Judge whether a portrait is detected; if so, execute step 407; otherwise, repeat step 406.
In this embodiment, if the sticker selected by the user has been downloaded, the camera APP continuously performs portrait detection until it determines that a portrait is currently detected. The description of the portrait detection performed by the camera APP is similar to that of step 203 and step 204 in the embodiment shown in fig. 2 and is not repeated here.
407. The sticker selected by the user is displayed.
In this embodiment, when the camera APP determines that a portrait is detected, the sticker may be displayed on the interface of the terminal according to its second point cloud information. The description of displaying the sticker according to the second point cloud information is similar to that of step 204 in the embodiment shown in fig. 2 and is not repeated here.
408. Stitch the portrait and the sticker.
In this embodiment, the camera APP can adjust the display position or display size of the sticker according to the integrated SDK or sticker stitching algorithm, to finally complete the stitching of the sticker and the portrait.
According to the technical solutions, the terminal can obtain first point cloud information of a standard portrait and second point cloud information of a sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When the terminal detects a portrait, third point cloud information of the portrait can be generated. Further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal can display the sticker according to the second point cloud information. The second point cloud information of the sticker is manually determined according to the content of the sticker: for example, if the content of the sticker is a pair of glasses, the sticker is located near the eyes of the standard portrait, and if the content of the sticker is a hat, the sticker is located near the top of the standard portrait. Therefore, when the third point cloud information matches the first point cloud information, the error between the initial display position of the sticker given by the second point cloud information and the actual display position of the sticker relative to the portrait in the actual scene is small, the time for the terminal to subsequently adjust the position of the sticker is saved, and the user experience is improved.
Secondly, in this embodiment of the application, the camera APP may integrate multiple SDKs or shooting algorithms. The SDKs all have uniform SDK interfaces and all use sticker materials defined according to the first point cloud information of the standard portrait. Therefore, as long as the third-party sticker materials on the public server are also defined according to the first point cloud information of the standard portrait, whichever SDK the camera APP currently uses can present a sticker from the public server on the interface of the terminal. The user is no longer limited to downloading stickers from the private server of a particular third party and may download any published sticker from the public server. The sticker materials selectable by the user are richer, improving the user experience.
The following describes a terminal in an embodiment of the present application:
referring to fig. 7, an embodiment of a terminal in the embodiment of the present application includes:
a first acquisition unit 701, configured to acquire first point cloud information of a standard portrait, where the first point cloud information is used to indicate coordinates of the standard portrait on a display screen of the terminal;
a second acquisition unit 702, configured to acquire second point cloud information of a sticker, where the second point cloud information is at least part of the first point cloud information, the second point cloud information is used to indicate the position of the sticker relative to the standard portrait, the second point cloud information is manually formulated, and the sticker is a pattern that can be displayed on the display screen;
a generation unit 703, configured to generate third point cloud information when the terminal detects a portrait in the shooting preview state, where the third point cloud information is used to indicate coordinates of the portrait on the display screen in the shooting preview state; and
a display unit 704, configured to display the sticker according to the second point cloud information when the third point cloud information matches the first point cloud information and the user selects the sticker.
In the technical solution provided in the embodiment of the present application, the first acquisition unit 701 may obtain first point cloud information of a standard portrait, and the second acquisition unit 702 may obtain second point cloud information of a sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When a portrait is detected, the generation unit 703 may generate third point cloud information of the portrait. Further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the display unit 704 may display the sticker according to the second point cloud information. Because the second point cloud information of the sticker is manually formulated according to the content of the sticker (for example, if the content of the sticker is a pair of glasses, the sticker is located near the eyes of the standard portrait, and if the content of the sticker is a hat, the sticker is located near the top of the head of the standard portrait), when the third point cloud information matches the first point cloud information, the error between the initial display position of the sticker given by the second point cloud information and the actual display position of the sticker relative to the portrait in the actual scene is small. This saves the time the terminal subsequently spends adjusting the position of the sticker and improves the user experience.
For ease of understanding, the terminal in the embodiment of the present application is described in detail below. Referring to fig. 8, another embodiment of the terminal in the embodiment of the present application includes:
the first obtaining unit 801 is configured to obtain first point cloud information of a standard portrait, where the first point cloud information is used to indicate coordinates of the standard portrait on a display screen of the terminal;
the second obtaining unit 802 is configured to obtain second point cloud information of a sticker, where the second point cloud information is at least part of the first point cloud information, the second point cloud information is used to indicate the position of the sticker relative to the standard portrait, the second point cloud information is manually formulated, and the sticker is a pattern that can be displayed on the display screen;
the generating unit 803 is configured to generate third point cloud information when the terminal detects a portrait in a shooting preview state, where the third point cloud information is used to indicate coordinates of the portrait on a display screen in the shooting preview state;
the display unit 804 is configured to display the sticker according to the second point cloud information when the third point cloud information matches the first point cloud information and the sticker is selected by a user.
The first selecting unit 805 is configured to select a target SDK from an SDK set, where all SDKs in the SDK set have a uniform SDK interface, and the target SDK is used to realize the stitching of the portrait and the sticker.
The second selecting unit 806 is configured to select a target shooting algorithm from the shooting algorithm set, where the target shooting algorithm is used to realize the stitching of the portrait and the sticker.
The first selection unit 805 in the embodiment of the present application further includes:
an obtaining module 8051, configured to obtain an SDK parameter set corresponding to the SDK set, where the SDK parameter set includes a parameter set of the user skin-beautifying grade, a parameter set of the preview algorithm processing rate, and a parameter set of the sticker following rate;
the calculating module 8052 is configured to substitute the SDK parameter set into a formula to calculate a calculation result set;
a first determining module 8053, configured to determine a target calculation result from the calculation result set, where the target calculation result is a calculation result with a largest value in the calculation result set;
a second determining module 8054, configured to determine the target SDK corresponding to the target calculation result.
In the above, the terminal in the embodiment of the present application is described from the perspective of modular functional entities. In the following, the terminal in the embodiment of the present application is described from the perspective of hardware processing. Referring to fig. 9, another embodiment of the terminal in the embodiment of the present application includes:
as shown in fig. 9, for convenience of description, only the portions related to the embodiments of the present application are shown, and details of the specific technology are not disclosed, please refer to the method portion of the embodiments of the present application. The terminal may be any terminal including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), a vehicle-mounted computer, etc., taking the terminal as the mobile phone as an example:
fig. 9 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present application. Referring to fig. 9, the handset includes: radio Frequency (RF) circuit 910, memory 920, input unit 930, display unit 940, sensor 950, audio circuit 960, wireless fidelity (WiFi) module 970, processor 980, and power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 9 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 9:
the RF circuit 910 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information of a base station and then processing the received downlink information to the processor 980; in addition, the data for designing uplink is transmitted to the base station. In general, the RF circuit 910 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 920 may be used to store software programs and modules, and the processor 980 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data (such as audio data or a phone book) created according to the use of the mobile phone, and the like. Further, the memory 920 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 930 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a touch panel 931 and other input devices 932. The touch panel 931, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 931 (for example, an operation performed by the user on or near the touch panel 931 using a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection device according to a preset program. Optionally, the touch panel 931 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 980, and it can also receive and execute commands sent by the processor 980. In addition, the touch panel 931 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch panel 931, the input unit 930 may include other input devices 932. Specifically, the other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and on/off keys), a trackball, a mouse, a joystick, and the like.
The display unit 940 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 940 may include a display panel 941; optionally, the display panel 941 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 931 may cover the display panel 941; when the touch panel 931 detects a touch operation on or near it, the touch panel 931 transmits the operation to the processor 980 to determine the type of the touch event, and the processor 980 then provides a corresponding visual output on the display panel 941 according to the type of the touch event. Although in fig. 9 the touch panel 931 and the display panel 941 are shown as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 941 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 941 and/or the backlight when the mobile phone is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tapping); other sensors that may be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not further described here.
The audio circuit 960, a speaker 961, and a microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 may convert received audio data into an electrical signal and transmit it to the speaker 961, which converts the electrical signal into a sound signal for output; conversely, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data; the audio data is output to the processor 980 for processing and is then sent, for example, to another mobile phone through the RF circuit 910, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like; it provides wireless broadband Internet access for the user. Although fig. 9 shows the WiFi module 970, it is understood that the module is not an essential part of the mobile phone and may be omitted as required without changing the essence of the application.
The processor 980 is the control center of the mobile phone; it connects the various parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the mobile phone as a whole. Optionally, the processor 980 may include one or more processing units; preferably, the processor 980 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 980.
The mobile phone further includes a power supply 990 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 980 through a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, which are not described here.
In the embodiment of the present application, the processor 980 included in the terminal further has the following functions:
acquiring first point cloud information of the standard portrait, wherein the first point cloud information is used for indicating the coordinates of the standard portrait on a display screen of the terminal;
acquiring second point cloud information of the sticker, wherein the second point cloud information is at least part of the first point cloud information and is used for indicating the position of the sticker relative to the standard portrait, the second point cloud information is artificially formulated, and the sticker is a pattern which can be displayed in the display screen;
when the terminal detects the portrait in the shooting preview state, generating third point cloud information, wherein the third point cloud information is used for indicating the coordinates of the portrait on the display screen in the shooting preview state;
and when the third point cloud information is matched with the first point cloud information and the user selects the sticker, displaying the sticker according to the second point cloud information.
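As a rough illustration of these functions, the sketch below matches a detected portrait's point cloud against the standard portrait's point cloud and, on a match, maps the sticker's anchor points onto the screen. The point-to-point correspondence, the distance threshold, the least-squares affine fit, and all names are assumptions made for illustration; the embodiment does not prescribe this particular matching algorithm.

```python
import numpy as np

MATCH_THRESHOLD = 10.0  # assumed mean-distance threshold, in pixels

def clouds_match(third_cloud, first_cloud, threshold=MATCH_THRESHOLD):
    """Treat the third point cloud (detected portrait) as matching the first
    point cloud (standard portrait) when the mean point-to-point distance is
    below a threshold. Both are (N, 2) arrays of display-screen coordinates
    assumed to be in the same point order."""
    return np.linalg.norm(third_cloud - first_cloud, axis=1).mean() < threshold

def place_sticker(second_cloud, first_cloud, third_cloud):
    """Map the sticker's anchor points (second point cloud, a subset of the
    first) onto the detected portrait via a least-squares affine fit from the
    standard cloud to the detected cloud."""
    ones = np.ones((first_cloud.shape[0], 1))
    src = np.hstack([first_cloud, ones])                   # (N, 3)
    A, *_ = np.linalg.lstsq(src, third_cloud, rcond=None)  # (3, 2) affine map
    anchors = np.hstack([second_cloud, np.ones((second_cloud.shape[0], 1))])
    return anchors @ A  # sticker anchor coordinates on the display screen
```

Under these assumptions, when clouds_match returns True and the user has selected the sticker, the terminal would render the sticker at the coordinates returned by place_sticker, i.e. display the sticker according to the second point cloud information.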
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part thereof contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (13)

1. A data processing method, comprising:
a terminal acquires first point cloud information of a standard portrait, wherein the first point cloud information is used for indicating the coordinates of the standard portrait on a display screen of the terminal;
the terminal acquires second point cloud information of a sticker, wherein the second point cloud information is at least part of the first point cloud information, the second point cloud information is used for indicating the position of the sticker relative to the standard portrait, the second point cloud information is artificially formulated, and the sticker is a pattern which can be displayed in the display screen;
when the terminal detects a portrait in a shooting preview state, the terminal generates third point cloud information, wherein the third point cloud information is used for indicating the coordinates of the portrait on the display screen in the shooting preview state;
and when the third point cloud information is matched with the first point cloud information and the sticker is selected by a user, the terminal displays the sticker according to the second point cloud information.
2. The method according to claim 1, wherein before the terminal generates the third point cloud information, the method further comprises:
the terminal selects a target SDK from a software development kit (SDK) set, wherein all SDKs in the SDK set have a unified SDK interface, and the target SDK is used for realizing the splicing of the portrait and the sticker.
3. The method of claim 2, wherein the terminal selecting the target SDK from the set of SDKs comprises:
the terminal acquires an SDK parameter set in the SDK set, wherein the SDK parameter set comprises a parameter set of a user skin-beautifying grade, a parameter set of a preview algorithm processing rate and a parameter set of a sticker following rate;
the terminal obtains a calculation result set according to the SDK parameter set;
the terminal determines a target calculation result from the calculation result set, wherein the target calculation result is the calculation result with the largest value in the calculation result set;
and the terminal determines the target SDK corresponding to the target calculation result.
4. The method of claim 3, wherein the obtaining, by the terminal, a set of computation results according to the set of SDK parameters comprises:
the terminal substitutes the SDK parameter set into the following formula to obtain a calculation result set through calculation, wherein the formula is as follows:
R=α*B+γ*P+β*F;
wherein R is a calculation result;
α, β and γ are preset weight coefficients;
B is the parameter of the user skin-beautifying grade;
P is the parameter of the preview algorithm processing rate;
and F is the parameter of the sticker following rate.
5. The method of any of claims 2 to 4, wherein the unified SDK interface comprises:
downloading and registering the sticker, initializing a shooting interface, previewing the sticker, processing shooting data and ending the process.
6. The method according to claim 1, wherein before the terminal generates the third point cloud information, the method further comprises:
and the terminal selects a target shooting algorithm from a shooting algorithm set, and the target shooting algorithm is used for realizing the splicing of the portrait and the sticker.
7. The method of claim 6, wherein the target shooting algorithm comprises:
a portrait beauty algorithm, a portrait detection algorithm, a filter shooting algorithm, and/or a sticker splicing algorithm.
8. A terminal, comprising:
a first acquisition unit, configured to acquire first point cloud information of a standard portrait, wherein the first point cloud information is used for indicating the coordinates of the standard portrait on a display screen of the terminal;
a second acquisition unit, configured to acquire second point cloud information of a sticker, wherein the second point cloud information is at least part of the first point cloud information, the second point cloud information is used for indicating the position of the sticker relative to the standard portrait, the second point cloud information is artificially formulated, and the sticker is a pattern which can be displayed in the display screen;
a generation unit, configured to generate third point cloud information when the terminal detects a portrait in a shooting preview state, wherein the third point cloud information is used for indicating the coordinates of the portrait on the display screen in the shooting preview state of the terminal;
and a display unit, configured to display the sticker according to the second point cloud information when the third point cloud information is matched with the first point cloud information and the sticker is selected by a user.
9. The terminal of claim 8, wherein the terminal further comprises:
a first selection unit, configured to select a target SDK from an SDK set, wherein all SDKs in the SDK set have a unified SDK interface, and the target SDK is used for realizing the splicing of the portrait and the sticker.
10. The terminal of claim 9, wherein the first selecting unit comprises:
an acquisition module, configured to acquire an SDK parameter set corresponding to the SDK set, wherein the SDK parameter set comprises a parameter set of a user skin-beautifying grade, a parameter set of a preview algorithm processing rate, and a parameter set of a sticker following rate;
a calculation module, configured to substitute the SDK parameter set into a formula to obtain a calculation result set through calculation;
a first determining module, configured to determine a target calculation result from the calculation result set, wherein the target calculation result is the calculation result with the largest value in the calculation result set;
and a second determining module, configured to determine the target SDK corresponding to the target calculation result.
11. The terminal of claim 8, wherein the terminal further comprises:
a second selection unit, configured to select a target shooting algorithm from a shooting algorithm set, wherein the target shooting algorithm is used for realizing the splicing of the portrait and the sticker.
12. A terminal, comprising:
a processor, a memory, a bus, and an input/output interface;
the memory has program code stored therein;
when the processor calls the program code in the memory, the following operations are performed:
acquiring first point cloud information of a standard portrait, wherein the first point cloud information is used for indicating the coordinates of the standard portrait on a display screen of the terminal;
acquiring second point cloud information of a sticker, wherein the second point cloud information is at least part of the first point cloud information, the second point cloud information is used for indicating the position of the sticker relative to the standard portrait, the second point cloud information is artificially formulated, and the sticker is a pattern which can be displayed in the display screen;
when the terminal detects a portrait in a shooting preview state, generating third point cloud information, wherein the third point cloud information is used for indicating the coordinates of the portrait on the display screen in the shooting preview state of the terminal;
and when the third point cloud information is matched with the first point cloud information and the user selects the sticker, displaying the sticker according to the second point cloud information.
13. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 7.
CN201780083034.3A 2017-10-13 2017-10-13 Data processing method and terminal Active CN110168599B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/106014 WO2019071562A1 (en) 2017-10-13 2017-10-13 Data processing method and terminal

Publications (2)

Publication Number Publication Date
CN110168599A CN110168599A (en) 2019-08-23
CN110168599B true CN110168599B (en) 2021-01-29

Family

ID=66101123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780083034.3A Active CN110168599B (en) 2017-10-13 2017-10-13 Data processing method and terminal

Country Status (2)

Country Link
CN (1) CN110168599B (en)
WO (1) WO2019071562A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113923355A (en) * 2021-09-30 2022-01-11 上海商汤临港智能科技有限公司 Vehicle, image shooting method, device, equipment and storage medium
CN113936269B (en) * 2021-11-17 2022-07-01 深圳市镭神智能系统有限公司 Method for identifying staying object and method for controlling motor vehicle
CN114501079A (en) * 2022-01-29 2022-05-13 京东方科技集团股份有限公司 Method for processing multimedia data and related device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150039049A (en) * 2013-10-01 2015-04-09 삼성전자주식회사 Method and Apparatus For Providing A User Interface According to Size of Template Edit Frame
CN103632136B (en) * 2013-11-11 2017-03-29 北京天诚盛业科技有限公司 Human-eye positioning method and device
US9519950B2 (en) * 2013-12-20 2016-12-13 Furyu Corporation Image generating apparatus and image generating method
CN105096246B (en) * 2014-05-08 2019-09-17 腾讯科技(深圳)有限公司 Image composition method and system
JP6428183B2 (en) * 2014-11-14 2018-11-28 フリュー株式会社 Photo sticker creation apparatus, photo sticker creation method, and photo sticker creation processing program
CN104778712B (en) * 2015-04-27 2018-05-01 厦门美图之家科技有限公司 A kind of face chart pasting method and system based on affine transformation
CN105551070A (en) * 2015-12-09 2016-05-04 广州市久邦数码科技有限公司 Camera system capable of loading map elements in real time
CN106339201A (en) * 2016-09-14 2017-01-18 北京金山安全软件有限公司 Map processing method and device and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016019715A1 (en) * 2014-08-07 2016-02-11 中兴通讯股份有限公司 Human eye locating method and device and storage medium
CN105678686A (en) * 2015-12-30 2016-06-15 北京金山安全软件有限公司 Picture processing method and device
CN106952221A (en) * 2017-03-15 2017-07-14 中山大学 A kind of three-dimensional automatic Beijing Opera facial mask making-up method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ULSee face tracking technology integrated into LINE Camera animated stickers; Tang Jingjing; Computer & Network; 20170326; p. 79 *

Also Published As

Publication number Publication date
WO2019071562A1 (en) 2019-04-18
CN110168599A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN113132618B (en) Auxiliary photographing method and device, terminal equipment and storage medium
CN109040643B (en) Mobile terminal and remote group photo method and device
CN109361865B (en) Shooting method and terminal
CN108184050B (en) Photographing method and mobile terminal
CN108495029B (en) Photographing method and mobile terminal
WO2018228168A1 (en) Image processing method and related product
CN108038825B (en) Image processing method and mobile terminal
CN107835367A (en) A kind of image processing method, device and mobile terminal
CN109361867B (en) Filter processing method and mobile terminal
CN107846583B (en) Image shadow compensation method and mobile terminal
CN108924412B (en) Shooting method and terminal equipment
CN110062222B (en) Video evaluation method, terminal, server and related products
CN107730460B (en) Image processing method and mobile terminal
CN109495616B (en) Photographing method and terminal equipment
CN111294625B (en) Method, device, terminal equipment and storage medium for combining equipment service capability
JP2016511875A (en) Image thumbnail generation method, apparatus, terminal, program, and recording medium
CN110168599B (en) Data processing method and terminal
CN108718389B (en) Shooting mode selection method and mobile terminal
CN110198413A (en) A kind of video capture method, video capture device and electronic equipment
CN108984143B (en) Display control method and terminal equipment
CN110825897A (en) Image screening method and device and mobile terminal
CN108174109B (en) Photographing method and mobile terminal
CN107330867B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN110807769B (en) Image display control method and device
CN109639981B (en) Image shooting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant