WO2019071562A1 - Data processing method and terminal - Google Patents
Data processing method and terminal
- Publication number
- WO2019071562A1, PCT/CN2017/106014 (CN2017106014W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point cloud
- cloud information
- sticker
- terminal
- portrait
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
Definitions
- the present application relates to the field of terminal technologies, and in particular, to a data processing method and a terminal.
- the coordinates are set based on the display resolution of the terminal to arbitrarily define the initial display position of the sticker in the terminal.
- when the terminal detects a face during shooting, the user can select a sticker.
- the terminal combines the sticker with the face.
- the sticker is a pair of rabbit ears.
- the initial display position of the sticker is near the mouth of the face, and the terminal then adjusts the rabbit ears to the top of the head to complete the stitching.
- the initial display position of an arbitrarily defined sticker in the terminal is often far from the sticker's actual display position relative to the face in the real scene, so the terminal often needs to spend extra time adjusting the position of the sticker, resulting in a poor user experience.
- the embodiment of the present application provides a data processing method and a terminal, which are used to improve user experience.
- the first aspect of the embodiments of the present application provides a data processing method, including:
- the terminal acquires the first point cloud information of the standard portrait
- point cloud information refers to a set of vectors in a three-dimensional coordinate system. These vectors are usually expressed as X, Y, Z coordinates and are generally used to represent the outer surface shape of an object. It can be understood that the first point cloud information indicates the shape and display position of the standard portrait displayed on the terminal;
- the terminal can also obtain the second point cloud information of the sticker based on the obtained first point cloud information.
- the sticker refers to a picture that the user can add to the terminal interface when using the shooting function, and the second point cloud information is at least a part of the first point cloud information. It can be understood that the second point cloud information indicates the position of the sticker on the terminal relative to the standard portrait.
- the second point cloud information is artificially formulated according to the actual content of the sticker, for example,
- the sticker content is a pair of glasses, then the sticker is designed near the eyes of the standard portrait. If the sticker content is a hat, the sticker is designed near the top of the standard portrait;
- when the terminal detects a portrait in the shooting preview state, the terminal generates third point cloud information, which indicates in real time the shape and display position of the portrait displayed by the terminal in the shooting preview state. It can be understood that, unlike the first point cloud information, the third point cloud information is not fixed: it changes as the portrait in the lens changes.
- the terminal displays the sticker according to the second point cloud information.
- the terminal can determine whether a portrait is currently detected according to a certain rule, that is, it judges whether the third point cloud information matches the first point cloud information. If they match, the display shape and display position of the current portrait are similar to those of the standard portrait, so the terminal recognizes that a portrait is detected, and the sticker selected by the user can be displayed on the terminal according to the second point cloud information.
- the terminal may obtain the first point cloud information of the standard portrait and the second point cloud information of the sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When the terminal detects a portrait, the third point cloud information of the portrait may be generated. Further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal displays the sticker according to the second point cloud information.
- because the second point cloud information of the sticker is formulated manually according to the content of the sticker (for example, a glasses sticker is placed near the eyes of the standard portrait, and a hat sticker near the top of the standard portrait), when the third point cloud information matches the first point cloud information, the error between the initial display position given by the second point cloud information and the actual display position of the sticker relative to the portrait in the real scene is small. This saves the time the terminal subsequently spends adjusting the position of the sticker and improves the user experience.
- the method further includes:
- the terminal selects a target SDK from a set of software development kits (SDKs), and all SDKs in the SDK set have a unified SDK interface.
- the SDKs in the SDK set include third-party developed camera stitching algorithms, portrait beauty algorithms, filter shooting algorithms, portrait detection algorithms, and the like; the camera APP can implement the above camera functions by calling a third-party SDK.
- the terminal selects the target SDK from the SDK set, including:
- the terminal can select the target SDK from the integrated SDK set by means of intelligent selection.
- each SDK has corresponding SDK parameters, which include a parameter for the user skin level, a parameter for the preview algorithm processing rate, and a parameter for the sticker followability rate.
- the three parameters can be substituted into the following formula to obtain a calculation result; the calculation result with the largest value is selected from the set of calculation results, and the SDK corresponding to that result is determined as the target SDK.
- the formula is R = α × B + β × P + γ × F, where:
- R is the calculation result
- B is the parameter of the user's skin level
- P is the parameter of the preview algorithm processing rate
- F is the parameter of the sticker followability rate
- α, β, and γ are preset weight coefficients.
- the preset weight coefficients can differ in different shooting modes. For example, in camera mode the requirement on skin color is higher, so the value of α can be increased accordingly; in recording mode the requirement on the sticker followability rate is higher, so the value of γ can be increased accordingly.
- the specific values of α, β, and γ are not limited herein.
- the unified SDK interface definition includes:
- the method further includes:
- the terminal selects the target shooting algorithm from a set of shooting algorithms. It can be understood that the terminal can also integrate multiple shooting algorithms directly, that is, the developers of the shooting algorithms disclose the algorithms themselves. In this scenario an SDK is no longer needed: by directly selecting a shooting algorithm, shooting functions such as portrait beauty, portrait detection, filter shooting, and sticker stitching can be realized.
- the target shooting algorithm can be selected intelligently by the terminal or by the user according to preference, which is not limited herein.
- the target shooting algorithm includes:
- a second aspect of the embodiment of the present application provides a terminal, including:
- a first acquiring unit configured to acquire first point cloud information of the standard portrait, where the first point cloud information is used to indicate coordinates of the standard portrait on the display screen of the terminal;
- a second acquiring unit configured to acquire second point cloud information of the sticker, where the second point cloud information is at least a part of information in the first point cloud information, and the second point cloud information is used to indicate a position of the sticker relative to the standard portrait,
- the second point cloud information is formulated manually, and the sticker is a pattern that can be displayed in the display screen;
- a generating unit configured to generate a third point cloud information when the terminal detects the portrait in the shooting preview state, where the third point cloud information is used to indicate the coordinates of the portrait on the display screen in the shooting preview state;
- a display unit configured to display the sticker according to the second point cloud information when the third point cloud information matches the first point cloud information and the user selects the sticker.
- the terminal further includes:
- the first selection unit is configured to select a target SDK from the SDK set, where all the SDKs in the SDK set have a unified SDK interface, and the target SDK is used to implement the stitching of the portrait and the sticker.
- the first selecting unit includes:
- An obtaining module configured to obtain an SDK parameter set corresponding to the SDK set, where the SDK parameter set includes a parameter set of the user skin level, a parameter set of a preview algorithm processing rate, and a parameter set of a sticker followability rate;
- a calculation module configured to substitute the SDK parameter set into a formula to obtain a calculation result set
- a first determining module configured to determine a target calculation result from the set of calculation results, where the target calculation result is a calculation result with the largest value in the calculation result set;
- the second determining module is configured to determine a target SDK corresponding to the target calculation result.
- the terminal further includes:
- a second selecting unit configured to select a target shooting algorithm from the set of shooting algorithms, where the target shooting algorithm is used to implement the stitching of the portrait and the sticker.
- a third aspect of the embodiment of the present application provides a terminal, including:
- a processor, a memory, a bus, and an input/output interface;
- program code is stored in the memory;
- the first point cloud information is used to indicate the coordinates of the standard portrait on the display screen of the terminal;
- the second point cloud information is at least part of the information in the first point cloud information, the second point cloud information is used to indicate the position of the sticker relative to the standard portrait, and the second point cloud information is formulated manually;
- the sticker is a pattern that can be displayed in the display screen;
- the third point cloud information is generated, and the third point cloud information is used to indicate the coordinates of the portrait on the display screen when the terminal is in the preview state;
- the sticker is displayed according to the second point cloud information.
- a fourth aspect of the embodiments of the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to execute the flow in the data processing method of the first aspect described above.
- a fifth aspect of the embodiments of the present application provides a computer program product, which, when run on a computer, causes the computer to execute the flow in the data processing method of the first aspect described above.
- the embodiments of the present application have the following advantages:
- the terminal may obtain the first point cloud information of the standard portrait and the second point cloud information of the sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When the terminal detects a portrait, the third point cloud information of the portrait may be generated. Further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal displays the sticker according to the second point cloud information. The second point cloud information is formulated manually according to the content of the sticker: for example, a glasses sticker is placed near the eyes of the standard portrait, and a hat sticker near the top of the standard portrait.
- consequently, the error between the initial display position given by the second point cloud information and the actual display position of the sticker relative to the portrait in the real scene is small, which saves the time the terminal subsequently spends adjusting the position of the sticker, thereby improving the user experience.
- FIG. 1 is a schematic diagram of an application scenario of a data processing method according to an embodiment of the present application
- FIG. 2 is a schematic diagram of an embodiment of a data processing method in an embodiment of the present application.
- FIG. 3(a) is a schematic diagram showing a standard portrait displayed on a terminal in the embodiment of the present application;
- FIG. 3(b) is a schematic diagram showing a sticker displayed on a terminal in an embodiment of the present application.
- FIG. 3(c) is a schematic diagram of a scene of the combination of a portrait and a sticker in the embodiment of the present application;
- FIG. 3(d) is a schematic diagram of another scene of the combination of a portrait and a sticker in the embodiment of the present application;
- FIG. 3(e) is a schematic diagram of another scene of the combination of a portrait and a sticker in the embodiment of the present application;
- FIG. 3(f) is a schematic diagram of another scene of the combination of a portrait and a sticker in the embodiment of the present application;
- FIG. 4 is a schematic diagram of another embodiment of a data processing method according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of another application scenario of a data processing method according to an embodiment of the present application.
- FIG. 6 is a schematic diagram of a scenario in which a user operates the terminal to download a sticker according to an embodiment of the present application
- FIG. 7 is a schematic diagram of an embodiment of a terminal in an embodiment of the present application.
- FIG. 8 is a schematic diagram of another embodiment of a terminal according to an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of a terminal in an embodiment of the present application.
- the embodiment of the present application provides a data processing method and a terminal. By pre-defining the initial display position of the sticker in the terminal, the error between the initial display position of the sticker and its final actual display position when the user selects the sticker is small, which saves the time the terminal subsequently spends adjusting the position of the sticker and improves the user experience.
- the embodiment can be applied to the application scenario shown in FIG. 1.
- the terminal can communicate with a third-party server that stores sticker material designed by a third party. When the camera application (APP) in the terminal starts,
- the third-party SDK integrated in the camera APP can download the third-party sticker material from the third-party server and register it in the sticker management module of the camera APP for the user to select. In addition, the camera sends
- the captured image data to the camera APP; combined with the sticker material selected by the user, the third-party SDK performs portrait recognition and combines the sticker with the portrait to finally realize the shooting preview and image saving.
- the third-party SDK includes a third-party developed sticker stitching algorithm, a portrait beauty algorithm, a filter shooting algorithm, and a portrait detection algorithm;
- the camera APP can implement these functions by calling the third-party SDK.
- the camera APP is executed in the application processor of the terminal, and the application processor can perform data interaction with the camera through its external interface.
- the image data format output by the camera may be the YUV format or another format, such as the RGB format, which is not limited herein.
- the terminal in the embodiment of the present application may be a mobile phone, a tablet computer, a wearable device, an augmented reality (AR) or virtual reality (VR) device, a notebook computer, an ultra-mobile
- personal computer (UMPC), a netbook, a personal digital assistant (PDA), or another device with a shooting function, which is not limited in the embodiment of the present application.
- an embodiment of the data processing method in the embodiment of the present application includes:
- the terminal acquires the first point cloud information of the standard portrait.
- the terminal may acquire the first point cloud information of the standard portrait. Point cloud information refers to a set of vectors in a three-dimensional coordinate system. These vectors are usually represented as X, Y, Z coordinates and are generally used to represent the outer surface shape of an object. Most point cloud information is generated by 3D scanning devices, such as lidar (2D/3D), stereo cameras, and time-of-flight cameras. These devices measure a large number of points on the surface of the object in an automated manner and then output the point cloud information as a data file.
- the standard portrait is an artificially defined 3D portrait model. It can be understood that the first point cloud information of the standard portrait can be used to reflect the outer surface shape of the standard portrait and its display position at the terminal.
- the standard portrait can refer to a model of a standard human head, or to a model including a standard human head and a standard human torso, which is not limited herein.
- this embodiment takes the standard portrait as a standard human head, as shown in FIG. 3(a), and the terminal as a mobile phone. The figure shows the display position and display shape of the standard portrait on the terminal interface based on the first point cloud information.
- FIG. 3( a ) is only an example.
- the standard portrait does not have to be displayed on the interface of the terminal according to the first point cloud information. The first point cloud information of the standard portrait, in the form of three-dimensional coordinates,
- can be expressed as shown in Table 1 below.
- Table 1 below shows a part of the three-dimensional coordinates of the first point cloud information of the standard portrait;
- the number and values of the specific coordinates in the table may vary and are not limited herein.
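Since the actual Table 1 is not reproduced here, the following sketch only illustrates the form such first point cloud information can take: a set of (X, Y, Z) coordinate vectors. All coordinate values and region comments are invented for illustration.

```python
import numpy as np

# Illustrative (X, Y, Z) surface points of a standard portrait;
# these values are invented, not the patent's Table 1 data.
first_point_cloud = np.array([
    [0.0,  9.5, 2.1],   # near the top of the head
    [-3.1, 5.0, 4.2],   # near the left eye
    [3.1,  5.0, 4.2],   # near the right eye
    [0.0,  2.0, 5.0],   # near the nose tip
    [0.0, -1.5, 3.8],   # near the mouth
])
print(first_point_cloud.shape)  # (5, 3): five points, three coordinates each
```

A real first point cloud would contain many more points, since it must reflect the whole outer surface shape of the standard portrait.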
- the terminal acquires a second point cloud information of the sticker.
- the terminal acquires the second point cloud information of the sticker, where the second point cloud information is at least a part of the first point cloud information. That is, part of the first point
- cloud information is selected as the second point cloud information, which defines the position of the sticker relative to the standard portrait.
- for example, if the sticker picture is a pair of sunglasses, the
- three-dimensional coordinates in the vicinity of the eyes of the standard portrait in the first point cloud information can be selected as the second point cloud information; the figure shows the display position of the sticker on the terminal according to the second point cloud information.
- the second point cloud information of the sticker is artificially formulated according to the actual content of the sticker.
- the sticker is a hat
- the three-dimensional coordinates near the top of the head of the standard portrait in the first point cloud information can be selected as the second point cloud information. In actual applications, different stickers can correspond to the same point cloud information or to different point cloud information, which is not limited herein.
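The selection of a subset of the first point cloud information as a sticker's second point cloud information can be sketched as below; the region labels, coordinates, and content-to-region mapping are illustrative assumptions, not data from the patent.

```python
# Hypothetical first point cloud information, grouped by portrait region.
FIRST_POINT_CLOUD = {
    "head_top": [(0.0, 9.5, 2.1), (1.0, 9.3, 2.0)],
    "eyes":     [(-3.1, 5.0, 4.2), (3.1, 5.0, 4.2)],
    "mouth":    [(0.0, -1.5, 3.8)],
}

# Mapping from sticker content to the portrait region it is anchored to:
# glasses near the eyes, a hat near the top of the head.
STICKER_REGION = {"glasses": "eyes", "hat": "head_top"}

def second_point_cloud_for(sticker_content):
    """Return the subset of the first point cloud information that serves
    as the sticker's second point cloud information."""
    return FIRST_POINT_CLOUD[STICKER_REGION[sticker_content]]

print(second_point_cloud_for("hat"))  # the two head-top coordinates
```

Two different stickers may map to the same region (and thus the same point cloud information), which matches the "not limited herein" remark above.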
- a complete set of sticker material includes the picture of the sticker and the second point cloud information of the sticker, where the image format of the sticker may be a bitmap file (BMP), a portable network graphics file (PNG), or another format.
- when the terminal detects the portrait in the shooting preview state, the terminal generates third point cloud information.
- the user opens the camera APP to enter the shooting preview state
- the camera can acquire image information, and display a preview image on the terminal display screen
- the terminal can detect the image acquired by the camera in real time; when a portrait appears in the shooting range of the camera,
- the terminal generates the third point cloud information of the portrait.
- the third point cloud information is also represented in the form of three-dimensional coordinates and is used to indicate the actual display shape and actual display position on the terminal of the portrait currently acquired by the camera in the shooting preview state. The third point cloud information is generated according to the portrait detection algorithm of the third-party SDK and changes in real time as the portrait acquired by the camera moves.
- the terminal displays the sticker according to the second point cloud information.
- the user may select a sticker on the interface of the terminal, and the terminal displays the sticker according to the second point cloud information, so that the previewed portrait and the sticker are displayed simultaneously on the display screen of the terminal, achieving the combination of the portrait and the sticker. It can be understood that the display of the sticker is based on the third-party SDK's sticker stitching algorithm. As shown in FIG. 3(c), when the terminal determines that the third point cloud information matches the first point cloud information, the user can select a sticker from the list of sticker materials on the terminal interface, and the terminal displays the sticker on the terminal interface according to the second point cloud information of the sticker.
- in general, the third point cloud information of the actual portrait does not completely coincide with the first point cloud information of the standard portrait. Therefore, according to how the portrait appears on the terminal display screen, the terminal scales or shifts the sticker through the sticker stitching algorithm to realize the stitching of the sticker and the portrait. The display scenes of specific portraits and stickers are described separately below.
- the terminal may determine whether the third point cloud information matches the first point cloud information according to the coincidence ratio of the third point cloud information to the first point cloud information. Specifically, the terminal compares the third point cloud information with the first point cloud information, and if the coincidence ratio reaches a preset threshold,
- the terminal determines that the third point cloud information matches the first point cloud information.
- the threshold may take other values, for example, 95%, which is not limited herein.
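A minimal sketch of such a coincidence-ratio test, assuming two points "coincide" when they lie within a small distance tolerance of each other; the tolerance and threshold values are illustrative, not taken from the patent.

```python
import numpy as np

def clouds_match(third_cloud, first_cloud, threshold=0.95, tol=0.5):
    """Return True when the fraction of third-cloud points that coincide
    (within tol) with some first-cloud point reaches the threshold."""
    third = np.asarray(third_cloud, dtype=float)
    first = np.asarray(first_cloud, dtype=float)
    # pairwise distances: one row per third-cloud point
    dists = np.linalg.norm(third[:, None, :] - first[None, :, :], axis=2)
    coincidence_ratio = (dists.min(axis=1) <= tol).mean()
    return coincidence_ratio >= threshold

standard = [[0.0, 9.5, 2.1], [-3.1, 5.0, 4.2], [3.1, 5.0, 4.2]]
print(clouds_match(standard, standard))                  # full coincidence
print(clouds_match([[50.0, 50.0, 50.0]] * 3, standard))  # no coincidence
```

Lowering the threshold makes detection more tolerant of portraits that differ from the standard portrait.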
- when the size of the captured portrait differs from that of the standard portrait, the third point cloud information may not be identical to the first point cloud information. If the third point cloud information matches the point cloud information obtained by scaling the first point cloud information,
- the terminal may also determine that the third point cloud information matches the first point cloud information. Specifically, the scaling may be 30% or 20%, which is not limited herein.
- in this case, according to the sticker stitching algorithm, the second point cloud information of the sticker is scaled in the same way as the first point cloud information, and the terminal displays the sticker according to the point cloud information obtained by scaling the second point cloud information, realizing the stitching of the sticker and the portrait.
- the judgment manner is the same as the above description, and details are not described herein again.
- when the position of the portrait captured by the terminal has a significant displacement relative to the position of the standard portrait, if the third point cloud information matches the point cloud information obtained after the first point cloud information is shifted by the corresponding displacement,
- or matches the point cloud information obtained after the first point cloud information is scaled and then shifted by the corresponding displacement, it can also be understood that the third point cloud information matches the first point cloud information.
- in this case, according to the sticker stitching algorithm, the second point cloud information of the sticker also changes in accordance with the change of the first point cloud information, and the terminal displays the sticker according to the point cloud information obtained after the second point cloud information changes, realizing the combination of the sticker and the portrait.
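The scaling and displacement applied to the sticker's second point cloud information in these cases can be sketched as a simple affine adjustment; the scale factor, displacement vector, and eye coordinates are illustrative assumptions.

```python
import numpy as np

def adjust_sticker_cloud(second_cloud, scale=1.0, displacement=(0.0, 0.0, 0.0)):
    """Apply the portrait's scale factor and displacement to the sticker's
    second point cloud information before display."""
    pts = np.asarray(second_cloud, dtype=float)
    return pts * scale + np.asarray(displacement, dtype=float)

eye_points = [[-3.1, 5.0, 4.2], [3.1, 5.0, 4.2]]
# portrait appears at 80% of the standard size and shifted on screen
adjusted = adjust_sticker_cloud(eye_points, scale=0.8,
                                displacement=(2.0, -1.0, 0.0))
print(adjusted)  # each point scaled by 0.8 then shifted by (2, -1, 0)
```

Applying the same transform to the sticker as to the portrait keeps the sticker anchored to the correct facial region as the portrait moves or resizes.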
- when the terminal simultaneously captures two portraits, portrait a and portrait b, the terminal generates a point cloud information and b point cloud information corresponding to the two portraits respectively.
- the terminal then determines whether the a point cloud information matches the first point cloud information and whether the b point cloud information matches the first point cloud information.
- the specific judgment manner is similar to those described in the embodiments corresponding to FIG. 3(c), 3(d), or 3(e), and is not described herein again. When both the a point cloud information and the b point cloud information match the first point cloud information, the terminal displays the sticker at the corresponding positions of both portrait a and portrait b.
- if only the a point cloud information matches the first point cloud information, the terminal displays the sticker only at the corresponding position of portrait a; if only the b point cloud information matches the first point cloud information, the terminal displays the sticker only at the corresponding position of portrait b.
- the number of portraits captured by the terminal is not limited to the two in this example and depends on the actual application scenario, which is not limited herein.
- the terminal may obtain the first point cloud information of the standard portrait and the second point cloud information of the sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When the terminal detects a portrait, the third point cloud information of the portrait may be generated. Further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal displays the sticker according to the second point cloud information. The second point cloud information is formulated manually according to the content of the sticker: for example, a glasses sticker is placed near the eyes of the standard portrait, and a hat sticker near the top of the standard portrait.
- consequently, the error between the initial display position given by the second point cloud information and the actual display position of the sticker relative to the portrait in the real scene is small, which saves the time the terminal subsequently spends adjusting the position of the sticker, thereby improving the user experience.
- as shown in FIG. 4, another embodiment of the data processing method in the embodiment of the present application includes:
- the user operates the terminal to open the camera APP, and correspondingly, the camera of the terminal is turned on, and the camera sends the acquired image data to the camera APP.
- the camera APP can integrate multiple SDKs that each include a sticker shooting algorithm, and the SDKs integrated by the camera APP have a unified SDK interface. It can be understood that the developers who provide these SDKs develop the corresponding shooting algorithms and package the SDKs according to uniform rules.
- the unified SDK interface may include processes for downloading and registering a sticker, initializing the shooting interface, performing a sticker preview, processing shooting data, and ending the process. It can be understood that the SDK interface described herein is only an example; the specific SDK interface in an actual application may differ, and is not limited herein.
- the sticker material used by every SDK integrated in the camera APP is defined according to the first point cloud information of the standard portrait. That is to say, sticker material provided by a third party needs to be defined according to the first point cloud information, and correspondingly, the developer needs to develop the sticker fitting algorithm based on that definition of the sticker.
- the specific manner is similar to that described in the embodiment shown in FIG. 2, and details are not described herein again.
- the terminal can select the target SDK from the SDK set integrated by the camera APP by means of intelligent selection.
- each SDK has corresponding SDK parameters, including a parameter for the user skin level, a parameter for the preview algorithm processing rate, and a parameter for the sticker follow rate. These three parameters can then be substituted into the following formula to obtain a calculation result; the calculation result with the largest value is selected from the set of calculation results, and the SDK corresponding to that calculation result is determined as the target SDK.
- where α, β, and γ are preset weight coefficients for the three parameters. It should be noted that the preset weight coefficients can differ in different shooting modes. For example, in camera mode the requirement for skin color is higher, so the value of α can be increased accordingly; in recording mode the requirement for the sticker follow rate is higher, so the value of γ can be increased accordingly. The specific values of α, β, and γ are not limited here.
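- The role of the weight coefficients can be illustrated with a small sketch. The exact formula is not reproduced above, so a weighted sum of the three SDK parameters is assumed here, consistent with the description of the weights; the SDK names and parameter values are hypothetical.

```python
# Assumed form of the selection formula: score = alpha*skin + beta*rate
# + gamma*follow (the patent names three parameters and three weight
# coefficients; the weighted-sum form itself is an assumption).
def select_target_sdk(sdk_params, alpha, beta, gamma):
    """sdk_params: {sdk_name: (skin_level, preview_rate, follow_rate)}.
    Returns the SDK whose weighted score is largest (the target SDK)."""
    scores = {
        name: alpha * skin + beta * rate + gamma * follow
        for name, (skin, rate, follow) in sdk_params.items()
    }
    return max(scores, key=scores.get)

sdks = {
    "sdk_A": (0.9, 0.6, 0.5),   # strong skin processing
    "sdk_B": (0.5, 0.9, 0.9),   # strong preview rate and sticker follow
}
# Camera mode: skin quality weighted most heavily, so sdk_A wins.
print(select_target_sdk(sdks, alpha=0.6, beta=0.2, gamma=0.2))
# Recording mode: follow rate weighted most heavily, so sdk_B wins.
print(select_target_sdk(sdks, alpha=0.2, beta=0.2, gamma=0.6))
```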
- the corresponding algorithm can be built into the terminal and set and adjusted according to actual needs.
- the camera APP can integrate multiple shooting algorithms in addition to multiple SDKs, that is, shooting algorithms developed and published by the developers of those algorithms.
- in that case, the camera APP no longer needs to call an SDK and can directly select a shooting algorithm to implement various shooting functions such as portrait beautification, portrait detection, filter shooting, and sticker fitting.
- the shooting algorithm can be selected intelligently by the terminal, or chosen by the user according to his or her preference, which is not limited here.
- step 403. Determine whether the sticker selected by the user has been downloaded. If yes, execute step 406. If no, perform step 404.
- the terminal needs to determine whether the sticker selected by the user has been downloaded and registered to the sticker management module in the camera APP.
- the camera APP can download the sticker selected by the user by accessing the server.
- the server may be a third-party server corresponding to the third-party SDK shown in FIG. 1 .
- for example, if the camera APP currently uses an SDK developed by company A, the camera APP can download stickers from company A's private server.
- the server may also be a public server.
- the camera APP may also download other third-party designed stickers from the public server.
- because the SDKs integrated by the camera APP have a unified interface and their sticker material is defined according to the first point cloud information of the standard portrait, the third-party sticker material on the public server is also defined according to the first point cloud information of the standard portrait. Therefore, whichever SDK the current camera APP uses can present the stickers on the public server in the interface of the terminal.
- for example, if the camera APP currently uses the SDK developed by company A, and the A sticker, B sticker, and C sticker in the sticker material list are all designed and published by company A, the current SDK can download new stickers and update the sticker material list from company A's private server according to user needs.
- users can also download sticker material published by any company, such as company A, company B, or company C, from the public server by clicking the "+" button on the interface, and add it to the sticker list of the camera APP for use.
- the camera APP registers the sticker downloaded from the server to the local sticker management module for saving.
- when the user opens the camera APP the next time to shoot, there is no need to repeatedly download sticker material that has been used before.
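- The download-and-register flow of steps 403 to 405 can be sketched as follows (a toy cache; all names are hypothetical): a sticker already registered in the local sticker management module is used directly, and otherwise it is downloaded from the server and registered locally so it is not downloaded again later.

```python
class StickerManager:
    """Toy sticker management module: registers downloaded stickers locally
    so they are not re-downloaded the next time the camera APP is opened."""
    def __init__(self, download_fn):
        self._registry = {}           # local sticker management module
        self._download_fn = download_fn

    def is_downloaded(self, sticker_id):
        return sticker_id in self._registry

    def ensure_sticker(self, sticker_id):
        # Step 403: already downloaded and registered? Use it directly.
        if self.is_downloaded(sticker_id):
            return self._registry[sticker_id]
        # Step 404: download from the (private or public) server.
        data = self._download_fn(sticker_id)
        # Step 405: register it in the local module for reuse.
        self._registry[sticker_id] = data
        return data

downloads = []
def fake_server(sticker_id):
    """Stand-in for the private/public sticker server."""
    downloads.append(sticker_id)
    return {"id": sticker_id, "cloud": [(0.5, 0.3)]}

mgr = StickerManager(fake_server)
mgr.ensure_sticker("glasses")
mgr.ensure_sticker("glasses")   # second use is served from the registry
print(downloads)                # the server was contacted only once
```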
- step 406. Determine whether a portrait is detected. If yes, execute step 407. If no, repeat step 406.
- the camera APP will continue to perform portrait detection until it determines that a portrait has been detected. The description of the portrait detection by the camera APP is similar to that of steps 203 and 204 in the embodiment shown in FIG. 2, and details are not described herein again.
- the sticker may be displayed on the interface of the terminal according to its second point cloud information. The description of displaying the sticker according to the second point cloud information is similar to that of step 204 in the embodiment shown in FIG. 2, and details are not described herein again.
- the camera APP can adjust the display position or display size of the sticker according to the integrated SDK or sticker fitting algorithm, and finally complete the fitting of the sticker to the portrait.
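- The adjustment of the sticker's display position and size can be illustrated with a simplified sketch (hypothetical names; a real fitting algorithm in an SDK would also handle rotation and use many more landmark points): the sticker's point cloud, defined relative to the standard portrait, is scaled and translated onto the detected portrait.

```python
def fit_sticker(sticker_cloud, standard_eyes, detected_eyes):
    """Map a sticker point cloud, defined relative to the standard portrait,
    onto a detected portrait by scaling and translating it so that the
    standard eye pair lines up with the detected eye pair."""
    (sx1, sy1), (sx2, sy2) = standard_eyes
    (dx1, dy1), (dx2, dy2) = detected_eyes
    scale = (dx2 - dx1) / (sx2 - sx1)   # ratio of horizontal eye distances
    return [(dx1 + (x - sx1) * scale, dy1 + (y - sy1) * scale)
            for x, y in sticker_cloud]

# A "glasses" sticker defined on the standard portrait, fitted onto a
# detected portrait whose face is larger and shifted down the screen.
standard_eyes = [(0.4, 0.4), (0.6, 0.4)]
detected_eyes = [(0.3, 0.5), (0.7, 0.5)]
glasses = [(0.4, 0.4), (0.5, 0.4), (0.6, 0.4)]
print(fit_sticker(glasses, standard_eyes, detected_eyes))
```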
- the terminal may obtain the first point cloud information of the standard portrait and the second point cloud information of the sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When the terminal detects a portrait, it may generate the third point cloud information of the portrait. Further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal displays the sticker according to the second point cloud information. The second point cloud information of the sticker is manually defined based on the content of the sticker: for example, if the sticker content is a pair of glasses, the sticker is placed near the eyes of the standard portrait; if the sticker content is a hat, the sticker is placed on top of the standard portrait.
- therefore, when the third point cloud information matches the first point cloud information, the error between the initial display position of the sticker, given by its second point cloud information, and the actual display position of the sticker relative to the portrait in the actual scene is small. This saves the time the terminal would subsequently spend adjusting the position of the sticker, thereby improving the user experience.
- the camera APP can integrate multiple SDKs or shooting algorithms; all of the SDKs have a unified SDK interface and are defined according to the first point cloud information of the standard portrait. Therefore, as long as the third-party sticker material on the public server is also defined according to the first point cloud information of the standard portrait, whichever SDK the current camera APP uses can present the stickers on the public server on the interface of the terminal. The user is no longer limited to downloading stickers from a third-party private server and can download any published sticker from the public server, which makes the user's choice of stickers richer and enhances the user experience.
- an embodiment of a terminal in this embodiment of the present application includes:
- a first acquiring unit 701 configured to acquire first point cloud information of a standard portrait, where the first point cloud information is used to indicate coordinates of the standard portrait on a display screen of the terminal;
- a second acquiring unit 702, configured to acquire second point cloud information of the sticker, where the second point cloud information is at least a part of the first point cloud information and is used to indicate the position of the sticker relative to the standard portrait; the second point cloud information is manually defined, and the sticker is a pattern that can be displayed on the display screen;
- a generating unit 703, configured to generate third point cloud information when the terminal detects a portrait in the shooting preview state, where the third point cloud information is used to indicate the coordinates of the portrait on the display screen when the terminal is in the shooting preview state;
- a display unit 704, configured to display the sticker according to the second point cloud information when the third point cloud information matches the first point cloud information and the user selects the sticker.
- in this embodiment, the first acquiring unit 701 may obtain the first point cloud information of the standard portrait, and the second acquiring unit 702 may obtain the second point cloud information of the sticker, where the second point cloud information is used to indicate the position of the sticker relative to the standard portrait. When the terminal detects a portrait, the generating unit 703 may generate the third point cloud information of the portrait; further, if the third point cloud information matches the first point cloud information and the user selects the sticker, the display unit 704 displays the sticker according to the second point cloud information.
- because the second point cloud information is manually defined based on the content of the sticker (for example, if the sticker content is a pair of glasses, the sticker is placed near the eyes of the standard portrait; if the sticker content is a hat, the sticker is placed near the top of the standard portrait), when the third point cloud information matches the first point cloud information, the error between the initial display position of the sticker, given by its second point cloud information, and the actual display position of the sticker relative to the portrait is small. This saves the time the terminal would subsequently spend adjusting the position of the sticker, thereby improving the user experience.
- as shown in FIG. 8, another embodiment of the terminal in this embodiment of the present application includes:
- a first acquiring unit 801 configured to acquire first point cloud information of a standard portrait, where the first point cloud information is used to indicate coordinates of the standard portrait on a display screen of the terminal;
- a second acquiring unit 802, configured to acquire second point cloud information of the sticker, where the second point cloud information is at least a part of the first point cloud information and is used to indicate the position of the sticker relative to the standard portrait; the second point cloud information is manually defined, and the sticker is a pattern that can be displayed on the display screen;
- a generating unit 803, configured to generate third point cloud information when the terminal detects a portrait in the shooting preview state, where the third point cloud information is used to indicate the coordinates of the portrait on the display screen when the terminal is in the shooting preview state;
- a display unit 804, configured to display the sticker according to the second point cloud information when the third point cloud information matches the first point cloud information and the user selects the sticker;
- a first selection unit 805, configured to select a target SDK from the SDK set, where all the SDKs in the SDK set have a unified SDK interface, and the target SDK is used to implement the fitting of the portrait and the sticker;
- a second selection unit 806, configured to select a target shooting algorithm from the set of shooting algorithms, where the target shooting algorithm is used to implement the fitting of the portrait and the sticker.
- the first selection unit 805 in the embodiment of the present application further includes:
- an obtaining module 8051, configured to obtain an SDK parameter set corresponding to the SDK set, where the SDK parameter set includes a parameter set of the user skin level, a parameter set of the preview algorithm processing rate, or a parameter set of the sticker follow rate;
- a calculation module 8052, configured to substitute the SDK parameter set into the formula to obtain a calculation result set;
- a first determining module 8053 configured to determine, from the set of calculation results, a target calculation result, where the target calculation result is a calculation result having the largest value in the calculation result set;
- a second determining module 8054, configured to determine the target SDK corresponding to the target calculation result.
- the terminal in the embodiment of the present application is described above from the perspective of modular functional entities; the terminal in the embodiment of the present application is described below from the perspective of hardware processing.
- the embodiment of the present application further provides another terminal, as shown in FIG. 9. For convenience of description, only the part related to the embodiment of the present application is shown; for specific technical details that are not disclosed, refer to the method part of the embodiment of the present application.
- the terminal can be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point of sales (POS) terminal, or a vehicle-mounted computer. The following description takes a mobile phone as an example.
- FIG. 9 is a block diagram showing a partial structure of a mobile phone related to a terminal provided by an embodiment of the present application.
- the mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, and a processor 980.
- the structure of the handset shown in FIG. 9 does not constitute a limitation on the handset; the handset may include more or fewer components than those illustrated, combine some components, or arrange the components differently.
- the RF circuit 910 can be used for receiving and transmitting signals during information transmission and reception or during a call. Specifically, after receiving downlink information from the base station, the RF circuit delivers it to the processor 980 for processing, and sends uplink data to the base station. Generally, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 can also communicate with the network and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
- the memory 920 can be used to store software programs and modules, and the processor 980 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 920.
- the memory 920 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the mobile phone (such as audio data, phone book, etc.).
- the memory 920 can include a high-speed random access memory, and can also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- the input unit 930 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the handset.
- the input unit 930 may include a touch panel 931 and other input devices 932.
- the touch panel 931, also referred to as a touch screen, can collect touch operations performed by the user on or near it (such as operations performed by the user on or near the touch panel 931 using a finger, a stylus, or any suitable object) and drive the corresponding connecting device according to a preset program.
- the touch panel 931 can include two parts: a touch detection device and a touch controller.
- the touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 980, and can also receive commands sent by the processor 980 and execute them.
- in addition, the touch panel 931 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types.
- the input unit 930 may also include other input devices 932.
- other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
- the display unit 940 can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone.
- the display unit 940 can include a display panel 941.
- the display panel 941 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
- the touch panel 931 can cover the display panel 941. When the touch panel 931 detects a touch operation on or near it, it transmits the operation to the processor 980 to determine the type of the touch event, and the processor 980 then provides a corresponding visual output on the display panel 941 according to the type of the touch event.
- although the touch panel 931 and the display panel 941 are shown in FIG. 9 as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the mobile phone.
- the handset may also include at least one type of sensor 950, such as a light sensor, motion sensor, and other sensors.
- the light sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor may adjust the brightness of the display panel 941 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 941 and/or the backlight when the mobile phone moves close to the ear.
- as one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, can detect the magnitude and direction of gravity. It can be used in applications that identify the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition-related functions (such as a pedometer and tapping). The mobile phone can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, and details are not described herein again.
- An audio circuit 960, a speaker 961, and a microphone 962 can provide an audio interface between the user and the handset.
- the audio circuit 960 can transmit the electrical signal converted from the received audio data to the speaker 961, and the speaker 961 converts it into a sound signal for output. On the other hand, the microphone 962 converts the collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data; the audio data is then output to the processor 980 for processing, sent to another mobile phone via the RF circuit 910, or output to the memory 920 for further processing.
- WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user send and receive e-mails, browse web pages, and access streaming media; it provides the user with wireless broadband Internet access.
- although FIG. 9 shows the WiFi module 970, it can be understood that the module is not an essential part of the mobile phone and can be omitted as needed without changing the essence of the application.
- the processor 980 is the control center of the handset. It connects the various parts of the entire handset using various interfaces and lines, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 920 and invoking the data stored in the memory 920, thereby monitoring the mobile phone as a whole.
- optionally, the processor 980 may include one or more processing units. Preferably, the processor 980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 980.
- the handset also includes a power supply 990 (such as a battery) that supplies power to the various components. Preferably, the power supply can be logically coupled to the processor 980 via a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
- the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
- in the embodiment of the present application, the processor 980 included in the terminal further has the following functions:
- acquiring first point cloud information of a standard portrait, where the first point cloud information is used to indicate the coordinates of the standard portrait on the display screen of the terminal;
- acquiring second point cloud information of a sticker, where the second point cloud information is at least a part of the first point cloud information, is used to indicate the position of the sticker relative to the standard portrait, and is manually defined, and the sticker is a pattern that can be displayed on the display screen;
- generating third point cloud information when a portrait is detected in the shooting preview state, where the third point cloud information is used to indicate the coordinates of the portrait on the display screen when the terminal is in the shooting preview state;
- displaying the sticker according to the second point cloud information when the third point cloud information matches the first point cloud information and the user selects the sticker.
- the disclosed system, apparatus, and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the units is only a logical function division; in actual implementation, there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer readable storage medium. The computer readable storage medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.
- the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Abstract
According to its embodiments, the present invention relates to a data processing method, used to improve a user's image-capture experience, comprising the following steps: a terminal obtains first point cloud information of a standard portrait, the first point cloud information being used to indicate the coordinates of the standard portrait on a display screen of the terminal; the terminal obtains second point cloud information of a sticker, the second point cloud information being at least a part of the first point cloud information, being used to indicate the position of the sticker relative to the standard portrait, and being manually formulated, the sticker being a pattern that can be displayed on the screen; when the terminal detects a portrait in an image-capture preview state, it generates third point cloud information, the third point cloud information being used to indicate the coordinates of the portrait on the screen of the terminal in the image-capture preview state; and when the third point cloud information matches the first point cloud information and the user selects the sticker, the terminal displays the sticker according to the second point cloud information. The embodiments of the present invention also relate to a terminal that improves a user's image-capture experience.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/106014 WO2019071562A1 (fr) | 2017-10-13 | 2017-10-13 | Procédé de traitement de données et terminal |
CN201780083034.3A CN110168599B (zh) | 2017-10-13 | 2017-10-13 | 一种数据处理方法及终端 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/106014 WO2019071562A1 (fr) | 2017-10-13 | 2017-10-13 | Procédé de traitement de données et terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019071562A1 true WO2019071562A1 (fr) | 2019-04-18 |
Family
ID=66101123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/106014 WO2019071562A1 (fr) | 2017-10-13 | 2017-10-13 | Procédé de traitement de données et terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110168599B (fr) |
WO (1) | WO2019071562A1 (fr) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113923355B (zh) * | 2021-09-30 | 2024-08-13 | 上海商汤临港智能科技有限公司 | 一种车辆及图像拍摄方法、装置、设备、存储介质 |
CN113936269B (zh) * | 2021-11-17 | 2022-07-01 | 深圳市镭神智能系统有限公司 | 滞留对象的识别方法及机动车控制方法 |
CN114501079B (zh) * | 2022-01-29 | 2024-10-01 | 京东方科技集团股份有限公司 | 用于对多媒体数据进行处理的方法及相关设备 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104778712A (zh) * | 2015-04-27 | 2015-07-15 | 厦门美图之家科技有限公司 | 一种基于仿射变换的人脸贴图方法和系统 |
US20150206310A1 (en) * | 2013-12-20 | 2015-07-23 | Furyu Corporation | Image generating apparatus and image generating method |
CN105096246A (zh) * | 2014-05-08 | 2015-11-25 | 腾讯科技(深圳)有限公司 | 图像合成方法和系统 |
CN105551070A (zh) * | 2015-12-09 | 2016-05-04 | 广州市久邦数码科技有限公司 | 一种实时加载贴图元素的拍照系统 |
US20160142580A1 (en) * | 2014-11-14 | 2016-05-19 | Furyu Corporation | Photograph sticker creating apparatus, and a method of generating photograph sticker |
CN106339201A (zh) * | 2016-09-14 | 2017-01-18 | 北京金山安全软件有限公司 | 贴图处理方法、装置和电子设备 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150039049A (ko) * | 2013-10-01 | 2015-04-09 | 삼성전자주식회사 | 템플릿 편집 프레임 크기에 따른 사용자 인터페이스 제공 방법 및 그 장치 |
CN103632136B (zh) * | 2013-11-11 | 2017-03-29 | 北京天诚盛业科技有限公司 | 人眼定位方法和装置 |
CN105469018B (zh) * | 2014-08-07 | 2020-03-13 | 中兴通讯股份有限公司 | 一种人眼定位的方法和设备 |
CN105678686B (zh) * | 2015-12-30 | 2019-06-14 | 北京金山安全软件有限公司 | 一种图片处理方法及装置 |
CN106952221B (zh) * | 2017-03-15 | 2019-12-31 | 中山大学 | 一种三维京剧脸谱自动化妆方法 |
-
2017
- 2017-10-13 WO PCT/CN2017/106014 patent/WO2019071562A1/fr active Application Filing
- 2017-10-13 CN CN201780083034.3A patent/CN110168599B/zh active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150206310A1 (en) * | 2013-12-20 | 2015-07-23 | Furyu Corporation | Image generating apparatus and image generating method |
CN105096246A (zh) * | 2014-05-08 | 2015-11-25 | 腾讯科技(深圳)有限公司 | 图像合成方法和系统 |
US20160142580A1 (en) * | 2014-11-14 | 2016-05-19 | Furyu Corporation | Photograph sticker creating apparatus, and a method of generating photograph sticker |
CN104778712A (zh) * | 2015-04-27 | 2015-07-15 | 厦门美图之家科技有限公司 | 一种基于仿射变换的人脸贴图方法和系统 |
CN105551070A (zh) * | 2015-12-09 | 2016-05-04 | 广州市久邦数码科技有限公司 | 一种实时加载贴图元素的拍照系统 |
CN106339201A (zh) * | 2016-09-14 | 2017-01-18 | 北京金山安全软件有限公司 | 贴图处理方法、装置和电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN110168599B (zh) | 2021-01-29 |
CN110168599A (zh) | 2019-08-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17928495 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17928495 Country of ref document: EP Kind code of ref document: A1 |