WO2021258978A1 - Method and apparatus for operation control - Google Patents

Method and apparatus for operation control

Info

Publication number
WO2021258978A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
detected
display form
target virtual
information
Application number
PCT/CN2021/096269
Other languages
English (en)
French (fr)
Inventor
郑华 (Zheng Hua)
丛延东 (Cong Yandong)
周泽新 (Zhou Zexin)
Original Assignee
Beijing ByteDance Network Technology Co., Ltd. (北京字节跳动网络技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing ByteDance Network Technology Co., Ltd. (北京字节跳动网络技术有限公司)
Publication of WO2021258978A1


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20: Input arrangements for video game devices
    • A63F 13/21: Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/213: Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174: Facial expression recognition

Definitions

  • the present disclosure relates to the field of Internet technology, and in particular to a method and device for operation control.
  • the embodiments of the present disclosure provide at least one method and device for operation control.
  • an operation control method including:
  • the display form of the target virtual item is adjusted.
  • the display form includes the shape and/or size of the display.
  • the adjusting the display form of the target virtual prop according to the detected state information of the target part includes:
  • the display form of the target virtual item is adjusted.
  • the adjusting of the display form of the target virtual prop includes:
  • the detected face shape change information of the target user is used to determine the adjustment range of the display form of the target virtual prop;
  • the display form of the target virtual prop is adjusted.
  • adjusting the display form of the target virtual prop according to the detected state information of the target part includes:
  • the display form of the target virtual prop is adjusted.
  • the state attribute of the target part meets a preset state attribute condition, including:
  • the state of the target part conforms to a pouting (puckered-mouth) state.
  • the sound attribute meeting a preset sound attribute condition includes:
  • the volume of the detected sound is greater than a set threshold, and/or the detected sound is of a preset type.
  • the method further includes:
  • the target animation special effect corresponding to the target virtual prop is displayed.
  • displaying the target animation special effect of the target virtual prop includes:
  • displaying the target animation special effect corresponding to the target virtual prop includes:
  • the target animation special effect matching the prop attribute information is displayed.
  • the method further includes:
  • the recorded number of successful operations is updated, and the target virtual prop in the initial state is redisplayed.
  • the method further includes:
  • the target virtual item is generated.
  • the method further includes:
  • the display special effect of the auxiliary virtual prop is changed.
  • the face image of the target user includes face images of multiple target users
  • the target virtual props in the initial form are respectively displayed at the relative position corresponding to the detected position information of each target part on the face image.
  • adjusting the display form of the target virtual prop according to the detected state information of the target part includes:
  • the detected state information of the target part of each of the plurality of target users, together with the face shape change information corresponding to each target user, is used to determine a selected user from the plurality of target users, and the display form of the target virtual prop corresponding to the selected user is adjusted.
  • adjusting the display form of the target virtual prop according to the detected state information of the target part includes:
  • according to the detected state information of the target part of each of the plurality of target users, the display form of the target virtual prop corresponding to each target user is adjusted respectively.
  • the target virtual prop corresponds to a real target operation object in a real scene; the relative position between the target part and the target virtual prop matches the relative position, in the real scene, of the real target operation object with respect to the target part while it is being operated.
  • embodiments of the present disclosure also provide an operation control method, including:
  • the display form of the target virtual prop is adjusted.
  • an operation control device including:
  • the acquisition module is used to acquire the face image of the target user.
  • the detection module is used to detect the position information of the target part in the face image.
  • the display module is configured to display the target virtual item in the initial display form at a relative position corresponding to the detected position information on the face image based on the detected position information.
  • the adjustment module is used to adjust the display form of the target virtual props according to the detected state information of the target part.
  • the display form includes the shape and/or size of the display.
  • the adjustment module is specifically configured to: when it is detected that the state attribute of the target part meets the preset state attribute condition, and it is detected that the sound attribute meets the preset sound attribute condition, adjust the display form of the target virtual prop.
  • the adjustment module is specifically configured to: when the state attribute of the target part meets the preset state attribute condition and the detected sound attribute meets the preset sound attribute condition, determine the adjustment range of the display form of the target virtual prop per unit time according to the detected face shape change information of the target user, and adjust the display form of the target virtual prop according to the determined adjustment range.
  • the adjustment module is specifically configured to: when the state attribute of the target part meets a preset state attribute condition, determine the adjustment range of the display form of the target virtual prop per unit time based on the detected face shape change information of the target user, and adjust the display form of the target virtual prop according to the determined adjustment range.
  • the state of the target part corresponds to a pouting (puckered-mouth) state.
  • the sound attribute meeting the preset sound attribute condition includes: detecting that the size of the sound is greater than a set threshold, and/or detecting that the type of the sound is a preset type of sound.
  • the device further includes: a target animation special effect display module, configured to display the target animation special effect corresponding to the target virtual prop after the display form of the target virtual prop is adjusted to meet a preset condition.
  • the target animation special effect display module is specifically configured to display the target animation special effect in which the virtual balloon is blown or blown off.
  • the target animation special effect display module is specifically configured to display the target animation special effect matching the prop attribute information according to the prop attribute information of the target virtual prop.
  • the device further includes: a counting update module, configured to update the recorded number of successful operations after the display form of the target virtual prop is adjusted to meet a preset condition, and to redisplay the target virtual prop in the initial state.
  • the device further includes: a personalization setting module, configured to obtain a personalized addition object, and to generate the target virtual prop based on the obtained personalized addition object and a preset virtual prop model.
  • the device further includes: an auxiliary virtual prop display module, configured to display auxiliary virtual props in a preset position area on the screen on which the face image is displayed.
  • the auxiliary virtual prop display effect adjustment module is configured to, in response to the display form of the target virtual prop being adjusted to meet a preset condition, change the display special effect of the auxiliary virtual prop.
  • the face image of the target user includes face images of multiple target users; the display module is further configured to, based on the detected position information of the target part of each target user, respectively display the target virtual prop in the initial form at the relative position on the face image corresponding to each target part's detected position information.
  • the adjustment module is further specifically configured to determine a selected user from the multiple target users according to the detected state information of the target part of each of the multiple target users and the face shape change information corresponding to each target user, and to adjust the display form of the target virtual prop corresponding to the selected user.
  • the adjustment module is further specifically configured to adjust, according to the detected state information of the target part of each of the plurality of target users, the display form of the target virtual prop corresponding to each target user respectively.
  • the target virtual prop corresponds to a real target operation object in a real scene; the relative position between the target part and the target virtual prop matches the relative position, in the real scene, of the real target operation object with respect to the target part while it is being operated.
  • an operation control device including:
  • the acquisition module is used to acquire the face image of the target user.
  • the display module is used to display the target virtual props in the initial state according to the acquired face image.
  • the adjustment module is configured to adjust the display form of the target virtual prop according to the facial expression information in the detected facial image and the detected sound information.
  • embodiments of the present disclosure also provide a computer device, including a processor, a memory, and a bus.
  • the memory stores machine-readable instructions executable by the processor.
  • the processor communicates with the memory through the bus.
  • when the machine-readable instructions are executed by the processor, the steps of the above first aspect, of any possible implementation of the first aspect, or of the above second aspect are executed.
  • the embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program which, when run by a processor, executes the steps of the first aspect or of any possible implementation of the first aspect.
  • the embodiments of the present disclosure can realize the real-time control of the display form of the virtual props by the user, realize the coordinated display of the user's face image and the virtual props, and enhance the real experience of operating the virtual props.
  • because the virtual props replace real props, this also saves material costs, protects the environment (less waste from real props), and makes it easy to tally operation results.
  • FIG. 1 shows a flowchart of an operation control method provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of an interface diagram for acquiring a face image provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of an interface display diagram of a target virtual prop in an initial form provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of an interface display diagram of a target virtual prop after adjustment provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of an interface display diagram of a blow-through special effect provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of an interface display diagram of a blowing special effect provided by an embodiment of the present disclosure
  • FIG. 7 shows a schematic diagram of an interface display diagram of an auxiliary virtual prop display special effect provided by an embodiment of the present disclosure
  • FIG. 8 shows a schematic diagram of an interface display diagram of a target virtual prop in an initial form corresponding to a plurality of target users according to an embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of an interface display diagram of adjusted target virtual props corresponding to multiple target users according to an embodiment of the present disclosure
  • FIG. 10 shows a flowchart of another operation control method provided by an embodiment of the present disclosure.
  • FIG. 11 shows a schematic diagram of an operation control device provided by an embodiment of the present disclosure
  • FIG. 12 shows a schematic diagram of another operation control device provided by an embodiment of the present disclosure.
  • FIG. 13 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure
  • FIG. 14 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • the embodiments of the present disclosure provide a method and device for operation control, which allow the user to control the display form of virtual props in real time, realize the coordinated display of the user's face image and the virtual props, and enhance the real experience of operating the virtual props.
  • because the virtual props replace real props, this also saves material costs, protects the environment (less waste from real props), and makes it easy to tally operation results.
  • the embodiments of the present disclosure determine the display position of the target virtual prop according to the position information of the target part, so that the display position of the target virtual prop can conform to the relative position relationship in the real scene, and further enhance the reality experience.
  • the execution subject of the operation control method provided in the embodiment of the present disclosure is generally a computer device with a certain computing capability.
  • the computer equipment includes, for example, terminal equipment or servers or other processing equipment.
  • the terminal equipment may be User Equipment (UE), mobile equipment, user terminal, terminal, cellular phone, cordless phone, personal digital assistant (Personal Digital Assistant, PDA), handheld devices, computing devices, vehicle-mounted devices, wearable devices, etc.
  • the method of operation control may be implemented by a processor invoking computer-readable instructions stored in the memory.
  • FIG. 1 it is a flowchart of an operation control method provided by an embodiment of the present disclosure.
  • the method includes steps S101 to S104, wherein:
  • the face image of the target user can be acquired through the front camera of the terminal device.
  • the front camera will automatically search for and shoot the face image of the target user.
  • the terminal device may be a smart phone, a tablet computer, etc.
  • the specific interface for acquiring the user's face image may include the following parts: the face image; an action prompt instructing the target user to start the game; the shape and style of the next target virtual prop; the number of successful operations; auxiliary virtual props; and trigger buttons indicating user operations such as "My Pet", "Balloon DIY" (Do It Yourself, DIY), and a leaderboard. Among them, the shape and style of the next target virtual prop can remind the user what the next target virtual prop will look like;
  • the number of successful operations indicates how many balloons the user has successfully blown up; the "My Pet" trigger button can instruct the user to perform other operations on the auxiliary virtual props;
  • the "Balloon DIY" trigger button can instruct the target user to select DIY objects they like or are interested in, such as photos and stickers, and design the target virtual prop themselves.
  • the specific interface diagram for acquiring the user's face image is shown in Fig. 2, taking the terminal device as a mobile phone as an example.
  • S102 Detect location information of a target part in the face image.
  • the target part may be the mouth; the position information of the target part is used to indicate the position of the mouth on the terminal screen.
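  • as an illustration of how the detected mouth position can be mapped to a display position for the prop, consider the following minimal Python sketch. The landmark format, the function name, and the offset ratio are assumptions made for the example; the application does not prescribe a particular landmark detector.

```python
def balloon_anchor(mouth_landmarks, offset_ratio=0.6):
    """Compute where to draw the balloon: centred below the mouth.

    mouth_landmarks: list of (x, y) pixel coordinates outlining the mouth,
    as produced by any face-landmark detector (the detector is not shown).
    offset_ratio: how far below the mouth the prop sits, as a fraction of
    the mouth height (an illustrative choice).
    """
    xs = [p[0] for p in mouth_landmarks]
    ys = [p[1] for p in mouth_landmarks]
    cx = sum(xs) / len(xs)            # horizontal centre of the mouth
    mouth_bottom = max(ys)            # y grows downward in image space
    mouth_height = max(ys) - min(ys)
    # The anchor sits slightly below the lower lip.
    return (cx, mouth_bottom + offset_ratio * mouth_height)
```

For a toy mouth outline of three points, `balloon_anchor([(0, 0), (10, 0), (5, 10)])` places the anchor at the mouth's horizontal centre, below its lowest point.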
  • the target virtual prop may correspond to a real target operation object in a real scene; in this case, the relative position between the target part and the target virtual prop may match the relative position, in the real scene, of the real target operation object with respect to the target part while it is being operated.
  • the way the user operates the target virtual item also matches the way the user operates the corresponding real target operating object in the real scene, thereby further enhancing the reality experience.
  • the target virtual item may be a balloon, and the relative position may be below the mouth.
  • the style of the balloon may include a variety of styles, for example, it may include a rabbit style, a donut style, and so on.
  • the target virtual prop display function of S103 may be activated, for example, a virtual balloon is displayed after the user's mouth is detected.
  • the embodiments of the present disclosure may also perform statistical recording of the user's operations on the target virtual item.
  • the user may be given a certain amount of preparation time.
  • a countdown may be started after the user initiates a preset trigger operation (such as pouting), asking the user to get ready; after the countdown ends, timed recording of the user's operations begins.
  • the initial display form is used to indicate the state of the target virtual item in the initial display stage; for example, the target virtual item in the initial display form may be a deflated (uninflated) small balloon.
  • a deflated small balloon is displayed below the mouth.
  • the specific display interface is shown in FIG. 3, and the terminal device is a mobile phone as an example.
  • S104 Adjust the display form of the target virtual item according to the detected state information of the target part.
  • the state information of the target part can be determined by performing feature extraction on the image of the target part.
  • the status information of the target part may include posture information of the target part, such as a pouting mouth.
  • the display form of the virtual balloon can be adjusted under the precondition that the status information of the target part meets the state of pouting.
  • the display form of the target virtual item may include the shape and/or size of the display; for example, the display shape of the virtual balloon may include the shape of a rabbit, a doughnut, etc.; the size is used to indicate the degree of inflation of the balloon, which can be The multiple of the size of the initial display form, for example: 1.5 times the size of the initial display form.
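  • the display form described above (a style plus a size expressed as a multiple of the initial form) can be modelled with a small data structure. This is an illustrative Python sketch only; the style names and the maximum scale are assumptions, not values from the application.

```python
class Balloon:
    """Minimal model of the target virtual prop's display form."""

    def __init__(self, style="rabbit", initial_scale=1.0, max_scale=3.0):
        self.style = style          # display shape, e.g. "rabbit" or "donut"
        self.scale = initial_scale  # size as a multiple of the initial form
        self.max_scale = max_scale  # size ceiling (e.g. the popping point)

    def inflate(self, amount):
        """Grow the balloon, clamped at the maximum inflation size."""
        self.scale = min(self.scale + amount, self.max_scale)

    def reset(self):
        """Return to the initial deflated form."""
        self.scale = 1.0
```

A scale of 1.5 then corresponds to the "1.5 times the size of the initial display form" example in the text.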
  • the display form of the target virtual prop can be adjusted according to the detected state information of the target part and the detected sound information.
  • the state information of the target user’s mouth is determined; the target user's voice data is acquired and processed to determine the corresponding sound information; the display size of the balloon is then adjusted accordingly.
  • the display size of the balloon is adjusted according to the above-mentioned mouth state information and the above-mentioned sound information.
  • the specific description is as follows: when the state attribute of the target part meets the preset state attribute condition, and the detected sound attribute meets the preset sound attribute condition, the display form of the target virtual prop is adjusted.
  • the state attribute may include the posture characteristics of the target part; for example, for a balloon-blowing scene, the state attribute of the mouth includes whether the mouth is pouting or not.
  • the preset state attribute conditions may include a pouting mouth and different pouting amplitudes, such as a slight pout, a pronounced pout, and the like.
  • the state of the target part conforming to the preset state attribute condition may be that the state of the target part conforms to the pouting state.
  • the sound attribute conditions can include sound type, sound volume, and sound duration; for the balloon-blowing scene, the sound type can be divided into blowing sounds and other sounds; the sound volume can be obtained by detecting the target user's sound; the sound duration indicates how long the sound lasts.
  • the preset sound attribute conditions may include: sound type: blowing (the sound type may also be unrestricted); sound volume: greater than or equal to 1 decibel (an example only, not an actual operating threshold); sound duration: greater than or equal to 3 seconds (the duration may also be unrestricted).
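  • one way to obtain the volume and duration attributes above is sketched below in Python. The frame length, reference level, and function names are illustrative assumptions; the application does not specify how the sound is measured.

```python
import math

def sound_level_db(samples, ref=1.0):
    """Root-mean-square level of one audio frame, in decibels.

    samples: raw amplitude values in [-1, 1] for the frame.
    ref: reference amplitude; an RMS equal to ref maps to 0 dB.
    Returns -inf for silence so any threshold comparison fails safely.
    """
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")
    return 20 * math.log10(rms / ref)

def sound_duration(num_frames, frame_ms=20):
    """Duration in seconds of a run of consecutive voiced frames."""
    return num_frames * frame_ms / 1000.0
```

With 20 ms frames, 150 consecutive voiced frames would satisfy the "duration greater than or equal to 3 seconds" example.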
  • the size of the balloon under the mouth is adjusted .
  • the sound attribute meeting the preset sound attribute condition may be: the detected sound size is greater than a set threshold, and/or the detected sound type is a preset type of sound.
  • for example, the sound may meet the preset sound attribute condition when its volume is greater than the set threshold and its type is blowing.
  • when the detected volume of the sound is greater than 1 decibel, the size of the balloon below the mouth is adjusted.
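  • the two-part condition described here (state attribute AND sound attribute) can be written as a simple gate. The threshold and type label are illustrative placeholders, not values from the application:

```python
def should_inflate(mouth_is_pouting, sound_db, sound_type,
                   db_threshold=1.0, required_type="blow"):
    """Gate for adjusting the balloon's display form.

    The balloon is only inflated when BOTH the state attribute condition
    (a pouting mouth) and the sound attribute condition (loud enough,
    and of the blowing type) hold.
    """
    state_ok = mouth_is_pouting
    sound_ok = sound_db > db_threshold and sound_type == required_type
    return state_ok and sound_ok
```

Under this gate, a loud blowing sound without a pouting mouth, or a pout without sound, leaves the balloon unchanged.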
  • the display form of the target virtual prop is adjusted as follows: when the state attribute of the target part meets the preset state attribute condition and the detected sound attribute meets the preset sound attribute condition, the adjustment range of the display form of the target virtual prop per unit time is determined according to the detected face shape change information of the target user; the display form of the target virtual prop is then adjusted according to the determined adjustment range.
  • the face shape change information can be used to indicate the strength of the corresponding action (for example: blowing strength); it can include the face shape change range, that is, the mouth opening range and the cheek bulging range.
  • the face shape change information can affect the inflation speed of the balloon. Specifically: when the mouth opening range and cheek bulge range are larger, i.e. the blowing force is greater, the balloon inflates faster; when the mouth opening and cheek bulge are smaller, i.e. the blowing force is smaller, the balloon inflates more slowly.
  • the mouth opening range and cheek bulge range corresponding to the target user’s face are detected; based on the current target user's mouth opening range and cheek bulge range, together with the relationship determined above between face shape change information and balloon inflation speed, the degree of change of the balloon size per unit time (that is, the degree of balloon inflation) is determined; the display size of the balloon on the terminal screen is then adjusted according to this determined degree of change.
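  • the mapping from face shape change to inflation speed can be sketched as a monotonic function. The base rate, gain, and the assumption that the amplitudes are normalised to [0, 1] are illustrative choices, not specified by the application:

```python
def inflation_rate(mouth_open, cheek_bulge, base_rate=0.05, gain=0.5):
    """Per-unit-time growth of the balloon from face-shape amplitudes.

    mouth_open, cheek_bulge: normalised amplitudes in [0, 1] from the
    detected face shape change; larger values mean harder blowing.
    Returns the scale increment to apply per unit time.
    """
    effort = (mouth_open + cheek_bulge) / 2.0  # proxy for blowing force
    return base_rate + gain * effort
```

A wide-open mouth with bulging cheeks yields the fastest inflation; a slight pout with no sound-driven effort falls back to the base rate.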
  • if the current target user pouts, the sound meets the preset sound attribute conditions, and a large mouth opening and large cheek bulge are detected on the target user's face, the display size of the balloon on the terminal screen is adjusted at a larger expansion speed; the adjusted display interface is shown in Figure 4, taking a mobile phone as the terminal device as an example.
  • the display form of the target virtual prop may be adjusted back to the initial form. That is to say, while the target user is blowing up the balloon, if the target user changes the state information of the mouth (that is, goes from pouting to not pouting), the balloon under the target user's mouth is restored to the initial deflated small-balloon state.
  • alternatively, the display size of the balloon on the terminal screen may be adjusted only according to the detected mouth state information and face shape change information of the target user.
  • the specific description is as follows: when the state attribute of the target part meets the preset state attribute condition, the adjustment range of the display form of the target virtual prop per unit time is determined according to the detected face shape change information of the target user; the display form of the target virtual prop is adjusted according to the determined adjustment range.
  • the mouth opening range and cheek bulge range corresponding to the target user’s face are detected; based on the current target user's mouth opening range and cheek bulge range, together with the relationship determined above between face shape change information and balloon inflation speed, the degree of change of the balloon size per unit time (that is, the degree of balloon inflation) is determined; the display size of the balloon on the terminal screen is adjusted according to this determined degree of change.
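  • the adjustment logic described in the passages above can be summarised as one control-loop step. This is a self-contained illustrative sketch; the thresholds, rate constants, and event labels are assumptions made for the example:

```python
def update_balloon(scale, pouting, sound_db, mouth_open, cheek_bulge,
                   db_threshold=1.0, max_scale=3.0):
    """One frame of the balloon control loop.

    Returns (new_scale, event), where event is "reset" (the user stopped
    pouting, so the balloon returns to the initial deflated form),
    "idle" (pouting but no qualifying sound), "inflating", or "popped"
    (the size limit was reached, triggering the special effect).
    """
    if not pouting:
        return 1.0, "reset"
    if sound_db <= db_threshold:
        return scale, "idle"
    # Inflation speed grows with the face-shape change amplitudes.
    rate = 0.05 + 0.5 * (mouth_open + cheek_bulge) / 2.0
    new_scale = scale + rate
    if new_scale >= max_scale:
        return max_scale, "popped"
    return new_scale, "inflating"
```

The "reset" branch mirrors the pout-to-no-pout behaviour above, and "popped" corresponds to reaching the maximum inflation size.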
  • the embodiments of the present disclosure can realize the real-time control of the display form of the virtual props by the user, realize the fusion display of the user's face image and the display form of the virtual props, and enhance the real experience of operating the virtual props.
  • because the virtual props replace real props, this also saves material costs, protects the environment (less waste from real props), and makes it easy to tally operation results.
  • the relative position between the target part and the target virtual prop matches the relative position, in the real scene, of the real target operation object with respect to the target part while it is being operated, so that operating the virtual prop more closely matches the real scene.
  • the method further includes: after the display form of the target virtual prop is adjusted to meet the preset condition, displaying the target animation special effect corresponding to the target virtual prop.
  • the preset condition refers to the size threshold of the target virtual item; here, it refers to the maximum inflation size of the balloon.
  • the target animation special effects can be blow-through, blow-up, etc.
  • the target animation special effect showing the target virtual prop may be an animation special effect showing that the virtual balloon is blown or blown off.
  • the special effect of the balloon being blown through (popped) or blown off is displayed.
  • while the target user is blowing a balloon, if the terminal device detects that the balloon on the terminal screen has reached its maximum inflation size, and it is further detected that the target user is still pouting and the sound attributes meet the preset sound attribute conditions (that is, the user keeps blowing), the animation special effect of the balloon exploding is displayed on the terminal screen.
  • According to the prop attribute information of the target virtual prop, a target animation special effect matching that attribute information is displayed.
  • The prop attribute information may include the type of the prop and the realistic effect corresponding to each type; the type may be a bomb, a cloud, etc., where the realistic effect corresponding to a bomb is an explosion and the realistic effect corresponding to a cloud is floating.
  • If the target virtual prop is a bomb balloon, it is determined to display the animation effect matching the bomb's realistic effect, that is, the blow-to-burst effect of the bomb balloon is displayed on the terminal screen; the display interface is shown in FIG. 5, taking a mobile phone as an example of the terminal device.
  • If the target virtual prop is a cloud balloon, it is determined to display the animation effect matching the cloud's realistic effect, that is, the blow-away effect of the cloud balloon is displayed on the terminal screen; the display interface is shown in FIG. 6, taking a mobile phone as an example of the terminal device.
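The prop-type-to-effect selection above amounts to a lookup table. The sketch below is illustrative only; the dictionary keys and effect names are assumptions, not identifiers from the patent:

```python
# Hypothetical mapping from prop type to the animation effect that matches
# the prop's real-world behaviour (a bomb explodes, a cloud floats away).
PROP_EFFECTS = {
    "bomb": "blow_through",   # burst/explosion animation
    "cloud": "blow_away",     # floating-off animation
}

def pick_target_effect(prop_type, default="blow_through"):
    """Return the animation effect matching the prop's attribute information."""
    return PROP_EFFECTS.get(prop_type, default)
```

New prop types can be supported by adding entries to the table rather than changing the selection logic.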
  • In a possible implementation, the method further includes: after the display form of the target virtual prop has been adjusted to meet a preset condition, updating the recorded number of successful operations and redisplaying the target virtual prop in its initial state.
  • The number of successful operations may be the number of balloons successfully blown to bursting, i.e. how many balloons the user has burst.
  • The prop attributes of the redisplayed target virtual prop in its initial state may be the same as, or different from, those of the previous target virtual prop in its initial state; prop attributes may include color, shape, type, and so on.
  • For example, while the target user is blowing a balloon and the terminal device detects that the balloon on the terminal screen has reached its maximum inflation size, if it is also detected that the target user is still pouting and the sound attribute meets the preset sound-attribute condition (i.e. the user keeps blowing), the balloon has been blown successfully: the number of successfully burst balloons is updated, and a small deflated balloon is redisplayed below the target user's mouth (its shape, color and type may be the same as or different from the previous balloon).
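The count-and-reset behaviour just described can be sketched as a tiny counter object. The class and attribute names are illustrative assumptions; the dictionary stands in for whatever prop representation a real implementation would use:

```python
class OperationCounter:
    """Tracks successful operations (e.g. balloons burst) and produces a fresh
    prop in its initial state after each success."""

    def __init__(self):
        self.successes = 0

    def on_success(self, new_prop_attrs):
        """Record one success and return a new deflated prop whose colour,
        shape, or type may differ from the previous one."""
        self.successes += 1
        return {"size": "initial", **new_prop_attrs}
```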
  • In a possible implementation, the method further includes: obtaining a personalized added object, and generating the target virtual prop based on the obtained personalized added object and a preset virtual prop model.
  • The personalized added object may be a sticker, a photo, or another do-it-yourself (DIY) object.
  • In a specific implementation, the target user can select DIY objects such as photos or stickers he or she likes or is interested in, and add them to the preset virtual prop model according to preset rules to generate the target virtual prop.
  • For example, the user taps the DIY button on the terminal device and adds a favorite image of Snow White to the balloon prop model, generating a balloon containing the image of Snow White, which is then displayed on the screen of the terminal device.
  • In a possible implementation, the method further includes: displaying an auxiliary virtual prop in a preset position area on the screen displaying the face image; and, in response to the display form of the target virtual prop being adjusted to meet a preset condition, changing the display special effect of the auxiliary virtual prop.
  • The auxiliary virtual prop may be a virtual pet, a virtual character, etc., such as a virtual cat, a virtual dog or a virtual smiling character; the preset position area may be any area outside the area where the face image is located on the terminal screen.
  • The display special effects of the auxiliary virtual prop may include applause, clapping, a thumbs-up, and so on.
  • In a specific implementation, the auxiliary virtual prop is displayed in the preset area on the screen of the terminal device. While the target user is blowing a balloon and the terminal device detects that the balloon on the terminal screen has reached its maximum inflation size, if it is also detected that the target user is still pouting and the sound attribute meets the preset sound-attribute condition (i.e. the user keeps blowing), the user has blown the balloon successfully and the display special effect of the auxiliary virtual prop is adjusted.
  • For example, the preset area on the screen of the terminal device displays the auxiliary virtual prop as a virtual smiling character; after a successful blow, the display effect of the virtual smiling character is adjusted to a thumbs-up effect. The display interface is shown in FIG. 7, taking a mobile phone as an example of the terminal device.
  • When the face image of the target user acquired by the terminal device includes the face images of multiple target users, feature extraction is performed on each target user's face image, each target user's mouth position information is determined from the feature-extraction result, and, based on that mouth position information, a target virtual prop in the initial form (i.e. a deflated balloon) is displayed below each target user's mouth. The display interface is shown in FIG. 8, taking a mobile phone as an example of the terminal device.
  • In this way, the embodiments of the present disclosure can present a multi-person interaction scene; multiple target users can compete for the operation authority over the target virtual prop (each target user can have a corresponding target virtual prop, but only the winner can operate it).
  • In a possible implementation, adjusting the display form of the target virtual prop according to the detected state information of the target part includes: determining a selected user from the multiple target users according to the detected state information of the target part of each of the multiple target users and the face-shape change information corresponding to each target user, and adjusting the display form of the target virtual prop corresponding to the selected user.
  • Alternatively, the display forms can be adjusted for all users: according to the detected state information of the target part of each of the multiple target users, the display form of the target virtual prop corresponding to each target user is adjusted separately.
  • In a specific implementation, when the face image acquired by the terminal device includes the face images of multiple target users, feature extraction is performed on each face image to determine each target user's mouth state information (i.e. whether the mouth is pouted and blowing) and the face-shape change information corresponding to each target user (i.e. the mouth-opening amplitude and the cheek-bulging amplitude), and the size of the balloon below each target user's mouth is adjusted according to that user's mouth state information and face-shape change information.
  • For example, suppose the acquired face image includes the face images of three target users (user a, user b and user c), and feature extraction is performed on all three. The mouth state of user a is determined as: pouting, blowing, sound loudness 2 decibels, sound duration 4 seconds, with relatively large mouth-opening and cheek-bulging amplitudes. The mouth state of user b is: smiling, mouth not open. The mouth state of user c is: pouting, blowing, sound loudness 1.5 decibels, sound duration 3 seconds, with smaller mouth-opening and cheek-bulging amplitudes. Accordingly, the balloon below user a's mouth is adjusted to 4 times its initial size; the balloon below user b's mouth is not adjusted; and the balloon below user c's mouth is adjusted to 2 times its initial size.
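The three-user example above can be sketched as a per-user adjustment rule. The thresholds and multipliers here mirror the example's numbers but are otherwise illustrative assumptions, not the patent's actual heuristic:

```python
def adjust_sizes(users):
    """users maps a name to (is_pouting, is_blowing, amplitude_large).
    Return a size multiplier per user: large mouth/cheek amplitude -> 4x,
    smaller amplitude -> 2x, not blowing at all -> unchanged (1x)."""
    result = {}
    for name, (pouting, blowing, large_amp) in users.items():
        if not (pouting and blowing):
            result[name] = 1          # e.g. a smiling, closed mouth
        else:
            result[name] = 4 if large_amp else 2
    return result
```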
  • In addition, the screen can be recorded as video during the above operation, and the recorded video can be shared through a social APP.
  • Referring to FIG. 10, a flowchart of another operation control method provided by an embodiment of the present disclosure, the method includes steps S1001 to S1003, in which:
  • S1001: Acquire the face image of the target user.
  • S1002: Display the target virtual prop in the initial form according to the acquired face image. Specifically, the position information of the target part in the face image can be detected, and, based on the detected position information, the target virtual prop in the initial form is displayed at the relative position on the face image corresponding to the detected position information.
  • S1003: Adjust the display form of the target virtual prop according to the detected facial expression information in the face image and the detected sound information.
  • The facial expression information may include the state information of the target part and/or the face-shape change amplitude information; the state information of the target part here may be whether the mouth is pouted.
  • The sound information may include the sound type, loudness, continuity, and so on.
  • In a specific implementation, the state information of the target part (whether the mouth is pouted) and the face-shape change amplitude (the cheek-bulging amplitude and the mouth opening-and-closing amplitude) can be determined; the target user's sound data can be acquired and processed to determine the corresponding sound information; and the display size of the target virtual prop is adjusted according to the state information of the target part, the face-shape change amplitude information and the sound information.
  • For more details on how the display form of the target virtual prop can be adjusted, refer to the description in Embodiment 1; this is not repeated here.
  • A person skilled in the art can understand that, in the above methods, the written order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible inner logic.
  • Based on the same inventive concept, an embodiment of the present disclosure also provides an operation control apparatus corresponding to the operation control method. Since the principle by which the apparatus solves the problem is similar to that of the above operation control method, the implementation of the apparatus can refer to the implementation of the method, and repeated parts are not described again.
  • Referring to FIG. 11, a schematic diagram of an operation control apparatus 1100 provided by an embodiment of the present disclosure, the apparatus includes: an acquisition module 1101, a detection module 1102, a display module 1103 and an adjustment module 1104; among them:
  • the acquisition module 1101 is used to acquire the face image of the target user;
  • the detection module 1102 is used to detect the position information of the target part in the face image;
  • the display module 1103 is configured to display, based on the detected position information, the target virtual prop in the initial display form at a relative position on the face image corresponding to the detected position information;
  • the adjustment module 1104 is configured to adjust the display form of the target virtual prop according to the detected state information of the target part.
  • The embodiments of the present disclosure enable real-time user control of the display form of virtual props, realize a fused display of the user's face image and the virtual prop's display form, and enhance the realism of operating virtual props.
  • Because the virtual props replace real props, they also save material costs, protect the environment (less real-prop waste), and make it easy to tally operation results.
  • The relative position between the target part and the target virtual prop matches the position of the real target operation object relative to the target part when it is operated in a real scene, so the operation of virtual props more closely matches the real scene.
  • The display form includes the displayed shape and/or size.
  • The adjustment module 1104 is specifically configured to adjust the display form of the target virtual prop when it is detected that the state attribute of the target part meets the preset state-attribute condition and the sound attribute meets the preset sound-attribute condition.
  • The adjustment module 1104 is specifically configured to, when the state attribute of the target part meets the preset state-attribute condition and the detected sound attribute meets the preset sound-attribute condition, determine the adjustment range of the display form of the target virtual prop per unit time according to the detected face-shape change information of the target user, and adjust the display form of the target virtual prop according to the determined adjustment range.
  • The adjustment module 1104 is specifically configured to, when the state attribute of the target part meets the preset state-attribute condition, determine the adjustment range of the display form of the target virtual prop per unit time according to the detected face-shape change information of the target user, and adjust the display form of the target virtual prop according to the determined adjustment range.
  • When the target part is the mouth and the target virtual prop is a virtual balloon, the state attribute of the target part meeting the preset state-attribute condition includes: the state of the target part conforming to a pouting state.
  • The sound attribute meeting the preset sound-attribute condition includes: detecting that the loudness of the sound is greater than a set threshold, and/or detecting that the type of the sound is a preset type of sound.
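The "and/or" sound-attribute condition above can be expressed as a small configurable predicate. This is a sketch under assumptions: the parameter names, the default threshold, and the type label "blow" are illustrative, not from the patent:

```python
def sound_matches(volume, sound_type, threshold=1.0, preset_types=("blow",),
                  require_both=True):
    """Preset sound-attribute condition: loudness above a set threshold
    and/or sound type in a preset set; the combination mode is configurable."""
    loud = volume > threshold
    typed = sound_type in preset_types
    return (loud and typed) if require_both else (loud or typed)
```

With `require_both=True` the condition implements the "and" reading; with `require_both=False` it implements the "or" reading.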
  • The apparatus further includes: a target animation special effect display module, configured to display the target animation special effect corresponding to the target virtual prop after the display form of the target virtual prop has been adjusted to meet a preset condition.
  • The target animation special effect display module is specifically configured to display the target animation special effect of the virtual balloon being blown to bursting or blown away.
  • The target animation special effect display module is specifically configured to display, according to the prop attribute information of the target virtual prop, the target animation special effect matching that attribute information.
  • The apparatus further includes: a count update module, configured to update the recorded number of successful operations after the display form of the target virtual prop has been adjusted to meet a preset condition, and to redisplay the target virtual prop in the initial state.
  • The apparatus further includes: a personalization setting module, configured to obtain a personalized added object, and to generate the target virtual prop based on the obtained personalized added object and a preset virtual prop model.
  • The apparatus further includes: an auxiliary virtual prop display module, configured to display an auxiliary virtual prop in a preset position area on the screen on which the face image is displayed; and an auxiliary virtual prop display effect adjustment module, configured to change the display special effect of the auxiliary virtual prop in response to the display form of the target virtual prop being adjusted to meet a preset condition.
  • The face image of the target user includes face images of multiple target users; the display module 1103 is further configured to display, based on the detected position information of the target part of each target user, target virtual props in the initial form at the relative positions on the face image corresponding to the detected position information of each target part.
  • The adjustment module 1104 is further specifically configured to determine a selected user from the multiple target users according to the detected state information of the target part of each of the multiple target users and the face-shape change information corresponding to each target user, and to adjust the display form of the target virtual prop corresponding to the selected user.
  • The adjustment module 1104 is further specifically configured to separately adjust the display form of the target virtual prop corresponding to each of the multiple target users according to the detected state information of the target part of each of the multiple target users.
  • The target virtual prop corresponds to a real target operation object in a real scene; the relative position between the target part and the target virtual prop matches the position of the real target operation object relative to the target part when it is operated in the real scene.
  • Referring to FIG. 12, a schematic diagram of an operation control apparatus 1200 provided by an embodiment of the present disclosure, the apparatus includes: an acquisition module 1201, a display module 1202 and an adjustment module 1203; among them:
  • the acquisition module 1201 is used to acquire the face image of the target user;
  • the display module 1202 is used to display the target virtual prop in the initial state according to the acquired face image;
  • the adjustment module 1203 is configured to adjust the display form of the target virtual prop according to the facial expression information detected in the face image and the detected sound information.
  • An embodiment of the present application also provides an electronic device.
  • Referring to FIG. 13, a schematic structural diagram of an electronic device 1300 provided by an embodiment of this application includes a processor 1301, a memory 1302 and a bus 1303.
  • The memory 1302 is used to store execution instructions and includes an internal memory 13021 and an external memory 13022; the internal memory 13021 is used to temporarily store computation data in the processor 1301 and data exchanged with the external memory 13022 such as a hard disk. The processor 1301 exchanges data with the external memory 13022 through the internal memory 13021. When the electronic device 1300 runs, the processor 1301 and the memory 1302 communicate through the bus 1303, so that the processor 1301 executes the following instructions: acquire the face image of the target user; detect the position information of the target part in the face image; based on the detected position information, display the target virtual prop in the initial form at the relative position on the face image corresponding to the detected position information; and adjust the display form of the target virtual prop according to the detected state information of the target part.
  • Referring to FIG. 14, a schematic structural diagram of an electronic device 1400 provided by an embodiment of this application includes a processor 1401, a memory 1402 and a bus 1403.
  • The memory 1402 is used to store execution instructions and includes an internal memory 14021 and an external memory 14022; the internal memory 14021 is used to temporarily store computation data in the processor 1401 and data exchanged with the external memory 14022 such as a hard disk. The processor 1401 exchanges data with the external memory 14022 through the internal memory 14021. When the electronic device 1400 runs, the processor 1401 and the memory 1402 communicate through the bus 1403, so that the processor 1401 executes the following instructions: acquire the face image of the target user; display the target virtual prop in the initial form according to the acquired face image; and adjust the display form of the target virtual prop according to the detected facial expression information in the face image and the detected sound information.
  • An embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it executes the steps of the operation control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
  • The computer program product of the operation control method provided by an embodiment of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the steps of the operation control method described in the above method embodiments. For details, refer to the above method embodiments, which are not repeated here.
  • Embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product can be implemented by hardware, software or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The functional units in the various embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.


Abstract

A method and apparatus for operation control, the method comprising: acquiring a face image of a target user (S101); detecting position information of a target part in the face image (S102); based on the detected position information, displaying a target virtual prop in an initial display form at a relative position on the face image corresponding to the detected position information (S103); and adjusting the display form of the target virtual prop according to detected state information of the target part (S104). The method enables real-time user control over the display form of a virtual prop, realizes the coordinated display of the user's face image and the virtual prop, and enhances the realism of operating virtual props. In addition, because virtual props replace real props, the method also saves material costs, protects the environment (less real-prop waste), and makes it easy to tally operation results.

Description

Method and apparatus for operation control

CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to Chinese patent application No. 202010589705.7, filed on June 24, 2020 and entitled "Method and apparatus for operation control", the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

The present disclosure relates to the field of Internet technology, and in particular to a method and apparatus for operation control.

BACKGROUND

With the continuous development of Internet technology, smart terminals have gradually become commonplace in people's life and work, and the media software installed on smart terminals has grown ever more capable. For example, such media software can be used to operate virtual props, as in simulated shooting; software of this kind reduces the need for real material, saves costs, and makes it easy to tally operation results. However, most current operations on virtual props lack integration with reality, so the user's sense of real experience is weak.
SUMMARY

Embodiments of the present disclosure provide at least a method and apparatus for operation control.

In a first aspect, an embodiment of the present disclosure provides a method for operation control, the method including:

acquiring a face image of a target user;

detecting position information of a target part in the face image;

based on the detected position information, displaying a target virtual prop in an initial display form at a relative position on the face image corresponding to the detected position information; and

adjusting the display form of the target virtual prop according to detected state information of the target part.

In a possible implementation, the display form includes a displayed shape and/or size.

In a possible implementation, adjusting the display form of the target virtual prop according to the detected state information of the target part includes:

adjusting the display form of the target virtual prop when it is detected that a state attribute of the target part meets a preset state-attribute condition and a detected sound attribute meets a preset sound-attribute condition.

In a possible implementation, adjusting the display form of the target virtual prop when it is detected that the state attribute of the target part meets the preset state-attribute condition and the detected sound attribute meets the preset sound-attribute condition includes:

when the state attribute of the target part meets the preset state-attribute condition and the detected sound attribute meets the preset sound-attribute condition, determining an adjustment range of the display form of the target virtual prop per unit time according to detected face-shape change information of the target user; and

adjusting the display form of the target virtual prop according to the determined adjustment range.

In a possible implementation, adjusting the display form of the target virtual prop according to the detected state information of the target part includes:

when the state attribute of the target part meets the preset state-attribute condition, determining an adjustment range of the display form of the target virtual prop per unit time according to detected face-shape change information of the target user; and

adjusting the display form of the target virtual prop according to the determined adjustment range.

In a possible implementation, when the target part is a mouth and the target virtual prop is a virtual balloon, the state attribute of the target part meeting the preset state-attribute condition includes:

the state of the target part conforming to a pouting state.

In a possible implementation, the sound attribute meeting the preset sound-attribute condition includes:

detecting that the loudness of the sound is greater than a set threshold, and/or detecting that the type of the sound is a preset type of sound.

In a possible implementation, after adjusting the display form of the target virtual prop according to the detected state information of the target part, the method further includes:

after the display form of the target virtual prop has been adjusted to meet a preset condition, displaying a target animation special effect corresponding to the target virtual prop.

In a possible implementation, when the target part is a mouth and the target virtual prop is a virtual balloon, displaying the target animation special effect of the target virtual prop includes:

displaying an animation special effect of the virtual balloon being blown to bursting or blown away.

In a possible implementation, after the display form of the target virtual prop has been adjusted to meet the preset condition, displaying the target animation special effect corresponding to the target virtual prop includes:

displaying, according to prop attribute information of the target virtual prop, a target animation special effect matching that prop attribute information.

In a possible implementation, after adjusting the display form of the target virtual prop according to the detected state information of the target part, the method further includes:

after the display form of the target virtual prop has been adjusted to meet a preset condition, updating a recorded number of successful operations and redisplaying the target virtual prop in its initial state.

In a possible implementation, the method further includes:

acquiring a personalized added object; and

generating the target virtual prop based on the acquired personalized added object and a preset virtual prop model.

In a possible implementation, the method further includes:

displaying an auxiliary virtual prop in a preset position area on the screen on which the face image is displayed; and

in response to the display form of the target virtual prop being adjusted to meet a preset condition, changing the display special effect of the auxiliary virtual prop.

In a possible implementation, the face image of the target user includes face images of multiple target users; and

based on detected position information of the target part of each target user, target virtual props in the initial form are respectively displayed at relative positions on the face image corresponding to the detected position information of each target part.

In a possible implementation, adjusting the display form of the target virtual prop according to the detected state information of the target part includes:

determining a selected user from the multiple target users according to detected state information of the target part of each of the multiple target users and face-shape change information corresponding to each target user, and adjusting the display form of the target virtual prop corresponding to the selected user.

In a possible implementation, adjusting the display form of the target virtual prop according to the detected state information of the target part includes:

respectively adjusting the display form of the target virtual prop corresponding to each of the multiple target users according to the detected state information of the target part of each of the multiple target users.

In a possible implementation, the target virtual prop corresponds to a real target operation object in a real scene; the relative position between the target part and the target virtual prop matches the position of the real target operation object relative to the target part when it is operated in the real scene.
In a second aspect, an embodiment of the present disclosure further provides a method for operation control, including:

acquiring a face image of a target user;

displaying a target virtual prop in an initial form according to the acquired face image; and

adjusting the display form of the target virtual prop according to detected facial expression information in the face image and detected sound information.

In a third aspect, an embodiment of the present disclosure further provides an apparatus for operation control, including:

an acquisition module configured to acquire a face image of a target user;

a detection module configured to detect position information of a target part in the face image;

a display module configured to display, based on the detected position information, a target virtual prop in an initial display form at a relative position on the face image corresponding to the detected position information; and

an adjustment module configured to adjust the display form of the target virtual prop according to detected state information of the target part.

In a possible implementation, the display form includes a displayed shape and/or size.

In a possible implementation, the adjustment module is specifically configured to adjust the display form of the target virtual prop when it is detected that the state attribute of the target part meets a preset state-attribute condition and a detected sound attribute meets a preset sound-attribute condition.

In a possible implementation, the adjustment module is specifically configured to, when the state attribute of the target part meets the preset state-attribute condition and the detected sound attribute meets the preset sound-attribute condition, determine an adjustment range of the display form of the target virtual prop per unit time according to detected face-shape change information of the target user, and adjust the display form of the target virtual prop according to the determined adjustment range.

In a possible implementation, the adjustment module is specifically configured to, when the state attribute of the target part meets the preset state-attribute condition, determine an adjustment range of the display form of the target virtual prop per unit time according to detected face-shape change information of the target user, and adjust the display form of the target virtual prop according to the determined adjustment range.

In a possible implementation, the state of the target part conforms to a pouting state.

In a possible implementation, the sound attribute meeting the preset sound-attribute condition includes: detecting that the loudness of the sound is greater than a set threshold, and/or detecting that the type of the sound is a preset type of sound.

In a possible implementation, the apparatus further includes: a target animation special effect display module configured to display the target animation special effect corresponding to the target virtual prop after the display form of the target virtual prop has been adjusted to meet a preset condition.

In a possible implementation, the target animation special effect display module is specifically configured to display the target animation special effect of the virtual balloon being blown to bursting or blown away.

In a possible implementation, the target animation special effect display module is specifically configured to display, according to prop attribute information of the target virtual prop, the target animation special effect matching that prop attribute information.

In a possible implementation, the apparatus further includes: a count update module configured to update the recorded number of successful operations after the display form of the target virtual prop has been adjusted to meet a preset condition, and to redisplay the target virtual prop in its initial state.

In a possible implementation, the apparatus further includes: a personalization setting module configured to acquire a personalized added object, and to generate the target virtual prop based on the acquired personalized added object and a preset virtual prop model.

In a possible implementation, the apparatus further includes: an auxiliary virtual prop display module configured to display an auxiliary virtual prop in a preset position area on the screen on which the face image is displayed.

An auxiliary virtual prop display effect adjustment module is configured to change the display special effect of the auxiliary virtual prop in response to the display form of the target virtual prop being adjusted to meet a preset condition.

In a possible implementation, the face image of the target user includes face images of multiple target users; the display module is further configured to display, based on detected position information of the target part of each target user, target virtual props in the initial form at relative positions on the face image corresponding to the detected position information of each target part.

In a possible implementation, the adjustment module is further specifically configured to determine a selected user from the multiple target users according to detected state information of the target part of each of the multiple target users and face-shape change information corresponding to each target user, and to adjust the display form of the target virtual prop corresponding to the selected user.

In a possible implementation, the adjustment module is further specifically configured to respectively adjust the display form of the target virtual prop corresponding to each of the multiple target users according to the detected state information of the target part of each of the multiple target users.

In a possible implementation, the target virtual prop corresponds to a real target operation object in a real scene; the relative position between the target part and the target virtual prop matches the position of the real target operation object relative to the target part when it is operated in the real scene.

In a fourth aspect, an embodiment of the present disclosure further provides an apparatus for operation control, including:

an acquisition module configured to acquire a face image of a target user;

a display module configured to display a target virtual prop in an initial state according to the acquired face image; and

an adjustment module configured to adjust the display form of the target virtual prop according to detected facial expression information in the face image and detected sound information.

In a fifth aspect, an embodiment of the present disclosure further provides a computer device, including a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or any possible implementation of the first aspect, or perform the steps of the second aspect.

In a sixth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it performs the steps of the first aspect or any possible implementation of the first aspect, or performs the steps of the second aspect.

Embodiments of the present disclosure enable real-time user control of the display form of a virtual prop, realize the coordinated display of the user's face image and the virtual prop, and enhance the realism of operating virtual props. In addition, because virtual props replace real props, embodiments also save material costs, protect the environment (less real-prop waste), and make it easy to tally operation results.

To make the above objects, features and advantages of the present disclosure clearer and easier to understand, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly introduced below. The drawings here are incorporated into and form part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain its technical solutions. It should be understood that the following drawings show only certain embodiments of the present disclosure and should therefore not be regarded as limiting its scope; a person of ordinary skill in the art can obtain other related drawings from these drawings without inventive effort.

FIG. 1 shows a flowchart of a method for operation control provided by an embodiment of the present disclosure;

FIG. 2 shows a schematic diagram of an interface for acquiring a face image provided by an embodiment of the present disclosure;

FIG. 3 shows a schematic diagram of an interface displaying a target virtual prop in an initial form provided by an embodiment of the present disclosure;

FIG. 4 shows a schematic diagram of an interface after the target virtual prop has been adjusted provided by an embodiment of the present disclosure;

FIG. 5 shows a schematic diagram of an interface displaying a blow-to-burst special effect provided by an embodiment of the present disclosure;

FIG. 6 shows a schematic diagram of an interface displaying a blow-away special effect provided by an embodiment of the present disclosure;

FIG. 7 shows a schematic diagram of an interface displaying a special effect of an auxiliary virtual prop provided by an embodiment of the present disclosure;

FIG. 8 shows a schematic diagram of an interface displaying target virtual props in an initial form corresponding to multiple target users provided by an embodiment of the present disclosure;

FIG. 9 shows a schematic diagram of an interface displaying adjusted target virtual props corresponding to multiple target users provided by an embodiment of the present disclosure;

FIG. 10 shows a flowchart of another method for operation control provided by an embodiment of the present disclosure;

FIG. 11 shows a schematic diagram of an apparatus for operation control provided by an embodiment of the present disclosure;

FIG. 12 shows a schematic diagram of another apparatus for operation control provided by an embodiment of the present disclosure;

FIG. 13 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure;

FIG. 14 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION

To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The components of the embodiments of the present disclosure, as generally described and illustrated in the drawings here, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the claimed scope of the present disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by persons skilled in the art based on the embodiments of the present disclosure without inventive effort fall within the protection scope of the present disclosure.

Based on the above research, embodiments of the present disclosure provide a method and apparatus for operation control that enable real-time user control of the display form of a virtual prop, realize the coordinated display of the user's face image and the virtual prop, and enhance the realism of operating virtual props; in addition, because virtual props replace real props, embodiments also save material costs, protect the environment (less real-prop waste), and make it easy to tally operation results. Furthermore, embodiments of the present disclosure determine the display position of the target virtual prop according to the position information of the target part, so that the display position of the target virtual prop conforms to the relative positional relationship in a real scene, further enhancing the realism.

The defects of the above solutions are all results obtained by the inventors after practice and careful study; therefore, the discovery process of the above problems and the solutions proposed below for these problems should all be regarded as contributions made by the inventors to the present disclosure.

It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.

To facilitate understanding of this embodiment, a method for operation control disclosed in an embodiment of the present disclosure is first introduced in detail. The execution subject of the method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server or another processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc. In some possible implementations, the method may be implemented by a processor calling computer-readable instructions stored in a memory.

The method for operation control provided by the embodiments of the present disclosure is described below, taking a terminal device as the execution subject.
Embodiment 1

Referring to FIG. 1, a flowchart of a method for operation control provided by an embodiment of the present disclosure, the method includes steps S101 to S104, in which:

S101: Acquire a face image of a target user.

In a specific implementation, the face image of the target user can be acquired through the front camera of the terminal device. Specifically, when the target user is within the shooting range of the front camera, the front camera automatically searches for and captures the target user's face image. The terminal device may be a smartphone, a tablet computer, etc.

The interface for acquiring the user's face image may include the following elements: the face image; an action prompt instructing the target user to start the game; information such as the shape and style of the next target virtual prop; the number of successful operations; an auxiliary virtual prop; and trigger buttons guiding user operations such as "My pet", "Balloon DIY (Do It Yourself)" and "Leaderboard". The next-prop information prompts the user about the shape, style and other properties of the next target virtual prop; the number of successful operations can indicate how many balloons the user has successfully burst; the "My pet" button lets the user perform other operations on the auxiliary virtual props he or she owns; the "Balloon DIY" button lets the target user select DIY objects such as favorite photos or stickers and design a target virtual prop. A specific interface for acquiring the user's face image is shown in FIG. 2, taking a mobile phone as an example of the terminal device.

S102: Detect position information of a target part in the face image.

The target part may be the mouth; the position information of the target part indicates where the mouth is located on the terminal screen.

In a specific implementation, feature extraction is performed on the face image of the target user acquired in S101; according to the feature-extraction result, the mouth image in the face image is determined, and the position of the mouth image on the terminal screen is determined.
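Once a face-detection model has produced mouth landmark points from the feature-extraction step above, the anchor position for the virtual prop can be derived from them. The sketch below is an illustrative assumption: it takes landmark coordinates as given (the detection model itself is out of scope) and simply averages them to get a screen position:

```python
def mouth_center(landmarks):
    """Given mouth landmark points extracted from the face image
    (a list of (x, y) screen coordinates), return the point below which
    the virtual prop would be anchored."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```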
S103: Based on the detected position information, display a target virtual prop in an initial display form at a relative position on the face image corresponding to the detected position information.

In embodiments of the present disclosure, the target virtual prop may correspond to a real target operation object in a real scene; in that case, the relative position between the target part and the target virtual prop may match the position of the real target operation object relative to the target part when it is operated in a real scene. Moreover, the way the user operates the target virtual prop also matches the way the corresponding real target operation object is operated in a real scene, further enhancing the realism.

For example, the target virtual prop may be a balloon and the relative position may be below the mouth; the balloon may come in multiple styles, such as a rabbit style or a doughnut style.

In a specific implementation, the prop-display function of S103 may be started after a preset trigger operation initiated by the user is detected from the target user's face image; for example, the virtual balloon is displayed after the user is detected pouting.

In addition, embodiments of the present disclosure may also keep statistical records of the user's operations on the target virtual prop; in this case, the user may be given some preparation time. As an optional implementation, a countdown may be started after the user initiates the preset trigger operation (such as pouting), asking the user to get ready; when the countdown ends, timing and recording of the user's operations begin.

Here, the initial display form indicates the state of the target virtual prop at the initial display stage; for example, the target virtual prop in the initial display form may be a small deflated (uninflated) balloon.

In a specific implementation, based on the position of the target user's mouth determined in S102, a small deflated balloon is displayed below the mouth; the display interface is shown in FIG. 3, taking a mobile phone as an example of the terminal device.
S104、根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态。
在具体实施中,可以通过对目标部位的图像进行特征提取,确定目标部位的状态信息。目标部位的状态信息可以包括目标部位的姿态信息,比如嘟嘴。比如可以在目标部位的状态信息符合嘟嘴的状态的前提条件下,调整虚拟气球的展示形态。
其中,目标虚拟道具的展示形态可以包括展示的形状和/或尺寸大小;比如,虚拟气球的展示形状可以包括兔子形状、甜甜圈形状等;尺寸大小用来指示气球的膨胀大小程度,可以为初始展示形态大小的倍数,比如:为初始展示形态大小的1.5倍。
在具体实施中,可以根据检测到的所述目标部位的状态信息,以及检测到的声音信息,调整所述目标虚拟道具的展示形态。
具体的,通过对目标用户的人脸图像进行特征提取,确定目标用户嘴部的状态信息;获取目标用户的声音数据,对声音数据进行处理,确定上述声音数据对应的声音信息;根据上述嘴部的状态信息和上述声音信息,调整气球的展示尺寸。
在具体实施中,根据上述嘴部的状态信息和上述声音信息,调整气球的展示尺寸,具体描述如下:在所述目标部位的状态属性符合预设的状态属性条件、且检测到声音属性符合预设声音属性条件的情况下,调整所述目标虚拟道具的展示形态。
其中,状态属性可以包括目标部位的姿态特征等;比如,针对吹气球的场景,嘴部的状态属性包括嘴部是否撅起。
这里,预设的状态属性条件可以包括嘟嘴以及不同的嘟嘴幅度,可以为轻微嘟嘴、大幅度嘟嘴等。这里,目标部位的状态符合预设的状态属性条件可以为目标部位的状态符合嘟嘴的状态。
这里,声音属性条件可以包括声音类型、声音大小和声音持续度;其中,针对吹气球这种场景,可以将声音类型划分为吹气的声音和其它声音;声音大小可以通过检测目标用户发出声音的音量获得;声音持续度用来指示发出声音的持续时间。
示例性的,预设声音属性条件可以包括:声音类型:吹气(也可以不限制声音类型),声音大小:大于等于1分贝(仅为举例,并非实际实现中的真实阈值),声音持续时间:持续时长大于等于3秒(也可以不限制持续时间)。
示例性的,当检测到目标用户的目标部位的状态属性符合预设的状态属性条件,即嘟嘴、且检测到声音大小大于设定阈值的情况下,调整在嘴部下方的气球的尺寸大小。
在一种可选的实施方式中,声音属性符合预设声音属性条件可以为:检测到声音的大小大于设定阈值,和/或检测到声音的类型为预设类型的声音。比如,声音符合预设声音属性条件可以为声音大小大于设定阈值,且 声音类型为吹气。
示例性的,当检测到目标用户的目标部位的状态属性符合预设的状态属性条件(即嘟嘴),且检测到声音属性符合预设声音属性条件(即吹气、声音大小大于1分贝)的情况下,则调整在嘴部下方的气球的尺寸大小。
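上述声音属性的判断逻辑可以概括为如下Python示意函数。其中的函数名与默认阈值(1分贝、3秒)仅沿用正文中的举例,属于假设值,并非实际实现中的真实阈值:

```python
def sound_meets_condition(sound_type, volume_db, duration_s,
                          required_type="blow", min_db=1.0, min_duration_s=3.0):
    """判断检测到的声音属性是否符合预设声音属性条件:
    声音类型为预设类型、声音大小大于等于设定阈值、且持续时长达标。"""
    return (sound_type == required_type
            and volume_db >= min_db
            and duration_s >= min_duration_s)

# 用法示例:吹气、2分贝、持续4秒 -> 符合条件
ok = sound_meets_condition("blow", 2.0, 4.0)
```

实际实现中也可以按正文所述放宽条件,例如不限制声音类型或持续时间,只需调整对应的默认参数即可。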
在具体实施中,在检测到所述目标部位的状态属性符合预设的状态属性条件、且检测到声音属性符合预设声音属性条件的情况下,调整所述目标虚拟道具的展示形态,具体描述如下:在所述目标部位的状态属性符合预设的状态属性条件、且检测到的声音属性符合预设声音属性条件的情况下,根据检测到的目标用户的脸形变化信息,确定单位时间内的目标虚拟道具的展示形态调整幅度;根据确定的展示形态调整幅度,调整所述目标虚拟道具的展示形态。
其中,脸形变化信息可以用来指示发出对应动作的力度大小(比如:吹气力度大小);可以包括脸形变化幅度,即嘴巴张开幅度和腮帮鼓起幅度。
这里,脸形变化信息可以影响气球膨胀速度;具体的,脸形变化信息与气球膨胀速度的关系为:当嘴巴张开幅度和腮帮鼓起幅度较大时,即吹气力度较大,则对应的气球膨胀速度快;当嘴巴张开幅度和腮帮鼓起幅度较小时,即吹气力度较小,则对应的气球膨胀速度慢。
具体的,当检测到目标用户的嘴部为嘟嘴,且声音符合预设声音属性条件时,检测目标用户的人脸对应的嘴巴张开幅度和腮帮鼓起幅度,根据上述当前目标用户的嘴巴张开幅度和腮帮鼓起幅度以及上述确定的脸形变化信息与气球膨胀速度的关系,确定单位时间内气球尺寸的变化程度(即气球膨胀程度);根据上述确定的单位时间内气球尺寸的变化程度,调整气球在终端屏幕上的展示尺寸。
示例性的,当前目标用户嘟嘴,且声音符合预设声音属性条件,检测到目标用户的人脸的嘴巴张开幅度较大、且腮帮鼓起幅度较大,则按照较大膨胀速度调整气球在终端屏幕上的展示尺寸,调整后的展示界面如图4所示,以终端设备为手机为例。
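"脸形变化幅度越大、膨胀越快"的关系可以用如下Python示意代码表达。其中的线性映射以及base_rate、gain等参数均为本说明假设的示例取值,实际实现可采用任意单调递增的映射:

```python
def inflation_rate(mouth_open, cheek_puff, base_rate=0.5, gain=1.0):
    """根据嘴巴张开幅度与腮帮鼓起幅度(均归一化到0~1),
    估算单位时间内气球尺寸的变化程度:幅度越大,吹气力度越大,膨胀越快。"""
    effort = (mouth_open + cheek_puff) / 2.0  # 吹气力度的简单估计
    return base_rate + gain * effort

def update_balloon_scale(scale, mouth_open, cheek_puff, dt):
    """按确定的展示形态调整幅度,更新气球相对初始形态的尺寸倍数。"""
    return scale + inflation_rate(mouth_open, cheek_puff) * dt
```

这样,每帧(或每个单位时间dt)根据当前检测到的脸形变化信息调用一次update_balloon_scale,即可实现正文所述的按吹气力度调整膨胀速度。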
在一种可选的实施方式中,当检测到用户的目标部位的状态属性不符合预设的状态属性条件时,将目标虚拟道具的展示形态调整为初始形态。也就是说,在目标用户吹起气球过程中,若目标用户改变嘴部的状态信息(即从嘟嘴到不嘟嘴的过程),则将在目标用户嘴部下方的气球调整为初始的瘪的小气球状态。
在一种可选的实施方式中,还可以仅根据检测到的目标用户的嘴部状态信息及脸形变化信息,调整气球在终端屏幕上的展示尺寸,具体描述如下:在所述目标部位的状态属性符合预设的状态属性条件的情况下,根据检测到的目标用户的脸形变化信息,确定单位时间内的目标虚拟道具的展示形态调整幅度;根据确定的展示形态调整幅度,调整所述目标虚拟道具的展示形态。
具体的,当检测到目标用户的嘴部状态符合嘟嘴状态的情况下,检测目标用户的人脸对应的嘴巴张开幅度和腮帮鼓起幅度,根据上述当前目标用户的嘴巴张开幅度和腮帮鼓起幅度以及上述确定的脸形变化信息与气球膨胀速度的关系,确定单位时间内气球尺寸的变化程度(即气球膨胀程度);根据上述确定的单位时间内气球尺寸的变化程度,调整气球在终端屏幕上的展示尺寸。
本公开实施例可以实现用户对虚拟道具展示形态的实时控制,实现用户人脸图像与虚拟道具展示形态的融合展示,增强了对虚拟道具进行操作的现实体验,另外,由于虚拟道具代替了现实道具,还起到了节省素材成本、保护环境(减少现实道具垃圾)、以及便于统计操作结果的作用。此外,在本公开实施例中,目标部位与目标虚拟道具之间的相对位置,匹配于现实场景下真实目标操作对象在被操作时相对于所述目标部位的相对位置,从而本公开实施例对虚拟道具的操作更加匹配现实场景。
在一种可选的实施方式中,在根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态之后,所述方法还包括:在调整所述目标虚拟道具的展示形态至符合预设条件之后,展示所述目标虚拟道具对应的目标动画特效。
其中,预设条件指目标虚拟道具的尺寸阈值;这里,指的是气球的最大膨胀尺寸。
这里,目标动画特效可以为吹破、吹飞等。具体的,展示所述目标虚拟道具的目标动画特效可以为展示所述虚拟气球被吹破或被吹飞的动画特效。
具体的,在根据检测到的目标用户的嘴部状态信息调整在终端屏幕上的气球尺寸之后,当气球尺寸调整到大于气球的最大膨胀尺寸时,则展示吹破或吹飞气球的特效。比如,在目标用户吹气球的过程中,当终端设备检测到终端屏幕上的气球尺寸达到最大膨胀尺寸后,若检测到目标用户依旧为嘟嘴、且声音属性符合预设声音属性条件(即继续吹气球),则在终端屏幕上展示气球爆炸时的动画特效。
在具体实施中,根据目标虚拟道具的道具属性信息,展示与该道具属性信息匹配的目标动画特效。
其中,道具属性信息可以包括道具类型、以及每种类型对应的现实效果;其中,道具类型可以为炸弹、云朵等,且上述炸弹对应的现实效果为爆炸、云朵对应的现实效果为漂浮等。
示例性的,当目标虚拟道具为炸弹气球时,根据目标虚拟道具的道具属性信息,确定展示与炸弹对应的现实效果相同的动画特效,即在终端屏幕上展示炸弹气球的吹破特效,具体展示界面如图5所示,以终端设备为手机为例。
示例性的,当目标虚拟道具为云朵气球时,根据目标虚拟道具的道具属性信息,确定展示与云朵对应的现实效果相同的动画特效,即在终端屏幕上展示云朵气球的吹飞特效,具体展示界面如图6所示,以终端设备为手机为例。
在一种可选的实施方式中,在根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态之后,所述方法还包括:在调整所述目标虚拟道具的展示形态至符合预设条件之后,更新记录的成功操作次数,并重新展示初始状态的目标虚拟道具。
这里,成功操作次数可以为成功吹破气球的次数,即成功吹破气球的个数;这里的初始状态的目标虚拟道具的道具属性可以与之前初始状态的目标虚拟道具的道具属性相同,也可以不同。其中,道具属性可以包括颜色、形态、类型等。
具体的,在根据检测到的目标用户的嘴部状态信息调整在终端屏幕上的气球尺寸之后,当气球尺寸调整到大于气球的最大膨胀尺寸时,更新成功吹破气球的次数,并在目标用户的嘴部下方重新展示一个瘪的小气球(这里的气球的形状、颜色、类型可以与前一个气球相同,也可以不同)。也就是说,在目标用户吹气球的过程中,当终端设备检测到终端屏幕上的气球尺寸达到最大膨胀尺寸后,若检测到目标用户依旧为嘟嘴、且声音属性符合预设声音属性条件(即继续吹气球),则判定成功吹破气球,更新成功吹破气球的次数,并在目标用户的嘴部下方重新展示一个瘪的小气球。
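上述"达到最大膨胀尺寸后吹破、计数并重新展示"的流程,可以概括为如下Python示意状态更新函数。其中MAX_SCALE的取值与每步固定的膨胀量均为假设,仅用于说明流程:

```python
MAX_SCALE = 4.0  # 假设的最大膨胀尺寸(相对初始形态的倍数)

def step(scale, success_count, keep_blowing):
    """一步状态更新:返回(新的尺寸倍数, 成功操作次数, 触发的特效或None)。
    达到最大膨胀尺寸后若仍在吹气,则触发吹破特效、累加成功次数并重置气球;
    中途停止嘟嘴则恢复初始形态。"""
    if scale >= MAX_SCALE and keep_blowing:
        return 1.0, success_count + 1, "pop_effect"
    if keep_blowing:
        return min(scale + 0.5, MAX_SCALE), success_count, None
    return 1.0, success_count, None
```

例如,从尺寸倍数4.0、成功次数2继续吹气一步,将得到(1.0, 3, "pop_effect"),即吹破、计数并重置为初始形态。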
为了进一步丰富操作场景,在一种可选的实施方式中,所述方法还包括:获取个性化添加对象;基于获取的所述个性化添加对象,以及预设的虚拟道具模型,生成所述目标虚拟道具。
其中,个性化添加对象可以为贴纸、照片等自己动手制作(Do It Yourself,DIY)的对象。
具体的,目标用户可以选择自己喜欢或者感兴趣的照片、贴纸等DIY对象,将DIY对象基于预设规则添加到预设的虚拟道具模型上,生成目标 虚拟道具。
示例性的,用户在终端设备上选择DIY按钮,并将自己喜欢的白雪公主形象添加到气球道具模型上,生成包含白雪公主形象的气球,并将上述包含白雪公主的气球展示在终端设备的屏幕上。
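DIY生成目标虚拟道具的过程可以用如下Python示意代码表示。这里用字典描述道具模型,字段名与预设规则均为本说明的假设,仅示意"预设模型 + DIY对象 -> 目标虚拟道具"的数据流:

```python
def make_diy_balloon(base_model, diy_object, position=(0.5, 0.5)):
    """将用户选择的DIY对象(照片、贴纸等)按预设位置叠加到
    预设的虚拟道具模型上,生成目标虚拟道具(不修改原模型)。"""
    balloon = dict(base_model)
    decorations = list(balloon.get("decorations", []))
    decorations.append({"object": diy_object, "position": position})
    balloon["decorations"] = decorations
    return balloon

# 用法示例:将"白雪公主"贴图(文件名为假设)添加到气球模型上
base = {"shape": "round", "color": "red"}
diy = make_diy_balloon(base, "snow_white.png")
```

生成的diy即可作为后续展示与调整流程中的目标虚拟道具。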
在一种可选的实施方式中,所述方法还包括:在展示所述人脸图像的屏幕上的预设位置区域展示辅助虚拟道具;响应所述目标虚拟道具的展示形态调整至符合预设条件,改变所述辅助虚拟道具的展示特效。
其中,辅助虚拟道具可以为虚拟宠物、虚拟人物等,即虚拟猫、虚拟狗、虚拟笑脸人物等;这里,预设位置区域可以为终端屏幕上除人脸图像所在区域外的任意区域。
这里,辅助虚拟道具的展示特效可以为鼓掌、竖起大拇指点赞等。
具体的,在终端设备的屏幕上的预设区域展示辅助虚拟道具;在目标用户吹气球的过程中,当终端设备检测到终端屏幕上的气球尺寸达到最大膨胀尺寸后,若检测到目标用户依旧为嘟嘴、且声音属性符合预设声音属性条件(即继续吹气球),则判定用户成功吹破气球;检测到用户成功吹破气球后,调整辅助虚拟道具的展示特效。
示例性的,终端设备的屏幕上的预设区域展示辅助虚拟道具为虚拟笑脸人物,当检测到用户成功吹破气球时,则将虚拟笑脸人物的展示特效调整为竖起大拇指特效,具体展示界面图如图7所示,以终端设备为手机为例。
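辅助虚拟道具对操作结果的响应可以抽象为一个简单的事件到特效的映射。事件名与特效名均为本说明假设的示例:

```python
def pet_effect(event, default="idle"):
    """根据检测到的事件切换辅助虚拟道具(如虚拟笑脸人物)的展示特效:
    检测到成功吹破气球时,切换为竖起大拇指特效,其余情况保持默认特效。"""
    effects = {"balloon_popped": "thumbs_up"}
    return effects.get(event, default)
```

实际实现中可在该映射中继续登记鼓掌等其他事件与特效的对应关系。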
在一种可选的实施方式中,当所述目标用户的人脸图像包括多个目标用户的人脸图像时,则基于检测到的每个目标用户的所述目标部位的位置信息,在所述人脸图像上与检测到的每个目标部位的位置信息对应的相对位置处,分别展示初始形态的目标虚拟道具。
具体的,当终端设备获取的目标用户的人脸图像包括多个目标用户的人脸图像时,对每个目标用户的人脸图像进行特征提取,基于特征提取结果的属性信息,确定每个目标用户的嘴部位置信息,并基于上述每个目标用户的嘴部位置信息,在每个目标用户的嘴部下方展示初始形态的目标虚拟道具(即瘪的小气球),具体展示界面图如图8所示,以终端设备为手机为例。
另外,本公开实施例可以展示多人互动场景。在这种场景下,多个目标用户之间可以竞争目标虚拟道具的操作权限(不同目标用户可以分别有各自对应的目标虚拟道具,不过胜出者才能操作)。比如,在一种可选的实施方式中,当获取到多个目标用户的人脸图像时,根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态,包括:根据检测到的所述多个目标用户中每个目标用户的目标部位的状态信息,以及每个目标用户对应的脸形变化信息,从所述多个目标用户中确定被选中用户,调整所述被选中用户对应的目标虚拟道具的展示形态。
在另一种多人互动场景下,可以通过以下方法确定多个目标用户中操作较快的用户,具体描述如下:根据检测到的所述多个目标用户中每个目标用户的目标部位的状态信息,分别调整所述多个目标用户中每个目标用户对应的目标虚拟道具的展示形态。
具体的,当终端设备获取的目标用户的人脸图像包括多个目标用户的人脸图像时,对每个目标用户的人脸图像分别进行特征提取,确定每个目标用户的嘴部状态信息(即:嘟嘴、吹气),以及每个目标用户对应的脸部变化信息(即嘴巴张开幅度和腮帮鼓起幅度),并根据每个目标用户的嘴部状态信息和对应的脸部变化信息,调整该目标用户嘴部下方气球的尺寸大小。
示例性的,当终端设备获取的目标用户的人脸图像包括三个目标用户的人脸图像(用户a、用户b、用户c)时,对三个目标用户的人脸图像分别进行特征提取,确定用户a的嘴部状态信息为:嘟嘴、吹气、声音分贝为2分贝、声音持续时长为4秒,且用户a对应的嘴巴张开幅度和腮帮鼓起幅度较大;确定用户b的嘴部状态信息为:微笑、且未张开嘴巴;确定用户c的嘴部状态信息为:嘟嘴、吹气、声音分贝为1.5分贝、声音持续时长为3秒,且用户c对应的嘴巴张开幅度和腮帮鼓起幅度较小,则根据用户a、用户b、用户c的嘴部状态信息、嘴巴张开幅度和腮帮鼓起幅度,将用户a嘴部下方气球的尺寸调整为初始尺寸的4倍;不调整用户b嘴部下方气球的尺寸;将用户c嘴部下方气球的尺寸调整为初始尺寸的2倍。具体展示界面图如图9所示,以终端设备为手机为例。
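多人场景下"分别调整每个目标用户对应气球"的逻辑可以概括如下。users中的各字段为假设的检测结果,尺寸倍数的计算方式也仅为示意(正文例中的4倍、2倍为具体产品取值):

```python
def adjust_all_users(users):
    """对每个目标用户,仅在其嘟嘴且吹气时,按脸形变化幅度放大其气球尺寸倍数;
    状态不符合条件的用户保持原尺寸。返回 {用户: 新的尺寸倍数}。"""
    result = {}
    for name, info in users.items():
        if not (info["pouting"] and info["blowing"]):
            result[name] = info["scale"]  # 状态不符合条件,不调整
            continue
        effort = (info["mouth_open"] + info["cheek_puff"]) / 2.0
        result[name] = info["scale"] * (1.0 + effort)
    return result

# 用法示例(对应正文中用户a、b、c的三种情形,数值为假设):
users = {
    "a": {"pouting": True,  "blowing": True,  "mouth_open": 0.9, "cheek_puff": 0.9, "scale": 1.0},
    "b": {"pouting": False, "blowing": False, "mouth_open": 0.0, "cheek_puff": 0.0, "scale": 1.0},
    "c": {"pouting": True,  "blowing": True,  "mouth_open": 0.3, "cheek_puff": 0.3, "scale": 1.0},
}
scales = adjust_all_users(users)
```

吹气力度较大的用户a得到的放大倍数大于力度较小的用户c,未嘟嘴的用户b则保持不变,与正文描述的相对关系一致。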
本公开实施例中,可以针对上述操作过程中的画面进行视频录制,将录制的视频通过社交APP进行分享。比如,可以在拍摄获取人脸图像的同时开始录制视频,在整个操作流程结束后,保存录制完成的视频,分享到社交APP。
实施例二
参见图10所示,为本公开实施例提供的另一种操作控制的方法的流程图,所述方法包括步骤S1001~S1003,其中:
S1001、获取目标用户的人脸图像。
S1002、根据获取的人脸图像,展示初始形态的目标虚拟道具。
参考实施例一所述,获取目标用户的人脸图像后,可以检测所述人脸图像中目标部位的位置信息;基于检测到的位置信息,在所述人脸图像上与检测到的位置信息对应的相对位置处展示初始形态的目标虚拟道具。
S1003、根据检测到的所述人脸图像中的人脸表情信息,以及检测到的声音信息,调整所述目标虚拟道具的展示形态。
其中,人脸表情信息可以包括目标部位的状态信息和/或脸形变化幅度信息等;这里的目标部位的状态信息可以为是否嘟嘴。其中,声音信息可以包括声音类型、声音大小、声音持续性等信息。
在一种实施方式中,可以通过对目标用户的人脸图像进行特征提取,确定目标部位的状态信息(是否嘟嘴)和脸形变化幅度(腮帮鼓起幅度和嘴部张开闭合的动作幅度)等信息;并可以获取目标用户的声音数据,对声音数据进行处理,确定上述声音数据对应的声音信息;根据上述目标部位的状态信息、脸形变化幅度信息和上述声音信息,调整目标虚拟道具的展示尺寸。
在具体实施中,根据检测到的人脸图像中的人脸表情信息,以及检测到的声音信息,调整目标虚拟道具的展示形态的更多的相关描述可以参见实施例一中的描述,这里不再赘述。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
基于同一发明构思,本公开实施例中还提供了与操作控制的方法对应的操作控制的装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述操作控制的方法相似,因此装置的实施可以参见方法的实施,重复之处不再赘述。
实施例三
参照图11所示,为本公开实施例提供的一种操作控制的装置1100的示意图,所述装置包括:获取模块1101、检测模块1102、展示模块1103和调整模块1104;其中,
获取模块1101,用于获取目标用户的人脸图像。
检测模块1102,用于检测所述人脸图像中目标部位的位置信息。
展示模块1103,用于基于检测到的位置信息,在所述人脸图像上与检测到的位置信息对应的相对位置处展示处于初始展示形态的目标虚拟道具。
调整模块1104,用于根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态。
本公开实施例可以实现用户对虚拟道具展示形态的实时控制,实现用户人脸图像与虚拟道具展示形态的融合展示,增强了对虚拟道具进行操作的现实体验,另外,由于虚拟道具代替了现实道具,还起到了节省素材成本、保护环境(减少现实道具垃圾)、以及便于统计操作结果的作用。此外,在本公开实施例中,目标部位与目标虚拟道具之间的相对位置,匹配于现实场景下真实目标操作对象在被操作时相对于所述目标部位的相对位置,从而本公开实施例对虚拟道具的操作更加匹配现实场景。
一种可能的实施方式中,所述展示形态包括展示的形状和/或尺寸大小。
一种可能的实施方式中,调整模块1104,具体用于在检测到所述目标部位的状态属性符合预设的状态属性条件、且检测到声音属性符合预设声音属性条件的情况下,调整所述目标虚拟道具的展示形态。
一种可能的实施方式中,调整模块1104,具体用于在所述目标部位的状态属性符合预设的状态属性条件、且检测到的声音属性符合预设声音属性条件的情况下,根据检测到的所述目标用户的脸形变化信息,确定单位时间内的所述目标虚拟道具的展示形态调整幅度;根据确定的展示形态调整幅度,调整所述目标虚拟道具的展示形态。
一种可能的实施方式中,调整模块1104,具体用于在所述目标部位的状态属性符合预设的状态属性条件的情况下,根据检测到的所述目标用户的脸形变化信息,确定单位时间内的所述目标虚拟道具的展示形态调整幅度;根据确定的展示形态调整幅度,调整所述目标虚拟道具的展示形态。
一种可能的实施方式中,所述目标部位的状态符合嘟嘴的状态。
一种可能的实施方式中,所述声音属性符合预设声音属性条件,包括:检测到声音的大小大于设定阈值,和/或检测到声音的类型为预设类型的声音。
一种可能的实施方式中,所述装置还包括:目标动画特效展示模块,用于在调整所述目标虚拟道具的展示形态至符合预设条件之后,展示所述目标虚拟道具对应的目标动画特效。
一种可能的实施方式中,所述目标动画特效展示模块,具体用于展示所述虚拟气球被吹破或吹飞的所述目标动画特效。
一种可能的实施方式中,所述目标动画特效展示模块,具体用于根据所述目标虚拟道具的道具属性信息,展示与该道具属性信息匹配的目标动画特效。
一种可能的实施方式中,所述装置还包括:计数更新模块,用于在调整所述目标虚拟道具的展示形态至符合预设条件之后,更新记录的成功操作次数,并重新展示初始状态的目标虚拟道具。
一种可能的实施方式中,所述装置还包括:个性化设置模块,用于获取个性化添加对象;基于获取的所述个性化添加对象,以及预设的虚拟道具模型,生成所述目标虚拟道具。
一种可能的实施方式中,所述装置还包括:辅助虚拟道具展示模块,用于在展示所述人脸图像的屏幕上的预设位置区域展示辅助虚拟道具。
辅助虚拟道具展示效果调整模块,用于响应所述目标虚拟道具的展示形态调整至符合预设条件,改变所述辅助虚拟道具的展示特效。
一种可能的实施方式中,所述目标用户的人脸图像包括多个目标用户的人脸图像;展示模块1103,还用于基于检测到的每个目标用户的所述目标部位的位置信息,在所述人脸图像上与检测到的每个目标部位的位置信息对应的相对位置处,分别展示初始形态的目标虚拟道具。
一种可能的实施方式中,调整模块1104,还具体用于根据检测到的所述多个目标用户中每个目标用户的目标部位的状态信息,以及每个目标用户对应的脸形变化信息,从所述多个目标用户中确定被选中用户,调整所述被选中用户对应的目标虚拟道具的展示形态。
一种可能的实施方式中,调整模块1104,还具体用于根据检测到的所述多个目标用户中每个目标用户的目标部位的状态信息,分别调整所述多个目标用户中每个目标用户对应的目标虚拟道具的展示形态。
一种可能的实施方式中,所述目标虚拟道具对应有现实场景下的真实目标操作对象;所述目标部位与所述目标虚拟道具之间的所述相对位置,匹配于现实场景下所述真实目标操作对象在被操作时相对于所述目标部位的相对位置。
实施例四
参照图12所示,为本公开实施例提供的一种操作控制的装置1200的示意图,所述装置包括:获取模块1201、展示模块1202和调整模块1203;其中,
获取模块1201,用于获取目标用户的人脸图像。
展示模块1202,用于根据获取的人脸图像,展示初始状态的目标虚拟道具。
调整模块1203,用于根据检测到的所述人脸图像中的人脸表情信息,以及检测到的声音信息,调整所述目标虚拟道具的展示形态。
关于装置中的各模块的处理流程、以及各模块之间的交互流程、以及有益效果的描述可以参照上述方法实施例中的相关说明,这里不再详述。
基于同一技术构思,本申请实施例还提供了一种电子设备。参照图13所示,为本申请实施例提供的电子设备1300的结构示意图,包括处理器1301、存储器1302、和总线1303。其中,存储器1302用于存储执行指令,包括内存13021和外部存储器13022;这里的内存13021也称内存储器,用于暂时存放处理器1301中的运算数据,以及与硬盘等外部存储器13022交换的数据,处理器1301通过内存13021与外部存储器13022进行数据交换,当电子设备1300运行时,处理器1301与存储器1302之间通过总线1303通信,使得处理器1301执行以下指令:
获取目标用户的人脸图像;检测所述人脸图像中目标部位的位置信息;基于检测到的位置信息,在所述人脸图像上与检测到的位置信息对应的相对位置处展示初始形态的目标虚拟道具;根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态。
基于同一技术构思,本申请实施例还提供了一种电子设备。参照图14所示,为本申请实施例提供的电子设备1400的结构示意图,包括处理器1401、存储器1402、和总线1403。其中,存储器1402用于存储执行指令,包括内存14021和外部存储器14022;这里的内存14021也称内存储器,用于暂时存放处理器1401中的运算数据,以及与硬盘等外部存储器14022交换的数据,处理器1401通过内存14021与外部存储器14022进行数据交换,当电子设备1400运行时,处理器1401与存储器1402之间通过总线1403通信,使得处理器1401执行以下指令:
获取目标用户的人脸图像;根据获取的人脸图像,展示初始形态的目标虚拟道具;根据检测到的所述人脸图像中的人脸表情信息,以及检测到的声音信息,调整所述目标虚拟道具的展示形态。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的操作控制的方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例所提供的操作控制的方法的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行上述方法实施例中所述的操作控制的方法的步骤,具体可参见上述方法实施例,在此不再赘述。
本公开实施例还提供一种计算机程序,该计算机程序被处理器执行时实现前述实施例的任意一种方法。该计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品具体体现为计算机存储介质,在另一个可选实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,仅为本公开的具体实施方式,用以说明本公开的技术方案,而非对其限制,本公开的保护范围并不局限于此,尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。

Claims (21)

  1. 一种操作控制的方法,其特征在于,所述方法包括:
    获取目标用户的人脸图像;
    检测所述人脸图像中目标部位的位置信息;
    基于检测到的位置信息,在所述人脸图像上与检测到的位置信息对应的相对位置处展示处于初始展示形态的目标虚拟道具;
    根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态。
  2. 根据权利要求1所述的方法,其特征在于,所述展示形态包括展示的形状和/或尺寸大小。
  3. 根据权利要求1或2所述的方法,其特征在于,所述根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态,包括:
    在检测到所述目标部位的状态属性符合预设的状态属性条件、且检测到声音属性符合预设声音属性条件的情况下,调整所述目标虚拟道具的展示形态。
  4. 根据权利要求3所述的方法,其特征在于,在检测到所述目标部位的状态属性符合预设的状态属性条件、且检测到声音属性符合预设声音属性条件的情况下,调整所述目标虚拟道具的展示形态,包括:
    在所述目标部位的状态属性符合预设的状态属性条件、且检测到的声音属性符合预设声音属性条件的情况下,根据检测到的所述目标用户的脸形变化信息,确定单位时间内的所述目标虚拟道具的展示形态调整幅度;
    根据确定的展示形态调整幅度,调整所述目标虚拟道具的展示形态。
  5. 根据权利要求1所述的方法,其特征在于,根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态,包括:
    在所述目标部位的状态属性符合预设的状态属性条件的情况下,根据检测到的所述目标用户的脸形变化信息,确定单位时间内的所述目标虚拟道具的展示形态调整幅度;
    根据确定的展示形态调整幅度,调整所述目标虚拟道具的展示形态。
  6. 根据权利要求3~5任一所述的方法,其特征在于,在所述目标部位为嘴部,所述目标虚拟道具为虚拟气球的情况下,所述目标部位的状态属性符合预设的状态属性条件,包括:
    所述目标部位的状态符合嘟嘴的状态。
  7. 根据权利要求3所述的方法,其特征在于,所述声音属性符合预设声音属性条件,包括:
    检测到声音的大小大于设定阈值,和/或检测到声音的类型为预设类型的声音。
  8. 根据权利要求1所述的方法,其特征在于,根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态之后,还包括:
    在调整所述目标虚拟道具的展示形态至符合预设条件之后,展示所述目标虚拟道具对应的目标动画特效。
  9. 根据权利要求8所述的方法,其特征在于,在所述目标部位为嘴部,所述目标虚拟道具为虚拟气球的情况下,展示所述目标虚拟道具的目标动画特效,包括:
    展示所述虚拟气球被吹破或吹飞的所述目标动画特效。
  10. 根据权利要求9所述的方法,在调整所述目标虚拟道具的展示形态至符合预设条件之后,展示所述目标虚拟道具对应的目标动画特效,包括:
    根据所述目标虚拟道具的道具属性信息,展示与该道具属性信息匹配的目标动画特效。
  11. 根据权利要求1所述的方法,其特征在于,根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态之后,还包括:
    在调整所述目标虚拟道具的展示形态至符合预设条件之后,更新记录的成功操作次数,并重新展示初始状态的目标虚拟道具。
  12. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取个性化添加对象;
    基于获取的所述个性化添加对象,以及预设的虚拟道具模型,生成所述目标虚拟道具。
  13. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在展示所述人脸图像的屏幕上的预设位置区域展示辅助虚拟道具;
    响应所述目标虚拟道具的展示形态调整至符合预设条件,改变所述辅助虚拟道具的展示特效。
  14. 根据权利要求1所述的方法,其特征在于,所述目标用户的人脸图像包括多个目标用户的人脸图像;
    基于检测到的每个目标用户的所述目标部位的位置信息,在所述人脸图像上与检测到的每个目标部位的位置信息对应的相对位置处,分别展示初始形态的目标虚拟道具。
  15. 根据权利要求14所述的方法,其特征在于,根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态,包括:
    根据检测到的所述多个目标用户中每个目标用户的目标部位的状态信息,以及每个目标用户对应的脸形变化信息,从所述多个目标用户中确定选中用户,调整所述选中用户对应的目标虚拟道具的展示形态。
  16. 根据权利要求14所述的方法,其特征在于,根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态,包括:
    根据检测到的所述多个目标用户中每个目标用户的目标部位的状态信息,分别调整所述多个目标用户中每个目标用户对应的目标虚拟道具的展示形态。
  17. 一种操作控制的方法,其特征在于,所述方法包括:
    获取目标用户的人脸图像;
    根据获取的人脸图像,展示初始形态的目标虚拟道具;
    根据检测到的所述人脸图像中的人脸表情信息,以及检测到的声音信息,调整所述目标虚拟道具的展示形态。
  18. 一种操作控制的装置,其特征在于,包括:
    获取模块,用于获取目标用户的人脸图像;
    检测模块,用于检测所述人脸图像中目标部位的位置信息;
    展示模块,用于基于检测到的位置信息,在所述人脸图像上与检测到的位置信息对应的相对位置处展示处于初始展示形态的目标虚拟道具;
    调整模块,用于根据检测到的所述目标部位的状态信息,调整所述目标虚拟道具的展示形态。
  19. 一种操作控制的装置,其特征在于,包括:
    获取模块,用于获取目标用户的人脸图像;
    展示模块,用于根据获取的人脸图像,展示初始状态的目标虚拟道具;
    调整模块,用于根据检测到的所述人脸图像中的人脸表情信息,以及检测到的声音信息,调整所述目标虚拟道具的展示形态。
  20. 一种计算机设备,其特征在于,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当计算机设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行如权利要求1至17任一所述的操作控制的方法的步骤。
  21. 一种计算机可读存储介质,其特征在于,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行如权利要求1至17任意一项所述的操作控制的方法的步骤。
PCT/CN2021/096269 2020-06-24 2021-05-27 一种操作控制的方法及装置 WO2021258978A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010589705.7A CN111760265B (zh) 2020-06-24 2020-06-24 一种操作控制的方法及装置
CN202010589705.7 2020-06-24

Publications (1)

Publication Number: WO2021258978A1

Family ID: 72721813

Family Applications (1)

PCT/CN2021/096269 WO2021258978A1 (zh) 2020-06-24 2021-05-27 一种操作控制的方法及装置

Country Status (2): CN (1): CN111760265B (zh); WO (1): WO2021258978A1 (zh)




Also Published As

Publication number Publication date
CN111760265B (zh) 2024-03-22
CN111760265A (zh) 2020-10-13


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21828352; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 18012610; Country of ref document: US)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31/03/2023))
122 Ep: pct application non-entry in european phase (Ref document number: 21828352; Country of ref document: EP; Kind code of ref document: A1)