WO2021258978A1 - Method and apparatus for operation control - Google Patents
Method and apparatus for operation control
- Publication number: WO2021258978A1
- Application: PCT/CN2021/096269
- Authority: WIPO (PCT)
Classifications
- A—HUMAN NECESSITIES; A63—SPORTS; GAMES; AMUSEMENTS
  - A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    - A63F13/213 — Input arrangements for video game devices characterised by their sensors, purposes or types, comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    - A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING
  - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    - G06V40/161 — Human faces: Detection; Localisation; Normalisation
    - G06V40/174 — Facial expression recognition
Definitions
- the present disclosure relates to the field of Internet technology, and in particular to a method and device for operation control.
- The embodiments of the present disclosure provide a method and a device for operation control.
- In a first aspect, an operation control method includes: acquiring a face image of a target user; detecting position information of a target part in the face image; displaying a target virtual prop in an initial display form at a relative position, corresponding to the detected position information, on the face image; and adjusting the display form of the target virtual prop according to detected state information of the target part.
- The display form includes the shape and/or size of the display.
- Adjusting the display form of the target virtual prop according to the detected state information of the target part includes: when it is detected that the state attribute of the target part meets a preset state attribute condition and the detected sound attribute meets a preset sound attribute condition, adjusting the display form of the target virtual prop.
- In a possible implementation, the detected facial shape change information of the target user is used to determine the adjustment range of the display form of the target virtual prop per unit time, and the display form of the target virtual prop is adjusted according to the determined adjustment range.
- In another implementation, adjusting the display form of the target virtual prop according to the detected state information of the target part includes: when the state attribute of the target part meets a preset state attribute condition, adjusting the display form of the target virtual prop.
- The state attribute of the target part meeting a preset state attribute condition includes: the state of the target part conforming to a pouting (puckered-mouth) state.
- The sound attribute meeting a preset sound attribute condition includes: the volume of the detected sound being greater than a set threshold, and/or the detected sound being of a preset type.
- the method further includes:
- after the display form of the target virtual prop is adjusted to meet a preset condition, the target animation special effect corresponding to the target virtual prop is displayed.
- Displaying the target animation special effect corresponding to the target virtual prop includes: displaying the target animation special effect matching the prop attribute information of the target virtual prop.
- the method further includes:
- the recorded number of successful operations is updated, and the target virtual prop in the initial state is redisplayed.
- the method further includes:
- a personalized addition object is obtained, and the target virtual prop is generated based on the obtained object and a preset virtual prop model.
- the method further includes:
- an auxiliary virtual prop is displayed in a preset position area on the screen on which the face image is displayed, and the display special effect of the auxiliary virtual prop is changed in response to the display form of the target virtual prop being adjusted to meet a preset condition.
- When the face image of the target user includes face images of multiple target users, the target virtual props in the initial form are respectively displayed at the relative positions corresponding to the detected position information of each target part on the face image.
- Adjusting the display form of the target virtual prop according to the detected state information of the target part includes: determining a selected user from the plurality of target users according to the detected state information of the target part of each target user and the face shape change information corresponding to each target user, and adjusting the display form of the target virtual prop corresponding to the selected user.
- Adjusting the display form of the target virtual prop according to the detected state information of the target part may also include: respectively adjusting, according to the detected state information of the target part of each of the plurality of target users, the display form of the target virtual prop corresponding to each target user.
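The multi-user variant described above can be loosely illustrated as follows (all names and the fixed inflation increment are assumptions for illustration, not from the patent): each detected user's balloon is driven independently by that user's own mouth state.

```python
# Illustrative sketch of the multi-user variant: each detected user gets an
# independently adjusted balloon, driven by that user's own mouth state.

def adjust_all(users):
    """`users` maps user id -> (mouth_state, balloon_scale); each user's
    balloon inflates by a fixed step only while that user is pouting."""
    return {
        uid: (state, scale + 0.1 if state == "pouting" else scale)
        for uid, (state, scale) in users.items()
    }

# User "a" is pouting (balloon grows); user "b" is not (balloon unchanged).
result = adjust_all({"a": ("pouting", 1.0), "b": ("neutral", 1.0)})
```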
- The target virtual prop corresponds to a real target operation object in a real scene; the relative position between the target part and the target virtual prop matches the relative position of the real target operation object relative to the target part when it is operated in the real scene.
- In a second aspect, embodiments of the present disclosure also provide an operation control method, including: acquiring a face image of a target user; displaying a target virtual prop in an initial state according to the acquired face image; and adjusting the display form of the target virtual prop according to facial expression information detected in the face image and detected sound information.
- an operation control device including:
- the acquisition module is used to acquire the face image of the target user.
- the detection module is used to detect the position information of the target part in the face image.
- the display module is configured to display the target virtual item in the initial display form at a relative position corresponding to the detected position information on the face image based on the detected position information.
- the adjustment module is used to adjust the display form of the target virtual props according to the detected state information of the target part.
- the display form includes the shape and/or size of the display.
- The adjustment module is specifically configured to: when it is detected that the state attribute of the target part meets the preset state attribute condition and the detected sound attribute meets the preset sound attribute condition, adjust the display form of the target virtual prop.
- The adjustment module is specifically configured to: when the state attribute of the target part meets the preset state attribute condition and the detected sound attribute meets the preset sound attribute condition, determine, according to the detected face shape change information of the target user, the adjustment range of the display form of the target virtual prop per unit time, and adjust the display form of the target virtual prop according to the determined adjustment range.
- Alternatively, the adjustment module is specifically configured to: when the state attribute of the target part meets a preset state attribute condition, determine, based on the detected face shape change information of the target user, the adjustment range of the display form of the target virtual prop per unit time, and adjust the display form of the target virtual prop accordingly.
- The state of the target part corresponds to a pouting (puckered-mouth) state.
- The sound attribute meeting the preset sound attribute condition includes: the volume of the detected sound being greater than a set threshold, and/or the detected sound being of a preset type.
- The device further includes a target animation special effect display module, configured to display the target animation special effect corresponding to the target virtual prop after the display form of the target virtual prop has been adjusted to meet a preset condition.
- The target animation special effect display module is specifically configured to display the target animation special effect of the virtual balloon being blown through or blown away.
- the target animation special effect display module is specifically configured to display the target animation special effect matching the prop attribute information according to the prop attribute information of the target virtual prop.
- The device further includes a count update module, configured to update the recorded number of successful operations after the display form of the target virtual prop has been adjusted to meet a preset condition, and to redisplay the target virtual prop in the initial state.
- The device further includes a personalization setting module, configured to obtain a personalized addition object and to generate the target virtual prop based on the obtained object and a preset virtual prop model.
- the device further includes: an auxiliary virtual prop display module, configured to display auxiliary virtual props in a preset position area on the screen on which the face image is displayed.
- The auxiliary virtual prop display effect adjustment module is configured to change the display special effect of the auxiliary virtual prop in response to the display form of the target virtual prop being adjusted to meet a preset condition.
- The face image of the target user includes face images of multiple target users; the display module is further configured to, based on the detected position information of the target part of each target user, respectively display the target virtual props in the initial form at the relative positions corresponding to the detected position information of each target part on the face image.
- The adjustment module is further specifically configured to: determine a selected user from the multiple target users according to the detected state information of the target part of each target user and the face shape change information corresponding to each target user, and adjust the display form of the target virtual prop corresponding to the selected user.
- The adjustment module is further specifically configured to respectively adjust, according to the detected state information of the target part of each of the plurality of target users, the display form of the target virtual prop corresponding to each target user.
- The target virtual prop corresponds to a real target operation object in a real scene; the relative position between the target part and the target virtual prop matches the relative position of the real target operation object relative to the target part when it is operated in the real scene.
- an operation control device including:
- the acquisition module is used to acquire the face image of the target user.
- the display module is used to display the target virtual props in the initial state according to the acquired face image.
- the adjustment module is configured to adjust the display form of the target virtual prop according to the facial expression information in the detected facial image and the detected sound information.
- embodiments of the present disclosure also provide a computer device, including a processor, a memory, and a bus.
- the memory stores machine-readable instructions executable by the processor.
- The processor communicates with the memory through the bus.
- When the machine-readable instructions are executed by the processor, the steps of the above-mentioned first aspect, or of any possible implementation of the first aspect, or of the above-mentioned second aspect are performed.
- The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it executes the steps of the first aspect or of any possible implementation of the first aspect.
- the embodiments of the present disclosure can realize the real-time control of the display form of the virtual props by the user, realize the coordinated display of the user's face image and the virtual props, and enhance the real experience of operating the virtual props.
- Because the virtual props replace real props, the solution also saves material costs, protects the environment (reducing waste from real props), and facilitates collection of operation statistics.
- Fig. 1 shows a flowchart of an operation control method provided by an embodiment of the present disclosure
- FIG. 2 shows a schematic interface diagram for acquiring a face image provided by an embodiment of the present disclosure
- FIG. 3 shows a schematic interface diagram of a target virtual prop in an initial form provided by an embodiment of the present disclosure
- FIG. 4 shows a schematic interface diagram of a target virtual prop after adjustment provided by an embodiment of the present disclosure
- FIG. 5 shows a schematic interface diagram of a blow-through special effect provided by an embodiment of the present disclosure
- FIG. 6 shows a schematic interface diagram of a blowing special effect provided by an embodiment of the present disclosure
- FIG. 7 shows a schematic interface diagram of an auxiliary virtual prop display special effect provided by an embodiment of the present disclosure
- FIG. 8 shows a schematic interface diagram of target virtual props in an initial form corresponding to multiple target users according to an embodiment of the present disclosure
- FIG. 9 shows a schematic interface diagram of adjusted target virtual props corresponding to multiple target users according to an embodiment of the present disclosure
- FIG. 10 shows a flowchart of another operation control method provided by an embodiment of the present disclosure.
- FIG. 11 shows a schematic diagram of an operation control device provided by an embodiment of the present disclosure
- FIG. 12 shows a schematic diagram of another operation control device provided by an embodiment of the present disclosure.
- FIG. 13 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure
- FIG. 14 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
- The embodiments of the present disclosure provide a method and device for operation control, which enable real-time control by the user of the display form of virtual props, realize coordinated display of the user's face image and the virtual props, and enhance the sense of realism when operating the virtual props. Because the virtual props replace real props, the solution also saves material costs, protects the environment (reducing waste from real props), and facilitates collection of operation statistics.
- the embodiments of the present disclosure determine the display position of the target virtual prop according to the position information of the target part, so that the display position of the target virtual prop can conform to the relative position relationship in the real scene, and further enhance the reality experience.
- the execution subject of the operation control method provided in the embodiment of the present disclosure is generally a computer device with a certain computing capability.
- the computer equipment includes, for example, terminal equipment or servers or other processing equipment.
- The terminal equipment may be User Equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
- The operation control method may be implemented by a processor invoking computer-readable instructions stored in a memory.
- FIG. 1 is a flowchart of an operation control method provided by an embodiment of the present disclosure.
- the method includes steps S101 to S104, wherein:
- S101: Acquire a face image of a target user. The face image of the target user can be acquired through the front camera of the terminal device.
- the front camera will automatically search for and shoot the face image of the target user.
- the terminal device may be a smart phone, a tablet computer, etc.
- The specific interface for acquiring the user's face image may include the following parts: the face image; an action prompt instructing the target user to start the game; the shape and style of the next target virtual prop; the number of successful operations; auxiliary virtual props; and trigger buttons for user operations such as My Pet, Balloon DIY (Do It Yourself), and a leaderboard. The shape and style of the next target virtual prop can remind the user what the next target virtual prop will look like.
- The number of successful operations indicates how many times the user has successfully blown up a balloon; the My Pet trigger button can be used to instruct the user to perform other operations on the auxiliary virtual props; and the Balloon DIY trigger button can be used to instruct the target user to select DIY objects, such as photos and stickers they like or are interested in, and design the target virtual prop themselves.
- the specific interface diagram for acquiring the user's face image is shown in Fig. 2, taking the terminal device as a mobile phone as an example.
- S102 Detect location information of a target part in the face image.
- the target part may be the mouth; the position information of the target part is used to indicate the position of the mouth on the terminal screen.
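As a rough sketch of how the detected mouth position could be turned into a display position for the prop (the landmark format and the downward offset are illustrative assumptions, not details from the patent):

```python
# Hypothetical sketch: place the balloon anchor just below the detected mouth.
# `mouth_landmarks` is assumed to be a list of (x, y) points in normalized
# screen coordinates, as a face-landmark detector might return.

def balloon_anchor(mouth_landmarks, offset=0.05):
    """Return the (x, y) position at which to draw the balloon,
    centered on the mouth and offset downward (y grows downward)."""
    xs = [p[0] for p in mouth_landmarks]
    ys = [p[1] for p in mouth_landmarks]
    cx = sum(xs) / len(xs)          # mouth center x
    cy = sum(ys) / len(ys)          # mouth center y
    return (cx, cy + offset)        # draw slightly below the mouth

# Example: four corner points of a mouth bounding region
anchor = balloon_anchor([(0.45, 0.60), (0.55, 0.60), (0.45, 0.64), (0.55, 0.64)])
```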
- The target virtual prop may correspond to a real target operation object in a real scene; in this case, the relative position between the target part and the target virtual prop may match the relative position of the real target operation object relative to the target part when it is operated in the real scene.
- The way the user operates the target virtual prop then also matches the way the user would operate the corresponding real target operation object in the real scene, further enhancing the sense of realism.
- the target virtual item may be a balloon, and the relative position may be below the mouth;
- the style of the balloon may include a variety of styles, for example, it may include a rabbit style, a donut style, and so on.
- After the target part is detected, the target virtual prop display function of S103 may be activated; for example, a virtual balloon is displayed after the user's mouth is detected.
- the embodiments of the present disclosure may also perform statistical recording of the user's operations on the target virtual item.
- the user may be given a certain amount of preparation time.
- For example, a countdown may be started after the user initiates a preset trigger operation (such as pouting), giving the user time to prepare; after the countdown ends, timing and recording of the user's operations begins.
- S103: Based on the detected position information, display the target virtual prop in the initial display form at a relative position, corresponding to the position information, on the face image. The initial display form indicates the state of the target virtual prop in the initial display stage; for example, the target virtual prop in the initial display form may be a deflated (uninflated) small balloon.
- a deflated small balloon is displayed below the mouth.
- the specific display interface is shown in FIG. 3, and the terminal device is a mobile phone as an example.
- S104 Adjust the display form of the target virtual item according to the detected state information of the target part.
- the state information of the target part can be determined by performing feature extraction on the image of the target part.
- The state information of the target part may include posture information of the target part, such as pouting; for example, the display form of the virtual balloon can be adjusted under the precondition that the state information of the target part conforms to the pouting state.
- The display form of the target virtual prop may include the shape and/or size of the display. For example, the display shape of the virtual balloon may include the shape of a rabbit, a doughnut, etc.; the size indicates the degree of inflation of the balloon and may be expressed as a multiple of the size of the initial display form, for example, 1.5 times the initial size.
- the display form of the target virtual prop can be adjusted according to the detected state information of the target part and the detected sound information.
- Feature extraction is performed on the mouth region to determine the state information of the target user's mouth; the target user's voice data is acquired and processed to determine the corresponding sound information; the display size of the balloon is then adjusted accordingly.
- the display size of the balloon is adjusted according to the above-mentioned mouth state information and the above-mentioned sound information.
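As an illustration of how the acquired voice data might be reduced to a volume value that can be compared against a threshold, the following is a generic RMS-to-decibel estimate (an assumption for illustration, not the patent's actual audio processing):

```python
# Illustrative sketch: estimate the loudness of a short audio frame so it
# can be compared against the preset volume threshold.
import math

def frame_volume_db(samples, ref=1.0):
    """Root-mean-square level of a frame of PCM samples, in decibels
    relative to `ref` (full scale = 1.0)."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")
    return 20 * math.log10(rms / ref)

# A frame at half of full scale sits at about -6 dBFS.
level = frame_volume_db([0.5, -0.5, 0.5, -0.5])
```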
- Specifically: when the state attribute of the target part meets the preset state attribute condition and the detected sound attribute meets the preset sound attribute condition, the display form of the target virtual prop is adjusted.
- The state attribute may include the posture characteristics of the target part; for example, for a balloon-blowing scene, the state attribute of the mouth includes whether or not the mouth is pouting.
- The preset state attribute conditions may include pouting at different amplitudes, such as a slight pout, a pronounced pout, and the like.
- Accordingly, the state of the target part conforming to the preset state attribute condition may mean that the state of the target part conforms to the pouting state.
- The sound attribute conditions can include sound type, volume, and sound duration. For the balloon-blowing scene, the sound type can be divided into blowing sounds and other sounds; the volume is obtained by detecting the target user's voice; and the sound duration indicates how long the sound lasts.
- The preset sound attribute conditions may include: sound type: blowing (the sound type may also be unrestricted); volume: greater than or equal to 1 decibel (an example only, not an actual operational threshold); sound duration: greater than or equal to 3 seconds (the duration may also be unrestricted).
- When these conditions are met, the size of the balloon below the mouth is adjusted.
- The sound attribute meeting the preset sound attribute condition may be: the detected volume is greater than a set threshold, and/or the detected sound is of a preset type.
- For example, the sound meeting the preset sound attribute condition may mean that the volume is greater than the set threshold and the sound type is blowing; when the detected volume is greater than 1 decibel, the size of the balloon below the mouth is adjusted.
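The gating just described, adjusting only while the mouth is pouting and a sufficiently loud blowing sound is detected, can be sketched as follows (all names and the 1 dB threshold are illustrative, taken from the example above rather than any real product value):

```python
# Minimal sketch of the gating logic: the balloon is inflated only while the
# mouth is pouting AND the detected sound is a blowing sound louder than the
# set threshold.

VOLUME_THRESHOLD_DB = 1.0   # example threshold only, per the text above

def should_inflate(mouth_state, sound_type, volume_db,
                   threshold_db=VOLUME_THRESHOLD_DB):
    return (mouth_state == "pouting"
            and sound_type == "blowing"
            and volume_db > threshold_db)

ok = should_inflate("pouting", "blowing", 3.0)      # all conditions met
idle = should_inflate("neutral", "blowing", 3.0)    # mouth not pouting
```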
- In a possible implementation, the display form of the target virtual prop is adjusted as follows: when the state attribute of the target part meets the preset state attribute condition and the detected sound attribute meets the preset sound attribute condition, the adjustment range of the display form of the target virtual prop per unit time is determined according to the detected face shape change information of the target user, and the display form of the target virtual prop is adjusted according to the determined adjustment range.
- The face shape change information can be used to indicate the strength of the corresponding action (for example, blowing strength); it can include the face shape change amplitude, that is, the mouth-opening amplitude and the cheek-bulge amplitude.
- The face shape change information can affect the inflation speed of the balloon: the wider the mouth opening and the larger the cheek bulge (i.e., the harder the blow), the faster the balloon inflates; the smaller the mouth opening and cheek bulge (i.e., the gentler the blow), the slower the balloon inflates.
- The mouth-opening amplitude and cheek-bulge amplitude of the target user's face are detected; based on these amplitudes and the relationship, determined above, between face shape change information and balloon inflation speed, the degree of change of the balloon size per unit time (i.e., the degree of balloon inflation) is determined, and the display size of the balloon on the terminal screen is adjusted accordingly.
- If the current target user is pouting, the sound meets the preset sound attribute conditions, and a large mouth opening and large cheek bulge are detected on the target user's face, the display size of the balloon on the terminal screen is adjusted at a higher inflation speed. The adjusted display interface is shown in FIG. 4, taking a mobile phone as the terminal device.
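A minimal sketch of this per-unit-time adjustment, assuming normalized mouth-opening and cheek-bulge amplitudes and a simple linear mapping to inflation speed (the mapping and coefficients are illustrative assumptions, not the patent's formula):

```python
# Hedged sketch: map the mouth-opening and cheek-bulge amplitudes
# (normalized to [0, 1]) to a balloon scale increment per unit time.

def inflation_step(mouth_open_amp, cheek_bulge_amp, max_step=0.2):
    """Scale increase of the balloon per unit time: harder blowing
    (larger amplitudes) inflates the balloon faster."""
    effort = (mouth_open_amp + cheek_bulge_amp) / 2.0   # crude blow strength
    return max_step * max(0.0, min(1.0, effort))

scale = 1.0                                   # initial display size
scale += inflation_step(0.8, 0.6)             # strong blow -> fast growth
scale += inflation_step(0.2, 0.1)             # gentle blow -> slow growth
```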
- If the adjustment is interrupted, the display form of the target virtual prop is reset to the initial form. That is, while the target user is blowing up the balloon, if the state information of the mouth changes (i.e., from pouting to not pouting), the balloon below the target user's mouth is reset to the initial deflated small-balloon state.
- In another implementation, the display size of the balloon on the terminal screen may be adjusted only according to the detected mouth state information and face shape change information of the target user.
- Specifically: when the state attribute of the target part meets the preset state attribute condition, the adjustment range of the display form of the target virtual prop per unit time is determined according to the detected face shape change information of the target user, and the display form of the target virtual prop is adjusted according to the determined range.
- That is, the mouth-opening amplitude and cheek-bulge amplitude of the target user's face are detected; together with the relationship, determined above, between face shape change information and balloon inflation speed, they determine the degree of change of the balloon size per unit time (i.e., the degree of balloon inflation), and the display size of the balloon on the terminal screen is adjusted according to this rate.
- the embodiments of the present disclosure can realize the real-time control of the display form of the virtual props by the user, realize the fusion display of the user's face image and the display form of the virtual props, and enhance the real experience of operating the virtual props.
- Because the virtual props replace real props, the solution also saves material costs, protects the environment (reducing waste from real props), and facilitates collection of operation statistics.
- Moreover, the relative position between the target part and the target virtual prop matches the relative position of the real target operation object relative to the target part when it is operated in the real scene, so operating the virtual prop more closely matches the real scene.
- In a possible implementation, the method further includes: after the display form of the target virtual prop has been adjusted to meet a preset condition, displaying the target animation special effect corresponding to the target virtual prop.
- Here, the preset condition refers to a size threshold of the target virtual prop; in this example, it refers to the maximum inflation size of the balloon.
- The target animation special effect may be, for example, the prop being popped or blown away.
- When the target part is the mouth and the target virtual prop is a virtual balloon, the target animation special effect of the target virtual prop may be an animation special effect showing the virtual balloon being popped or blown away.
- For example, while the target user is blowing a balloon, when the terminal device detects that the size of the balloon on the terminal screen has reached the maximum inflation size, and it is further detected that the target user is still pouting and the sound attributes meet the preset sound attribute conditions (that is, the user continues to blow the balloon), an animation special effect of the balloon exploding is displayed on the terminal screen.
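The trigger logic above can be expressed as a small decision function. This is a sketch under stated assumptions: the threshold value, effect names, and the modeling of the preset sound attribute condition as a simple loudness check are illustrative, not the patent's implementation.

```python
MAX_INFLATION_SIZE = 5.0  # assumed size threshold (the "maximum inflation size")


def next_effect(balloon_size, is_pouting, sound_db, db_threshold=1.0):
    """Decide which animation to show once the balloon is at its size limit.

    The preset sound-attribute condition is modeled here simply as the
    loudness exceeding a threshold while the mouth is still pouting.
    """
    still_blowing = is_pouting and sound_db > db_threshold
    if balloon_size >= MAX_INFLATION_SIZE and still_blowing:
        # Balloon reached its limit and the user kept blowing: play the pop.
        return "explode_animation"
    return "keep_inflating"
```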
- In another implementation, according to the prop attribute information of the target virtual prop, a target animation special effect matching that prop attribute information is displayed.
- Here, the prop attribute information may include the type of the prop and the realistic effect corresponding to each type; the type may be, for example, a bomb or a cloud, where the realistic effect corresponding to the bomb is an explosion and the realistic effect corresponding to the cloud is floating.
- For example, when the target virtual prop is a bomb balloon, it is determined to display the animation special effect matching the bomb's realistic effect, that is, a popping (explosion) special effect of the bomb balloon is displayed on the terminal screen.
- Taking the terminal device being a mobile phone as an example, the specific display interface is shown in Figure 5.
- Likewise, when the target virtual prop is a cloud balloon, it is determined to display the animation special effect matching the cloud's realistic effect, that is, a blown-away (floating) special effect of the cloud balloon is displayed on the terminal screen.
- Taking the terminal device being a mobile phone as an example, the specific display interface is shown in Figure 6.
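The mapping from prop attribute information to the matching animation can be sketched as a lookup table. The dictionary keys, effect names, and fallback behavior are illustrative assumptions, not names from the patent.

```python
# Illustrative lookup from a prop's type (part of its attribute
# information) to the matching target animation special effect.
PROP_EFFECTS = {
    "bomb": "explosion",   # bomb balloon pops with an explosion effect
    "cloud": "floating",   # cloud balloon floats away
}


def target_animation(prop_type):
    # Fall back to a generic pop when the type has no dedicated effect.
    return PROP_EFFECTS.get(prop_type, "pop")
```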
- In a possible implementation, the method further includes: after the display form of the target virtual prop has been adjusted to meet a preset condition, updating the recorded number of successful operations and redisplaying the target virtual prop in its initial state.
- Here, the number of successful operations may be the number of balloons blown up successfully.
- The prop attributes of the redisplayed target virtual prop in the initial state may be the same as or different from those of the previous initial-state target virtual prop.
- the properties of the props can include color, shape, type, and so on.
- For example, while the target user is blowing a balloon, when the terminal device detects that the balloon size on the terminal screen has reached the maximum inflation size, and it is further detected that the target user is still pouting and the sound attributes meet the preset sound attribute conditions (that is, the user continues to blow), the balloon is deemed to have been blown up successfully; the count of successfully blown balloons is then updated, and a small deflated balloon is redisplayed below the target user's mouth (its shape, color, and type may be the same as or different from those of the previous balloon).
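The success bookkeeping described above can be sketched as follows. The `state` dictionary, its field names, and the small color pool are all hypothetical stand-ins for the terminal's display state.

```python
import random


def on_balloon_success(state, randomize_next=False):
    """Update the success count and respawn a deflated balloon.

    `state` is a plain dict standing in for the terminal's display state.
    When `randomize_next` is set, the next balloon's color is drawn from an
    illustrative pool; otherwise the previous color is kept, matching the
    "same or different attributes" behavior described above.
    """
    state["success_count"] += 1
    colors = ["red", "blue", "yellow"]
    state["balloon"] = {
        "size": "deflated",  # redisplay the prop in its initial state
        "color": random.choice(colors) if randomize_next else state["balloon"]["color"],
    }
    return state
```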
- In a possible implementation, the method further includes: obtaining a personalized added object, and generating the target virtual prop based on the obtained personalized added object and a preset virtual prop model.
- The personalized added object may be a sticker, a photo, or another do-it-yourself (DIY) object.
- The target user may select DIY objects, such as photos or stickers, that they like or are interested in, and add them to the preset virtual prop model according to preset rules to generate the target virtual prop.
- For example, the user selects the DIY button on the terminal device and adds an image of Snow White that they like to the balloon prop model; a balloon containing the Snow White image is generated and displayed on the screen of the terminal device.
- In a possible implementation, the method further includes: displaying an auxiliary virtual prop in a preset position area on the screen on which the face image is displayed; and, in response to the display form of the target virtual prop being adjusted to meet the preset condition, changing the display special effect of the auxiliary virtual prop.
- Here, the auxiliary virtual prop may be a virtual pet or a virtual character, such as a virtual cat, a virtual dog, or a virtual smiling character; the preset position area may be any area outside the area where the face image is located on the terminal screen.
- The display special effect of the auxiliary virtual prop may be applause, clapping, a thumbs-up, and so on.
- For example, an auxiliary virtual prop is displayed in the preset area on the screen of the terminal device. While the target user is blowing a balloon, when the terminal device detects that the balloon size on the terminal screen has reached the maximum inflation size, and it is further detected that the target user is still pouting and the sound attributes meet the preset sound attribute conditions (that is, the user continues to blow), the user has blown the balloon successfully, and the display special effect of the auxiliary virtual prop is adjusted.
- For instance, the auxiliary virtual prop displayed in the preset area is a virtual smiling character, and its display effect is adjusted to a thumbs-up special effect; taking the terminal device being a mobile phone as an example, the specific display interface is shown in Figure 7.
- In a possible implementation, when the face image acquired by the terminal device includes the face images of multiple target users, feature extraction is performed on each target user's face image, each target user's mouth position information is determined from the attribute information of the extraction results, and a target virtual prop in the initial form (that is, a deflated balloon) is displayed below each target user's mouth based on that mouth position information.
- Taking the terminal device being a mobile phone as an example, the specific display interface is shown in Figure 8.
- In this way, the embodiments of the present disclosure can present a multi-person interaction scene.
- For example, multiple target users may compete for the operation authority of the target virtual prop (each target user may have a corresponding target virtual prop, but only the winner can operate it).
- In a possible implementation, adjusting the display form of the target virtual prop according to the detected state information of the target part includes: determining a selected user from among the multiple target users according to the detected state information of the target part of each of the multiple target users and the face shape change information corresponding to each target user, and adjusting the display form of the target virtual prop corresponding to the selected user.
- For example, the user who is operating fastest among the multiple target users can be determined in this way. In another possible implementation, according to the detected state information of the target part of each of the multiple target users, the display form of the target virtual prop corresponding to each target user is adjusted respectively.
- For example, when the face image acquired by the terminal device includes the face images of multiple target users, feature extraction is performed on each face image to determine each target user's mouth state information (namely, whether the user is pouting and blowing) and the face shape change information corresponding to each target user (namely, the mouth opening magnitude and cheek bulging magnitude), and the size of the balloon below each target user's mouth is adjusted according to that user's mouth state information and corresponding face shape change information.
- For instance, suppose the face image acquired by the terminal device includes the face images of three target users (user a, user b, and user c), and feature extraction is performed on all three face images.
- The mouth state of user a is determined as: pouting and blowing, with a sound level of 2 decibels and a sound duration of 4 seconds, and the mouth opening magnitude and cheek bulging magnitude corresponding to user a are relatively large; the mouth state of user b is determined as: smiling, with the mouth not open; the mouth state of user c is determined as: pouting and blowing, with a sound level of 1.5 decibels and a sound duration of 3 seconds, and the mouth opening magnitude and cheek bulging magnitude corresponding to user c are smaller.
- Accordingly, the balloon below user a's mouth is adjusted to 4 times its initial size, the balloon below user b's mouth is not adjusted, and the balloon below user c's mouth is adjusted to 2 times its initial size.
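The per-user adjustment in the example above can be sketched as follows. The field names, the normalized magnitude, and the x4 / x2 multipliers mirror the illustrative user a / b / c example rather than any exact rule from the patent.

```python
def adjust_all_users(users):
    """Respectively adjust each target user's balloon from that user's own
    detected mouth state, as in the user a / b / c example above.

    `users` maps a user id to its detected state dict; only users who are
    pouting and blowing inflate their balloon, scaled by the detected
    mouth-opening / cheek-bulging magnitude (normalized to [0, 1]).
    """
    sizes = {}
    for name, info in users.items():
        if info["pouting"] and info["blowing"]:
            # Larger face-change magnitude -> larger size multiplier.
            multiplier = 4 if info["magnitude"] >= 0.5 else 2
        else:
            multiplier = 1  # mouth closed or not blowing: leave the balloon as-is
        sizes[name] = info["initial_size"] * multiplier
    return sizes
```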
- In addition, the screen can be recorded as a video during the above operations, and the recorded video can be shared through a social app.
- FIG. 10 is a flowchart of another operation control method provided by an embodiment of the present disclosure.
- the method includes steps S1001 to S1003, wherein:
- Here, the position information of the target part in the face image may be detected; based on the detected position information, the target virtual prop in the initial form is displayed at the relative position on the face image corresponding to the detected position information.
- S1003 Adjust the display form of the target virtual prop according to the detected facial expression information in the facial image and the detected sound information.
- Here, the facial expression information may include the state information of the target part and/or the face shape change magnitude information; the state information of the target part may be, for example, whether the mouth is pouted.
- the sound information may include information such as sound type, sound size, and sound continuity.
- Specifically, the state information of the target part (whether the mouth is pouted) and the face shape change magnitude (the cheek bulging magnitude and the mouth opening and closing magnitude) can be determined from the face image; the target user's voice data can be acquired and processed to determine the sound information corresponding to that voice data; and the display size of the target virtual prop is then adjusted according to the state information of the target part, the face shape change magnitude information, and the sound information.
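The combined expression-and-sound gating of this second embodiment can be sketched as follows. The dictionary field names, the `"blow"` sound type, and the 1.0 dB threshold are illustrative assumptions.

```python
def adjust_by_expression_and_sound(size, expression, sound):
    """Second-embodiment sketch: gate the adjustment on expression and sound.

    `expression` carries the target-part state and face-change magnitude,
    `sound` carries the detected loudness and type. The balloon only grows
    while the mouth is pouted AND a sufficiently loud blowing-type sound
    is detected; growth scales with the face-shape change magnitude.
    """
    if not expression["mouth_pouted"]:
        return size
    if sound["type"] != "blow" or sound["loudness_db"] <= 1.0:
        return size
    return size * (1.0 + expression["change_magnitude"])
```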
- For more details on how the display form of the target virtual prop may be adjusted, refer to the description in the first embodiment; the details are not repeated here.
- Those skilled in the art can understand that, in the above methods of the specific implementations, the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- Based on the same inventive concept, an embodiment of the present disclosure further provides an operation control apparatus corresponding to the operation control method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to that of the above operation control method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are omitted.
- FIG. 11 it is a schematic diagram of an operation control apparatus 1100 provided by an embodiment of the present disclosure.
- the apparatus includes: an acquisition module 1101, a detection module 1102, a display module 1103, and an adjustment module 1104; among them,
- the obtaining module 1101 is used to obtain the face image of the target user.
- the detection module 1102 is used to detect the position information of the target part in the face image.
- the display module 1103 is configured to display the target virtual item in the initial display form at a relative position corresponding to the detected position information on the face image based on the detected position information.
- the adjustment module 1104 is configured to adjust the display form of the target virtual item according to the detected state information of the target part.
- In this way, the embodiments of the present disclosure enable the user to control the display form of the virtual prop in real time, fuse the user's face image with the display form of the virtual prop, and enhance the realism of operating the virtual prop.
- In addition, because the virtual prop replaces a real prop, the solution also saves material costs, protects the environment (by reducing waste from real props), and makes it easier to collect operation statistics.
- Moreover, the relative position between the target part and the target virtual prop matches the relative position of the real target operation object with respect to the target part when it is operated in a real scene, so operating the virtual prop more closely matches the real scene.
- the display form includes the shape and/or size of the display.
- In a possible implementation, the adjustment module 1104 is specifically configured to adjust the display form of the target virtual prop when it is detected that the state attribute of the target part meets the preset state attribute condition and the sound attribute meets the preset sound attribute condition.
- In a possible implementation, the adjustment module 1104 is specifically configured to: when the state attribute of the target part meets the preset state attribute condition and the detected sound attribute meets the preset sound attribute condition, determine the display form adjustment range of the target virtual prop within a unit time according to the detected face shape change information of the target user, and adjust the display form of the target virtual prop according to the determined display form adjustment range.
- In a possible implementation, the adjustment module 1104 is specifically configured to: when the state attribute of the target part meets the preset state attribute condition, determine the display form adjustment range of the target virtual prop within a unit time according to the detected face shape change information of the target user, and adjust the display form of the target virtual prop according to the determined display form adjustment range.
- In a possible implementation, when the target part is the mouth and the target virtual prop is a virtual balloon, the state attribute of the target part meeting the preset state attribute condition includes: the state of the target part conforming to a pouting state.
- the sound attribute meeting the preset sound attribute condition includes: detecting that the size of the sound is greater than a set threshold, and/or detecting that the type of the sound is a preset type of sound.
- the device further includes: a target animation special effect display module, configured to display the target animation special effect corresponding to the target virtual prop after adjusting the display form of the target virtual prop to meet a preset condition .
- In a possible implementation, the target animation special effect display module is specifically configured to display the target animation special effect of the virtual balloon being popped or blown away.
- the target animation special effect display module is specifically configured to display the target animation special effect matching the prop attribute information according to the prop attribute information of the target virtual prop.
- In a possible implementation, the device further includes: a counting update module, configured to update the recorded number of successful operations after the display form of the target virtual prop has been adjusted to meet a preset condition, and to redisplay the target virtual prop in the initial state.
- In a possible implementation, the device further includes: a personalization setting module, configured to obtain a personalized added object, and to generate the target virtual prop based on the obtained personalized added object and a preset virtual prop model.
- the device further includes: an auxiliary virtual prop display module, configured to display auxiliary virtual props in a preset position area on the screen on which the face image is displayed.
- the auxiliary virtual prop display effect adjustment module is used to respond to the adjustment of the display form of the target virtual prop to meet a preset condition, and change the special display effect of the auxiliary virtual prop.
- the face image of the target user includes face images of multiple target users; the display module 1103 is further configured to be based on the detected position information of the target part of each target user, At the relative position corresponding to the detected position information of each target part on the face image, the target virtual props in the initial form are respectively displayed.
- In a possible implementation, the adjustment module 1104 is further specifically configured to determine a selected user from among the multiple target users according to the detected state information of the target part of each of the multiple target users and the face shape change information corresponding to each target user, and to adjust the display form of the target virtual prop corresponding to the selected user.
- In a possible implementation, the adjustment module 1104 is further specifically configured to respectively adjust the display form of the target virtual prop corresponding to each of the multiple target users according to the detected state information of the target part of each of the multiple target users.
- the target virtual prop corresponds to a real target operation object in a real scene; the relative position between the target part and the target virtual prop matches the real target in the real scene The relative position of the target operation object relative to the target part when being operated.
- FIG. 12 it is a schematic diagram of an operation control apparatus 1200 provided by an embodiment of the present disclosure.
- the apparatus includes: an acquisition module 1201, a display module 1202, and an adjustment module 1203; wherein,
- the obtaining module 1201 is used to obtain the face image of the target user.
- the display module 1202 is used to display the target virtual item in the initial state according to the acquired face image.
- the adjustment module 1203 is configured to adjust the display form of the target virtual prop according to the facial expression information in the detected facial image and the detected sound information.
- an embodiment of the present application also provides an electronic device.
- a schematic structural diagram of an electronic device 1300 provided in an embodiment of this application includes a processor 1301, a memory 1302, and a bus 1303.
- the memory 1302 is used to store execution instructions, including the memory 13021 and the external memory 13022; the memory 13021 here is also called internal memory, which is used to temporarily store the calculation data in the processor 1301 and the data exchanged with the external memory 13022 such as a hard disk.
- the processor 1301 exchanges data with the external memory 13022 through the memory 13021, and when the electronic device 1300 is running, the processor 1301 and the memory 1302 communicate through the bus 1303, so that the processor 1301 executes the following instructions:
- obtain the face image of the target user; detect the position information of the target part in the face image; based on the detected position information, display the target virtual prop in the initial form at the relative position on the face image corresponding to the detected position information; and adjust the display form of the target virtual prop according to the detected state information of the target part.
- FIG. 14 a schematic structural diagram of an electronic device 1400 provided in an embodiment of this application includes a processor 1401, a memory 1402, and a bus 1403.
- the memory 1402 is used to store execution instructions, including a memory 14021 and an external memory 14022; the memory 14021 here is also called internal memory, which is used to temporarily store the calculation data in the processor 1401 and the data exchanged with the external memory 14022 such as a hard disk.
- the processor 1401 exchanges data with the external memory 14022 through the memory 14021.
- when the electronic device 1400 is running, the processor 1401 and the memory 1402 communicate through the bus 1403, so that the processor 1401 executes the following instructions: obtain the face image of the target user; display the target virtual prop in the initial form according to the acquired face image; and adjust the display form of the target virtual prop according to the detected facial expression information in the face image and the detected sound information.
- the embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored, and the computer program executes the steps of the operation control method described in the above method embodiment when the computer program is run by a processor.
- the storage medium may be a volatile or nonvolatile computer readable storage medium.
- The computer program product of the operation control method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the program code includes instructions that can be used to execute the steps of the operation control method described in the above method embodiments.
- the embodiments of the present disclosure also provide a computer program, which, when executed by a processor, implements any one of the methods in the foregoing embodiments.
- The above computer program product may be implemented by hardware, software, or a combination thereof.
- In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- If the function is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a non-volatile computer-readable storage medium executable by a processor.
- Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods described in the embodiments of the present disclosure.
- The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Claims (21)
- 1. A method of operation control, characterized in that the method comprises: acquiring a face image of a target user; detecting position information of a target part in the face image; based on the detected position information, displaying a target virtual prop in an initial display form at a relative position on the face image corresponding to the detected position information; and adjusting the display form of the target virtual prop according to detected state information of the target part.
- 2. The method according to claim 1, characterized in that the display form comprises a displayed shape and/or size.
- 3. The method according to claim 1 or 2, characterized in that adjusting the display form of the target virtual prop according to the detected state information of the target part comprises: adjusting the display form of the target virtual prop when it is detected that a state attribute of the target part meets a preset state attribute condition and a detected sound attribute meets a preset sound attribute condition.
- 4. The method according to claim 3, characterized in that adjusting the display form of the target virtual prop when it is detected that the state attribute of the target part meets the preset state attribute condition and the detected sound attribute meets the preset sound attribute condition comprises: when the state attribute of the target part meets the preset state attribute condition and the detected sound attribute meets the preset sound attribute condition, determining a display form adjustment range of the target virtual prop within a unit time according to detected face shape change information of the target user; and adjusting the display form of the target virtual prop according to the determined display form adjustment range.
- 5. The method according to claim 1, characterized in that adjusting the display form of the target virtual prop according to the detected state information of the target part comprises: when the state attribute of the target part meets a preset state attribute condition, determining a display form adjustment range of the target virtual prop within a unit time according to detected face shape change information of the target user; and adjusting the display form of the target virtual prop according to the determined display form adjustment range.
- 6. The method according to any one of claims 3 to 5, characterized in that, when the target part is a mouth and the target virtual prop is a virtual balloon, the state attribute of the target part meeting the preset state attribute condition comprises: the state of the target part conforming to a pouting state.
- 7. The method according to claim 3, characterized in that the sound attribute meeting the preset sound attribute condition comprises: detecting that the loudness of the sound is greater than a set threshold, and/or detecting that the type of the sound is a preset type of sound.
- 8. The method according to claim 1, characterized in that, after adjusting the display form of the target virtual prop according to the detected state information of the target part, the method further comprises: after the display form of the target virtual prop has been adjusted to meet a preset condition, displaying a target animation special effect corresponding to the target virtual prop.
- 9. The method according to claim 8, characterized in that, when the target part is a mouth and the target virtual prop is a virtual balloon, displaying the target animation special effect of the target virtual prop comprises: displaying the target animation special effect of the virtual balloon being popped or blown away.
- 10. The method according to claim 9, wherein, after the display form of the target virtual prop has been adjusted to meet the preset condition, displaying the target animation special effect corresponding to the target virtual prop comprises: displaying, according to prop attribute information of the target virtual prop, a target animation special effect matching the prop attribute information.
- 11. The method according to claim 1, characterized in that, after adjusting the display form of the target virtual prop according to the detected state information of the target part, the method further comprises: after the display form of the target virtual prop has been adjusted to meet a preset condition, updating a recorded number of successful operations and redisplaying the target virtual prop in an initial state.
- 12. The method according to claim 1, characterized in that the method further comprises: obtaining a personalized added object; and generating the target virtual prop based on the obtained personalized added object and a preset virtual prop model.
- 13. The method according to claim 1, characterized in that the method further comprises: displaying an auxiliary virtual prop in a preset position area on a screen displaying the face image; and changing a display special effect of the auxiliary virtual prop in response to the display form of the target virtual prop being adjusted to meet a preset condition.
- 14. The method according to claim 1, characterized in that the face image of the target user comprises face images of multiple target users; and, based on detected position information of the target part of each target user, target virtual props in an initial form are respectively displayed at relative positions on the face image corresponding to the detected position information of each target part.
- 15. The method according to claim 14, characterized in that adjusting the display form of the target virtual prop according to the detected state information of the target part comprises: determining a selected user from among the multiple target users according to detected state information of the target part of each of the multiple target users and face shape change information corresponding to each target user, and adjusting the display form of the target virtual prop corresponding to the selected user.
- 16. The method according to claim 14, characterized in that adjusting the display form of the target virtual prop according to the detected state information of the target part comprises: respectively adjusting the display form of the target virtual prop corresponding to each of the multiple target users according to detected state information of the target part of each of the multiple target users.
- 17. A method of operation control, characterized in that the method comprises: acquiring a face image of a target user; displaying a target virtual prop in an initial form according to the acquired face image; and adjusting the display form of the target virtual prop according to detected facial expression information in the face image and detected sound information.
- 18. An operation control apparatus, characterized by comprising: an acquisition module, configured to acquire a face image of a target user; a detection module, configured to detect position information of a target part in the face image; a display module, configured to display, based on the detected position information, a target virtual prop in an initial display form at a relative position on the face image corresponding to the detected position information; and an adjustment module, configured to adjust the display form of the target virtual prop according to detected state information of the target part.
- 19. An operation control apparatus, characterized by comprising: an acquisition module, configured to acquire a face image of a target user; a display module, configured to display a target virtual prop in an initial state according to the acquired face image; and an adjustment module, configured to adjust the display form of the target virtual prop according to detected facial expression information in the face image and detected sound information.
- 20. A computer device, characterized by comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate through the bus; and, when the machine-readable instructions are executed by the processor, the steps of the operation control method according to any one of claims 1 to 17 are performed.
- 21. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and, when the computer program is run by a processor, the steps of the operation control method according to any one of claims 1 to 17 are performed.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010589705.7A CN111760265B (zh) | 2020-06-24 | 2020-06-24 | 一种操作控制的方法及装置 |
CN202010589705.7 | 2020-06-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021258978A1 true WO2021258978A1 (zh) | 2021-12-30 |
Family
ID=72721813
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/096269 WO2021258978A1 (zh) | 2020-06-24 | 2021-05-27 | 一种操作控制的方法及装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111760265B (zh) |
WO (1) | WO2021258978A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114618163A (zh) * | 2022-03-21 | 2022-06-14 | 北京字跳网络技术有限公司 | 虚拟道具的驱动方法、装置、电子设备及可读存储介质 |
CN114625291A (zh) * | 2022-03-15 | 2022-06-14 | 北京字节跳动网络技术有限公司 | 一种任务信息展示方法、装置、计算机设备及存储介质 |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111760265B (zh) * | 2020-06-24 | 2024-03-22 | 抖音视界有限公司 | 一种操作控制的方法及装置 |
CN112562721B (zh) * | 2020-11-30 | 2024-04-16 | 清华珠三角研究院 | 一种视频翻译方法、系统、装置及存储介质 |
CN112791416A (zh) * | 2021-01-22 | 2021-05-14 | 北京字跳网络技术有限公司 | 一种场景数据的交互控制方法及装置 |
CN113573158A (zh) * | 2021-07-28 | 2021-10-29 | 维沃移动通信(杭州)有限公司 | 视频处理方法、装置、电子设备及存储介质 |
CN113689256A (zh) * | 2021-08-06 | 2021-11-23 | 江苏农牧人电子商务股份有限公司 | 一种虚拟物品推送方法和系统 |
CN113867530A (zh) * | 2021-09-28 | 2021-12-31 | 深圳市慧鲤科技有限公司 | 虚拟物体控制方法、装置、设备及存储介质 |
CN113920226A (zh) * | 2021-09-30 | 2022-01-11 | 北京有竹居网络技术有限公司 | 用户交互方法、装置、存储介质及电子设备 |
CN113986015B (zh) * | 2021-11-08 | 2024-04-30 | 北京字节跳动网络技术有限公司 | 虚拟道具的处理方法、装置、设备和存储介质 |
CN116077946A (zh) * | 2021-11-08 | 2023-05-09 | 脸萌有限公司 | 角色信息交互方法、设备、存储介质及程序产品 |
CN114494658B (zh) * | 2022-01-25 | 2023-10-31 | 北京字跳网络技术有限公司 | 特效展示方法、装置、设备和存储介质 |
CN114567805A (zh) * | 2022-02-24 | 2022-05-31 | 北京字跳网络技术有限公司 | 确定特效视频的方法、装置、电子设备及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160300567A1 (en) * | 2015-04-13 | 2016-10-13 | Hisense Mobile Communications Technology Co., Ltd. | Terminal and method for voice control on terminal |
CN106445131A (zh) * | 2016-09-18 | 2017-02-22 | 腾讯科技(深圳)有限公司 | 虚拟目标操作方法和装置 |
CN108668050A (zh) * | 2017-03-31 | 2018-10-16 | 深圳市掌网科技股份有限公司 | 基于虚拟现实的视频拍摄方法和装置 |
CN111240482A (zh) * | 2020-01-10 | 2020-06-05 | 北京字节跳动网络技术有限公司 | 一种特效展示方法及装置 |
CN111760265A (zh) * | 2020-06-24 | 2020-10-13 | 北京字节跳动网络技术有限公司 | 一种操作控制的方法及装置 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105893452B (zh) * | 2016-01-22 | 2020-04-17 | 上海肇观电子科技有限公司 | 一种呈现多媒体信息的方法及装置 |
CN107529091B (zh) * | 2017-09-08 | 2020-08-04 | 广州华多网络科技有限公司 | 视频剪辑方法及装置 |
EP3782124A1 (en) * | 2018-04-18 | 2021-02-24 | Snap Inc. | Augmented expression system |
CN108905192A (zh) * | 2018-06-01 | 2018-11-30 | 北京市商汤科技开发有限公司 | 信息处理方法及装置、存储介质 |
CN110308793B (zh) * | 2019-07-04 | 2023-03-14 | 北京百度网讯科技有限公司 | 增强现实ar表情生成方法、装置及存储介质 |
- 2020-06-24: CN application CN202010589705.7A filed, granted as CN111760265B (active)
- 2021-05-27: WO application PCT/CN2021/096269 filed as WO2021258978A1 (application filing)
Also Published As
Publication number | Publication date |
---|---|
CN111760265B (zh) | 2024-03-22 |
CN111760265A (zh) | 2020-10-13 |
Legal Events
- 121: EPO informed by WIPO that EP was designated in this application (ref document number: 21828352; country: EP; kind code: A1)
- WWE: WIPO information: entry into national phase (ref document number: 18012610; country: US)
- NENP: Non-entry into the national phase (ref country code: DE)
- 32PN: EP public notification in the EP bulletin as the address of the addressee cannot be established (free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31/03/2023))
- 122: EP: PCT application non-entry in European phase (ref document number: 21828352; country: EP; kind code: A1)