CN111652985A - Virtual object control method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111652985A
CN111652985A (application CN202010525221.6A)
Authority
CN
China
Prior art keywords
real
virtual object
information
people
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010525221.6A
Other languages
Chinese (zh)
Other versions
CN111652985B (en)
Inventor
王子彬
孙红亮
李炳泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202010525221.6A priority Critical patent/CN111652985B/en
Publication of CN111652985A publication Critical patent/CN111652985A/en
Application granted granted Critical
Publication of CN111652985B publication Critical patent/CN111652985B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion

Abstract

The present disclosure provides a virtual object control method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a real scene image that includes a plurality of real characters; extracting people flow distribution information corresponding to the plurality of real characters from the acquired real scene image; determining action execution information of a virtual object in a virtual scene based on the extracted people flow distribution information; and generating an AR animation special effect in which the plurality of real characters are merged into the virtual scene, based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene. This realizes interaction between real characters and the virtual object; for example, when the crowd is relatively sparsely distributed, several people can perform a hand-clapping interaction with a virtual polar bear. In other words, autonomous interaction between the animal and people is achieved, and the visiting experience and exhibition effect are improved.

Description

Virtual object control method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to a method and an apparatus for controlling a virtual object, an electronic device, and a storage medium.
Background
With the modernization of cities and rising social, economic, and cultural levels, recreational venues such as amusement parks, zoos, and parks have become increasingly popular. To meet the entertainment needs of different users, these venues typically offer a variety of attractions.
Taking a zoo visit as an example, people generally hope to interact with the animals at close range. However, to protect both visitors and the animals themselves, most animals are currently kept in enclosures, which makes close-range interaction largely impractical.
Disclosure of Invention
The embodiments of the present disclosure provide at least a control scheme for a virtual object, in which autonomous interaction between the virtual object and a plurality of real characters is achieved through action execution information determined from people flow distribution characteristics.
In a first aspect, an embodiment of the present disclosure provides a method for controlling a virtual object, where the method includes:
acquiring a real scene image comprising a plurality of real characters;
extracting people flow distribution information corresponding to the plurality of real persons from the obtained real scene image;
determining action execution information of a virtual object in a virtual scene based on the extracted people stream distribution information;
and generating an AR animation special effect after the plurality of real characters are merged into the virtual scene based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene.
In one embodiment, the people flow distribution information includes people flow density; the determining of the action execution information of the virtual object in the virtual scene based on the extracted people flow distribution information comprises:
generating action execution information for controlling the virtual object to avoid the plurality of real characters in the AR animation special effect to be generated under the condition that the extracted people flow density is greater than or equal to a preset density;
and generating action execution information for controlling the virtual object to be close to the plurality of real characters in the AR animation special effect to be generated under the condition that the extracted people flow density is smaller than a preset density.
In one embodiment, the people flow distribution information includes people position distribution information; the determining of the action execution information of the virtual object in the virtual scene based on the extracted people stream distribution information comprises:
determining real position range information corresponding to the plurality of real characters in the AR animation special effect to be generated based on the extracted personnel position distribution information corresponding to the plurality of real characters;
if the real position range indicated by the real position range information is larger than or equal to a preset range, generating first action execution information for controlling the virtual object to appear in the real position range;
and if the actual position range indicated by the actual position range information is smaller than a preset range, generating second action execution information for controlling the virtual object to move to the actual position range.
In one embodiment, the generating an AR animation special effect after the plurality of real characters are merged into the virtual scene based on the images of the plurality of real characters in the real scene image and the motion execution information of the virtual object in the virtual scene includes:
determining motion trail data corresponding to the motion execution information of the virtual object in the virtual scene based on the corresponding relation between the motion execution information and the motion trail data;
and generating an AR animation special effect after the plurality of real characters are blended into the virtual scene based on the images of the plurality of real characters in the real scene image, the action execution information of the virtual object in the virtual scene and the corresponding motion trail data.
In one embodiment, the extracting, from the acquired real scene image, people flow distribution information corresponding to the plurality of real persons includes:
and extracting the people flow information of the acquired real scene image by utilizing a pre-trained people flow information extraction network to obtain the distribution characteristics of the people flow density corresponding to the plurality of real figures, and taking the distribution characteristics of the people flow density as the people flow distribution information.
In a second aspect, an embodiment of the present disclosure further provides an apparatus for controlling a virtual object, where the apparatus includes:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a real scene image comprising a plurality of real characters;
the extraction module is used for extracting people stream distribution information corresponding to the plurality of real people from the acquired real scene image;
the determining module is used for determining action execution information of the virtual object in the virtual scene based on the extracted people stream distribution information;
and the generating module is used for generating the AR animation special effect after the plurality of real characters are blended into the virtual scene based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene.
In one embodiment, the people flow distribution information includes people flow density; the determining module is used for determining the action execution information of the virtual object in the virtual scene based on the extracted people flow distribution information according to the following steps:
generating action execution information for controlling the virtual object to avoid the plurality of real characters in the AR animation special effect to be generated under the condition that the extracted people flow density is greater than or equal to a preset density;
and generating action execution information for controlling the virtual object to be close to the plurality of real characters in the AR animation special effect to be generated under the condition that the extracted people flow density is smaller than a preset density.
In one embodiment, the people flow distribution information includes people position distribution information; the determining module is used for determining the action execution information of the virtual object in the virtual scene based on the extracted people stream distribution information according to the following steps:
determining real position range information corresponding to the plurality of real characters in the AR animation special effect to be generated based on the extracted personnel position distribution information corresponding to the plurality of real characters;
if the real position range indicated by the real position range information is larger than or equal to a preset range, generating first action execution information for controlling the virtual object to appear in the real position range;
and if the actual position range indicated by the actual position range information is smaller than a preset range, generating second action execution information for controlling the virtual object to move to the actual position range.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method for controlling a virtual object according to the first aspect and any of its various embodiments.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method for controlling a virtual object according to the first aspect and any of its various embodiments.
With the above control scheme for a virtual object, a real scene image including a plurality of real characters can first be acquired; people flow distribution information corresponding to the plurality of real characters can then be extracted from the acquired image, and action execution information of the virtual object in the virtual scene can be determined based on that information. Finally, an AR animation special effect in which the plurality of real characters are merged into the virtual scene can be generated based on the images of the real characters in the real scene image and the action execution information of the virtual object. That is, the control scheme generates action execution information for the virtual object from the people flow distribution information, and then generates the AR animation special effect from that information together with the images of the real characters, realizing interaction between real characters and the virtual object. For example, when the crowd is relatively sparsely distributed, several people can perform a hand-clapping interaction with a virtual polar bear; autonomous interaction between the animal and people is thus achieved, and the visiting experience and exhibition effect are improved.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly described below. The drawings, which are incorporated in and form part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art may derive other related drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating a control method for a virtual object according to a first embodiment of the disclosure;
fig. 2 is a schematic diagram illustrating an application of a control method for a virtual object according to a first embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating a control apparatus for a virtual object according to a second embodiment of the disclosure;
fig. 4 shows a schematic diagram of an electronic device provided in a third embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions are described below completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments. All other embodiments that a person skilled in the art can derive from the embodiments of the disclosure without creative effort shall fall within the protection scope of the disclosure.
It has been found that, when visiting zoos, people generally hope to interact with the animals at close range. However, to protect both visitors and the animals themselves, most animals are currently kept in enclosures, which makes close-range interaction largely impractical.
Based on this research, the present disclosure provides a control scheme for a virtual object, which achieves autonomous interaction between the virtual object and a plurality of real characters through action execution information determined from people flow distribution characteristics.
The above-mentioned drawbacks were identified by the inventors through practical and careful study; accordingly, both the discovery of the above problems and the solutions the present disclosure proposes for them should be regarded as contributions made by the inventors in the course of the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a detailed description is given of a control method for a virtual object disclosed in the embodiments of the present disclosure, where an execution subject of the control method for a virtual object provided in the embodiments of the present disclosure is generally an electronic device with certain computing capability, and the electronic device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the control method of the virtual object may be implemented by a processor calling computer readable instructions stored in a memory.
The following describes a control method of a virtual object according to an embodiment of the present disclosure, taking an execution subject as a server as an example.
Example one
Referring to fig. 1, which is a flowchart of a method for controlling a virtual object according to an embodiment of the present disclosure, the method includes steps S101 to S104, where:
s101, acquiring a real scene image comprising a plurality of real characters;
s102, extracting people stream distribution information corresponding to a plurality of real people from the obtained real scene image;
s103, determining action execution information of the virtual object in the virtual scene based on the extracted people stream distribution information;
and S104, generating an AR animation special effect after the plurality of real characters are merged into the virtual scene based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene.
Here, to facilitate understanding of the control method provided by the embodiments of the present disclosure, its application scenario is first described. The control method of the virtual object in the embodiments of the present disclosure can be applied to any activity scene in which interaction with animals is desired. When a user enters an activity scene in which the control method is deployed, the user can be presented with an Augmented Reality (AR) animation special effect in which the user is merged into a virtual scene containing the virtual object, so that the user can watch his or her interaction with the virtual object on a display device. This realizes autonomous interaction between multiple people and animals and improves the visiting experience and exhibition effect.
It should be noted that the virtual object in the embodiments of the present disclosure may be constructed from any real animal model, for example, virtual animals such as polar bears, penguins, and elephants, as well as other virtual objects such as roller coasters and carousels. The virtual object is not particularly limited here. The polar bear is used as the example in the following description.
In order to implement the interaction between the real characters and the virtual objects, in the embodiment of the present disclosure, first, the action execution information of the virtual objects in the virtual scene may be determined based on the people flow distribution information corresponding to a plurality of real characters. Wherein, the action execution information corresponding to the virtual object can be determined based on the people flow distribution information extracted from the acquired real scene image.
The people flow distribution information may include people flow density, person position distribution information, and other distribution information. The people flow density characterizes the number of real characters in the real scene: the more people there are, the greater the density, and conversely, the fewer people, the smaller the density. The person position distribution information characterizes how the positions of the real characters are distributed in the real scene: when the positions are relatively close together, the probability that people are gathered is high; conversely, when the positions are relatively far apart, that probability is low.
In order to realize autonomous interaction between people and animals, in the embodiments of the present disclosure, different action execution information may be determined based on different people flow distribution information. This is explained in detail in the following two aspects.
In a first aspect: when the people flow density is taken as the people flow distribution information, the action execution information of the virtual object in the virtual scene can be determined according to the following method:
here, in a case where it is determined that the extracted density of the stream of people is greater than or equal to the preset density, motion execution information for controlling the virtual object to avoid a plurality of real characters in the AR animation special effect to be generated is generated, that is, in a case where it is determined that the number of characters in the current real scene is large, the virtual object may be controlled to avoid the real characters.
For example, the virtual object may move along a route with few people nearby. This mainly considers that too many real characters would make the AR animation special effect to be generated look crowded; if the virtual object were controlled to move toward the real characters, the resulting picture could be adversely affected. In particular, when users want an AR group photo as a memento, too many subjects would occupy most of the picture and would not provide a good group-photo experience.
In the embodiments of the present disclosure, when the extracted people flow density is determined to be less than the preset density, action execution information for controlling the virtual object to approach the plurality of real characters in the AR animation special effect to be generated is generated. That is, when the number of people in the current real scene is small, the virtual object can be controlled to approach the real characters. On the one hand, this keeps the AR animation special effect picture harmonious; on the other hand, it satisfies user needs such as group photos, and is therefore more practical.
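The density-based branching described above amounts to a simple threshold rule. The following is a hedged sketch of that rule; the names (`Action`, `PRESET_DENSITY`) and the threshold value are illustrative assumptions, not values specified in the patent:

```python
from enum import Enum

class Action(Enum):
    AVOID = "avoid_real_characters"        # crowded scene: keep the virtual object away
    APPROACH = "approach_real_characters"  # sparse scene: bring the virtual object close

# Hypothetical threshold in persons per unit area; the actual preset density
# is left unspecified by the patent.
PRESET_DENSITY = 0.5

def decide_action(crowd_density: float) -> Action:
    """Map the extracted people flow density to action execution information."""
    if crowd_density >= PRESET_DENSITY:
        return Action.AVOID
    return Action.APPROACH
```

For instance, a density reading well above the threshold yields the avoidance action, while a near-empty scene yields the approach action.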
In a second aspect: when the person position distribution information is used as the people flow distribution information, the action execution information of the virtual object in the virtual scene can be determined according to the following steps:
step one, determining real position range information corresponding to a plurality of real characters in an AR animation special effect to be generated based on extracted personnel position distribution information corresponding to the plurality of real characters;
step two, if the real position range indicated by the real position range information is larger than or equal to a preset range, generating first action execution information for controlling the virtual object to appear in the real position range;
and step three, if the real position range indicated by the real position range information is smaller than the preset range, generating second action execution information for controlling the virtual object to move to the real position range.
Here, the embodiments of the present disclosure may first determine, based on the person position distribution information, the real position range information corresponding to the plurality of real characters in the AR animation special effect to be generated. If the real position range occupied by the real characters is relatively large, the virtual object may be controlled to appear within that range; if the range is relatively small, the virtual object may be controlled to move toward it.
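As a rough sketch of the second aspect, the real position range could be approximated by the bounding box of the detected person positions and compared against a preset range. The function name, the coordinate units, and the use of a bounding-box area are all illustrative assumptions; the patent does not specify how the range is measured:

```python
def decide_by_position_range(positions, preset_area=4.0):
    """positions: list of (x, y) person locations extracted from the scene.

    Returns the first action execution information (appear within the range)
    when the crowd's bounding-box area meets the preset range, otherwise the
    second (move toward the range). Units and threshold are hypothetical.
    """
    xs = [x for x, _ in positions]
    ys = [y for _, y in positions]
    area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    if area >= preset_area:
        return "appear_within_range"  # first action execution information
    return "move_to_range"            # second action execution information
```

With widely scattered visitors the bounding box is large, so the virtual object appears among them; with a tight cluster it instead walks over.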
Therefore, the control method of the virtual object provided by the embodiment of the disclosure can control the virtual object to execute different actions based on different people stream densities and people position distribution information, that is, the autonomous interaction between the real character and the virtual object is realized.
When generating the AR animation special effect, motion trajectory data can also be incorporated. First, the motion trajectory data corresponding to the action execution information of the virtual object in the virtual scene may be determined based on the correspondence between pieces of action execution information and pieces of motion trajectory data. Then, the AR animation special effect in which the plurality of real characters are merged into the virtual scene may be generated based on the images of the plurality of real characters in the real scene image, the action execution information of the virtual object, and the corresponding motion trajectory data.
The motion trajectory data may record the motion position information of the virtual object associated with each piece of action execution information. For example, a "polar bear peeking out" action may correspond to a determined set of motion positions for climbing out of the pool in a given direction, and that set of motion position information serves as the motion trajectory data.
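The correspondence between action execution information and motion trajectory data can be thought of as a lookup table from an action identifier to an ordered list of positions. The table below is a minimal sketch under that assumption; the action names and coordinates are invented for illustration and do not come from the patent:

```python
# Hypothetical correspondence table: each piece of action execution
# information maps to an ordered list of (x, y) waypoints that the
# virtual object follows when the AR animation is rendered.
TRAJECTORY_TABLE = {
    "approach_real_characters": [(0.0, 0.0), (0.5, 0.2), (1.0, 0.5)],
    "peek_out_of_pool":         [(0.0, -1.0), (0.0, -0.5), (0.0, 0.0)],
}

def trajectory_for(action: str):
    """Look up the motion trajectory data for a piece of action information."""
    return TRAJECTORY_TABLE[action]
```

The renderer would then interpolate the virtual object between successive waypoints while compositing the real characters into the virtual scene.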
Considering the important role the people flow distribution information plays in determining the action execution information, its extraction process is explained below.
In the embodiments of the present disclosure, extracting the people flow distribution information corresponding to the plurality of real characters from the acquired real scene image may be implemented either with conventional image processing methods or with deep learning methods.
For example, a people flow information extraction network capable of extracting the distribution features of people flow density may be trained in advance on a number of image samples. The acquired real scene image containing the real characters is then input into the trained network, which extracts the people flow distribution information corresponding to the plurality of real characters.
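The patent does not name the network architecture. A common assumption for such extraction networks is a density-map regressor, where the network outputs a per-cell density map whose sum approximates the head count. The following sketch only shows the post-processing of such a map; the existence of a density-map output is an assumption, not something the patent specifies:

```python
def summarize_density_map(density_map):
    """density_map: 2-D list of non-negative per-cell density values, as a
    density-map-style crowd counting network might output (hypothetical).

    Returns the estimated head count (sum of the map) and the mean
    people flow density over the image grid.
    """
    head_count = sum(sum(row) for row in density_map)
    cells = len(density_map) * len(density_map[0])
    return head_count, head_count / cells
```

The resulting density value is what the decision logic above would compare against the preset density.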
In a specific application, the AR animation special effect can be displayed on a display screen arranged in front of the real characters, so that the real characters can watch their dynamic interaction with the virtual object on the screen; in this way, autonomous interaction between the virtual object and a plurality of real characters is realized through the action execution information determined from the people flow distribution characteristics.
To facilitate understanding of the implementation of the control method of the virtual object, the following description is given in conjunction with the display effect diagram of a display screen shown in fig. 2.
When a user enters an activity scene in which the control method is deployed, the person position distribution information corresponding to a plurality of real characters (here, three discrete person distributions) can be extracted. When the corresponding real position range is determined to be large, action execution information for controlling the virtual object to appear within that range is generated, and the display screen shows the AR animation special effect in which the real characters are merged into the virtual scene containing the virtual object, as shown in fig. 2.
In this way, augmented reality technology is used in the activity scene: a camera arranged in the three-dimensional space captures the people flow distribution information in real time, and the captured imagery is seamlessly merged with the virtual scene containing the virtual object to generate the corresponding AR animation special effect, realizing practical interaction between users in the real world and animals in the virtual world.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, a control apparatus for a virtual object corresponding to a control method for a virtual object is also provided in the embodiments of the present disclosure, and since the principle of the apparatus in the embodiments of the present disclosure for solving the problem is similar to the control method for a virtual object described above in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are omitted.
Example two
Referring to fig. 3, which is a schematic diagram of the architecture of a control apparatus for a virtual object according to an embodiment of the present disclosure, the apparatus includes: an acquisition module 301, an extraction module 302, a determination module 303, and a generation module 304; wherein:
an obtaining module 301, configured to obtain a real scene image including a plurality of real characters;
an extracting module 302, configured to extract people stream distribution information corresponding to a plurality of real people from the acquired real scene image;
a determining module 303, configured to determine, based on the extracted people stream distribution information, action execution information of the virtual object in the virtual scene;
the generating module 304 is configured to generate an AR animation special effect after the multiple real characters are merged into the virtual scene based on the images of the multiple real characters in the real scene image and the motion execution information of the virtual object in the virtual scene.
In one embodiment, the people flow distribution information includes people flow density; the determination module 303 is configured to determine, based on the extracted people flow distribution information, the action execution information of the virtual object in the virtual scene according to the following steps:
in a case where the extracted people flow density is greater than or equal to a preset density, generating action execution information for controlling the virtual object to avoid the plurality of real characters in the AR animation special effect to be generated;
in a case where the extracted people flow density is less than the preset density, generating action execution information for controlling the virtual object to approach the plurality of real characters in the AR animation special effect to be generated.
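The density branch above reduces to a single threshold comparison. A minimal sketch, with the string labels chosen here for illustration (the disclosure does not name them):

```python
def decide_action_by_density(people_flow_density: float, preset_density: float) -> str:
    """Dense crowd: the virtual object keeps its distance from the real
    characters. Sparse crowd: it moves closer. The labels "avoid" and
    "approach" are illustrative placeholders for the two pieces of
    action execution information."""
    if people_flow_density >= preset_density:
        return "avoid"       # avoid the plurality of real characters
    return "approach"        # approach the plurality of real characters
```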
In one embodiment, the people flow distribution information includes person position distribution information; the determination module 303 is configured to determine, based on the extracted people flow distribution information, the action execution information of the virtual object in the virtual scene according to the following steps:
determining real position range information corresponding to the plurality of real characters in the AR animation special effect to be generated, based on the extracted person position distribution information corresponding to the plurality of real characters;
if the real position range indicated by the real position range information is greater than or equal to a preset range, generating first action execution information for controlling the virtual object to appear within the real position range;
if the real position range indicated by the real position range information is smaller than the preset range, generating second action execution information for controlling the virtual object to move toward the real position range.
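The position-range branch above can be sketched as follows. The disclosure does not fix how the real position range is measured from the person positions; the bounding-box area of the detected positions is used here purely as an illustrative proxy:

```python
def decide_action_by_range(person_positions, preset_area: float) -> str:
    """Approximates the real position range by the axis-aligned bounding
    box of the detected person positions (an assumption, not the
    disclosed formula), then compares it against the preset range."""
    xs = [x for x, _ in person_positions]
    ys = [y for _, y in person_positions]
    area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    if area >= preset_area:
        return "appear_within_range"   # first action execution information
    return "move_toward_range"         # second action execution information
```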
In one embodiment, the generation module 304 is configured to generate the AR animation special effect in which the plurality of real characters are merged into the virtual scene, based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene, according to the following steps:
determining motion trail data corresponding to the action execution information of the virtual object in the virtual scene, based on the correspondence between action execution information and motion trail data;
generating the AR animation special effect in which the plurality of real characters are blended into the virtual scene, based on the images of the plurality of real characters in the real scene image, the action execution information of the virtual object in the virtual scene, and the corresponding motion trail data.
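The correspondence between action execution information and motion trail data described above amounts to a lookup table. A minimal sketch, in which both the action labels and the waypoint values are hypothetical placeholders (the disclosure does not specify a trail representation):

```python
# Illustrative mapping: action execution information -> motion trail data
# (waypoint lists); every entry here is an assumption for demonstration.
MOTION_TRAIL_TABLE = {
    "avoid":    [(0.0, 0.0), (-1.0, 0.5), (-2.0, 1.0)],  # path away from the crowd
    "approach": [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],    # path toward the crowd
}

def motion_trail_for(action: str):
    """Look up the motion trail data corresponding to a piece of action
    execution information, mirroring the described correspondence."""
    return MOTION_TRAIL_TABLE[action]
```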
In one embodiment, the extraction module 302 is configured to extract the people flow distribution information corresponding to the plurality of real characters from the acquired real scene image according to the following steps:
performing people flow information extraction on the acquired real scene image by using a pre-trained people flow information extraction network to obtain people flow density distribution features corresponding to the plurality of real characters, and taking the people flow density distribution features as the people flow distribution information.
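The disclosure does not describe the architecture of the pre-trained people flow information extraction network, but its output, density distribution features over the image, can be illustrated with a coarse grid histogram of detected person positions. The grid resolution and normalization below are assumptions made only for this sketch:

```python
def people_flow_density_features(person_positions, grid=(4, 4), image_size=(640, 480)):
    """Stand-in for the pre-trained extraction network: counts detected
    persons per grid cell of the real scene image and normalizes, yielding
    a density distribution feature map (cells sum to 1 when nonempty)."""
    gw, gh = grid
    w, h = image_size
    density = [[0.0] * gh for _ in range(gw)]
    for x, y in person_positions:
        i = min(int(x / w * gw), gw - 1)   # clamp to the last cell at the border
        j = min(int(y / h * gh), gh - 1)
        density[i][j] += 1.0
    n = max(len(person_positions), 1)
    return [[count / n for count in row] for row in density]
```

A real implementation would instead run a trained crowd-density CNN over the image; this sketch only shows the shape of the output the later decision steps consume.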
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Example Three
An embodiment of the present disclosure further provides an electronic device. As shown in fig. 4, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, the electronic device includes: a processor 401, a memory 402, and a bus 403. The memory 402 stores machine-readable instructions executable by the processor 401; when the electronic device operates, the processor 401 and the memory 402 communicate via the bus 403, and the machine-readable instructions, when executed by the processor 401, perform the following steps:
acquiring a real scene image comprising a plurality of real characters;
extracting people flow distribution information corresponding to a plurality of real persons from the obtained real scene image;
determining action execution information of the virtual object in the virtual scene based on the extracted people stream distribution information;
and generating an AR animation special effect after the plurality of real characters are blended into the virtual scene based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene.
In one embodiment, the people flow distribution information includes people flow density; the instructions executed by the processor 401 for determining the action execution information of the virtual object in the virtual scene based on the extracted people flow distribution information include:
generating action execution information for controlling the virtual object to avoid a plurality of real characters in the AR animation special effect to be generated under the condition that the extracted people stream density is greater than or equal to the preset density;
and under the condition that the extracted people stream density is less than the preset density, generating action execution information for controlling the virtual object to be close to a plurality of real characters in the AR animation special effect to be generated.
In one embodiment, the people flow distribution information includes people position distribution information; the instructions executed by the processor 401 for determining the action execution information of the virtual object in the virtual scene based on the extracted people flow distribution information include:
determining real position range information corresponding to a plurality of real characters in the AR animation special effect to be generated based on the extracted personnel position distribution information corresponding to the plurality of real characters;
if the real position range indicated by the real position range information is larger than or equal to the preset range, generating first action execution information for controlling the virtual object to appear in the real position range;
and if the real position range indicated by the real position range information is smaller than the preset range, generating second action execution information for controlling the virtual object to move toward the real position range.
In one embodiment, the instructions executed by the processor 401 to generate the AR animation special effect in which the plurality of real characters are merged into the virtual scene, based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene, include:
determining motion trail data corresponding to the action execution information of the virtual object in the virtual scene, based on the correspondence between action execution information and motion trail data;
and generating an AR animation special effect after the multiple real characters are blended into the virtual scene based on the images of the multiple real characters in the real scene image, the action execution information of the virtual object in the virtual scene and the corresponding motion trail data.
In one embodiment, the instructions executed by the processor 401 to extract the people flow distribution information corresponding to a plurality of real persons from the acquired real scene image include:
performing people flow information extraction on the acquired real scene image by using a pre-trained people flow information extraction network to obtain people flow density distribution features corresponding to the plurality of real characters, and taking the people flow density distribution features as the people flow distribution information.
The present disclosure also provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the steps of the control method for a virtual object described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the control method for a virtual object provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the steps of the control method for a virtual object described in the above method embodiments, for which reference may be made to the above method embodiments; details are not repeated here.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The corresponding computer program product may be embodied in hardware, software, or a combination thereof. In one alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, easily conceive of changes to them, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method for controlling a virtual object, the method comprising:
acquiring a real scene image comprising a plurality of real characters;
extracting people flow distribution information corresponding to the plurality of real persons from the obtained real scene image;
determining action execution information of a virtual object in a virtual scene based on the extracted people stream distribution information;
and generating an AR animation special effect after the plurality of real characters are merged into the virtual scene based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene.
2. The method of claim 1, wherein the people flow distribution information includes people flow density; the determining of the action execution information of the virtual object in the virtual scene based on the extracted people flow distribution information comprises:
generating action execution information for controlling the virtual object to avoid the plurality of real characters in the AR animation special effect to be generated under the condition that the extracted people flow density is greater than or equal to a preset density;
and generating action execution information for controlling the virtual object to be close to the plurality of real characters in the AR animation special effect to be generated under the condition that the extracted people flow density is smaller than a preset density.
3. The method of claim 1, wherein the people flow distribution information comprises person position distribution information; the determining of the action execution information of the virtual object in the virtual scene based on the extracted people flow distribution information comprises:
determining real position range information corresponding to the plurality of real characters in the AR animation special effect to be generated based on the extracted personnel position distribution information corresponding to the plurality of real characters;
if the real position range indicated by the real position range information is larger than or equal to a preset range, generating first action execution information for controlling the virtual object to appear in the real position range;
and if the real position range indicated by the real position range information is smaller than a preset range, generating second action execution information for controlling the virtual object to move toward the real position range.
4. The method according to any one of claims 1 to 3, wherein the generating an AR animation special effect after the plurality of real characters are merged into the virtual scene based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene comprises:
determining motion trail data corresponding to the action execution information of the virtual object in the virtual scene based on the correspondence between action execution information and motion trail data;
and generating an AR animation special effect after the plurality of real characters are blended into the virtual scene based on the images of the plurality of real characters in the real scene image, the action execution information of the virtual object in the virtual scene and the corresponding motion trail data.
5. The method according to any one of claims 1 to 4, wherein the extracting of the people flow distribution information corresponding to the plurality of real persons from the acquired real scene image includes:
performing people flow information extraction on the acquired real scene image by using a pre-trained people flow information extraction network to obtain people flow density distribution features corresponding to the plurality of real persons, and taking the people flow density distribution features as the people flow distribution information.
6. An apparatus for controlling a virtual object, the apparatus comprising:
an acquisition module, configured to acquire a real scene image comprising a plurality of real characters;
an extraction module, configured to extract people flow distribution information corresponding to the plurality of real characters from the acquired real scene image;
the determining module is used for determining action execution information of the virtual object in the virtual scene based on the extracted people stream distribution information;
and the generating module is used for generating the AR animation special effect after the plurality of real characters are blended into the virtual scene based on the images of the plurality of real characters in the real scene image and the action execution information of the virtual object in the virtual scene.
7. The apparatus of claim 6, wherein the people flow distribution information comprises people flow density; the determining module is configured to determine the action execution information of the virtual object in the virtual scene based on the extracted people flow distribution information according to the following steps:
generating action execution information for controlling the virtual object to avoid the plurality of real characters in the AR animation special effect to be generated under the condition that the extracted people flow density is greater than or equal to a preset density;
and generating action execution information for controlling the virtual object to be close to the plurality of real characters in the AR animation special effect to be generated under the condition that the extracted people flow density is smaller than a preset density.
8. The apparatus of claim 6, wherein the people flow distribution information comprises person position distribution information; the determining module is configured to determine the action execution information of the virtual object in the virtual scene based on the extracted people flow distribution information according to the following steps:
determining real position range information corresponding to the plurality of real characters in the AR animation special effect to be generated based on the extracted personnel position distribution information corresponding to the plurality of real characters;
if the real position range indicated by the real position range information is larger than or equal to a preset range, generating first action execution information for controlling the virtual object to appear in the real position range;
and if the real position range indicated by the real position range information is smaller than a preset range, generating second action execution information for controlling the virtual object to move toward the real position range.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the method of controlling a virtual object according to any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the method for controlling a virtual object according to any one of claims 1 to 5.
CN202010525221.6A 2020-06-10 2020-06-10 Virtual object control method and device, electronic equipment and storage medium Active CN111652985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010525221.6A CN111652985B (en) 2020-06-10 2020-06-10 Virtual object control method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010525221.6A CN111652985B (en) 2020-06-10 2020-06-10 Virtual object control method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111652985A true CN111652985A (en) 2020-09-11
CN111652985B CN111652985B (en) 2024-04-16

Family

ID=72344979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010525221.6A Active CN111652985B (en) 2020-06-10 2020-06-10 Virtual object control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111652985B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113495628A (en) * 2021-07-30 2021-10-12 浙江星秀农业科技有限公司 Firefly simulation interaction method and system based on VR technology

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105955456A (en) * 2016-04-15 2016-09-21 深圳超多维光电子有限公司 Virtual reality and augmented reality fusion method, device and intelligent wearable equipment
CN106383587A (en) * 2016-10-26 2017-02-08 腾讯科技(深圳)有限公司 Augmented reality scene generation method, device and equipment
CN107067474A (en) * 2017-03-07 2017-08-18 深圳市吉美文化科技有限公司 A kind of augmented reality processing method and processing device
US20190385371A1 (en) * 2018-06-19 2019-12-19 Google Llc Interaction system for augmented reality objects
CN110674398A (en) * 2019-09-05 2020-01-10 深圳追一科技有限公司 Virtual character interaction method and device, terminal equipment and storage medium
CN111026261A (en) * 2018-10-09 2020-04-17 上海奈飒翱网络科技有限公司 Method for AR interactive display of tourist attractions
CN111243101A (en) * 2019-12-31 2020-06-05 浙江省邮电工程建设有限公司 Method, system and device for increasing AR environment immersion degree of user based on artificial intelligence

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105955456A (en) * 2016-04-15 2016-09-21 深圳超多维光电子有限公司 Virtual reality and augmented reality fusion method, device and intelligent wearable equipment
US20170301137A1 (en) * 2016-04-15 2017-10-19 Superd Co., Ltd. Method, apparatus, and smart wearable device for fusing augmented reality and virtual reality
CN106383587A (en) * 2016-10-26 2017-02-08 腾讯科技(深圳)有限公司 Augmented reality scene generation method, device and equipment
CN107067474A (en) * 2017-03-07 2017-08-18 深圳市吉美文化科技有限公司 A kind of augmented reality processing method and processing device
US20190385371A1 (en) * 2018-06-19 2019-12-19 Google Llc Interaction system for augmented reality objects
CN111026261A (en) * 2018-10-09 2020-04-17 上海奈飒翱网络科技有限公司 Method for AR interactive display of tourist attractions
CN110674398A (en) * 2019-09-05 2020-01-10 深圳追一科技有限公司 Virtual character interaction method and device, terminal equipment and storage medium
CN111243101A (en) * 2019-12-31 2020-06-05 浙江省邮电工程建设有限公司 Method, system and device for increasing AR environment immersion degree of user based on artificial intelligence

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MING HOU ET AL.: "A Model of Real-Virtual Object Interactions in Stereoscopic Augmented Reality Environments", 《PROCEEDINGS ON SEVENTH INTERNATIONAL CONFERENCE ON INFORMATION VISUALIZATION》, 4 August 2003 (2003-08-04), pages 1 - 6 *
LI JIANG: "Research on the Application of ARToolKit-based Augmented Reality Technology in a Dinosaur Museum", 《China Masters' Theses Full-text Database》, 15 October 2010 (2010-10-15), pages 1 - 66 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113495628A (en) * 2021-07-30 2021-10-12 浙江星秀农业科技有限公司 Firefly simulation interaction method and system based on VR technology

Also Published As

Publication number Publication date
CN111652985B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN111640202B (en) AR scene special effect generation method and device
CN111652987B (en) AR group photo image generation method and device
CN111080759A (en) Method and device for realizing split mirror effect and related product
CN111581547B (en) Tour information pushing method and device, electronic equipment and storage medium
CN111667590A (en) Interactive group photo method and device, electronic equipment and storage medium
Tomaselli et al. New media: Ancient signs of literacy, modern signs of tracking
CN112933606A (en) Game scene conversion method and device, storage medium and computer equipment
CN111639979A (en) Entertainment item recommendation method and device
CN111694431A (en) Method and device for generating character image
CN111640235A (en) Queuing information display method and device
CN111651058A (en) Historical scene control display method and device, electronic equipment and storage medium
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
Giannini et al. Digital art and identity merging human and artificial intelligence: enter the metaverse
CN111652985A (en) Virtual object control method and device, electronic equipment and storage medium
Rodrigo et al. Igpaw: intramuros—design of an augmented reality game for philippine history
CN111580679A (en) Space capsule display method and device, electronic equipment and storage medium
CN111639977A (en) Information pushing method and device, computer equipment and storage medium
CN111639615A (en) Trigger control method and device for virtual building
CN113282179A (en) Interaction method, interaction device, computer equipment and storage medium
CN111652981A (en) Space capsule special effect generation method and device, electronic equipment and storage medium
CN111639975A (en) Information pushing method and device
CN112333473A (en) Interaction method, interaction device and computer storage medium
CN113538703A (en) Data display method and device, computer equipment and storage medium
CN111953849A (en) Method and device for displaying message board, electronic equipment and storage medium
CN111640198A (en) Interactive shooting method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant