WO2022022028A1 - Virtual object control method and apparatus, device, and computer-readable storage medium - Google Patents
Virtual object control method and apparatus, device, and computer-readable storage medium
- Publication number
- WO2022022028A1 (PCT/CN2021/095571)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- real
- interactive
- display
- virtual
- virtual object
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Definitions
- the present disclosure relates to image processing technologies, and in particular, to a virtual object control method, apparatus, device, and computer-readable storage medium.
- Embodiments of the present disclosure provide a virtual object control method, apparatus, device, and computer-readable storage medium, which can improve the display flexibility of virtual objects and display objects, and enrich the display effects of virtual objects and display objects.
- An embodiment of the present disclosure provides a virtual object control method, including: separately collecting multiple frames of interactive images of an interactive object in a real scene and a display image of a target display object in the real scene; determining state information of the interactive object according to the multiple frames of interactive images, and determining posture information of a virtual object according to the state information of the interactive object; determining virtual effect data of the target display object according to the display image; and rendering with the posture information of the virtual object and the virtual effect data to obtain a virtual effect image, and displaying an augmented reality effect including the virtual effect image.
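The four steps of the method above can be sketched as a single control pass (a minimal illustration in Python; every function, class, and label here is hypothetical and not taken from the disclosure):

```python
from dataclasses import dataclass


@dataclass
class Pose:
    body_action: str  # e.g. a body movement of the virtual object
    gaze: str         # e.g. a line-of-sight direction of the virtual object


def determine_state(frames):
    """Derive the interactive object's state from consecutive frames.

    Stubbed: each 'frame' is reduced to a state label, and the most
    frequent label wins."""
    return max(set(frames), key=frames.count)


def pose_for_state(state):
    """Map the interactive object's state to a virtual-object posture."""
    return Pose(body_action="turn_" + state, gaze=state)


def effect_data_for(display_image):
    """Look up virtual effect data for the recognized display object (stubbed)."""
    return {"object": display_image, "effect": "outline"}


def render(pose, effect):
    """Compose posture information and effect data into a 'virtual effect image'."""
    return f"{pose.body_action}/{pose.gaze}+{effect['effect']}@{effect['object']}"


# One pass: collect -> state -> posture / effect data -> render -> display.
frames = ["left", "left", "right"]  # stand-in for multi-frame interactive images
image = render(pose_for_state(determine_state(frames)), effect_data_for("ding"))
print(image)  # turn_left/left+outline@ding
```

The point of the sketch is only the data flow: interactive images drive the posture, the display image drives the effect data, and both feed one render step.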
- An embodiment of the present disclosure provides a virtual object control apparatus, including: a collection part, configured to separately collect multiple frames of interactive images of an interactive object in a real scene and a display image of a target display object in the real scene; a determination part, configured to determine state information of the interactive object in the real scene according to the multiple frames of interactive images, control posture information of a virtual object according to the state information of the interactive object, and determine virtual effect data of the target display object according to the display image;
- a rendering part, configured to render with the posture information of the virtual object and the virtual effect data to obtain a virtual effect image;
- a display part, configured to display an augmented reality effect including the virtual effect image.
- An embodiment of the present disclosure provides a display device, including: a display screen, a camera, a memory, and a processor; the memory is configured to store an executable computer program; and the processor, when executing the executable computer program stored in the memory, implements the above method in combination with the camera and the display screen.
- Embodiments of the present disclosure provide a computer-readable storage medium storing a computer program configured to cause a processor to implement the above method.
- Embodiments of the present disclosure provide a computer program including computer-readable code; when the computer-readable code is executed in an electronic device, a processor in the electronic device implements the foregoing method.
- Because the posture information of the virtual object is determined according to the state information of the interactive object in the real scene, the posture information of the virtual object can change with the state information of the interactive object, realizing interaction between the virtual object and the interactive object.
- Meanwhile, the virtual effect data of the target display object is obtained according to the display image of the target display object, and rendering is performed according to the virtual effect data, so that the virtual effect corresponding to the display object in the real scene can be displayed, thereby adding a display mode for the display object, improving the display flexibility of the display object, and enriching the display effect of the display object.
- FIG. 1 is an optional schematic structural diagram of a display system provided by an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure
- FIG. 3 is a first schematic diagram of a display device provided by an embodiment of the present disclosure.
- FIG. 4 is a second schematic diagram of a display device provided by an embodiment of the present disclosure.
- FIG. 5 is an optional schematic flowchart of a virtual object control method provided by an embodiment of the present disclosure
- FIG. 6 is a schematic diagram of a display interface of an exemplary display device provided by an embodiment of the present disclosure.
- FIG. 7 is a schematic diagram of a display interface of another exemplary display device provided by an embodiment of the present disclosure.
- FIG. 8 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure.
- FIG. 9 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure.
- FIG. 10 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure.
- FIG. 11 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure.
- FIG. 13 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure.
- FIG. 14A is a schematic diagram of the effect of a virtual object displayed by an exemplary display device provided by an embodiment of the present disclosure;
- FIG. 14B is a schematic diagram of the effect of another exemplary virtual object displayed by a display device provided by an embodiment of the present disclosure.
- FIG. 16 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure.
- FIG. 17A is a schematic diagram of a display interface of an exemplary display device provided by an embodiment of the present disclosure;
- FIG. 17B is a schematic diagram of a display interface of another exemplary display device provided by an embodiment of the present disclosure.
- FIG. 18 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure.
- FIG. 19 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure.
- FIG. 20 is a schematic structural diagram of a virtual object control apparatus provided by an embodiment of the present disclosure.
- FIG. 21 is a schematic structural diagram of a display device provided by an embodiment of the present disclosure.
- Augmented Reality (AR) technology is a technology that ingeniously integrates virtual information with the real world. Users can view virtual objects superimposed on real scenes through AR devices; for example, virtual trees and virtual flying birds may be superimposed on a real campus playground. How to make such virtual objects integrate better with the real scene, and how to improve the presentation effect of virtual objects in the augmented reality scene, are the issues to be discussed in the embodiments of the present disclosure, which are described below with reference to specific embodiments.
- the embodiments of the present disclosure provide a virtual object control method, which can improve the display flexibility of virtual objects and display objects, and enrich the display effects of virtual objects and display objects.
- the virtual object control method provided by the embodiment of the present disclosure is applied to a virtual object control device, and the virtual object control device may be a display device.
- the display device provided by the embodiment of the present disclosure may be implemented as AR glasses, a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device), and may also be implemented as a server.
- In the case that the display device is implemented as a terminal, the terminal can separately collect multiple frames of interactive images of the interactive object in the real scene and the display image of the target display object in the real scene, determine the state information of the interactive object according to the multiple frames of interactive images, determine the posture information of the virtual object according to the state information of the interactive object, and determine the virtual effect data of the target display object according to the display image.
- The terminal can also interact with a cloud server: through the interaction with the cloud server, it acquires the posture information of the virtual object corresponding to the state information of the interactive object and the virtual effect data of the target display object, renders with the posture information of the virtual object and the virtual effect data to obtain a virtual effect image, and displays an augmented reality effect including the virtual effect image.
- FIG. 1 is a schematic diagram of an optional architecture of a display system provided by an embodiment of the present disclosure.
- a terminal 400 (a display device; the terminal 400-1 and the terminal 400-2 are exemplarily shown) is connected to the server 200 through the network 300, and the network 300 may be a wide area network, a local area network, or a combination of the two.
- the terminal 400 is configured to collect multiple frames of interactive images of the interactive objects in the real scene and display images of the target display objects in the real scene, respectively.
- The terminal is further configured to determine the state information of the interactive object according to the multi-frame interactive images, determine the posture information of the virtual object according to the state information of the interactive object, determine the virtual effect data of the target display object according to the display image, render with the posture information of the virtual object and the virtual effect data to obtain a virtual effect image, and display, on the graphical interface 401 (the graphical interface 401-1 and the graphical interface 401-2 are exemplarily shown in the figure), the augmented reality effect including the virtual effect image.
- For example, a preset display application on the mobile phone can be started, and the cameras are invoked through the preset display application to separately collect the multiple frames of interactive images of the interactive object in the real scene and the display image of the target display object in the real scene. The terminal determines the state information of the interactive object, determines the target display object according to the display image, and initiates a data request to the server 200. After receiving the data request, the server 200 determines, from the posture data of virtual objects pre-stored in the database 500, the posture information of the virtual object corresponding to the state information of the interactive object, determines, from the virtual effect data pre-stored in the database 500, the virtual effect data corresponding to the target display object, and sends the determined posture information and virtual effect data of the virtual object back to the terminal 400.
- After the terminal 400 obtains the posture information and virtual effect data of the virtual object fed back by the server, it renders with the posture information and virtual effect data of the virtual object to obtain a virtual effect image, and displays the augmented reality effect including the virtual effect image on the graphical interface 401 of the terminal 400.
- the server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
- the terminal 400 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
- the terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment of the present disclosure.
- FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure.
- a display device may include a movable display screen 101, and the movable display screen 101 may be arranged around multiple exhibits in an exhibition; the movable display screen 101 is provided with a rear camera configured to photograph the exhibits, and the movable display screen 101 can display the exhibits, virtual effects related to the exhibits, and virtual objects.
- the virtual effect of the exhibit can be at least one of the introduction information of the exhibit, the display information of the interior details of the exhibit, the outline of the exhibit, the virtual model of the exhibit, the objects related to the function of the exhibit, and the original information or components on the exhibit.
- For example, in the case that the exhibit is a wine glass, the virtual effect corresponding to the exhibit can be an object related to the function of the wine glass, such as fine wine in the wine glass; in the case that the exhibit is a cultural relic such as a tripod, the virtual effect corresponding to the exhibit may be the original information on the exhibit, for example, the original text information on the outer wall of the tripod.
- the movable display screen 101 is also configured with a front camera, which is configured to photograph the interactive objects (such as exhibition visitors) located in front of the movable display screen 101. Further, the movable display screen 101 can recognize, in the captured images, the instructions issued by the interactive objects (for example, the gaze direction of the line of sight of the interactive object, the moving direction of the interactive object, and the gesture actions of the interactive object), so as to display and adjust the virtual effect of the exhibits.
- the display screen of the display device is a movable display screen.
- the display screen of the display device may move on a preset sliding track as shown in FIG. 3 , or may be fixed on a movable sliding bracket to realize sliding as shown in FIG. 4 .
- the display screen may display different contents for the user to perform at least one operation of viewing and clicking; the display screen may be a touch screen or a non-touch screen.
- FIG. 5 is an optional schematic flowchart of a virtual object control method provided by an embodiment of the present disclosure, which will be described in conjunction with the steps shown in FIG. 5 .
- S101 Collect multiple frames of interactive images of an interactive object in a real scene and a display image of a target display object in the real scene, respectively.
- the display device may use the first image acquisition device to collect multiple frames of interactive images of the interactive objects in the real scene, and use the second image acquisition device to collect the display images of the display objects in the real scene.
- the interactive objects can be objects in the real scene or other information of the real scene, and so on.
- a real scene can include multiple display objects, and the target display object can be any one of the multiple display objects, a display object belonging to a specific category or attribute among the multiple display objects, or a display object selected by condition matching or other methods, which is not limited in this embodiment of the present disclosure.
- the first image acquisition device and the second image acquisition device are located on the same side of the display device; or, the first image acquisition device and the second image acquisition device are respectively located on two opposite or adjacent sides of the display device.
- the first image capturing device may be a front-facing camera of a display device
- the second image capturing device may be a rear-facing camera of the display device.
- interactive objects are real people in real scenes; display objects may be exhibits in exhibitions, for example, cultural relics displayed in museums, high-tech products displayed in science and technology museums, and the like.
- S102 Determine the state information of the interactive object according to the multiple frames of interactive images, and determine the posture information of the virtual object according to the state information of the interactive object.
- the display device may obtain state information of the interactive objects in the real scene by performing image recognition and analysis on the collected interactive images of multiple frames of the interactive objects.
- the state information of the interactive object includes at least one of the following: motion state information, body motion information, and sight line information of a real person.
- the virtual object may be a virtual character; in some embodiments of the present disclosure, the virtual object may also be other types of objects, which are not limited in this embodiment of the present disclosure.
- the display device after the display device determines the state information of the interactive object according to the interactive image, it can determine the gesture information of the virtual object according to the state information of the interactive object.
- the gesture information of the virtual object includes at least one of the following: body movements and line-of-sight directions of the virtual object; in this way, the interaction between the virtual object and a real person can be realized.
- the virtual object may be a virtual lecture object displayed on a display device, such as a virtual lecturer, a virtual robot, and the like.
- FIG. 6 is a schematic diagram of a display interface of an exemplary display device provided by an embodiment of the present disclosure; as shown in FIG. 6 , the virtual object 402 may be a virtual instructor displayed on the display device 400 .
- the display device may determine the virtual effect data of the target display object according to the collected display image.
- the virtual effect data may be virtual model data of the target display object, or may be virtual display content data corresponding to the display object, which is not limited in this embodiment of the present disclosure.
- the virtual model may be at least one of a virtual detail model, a virtual display object model, an object model related to the function of the display object, and a component model originally existing on the exhibit.
- the virtual presentation content may be at least one of a virtual introduction content of the presentation object and a virtual outline of the presentation object (eg, the virtual outline 404 around the presentation object 403 shown in FIG. 6 ).
- the virtual display content can be the enlarged effect of the text engraved on the surface of the "ding", the diameter of the ding, and the thickness of the wall of the ding.
- after obtaining the posture information of the virtual object and the virtual effect data of the target display object, the display device can render a virtual effect image including the virtual object and the virtual display effect of the target display object.
- when the virtual effect data is the virtual model data of the target display object, the display device can render a virtual effect image including the virtual object and the virtual model of the target display object; when the virtual effect data is the virtual display content data corresponding to the target display object, the display device may render a virtual effect image including the virtual object and the virtual display content of the target display object.
- FIG. 7 is a schematic diagram of a display interface of another exemplary display device provided by an embodiment of the present disclosure; as shown in FIG. 7, the virtual effect image displayed on the display device 400 includes the virtual object 402 and the target display object.
- the display screen of the display device is a transparent display screen or a non-transparent display screen.
- the material of the transparent display screen can be an OLED screen or an AMOLED screen, while the material of the non-transparent display screen can be an STN screen or an IPS hard screen.
- After the display device obtains the virtual effect image, it can display the AR effect including the virtual effect image. For example, if the display screen of the display device is a transparent display screen and the target display object can be seen through the transparent display screen, the display device can display, on the display screen, the AR effect in which the virtual effect image is superimposed on the target display object; when the display screen of the display device is non-transparent, or the target display object cannot be seen through the transparent display screen, the display device can display, on the display screen, the AR effect in which the virtual effect image is superimposed on the display image of the target display object.
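The branching described above can be sketched as follows (hypothetical names; this only illustrates the choice of what the virtual effect image is superimposed on, not a real rendering API):

```python
def compose_ar(virtual_effect_image, display_image, screen_transparent, object_visible):
    """Choose the layers to draw, bottom to top.

    On a transparent screen through which the target object is visible,
    only the virtual effect image is drawn and the real object shows
    through the glass; otherwise the effect is overlaid on the captured
    display image of the object."""
    if screen_transparent and object_visible:
        return [virtual_effect_image]
    return [display_image, virtual_effect_image]


# Transparent screen, object visible through it: draw only the effect.
print(compose_ar("fx", "img", True, True))    # ['fx']
# Non-transparent screen: overlay the effect on the captured image.
print(compose_ar("fx", "img", False, True))   # ['img', 'fx']
```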
- In this way, the display device determines the posture information of the virtual object according to the state information of the interactive object in the real scene, so the posture information of the virtual object can change with the state information of the interactive object and interaction between the virtual object and the interactive object can be realized, which improves the display flexibility of the virtual object and enriches its display effect. At the same time, the virtual effect data of the target display object is obtained according to the display image of the target display object, and rendering is performed according to the virtual effect data, so the virtual effect corresponding to the display object in the real scene can be displayed, thereby adding a display mode for the display object, improving the display flexibility of the display object, and enriching the display effect of the display object.
- the interactive object includes a real person; the state information of the interactive object includes motion state information of the real person;
- FIG. 8 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the disclosure. The determination of the state information of the interaction object according to the multiple frames of interaction images in the above S102 may be implemented through S1021-S1022, which will be described with reference to the steps shown in FIG. 8 .
- the display device collects continuous multiple frames of interactive images of the interactive object through the first image acquisition device, identifies the real person in each frame of interactive image, determines the moving direction and moving distance of the real person in the images by comparing the positions of the real person in consecutive frames, then converts the obtained in-image moving direction and moving distance into the moving direction and moving distance in the real scene, and takes the converted moving direction and moving distance as the moving direction and moving distance of the real person.
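The frame-to-frame comparison and image-to-scene conversion described above can be sketched as follows (a minimal 2-D illustration; the metres-per-pixel scale factor and all names are assumptions, not from the disclosure):

```python
import math


def movement_from_frames(positions_px, metres_per_pixel):
    """Estimate the real-scene moving direction and distance of a person
    from their pixel positions in consecutive frames.

    positions_px: per-frame (x, y) positions of the identified person.
    metres_per_pixel: assumed image-to-scene conversion factor."""
    (x0, y0), (x1, y1) = positions_px[0], positions_px[-1]
    dx, dy = x1 - x0, y1 - y0
    # In-image displacement, then conversion to a real-scene distance.
    distance_m = math.hypot(dx, dy) * metres_per_pixel
    # Direction as an angle in degrees, 0 degrees along the image +x axis.
    direction_deg = math.degrees(math.atan2(dy, dx))
    return direction_deg, distance_m


# Person detected at successive frame positions, moving along +x.
direction, dist = movement_from_frames([(100, 200), (130, 200), (160, 200)], 0.01)
print(direction, round(dist, 3))  # direction 0.0 degrees, distance approx. 0.6 m
```

A production system would of course smooth over more than the first and last frames, but the comparison step itself is no more than this.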
- the display device may further establish a coordinate system in advance, and determine the position coordinates of the interactive object in the coordinate system by analyzing the collected multiple frames of interactive images of the interactive object, so as to determine the moving direction and moving distance of the interactive object according to its position coordinates in the coordinate system.
- the coordinate system may be a three-dimensional coordinate system or a world coordinate system, which is not limited in this embodiment of the present disclosure.
- the movement distance and movement direction of the real person may be used as the movement state information of the real person.
- By determining the moving distance and moving direction of the real person and taking them as the motion state information of the real person, the display device can control the posture of the virtual object and the display of the virtual effect of the display object according to the moving distance and moving direction of the real person, realizing interaction among the virtual object, the display object, and the real person.
- the interactive object includes a real person; the state information of the interactive object includes motion state information of the real person;
- FIG. 9 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure; the above S102 may also be implemented by S1023 or S1024, which will be described with reference to the steps shown in FIG. 9.
- the display device can detect the number of real persons in the real scene by identifying the real persons in the collected multiple frames of interactive images; in the case of detecting that there are multiple real persons in the real scene, it identifies each real person in each frame of interactive image, and obtains the moving direction of each real person by comparing the positions of each real person in consecutive frames of interactive images.
- the display device may also determine the moving direction of each real person in other manners, which is not limited in this embodiment of the present disclosure. It should be noted that the "plurality" in the present disclosure refers to two or more.
- the display device may compare the moving directions of the real persons, and in the case of determining that the moving directions of a preset number of real persons belong to the same direction, take that common moving direction as the determined moving direction and determine it as the motion state information of the real persons.
- the preset number may be obtained by the display device according to the number of detected real persons and a preset ratio; for example, the preset ratio may be 90% or three-quarters.
- For example, when the display device detects that there are 10 real persons in the real scene and the preset ratio is 90%, the display device can determine that the preset number is 9; thus, if it determines that the moving directions of 9 real persons belong to the same direction, those moving directions are taken as the determined moving direction.
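The preset-number check in this example can be sketched as follows (a hypothetical helper; rounding the product up to a whole person is an assumption):

```python
import math
from collections import Counter


def crowd_direction(directions, ratio=0.9):
    """Return the common moving direction if at least ceil(ratio * n) of
    the n detected persons share it, otherwise None."""
    preset_number = math.ceil(len(directions) * ratio)
    direction, count = Counter(directions).most_common(1)[0]
    return direction if count >= preset_number else None


# 9 of 10 detected persons move left: 9 >= ceil(10 * 0.9) = 9, so "left" wins.
print(crowd_direction(["left"] * 9 + ["right"]))      # left
# An even split never reaches the preset number.
print(crowd_direction(["left"] * 5 + ["right"] * 5))  # None
```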
- When the display device detects that there are multiple real persons in the real scene, taking the moving directions of a preset number of real persons as the motion state information of the real persons makes it possible to control the posture of the virtual object and the display of the display object according to the moving direction of the crowd.
- Alternatively, the display device can detect the number of real persons in the real scene by identifying real persons in the collected multiple frames of interactive images; in the case of detecting that there are multiple real persons in the real scene, it identifies each real person in each frame of interactive image and determines whether each real person meets a preset condition, and when a target person meeting the preset condition is determined, the moving direction of the target person is determined as the motion state information of the real persons.
- the target person may be a tour guide, a teacher, or a leader, etc.
- the preset condition may be, for example, whether a microphone is worn, or whether the person is located in the focus direction of the eyes of multiple real persons. For example, if the preset condition is whether a microphone is worn, and the display device determines that one of the multiple real persons is wearing a microphone, the real person wearing the microphone can be determined as the target person.
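Selecting the target person under such a preset condition can be sketched as follows (hypothetical detection fields; a real system would derive `has_microphone` from image recognition rather than receive it directly):

```python
def find_target_person(persons):
    """Return the first detected person meeting the preset condition
    (here: wearing a microphone), or None if nobody does."""
    for person in persons:
        if person.get("has_microphone"):
            return person
    return None


# Detected crowd with one microphone wearer (e.g. a tour guide).
crowd = [{"id": 1, "has_microphone": False},
         {"id": 2, "has_microphone": True},
         {"id": 3, "has_microphone": False}]
print(find_target_person(crowd)["id"])  # 2
```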
- the interactive object includes a real person, and the state information of the interactive object includes line-of-sight information of the real person;
- FIG. 10 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the disclosure. Determining the state information of the interaction object according to the multiple frames of interaction images in S102 can be implemented through S201-S202, which will be described with reference to the steps shown in FIG. 10 .
- in S201, the display device may perform eye sight recognition on each frame of the interactive images in the multiple frames, obtain the eye sight area of the real person, and determine the sight direction of the real person's eyes by comparing the eye sight areas in consecutive frames of the interactive images.
- the sight direction of the real person's eyes may be, for example, toward the left side of the display device, or toward the lower left of the display screen of the display device, which is not limited in this embodiment of the present disclosure.
- S202 Determine the sight line direction as sight line information of a real person.
- the display device may determine the line of sight direction of the eyes of the real person as the line of sight information of the real person.
- the display device determines the sight direction of the real person's eyes and takes that sight direction as the state information of the real person, so that the posture of the virtual object and the virtual effect display of the display object can be controlled according to the sight direction of the real person, thereby realizing the interaction between the virtual object, the display object, and the real person.
- FIG. 11 is another optional schematic flowchart of the virtual object control method provided by the embodiments of the present disclosure. As shown in FIG. 11, the above S201 can be implemented through S2011-S2014, which will be described with reference to the steps shown in FIG. 11.
- S2011 In the case of detecting that there are multiple real people in the real scene, perform face recognition on each frame of the interactive images in the multiple frames, and identify the key person.
- the display device may detect the number of real people in the real scene by identifying real people in the collected multi-frame interactive images, and, in the case of detecting that there are multiple real people in the real scene, identify the key person among the multiple real people through face recognition.
- key persons may be VIP customers, tour guides, teachers, etc., which are not limited in the embodiments of the present disclosure.
- for example, the face images of multiple VIP customers can be pre-stored in the display device, and the display device can determine whether there are VIP customers among the multiple real people by comparing the pre-stored face image of each VIP customer with the face image of each real person identified from the multi-frame interactive images; here, the present disclosure does not limit the method for determining a key person who is a VIP customer.
- the display device can analyze at least one of the positional relationship and the sight direction of each real person identified from the multi-frame interactive images, and determine that a real person is the teacher in at least one of the following cases: the real person is located in the middle of the multiple real people, or the eyes of the multiple real people are watching that real person.
- here, the present disclosure does not limit the method for determining a key person who is a teacher.
- the display device can also determine whether a real person is a tour guide by identifying whether the real person carries a microphone, so as to identify the tour guide from the multiple real people; here, the present disclosure likewise does not limit the method for determining a key person who is a tour guide.
- in S2012, the display device may compare the eye image areas of the key person in consecutive frames of the multi-frame interactive images to obtain at least one comparison result. For example, the comparison result may be at least one of the position change process of the key person's eyeballs within the eye sockets and the position change process of the key person's eyes within the interactive images, which is not limited in this embodiment of the present disclosure.
- in S2013, the display device may determine the sight direction of the key person's eyes according to the comparison result.
- in the case where the comparison result is the position change process of the key person's eyeballs within the eye sockets, the display device can determine the sight direction of the key person's eyes according to that position change process. For example, if the initial position of the key person's eyeball within the eye socket is the middle of the socket and the final position is on the left side of the socket, it can be determined that the sight direction of the key person is toward the left.
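- the mapping from an eyeball's position change within the eye socket to a sight direction can be sketched as follows; the coordinate convention (x grows rightward, y grows downward) and the function name are assumptions of this sketch, not part of the disclosed embodiment:

```python
def gaze_direction(start_pos, end_pos):
    """Map the eyeball's position change within the eye socket to a sight direction.

    start_pos / end_pos: (x, y) eyeball centres in socket coordinates taken
    from consecutive interactive frames.
    """
    dx = end_pos[0] - start_pos[0]
    dy = end_pos[1] - start_pos[1]
    # sign of the displacement decides the horizontal/vertical component
    horizontal = "left" if dx < 0 else "right" if dx > 0 else ""
    vertical = "up" if dy < 0 else "down" if dy > 0 else ""
    return (vertical + " " + horizontal).strip() or "straight ahead"

# eyeball moves from the middle of the socket to its left side
print(gaze_direction((0, 0), (-3, 0)))  # -> left
```

a real system would additionally threshold small displacements to suppress jitter; that refinement is omitted here for brevity.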
- S2014 Determine the sight direction of the eyes of the key person as the sight direction of the eyes of the real person.
- the display device may determine the gaze direction of the eye of the key person as the gaze direction of a plurality of real persons.
- when detecting that there are multiple real people in the real scene, the display device determines the sight direction of a key person among the multiple real people as the sight direction of the multiple real people, and displays the corresponding gesture of the virtual object and virtual effect of the display object accordingly. This makes the display of both more targeted, thereby improving the interaction effect of the virtual object and the display effect of the virtual effect of the display object.
- FIG. 12 is another optional schematic flowchart of the virtual object control method provided by the embodiments of the present disclosure.
- as shown in FIG. 12, the above S201 can also be implemented through S2001-S2003, which will be described with reference to the steps shown in FIG. 12.
- in S2001, the display device can detect the number of real people in the real scene by identifying the real people in the collected multi-frame interactive images, and, in the case of detecting that there are multiple real people in the real scene, identify each real person in each frame of the interactive images, thereby determining each real person in the multiple frames of images.
- in S2002, the display device may compare the eye image areas of each real person in consecutive frames of the multi-frame interactive images, and obtain the sight direction of each real person's eyes according to the comparison results. The comparison result may be at least one of the position change process of the real person's eyeballs within the eye sockets and the position change process of the real person's eyes within the interactive images.
- in S2003, the display device may compare the sight directions of the eyes of each real person, and, in the case where the sight directions of the eyes of a preset number of real people belong to the same direction, take that shared sight direction as the determined sight direction of the real people's eyes.
- the preset number may be obtained by the display device according to the number of detected real people and a preset score value (which may be the same as or different from the preset score value above); for example, the preset score value may be 80% or three-fifths.
- when the display device detects that there are 10 real people in the real scene and the preset score value is 80%, the display device can determine that the preset number is 8. It then determines whether the sight directions of the eyes of 8 of the real people belong to the same direction, and if so, takes the sight direction of those 8 real people as the determined sight direction.
- the sight direction of the real person's eyes determined by the display device may include: the left side of the display device, the right side of the display device, the upper left of the display screen of the display device, the lower left of the display screen, the upper right of the display screen, the lower right of the display screen, and so on, which is not limited in this embodiment of the present disclosure.
- the state information of the interactive object includes motion state information of a real person
- the gesture information of the virtual object includes body movements and line-of-sight directions of the virtual object
- the body movements of the virtual object include head movements of the virtual object
- FIG. 13 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure
- the determination of the posture information of the virtual object according to the state information of the interactive object in S102 in the above FIG. 8 can be realized through S301-S302, which will be described in conjunction with the steps shown in FIG. 13.
- in S301, the display device may first determine whether the moving distance of the real person is less than or equal to the preset distance, and, when determining that it is, determine to rotate the head of the virtual object by a target angle, so that the virtual object exhibits the corresponding head motion.
- S302 Determine to adjust the sight direction of the virtual object to the target sight direction; the target angle and the target sight direction are determined according to the current body orientation of the virtual object, the moving distance and moving direction of the real person, and the current position of the real person.
- when the display device determines that the moving distance of the real person is less than or equal to the preset distance, the display device may determine the current position of the real person according to the obtained moving distance, and obtain the current body orientation of the virtual object. According to the obtained body orientation of the virtual object, the determined current position of the real person, and the moving distance and moving direction of the real person, it determines the angle by which the head of the virtual object needs to be rotated and takes that angle as the target angle; and, according to the obtained body orientation of the virtual object, the determined current position of the real person, and the moving direction of the real person, it determines the direction in which the eyes of the virtual object need to gaze and takes that direction as the target sight direction.
- for example, the display device can first determine that the moving direction of the real person is toward the left and the moving distance is 1 meter, and then determine whether 1 meter is less than or equal to the preset distance. When 1 meter is less than or equal to the preset distance, the current body orientation of the virtual object is directly toward the front of the display device, and the real person is currently located 35° to the front-left of the display device, the display device determines that the head of the virtual object needs to be rotated 35° to the left, and determines the sight direction of the virtual object's eyes to be the 35° front-left direction of the display device, so that the sight of the virtual object is fixed on the real person.
- the above steps S301 and S302 may be performed simultaneously, so as to achieve the effect that the line of sight of the virtual object naturally follows the real person as the head of the virtual object turns.
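- the computation of the target angle and target sight direction in S301-S302 can be sketched as follows; the function name, the bearing convention (positive = front-left of the screen), and the assumption that the person's bearing has already been estimated from the interactive images are all illustrative, not part of the disclosed embodiment:

```python
def head_and_gaze(person_bearing_deg, move_distance_m,
                  preset_distance_m=1.0, body_orientation_deg=0.0):
    """Compute the target head angle and target sight direction.

    person_bearing_deg: the real person's current bearing relative to the
        display screen's normal (positive = front-left).
    Returns None when the person moved farther than the preset distance;
    otherwise (target_angle, target_gaze), both relative to the virtual
    object's current body orientation.
    """
    if move_distance_m > preset_distance_m:
        return None  # movement exceeds the preset distance: no head turn
    target_angle = person_bearing_deg - body_orientation_deg
    # the gaze follows the head turn so the sight rests on the real person
    target_gaze = target_angle
    return target_angle, target_gaze

# person moved 1 m and now stands 35 deg to the front-left of the screen
print(head_and_gaze(35.0, 1.0))  # head turns 35 deg left, gaze fixed at 35 deg left
```

performing the head turn and gaze adjustment with the same angle in one step mirrors the note above that S301 and S302 may run simultaneously.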
- FIG. 14A is a schematic diagram of the effect of a virtual object displayed by an exemplary display device provided by an embodiment of the present disclosure
- FIG. 14B is a schematic diagram of the effect of a virtual object displayed by another exemplary display device provided by an embodiment of the present disclosure. When the display device determines that the moving distance of the real person to the left is less than 1 meter and the real person is currently located 35° to the front-left of the display device, the display device can control the head of the virtual object in FIG. 14A to rotate 35° to the left, and determine the sight direction of the virtual object's eyes to be the 35° front-left direction of the display device, so that the virtual object shows the posture shown in FIG. 14B, realizing the interaction between the virtual object and the real person.
- when the set preset distance is large, the preset distance corresponds to the situation in which the real person is farther from the screen; when the set preset distance is small, it corresponds to the situation in which the real person is closer to the screen. The value of the preset distance can be set according to actual needs, which is not limited in this embodiment of the present disclosure.
- the display device can control the head movement of the virtual object and the sight direction of its eyes according to the moving direction and moving distance of the real person in the real scene, thereby realizing the interaction between the virtual object displayed on the display device and the real person in the real scene.
- the state information of the interactive object includes the body motion information of the real person, and the posture information of the virtual object includes the body motion of the virtual object;
- FIG. 15 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure. As shown in FIG. 15, the above S102 can be implemented through S401, and the details are as follows:
- S401 Determine the state information of the interactive object according to the multiple frames of interactive images, and, in the case where the detected body movement information of the real person represents that the real person makes a preset movement, determine that the virtual object displays the body movement corresponding to the preset movement.
- the display device can obtain the body movements of the real person by analyzing the collected multiple frames of interactive images and determine whether a body movement of the real person is a preset movement. When determining that a body movement of the real person is a certain preset movement, the display device determines, according to the preset correspondence between the body movements of the real person and the body movements of the virtual object, the body movement of the virtual object corresponding to that preset movement, and takes the determined body movement as the action to be displayed by the virtual object.
- for example, when detecting that the real person makes a squatting movement, the display device may determine that the body movement of the virtual object corresponding to the squatting movement is bowing, take bowing as the body movement to be displayed by the virtual object, and control the virtual object to bow and give an explanation.
- the display device can control the body movements of the virtual object according to the body movements of the real person in the real scene, so as to realize the interaction between the virtual object displayed on the display device and the real person in the real scene.
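- the preset correspondence between body movements can be sketched as a simple lookup table; the table entries and the function name are illustrative assumptions of this sketch, not part of the disclosed embodiment:

```python
# preset correspondence between a real person's body movement and the
# virtual object's body movement (illustrative entries only)
ACTION_MAP = {
    "squat": "bow",
    "wave": "wave_back",
}

def virtual_action(real_action):
    """Return the body movement the virtual object should display,
    or None when the detected movement is not a preset movement."""
    return ACTION_MAP.get(real_action)

print(virtual_action("squat"))  # -> bow
```

a movement absent from the table simply triggers no action, matching the case where the real person's movement is not a preset movement.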
- the state information of the interactive object includes line-of-sight information of a real person, and the gesture information of the virtual object includes the body movements of the virtual object;
- FIG. 16 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure. In the above S102, determining the posture information of the virtual object according to the state information of the interactive object can be implemented through S501-S502, which will be described in conjunction with the steps shown in FIG. 16.
- S501 Determine the gaze position of the real person on the display device according to the line of sight direction.
- the display device may determine whether the sight of the real person corresponds to a position on the display screen of the display device, and, if so, determine the gaze position of the real person on the display screen, for example, determine which area of the display screen the real person is looking at.
- in S502, the display device can determine the body movements of the virtual object corresponding to the gaze position according to the content that can be seen through the gaze position or the content displayed at the gaze position. For example, when the display device determines that the real person is looking at the lower left of the display screen, and the lower left of the display screen displays a tripod foot of the "Ding", the display device can determine that the limb movement of the virtual object is pointing to the position of the tripod foot, control the finger of the virtual object to point at that position, and explain the tripod foot.
- if the display device determines that the real person is looking at area 1 of the display screen, and what is seen through area 1 of the display screen is a tripod foot of the "Ding", the display device can determine that the limb movement of the virtual object is pointing to the position of the tripod foot, control the virtual object's finger to point at that position, and explain the tripod foot.
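- the gaze-position-to-limb-action decision described above can be sketched as follows; the region names, the exhibit layout, and the returned command format are assumptions of this sketch, not part of the disclosed embodiment:

```python
# which exhibit (e.g. a part of the "Ding") is visible through or displayed
# at each screen region (illustrative layout only)
REGION_TO_EXHIBIT = {
    "lower_left": "tripod_foot",
    "upper_right": "tripod_ear",
}

def limb_action_for_gaze(gaze_region):
    """Decide the virtual object's limb action for the gazed-at region."""
    exhibit = REGION_TO_EXHIBIT.get(gaze_region)
    if exhibit is None:
        return None  # no exhibit at the gaze position: no pointing action
    return {"action": "point", "target": exhibit, "explain": True}

print(limb_action_for_gaze("lower_left"))
# the virtual object points at the tripod foot and explains it
```

the same mapping works whether the exhibit is displayed at the region or seen through it on a transparent display screen.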
- FIG. 17A is a schematic diagram of a display interface of an exemplary display device provided by an embodiment of the present disclosure
- FIG. 17B is a schematic diagram of a display interface of another exemplary display device provided by an embodiment of the present disclosure.
- when the display device determines that the gaze position of the real person displays a tripod ear of the "Ding", the display device can determine that the limb movement of the virtual object is pointing the finger to the position of the tripod ear and explaining the tripod ear. Therefore, the display device can control the posture of the virtual object on the display interface to change from the posture in FIG. 17A to the posture in FIG. 17B.
- the display device determines the body movements displayed by the virtual object according to the gaze position of the real person on the display device, which can realize the interaction between the virtual object displayed on the display device and the real person in the real scene.
- the method further includes S105.
- FIG. 18 is another optional schematic flowchart of the virtual object control method provided by the embodiment of the present disclosure. As shown in FIG. 18, exemplarily, S105 may also be performed after S102, and S105 may be performed simultaneously with the above steps S103-S104, which will be described below with reference to the steps shown in FIG. 18.
- in S105, the display device may control the movement of the display device according to at least one of the motion state information, sight line information, and body information of the interactive object. For example, when the display device detects that the real person moves to the left, it can control the display screen to move to the left; when it detects that the sight of the real person is focused on a display object, it can control the display screen to move to the position of that display object; and when it detects that the real person points at a display object, it can likewise control the display screen to move to the position of that display object.
- the real scene includes at least one display object;
- the state information of the interactive object includes the line of sight information of the real person, and the gesture information of the virtual object includes the body movements of the virtual object;
- FIG. 19 is another optional schematic flowchart of the virtual object control method. As shown in FIG. 19, the above S105 can be implemented through S1051, and S106 can also be included after S1051, which will be described below in conjunction with the steps shown in FIG. 19.
- in S1051, the display device may determine the position at which the real person is gazing according to the sight line information. When the position at which the real person is gazing is determined to be that of one of the multiple display objects in the real scene, the position of that display object is determined as the target position, and the display device is controlled to move to the target position.
- the sight line information of the real person may be the sight line direction of the real person.
- in S106, the display device may control the virtual object to display the body movements corresponding to the display object.
- the body movements displayed by the virtual object may be preset body movements that have a preset corresponding relationship with the displayed object.
- the body movement displayed by the virtual object may be wearing the display object (a virtual model of the display object) and explaining the display object.
- the display device controls the movement of its display screen and makes the virtual object display the corresponding body movements according to the sight line information of the real person in the real scene, thereby realizing the interaction between the virtual object displayed on the display device and the real person in the real scene.
- FIG. 20 is a schematic structural diagram of the virtual object control apparatus provided by an embodiment of the present disclosure.
- the virtual object control apparatus 1 includes: a collection part 11, configured to respectively collect multiple frames of interactive images of the interactive objects in the real scene and display images of the target display objects in the real scene; and a determining part 12, configured to determine the state information of the interactive objects in the real scene according to the multiple frames of interactive images.
- the interactive object includes a real person; and the state information of the interactive object includes at least one of the following: motion state information, body motion information, and line-of-sight information of the real person.
- the gesture information of the virtual object includes at least one of the following: a body motion and a gaze direction of the virtual object.
- the state information of the interactive object includes motion state information of the real person; the determining part 12 is further configured to determine the moving direction and moving distance of the real person by identifying the picture content of the real person in the multi-frame interactive images, and to determine the moving direction and moving distance as the motion state information of the real person.
- the state information of the interactive object includes motion state information of the real person; the determining part 12 is further configured to: when detecting that there are multiple real people in the real scene, determine the moving direction of each of the multiple real people by identifying the picture content of the real people in the multi-frame interactive images, and, in the case where the moving directions of a preset number of real people belong to the same direction, determine the moving direction of the preset number of real people as the motion state information of the real people; or, when detecting that there are multiple real people in the real scene, determine, by identifying the picture content of the real people in the multi-frame interactive images, a target person meeting the preset condition from the multiple real people, and determine the motion state information of the real people according to the target person.
- the state information of the interactive object includes sight line information of the real person; the determining part 12 is further configured to determine the sight direction of the real person's eyes by recognizing the face of the real person in the multi-frame interactive images, and to determine the sight direction as the sight line information of the real person.
- the determining part 12 is further configured to: in the case of detecting that there are multiple real people in the real scene, perform face recognition on each frame of the interactive images in the multiple frames to identify the key person; compare the eye image areas of the key person in consecutive frames of the multi-frame interactive images to obtain a comparison result; determine the sight direction of the key person's eyes according to the comparison result; and determine the sight direction of the key person's eyes as the sight direction of the real people's eyes.
- the determining part 12 is further configured to: in the case of detecting that there are multiple real people in the real scene, perform face recognition on each frame of the interactive images in the multiple frames to determine each real person; compare the eye image areas of each real person in consecutive frames of the multi-frame interactive images to obtain the sight direction of each real person's eyes; and, in the case where the sight directions of the eyes of a preset number of real people belong to the same direction, determine the sight direction of the eyes of the preset number of real people as the sight direction of the real people's eyes.
- the state information of the interactive object includes motion state information of the real person
- the gesture information of the virtual object includes body movements and the direction of sight of the virtual object
- the body movements of the virtual object include the head movement of the virtual object
- the determining part 12 is further configured to: when it is determined that the moving distance of the real person is less than or equal to a preset distance, determine the target angle by which the head of the virtual object is turned, to obtain the head movement of the virtual object; and determine to adjust the sight direction of the virtual object to the target sight direction; the target angle and the target sight direction are determined according to the current body orientation of the virtual object, the moving distance and moving direction of the real person, and the current position of the real person.
- the state information of the interactive object includes body motion information of the real person
- the posture information of the virtual object includes the body motion of the virtual object
- the determining part 12 is further configured to: in the case where it is detected that the body movement information of the real person represents that the real person performs a preset movement, determine that the virtual object exhibits the body movement corresponding to the preset movement.
- the state information of the interactive object includes line-of-sight information of the real person
- the gesture information of the virtual object includes body movements of the virtual object
- the determining part 12 is further configured to determine the gaze position of the real person on the display device according to the sight direction, and to determine that the virtual object exhibits the body movements corresponding to the gaze position.
- the above apparatus further includes a control part 15 (not shown in FIG. 20 ), configured to control the movement of the display device according to the state information of the interactive object.
- the real scene includes at least one display object; the state information of the interactive object includes sight line information of the real person, and the gesture information of the virtual object includes body movements of the virtual object; the control part 15 is further configured to: when the sight line information indicates that the real person gazes in the direction of any one of the at least one display object, control the display screen of the display device to move to the position of that display object, and control the virtual object to display the body movements corresponding to the display object.
- the acquisition part 11 is further configured to use a first image acquisition device of the display device to acquire the multiple frames of interactive images of the interactive objects in the real scene, and to use a second image acquisition device to acquire the display image of the display object in the real scene, wherein the first image acquisition device and the second image acquisition device are located on the same side of the display device; or the first image acquisition device and the second image acquisition device are respectively located on two opposite or adjacent sides of the display device.
- the display screen of the display device moves on a preset sliding track.
- the display screen of the display device is a transparent display screen or a non-transparent display screen.
- a "part" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course, a unit, a module or a non-modularity.
- FIG. 21 is a schematic structural diagram of the display device provided by an embodiment of the present disclosure.
- the display device 2 includes: a display screen 21, a camera 22, a memory 23, and a processor 24, where the display screen 21, the camera 22, the memory 23, and the processor 24 are connected through a communication bus 25; the memory 23 is configured to store an executable computer program; and the processor 24 is configured to, when executing the executable computer program stored in the memory 23, implement, in combination with the display screen 21 and the camera 22, the method provided by the embodiments of the present disclosure, for example, the virtual object control method provided by the embodiments of the present disclosure.
- the embodiments of the present disclosure provide a computer-readable storage medium storing a computer program configured to cause the processor 24 to execute the methods provided by the embodiments of the present disclosure, for example, the virtual object control methods provided by the embodiments of the present disclosure.
- a computer-readable storage medium may be a tangible device that holds and stores instructions for use by an instruction execution device, and may be a volatile or non-volatile storage medium.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a more specific (non-exhaustive) list of computer-readable storage media includes: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a recessed structure in a groove having instructions stored thereon, and any suitable combination of the foregoing.
- Computer-readable storage media are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses through a fiber-optic cable), or electrical signals transmitted through a wire.
- the computer-readable storage medium may also be memory such as FRAM, PROM, EEPROM, flash memory, magnetic surface memory, or optical disk; it may also be various devices including one or any combination of the foregoing memories.
- An embodiment of the present disclosure also provides a computer program, which, when executed by the processor 24, implements the method provided by the embodiment of the present disclosure, for example, the virtual object control method provided by the embodiment of the present disclosure.
- the executable computer program may take the form of a program, software, software module, script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- the executable computer program may, but need not, correspond to a file in a file system; it may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (e.g., files that store one or more modules, subprograms, or code sections).
- the executable computer program may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
- In the embodiments of the present disclosure, since the posture information of the virtual object is determined according to the state information of the interactive object in the real scene, the posture information of the virtual object can change as the state information of the interactive object changes, realizing interaction between the virtual object and the interactive object, thereby improving the display flexibility of the virtual object and enriching its display effect. At the same time, since the virtual effect data of the target display object is obtained according to the display image of the target display object, and the virtual effect image is obtained by rendering the effect data, the virtual effect corresponding to the display object in the real scene can be displayed, thereby adding a display mode for the display object, improving its display flexibility, and enriching its display effect.
- the embodiments of the present disclosure disclose a virtual object control method, apparatus, device, and computer-readable storage medium.
- the method includes: respectively collecting multiple frames of interactive images of an interactive object in a real scene and a display image of a target display object in the real scene; determining state information of the interactive object according to the multiple frames of interactive images, and determining posture information of a virtual object according to the state information of the interactive object; determining virtual effect data of the target display object according to the display image; and rendering with the posture information of the virtual object and the virtual effect data to obtain a virtual effect image, and displaying an augmented reality effect including the virtual effect image.
- the display flexibility of virtual objects and display objects can be improved, and the display effects of virtual objects and display objects can be enriched.
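The overall pipeline summarized above (state information from the interactive frames, then posture information from the state) can be sketched in miniature. This is a hedged illustration only: the 2D position representation, the degree-based angles, the `turn_threshold` parameter, and the function names are assumptions for the sketch, not part of the disclosure:

```python
import math
from typing import Dict, List, Tuple

def determine_state(positions: List[Tuple[float, float]]) -> Dict[str, float]:
    """Estimate the interactive object's movement direction and distance
    from its picture position in successive interactive frames."""
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    dx, dy = x1 - x0, y1 - y0
    distance = math.hypot(dx, dy)
    direction = math.degrees(math.atan2(dy, dx)) if distance else 0.0
    return {"direction_deg": direction, "distance": distance}

def determine_pose(state: Dict[str, float],
                   turn_threshold: float = 2.0) -> Dict[str, object]:
    """Map state information to posture information: for a small movement
    distance, only turn the virtual object's head toward the person;
    otherwise turn the whole body to follow the movement direction."""
    turn_body = state["distance"] > turn_threshold
    return {"head_yaw_deg": state["direction_deg"], "body_turn": turn_body}
```

A rendering step would then combine this posture information with the virtual effect data obtained from the display image to produce the augmented reality effect.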
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Engineering & Computer Science (AREA)
- Ophthalmology & Optometry (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
- Position Input By Displaying (AREA)
Abstract
Description
Claims (20)
- A virtual object control method, comprising: respectively collecting multiple frames of interactive images of an interactive object in a real scene and a display image of a target display object in the real scene; determining state information of the interactive object according to the multiple frames of interactive images, and determining posture information of a virtual object according to the state information of the interactive object; determining virtual effect data of the target display object according to the display image; and rendering with the posture information of the virtual object and the virtual effect data to obtain a virtual effect image, and displaying an augmented reality effect including the virtual effect image.
- The method according to claim 1, wherein the interactive object includes a real person, and the state information of the interactive object includes at least one of the following: movement state information, body movement information, and gaze information of the real person.
- The method according to claim 2, wherein the posture information of the virtual object includes at least one of the following: a body movement and a gaze direction of the virtual object.
- The method according to claim 2 or 3, wherein the state information of the interactive object includes the movement state information of the real person, and determining the state information of the interactive object according to the multiple frames of interactive images includes: determining a movement direction and a movement distance of the real person by recognizing picture content of the real person in the multiple frames of interactive images; and determining the movement direction and the movement distance as the movement state information of the real person.
- The method according to any one of claims 2 to 4, wherein the state information of the interactive object includes the movement state information of the real person, and determining the state information of the interactive object according to the multiple frames of interactive images includes: in a case where multiple real persons are detected in the real scene, determining a movement direction of each of the multiple real persons by recognizing picture content of the real persons in the multiple frames of interactive images, and, in a case where the movement directions of a preset number of the real persons belong to the same direction, determining the movement direction of the preset number of real persons as the movement state information of the real person; or, in a case where multiple real persons are detected in the real scene, determining, from the multiple real persons, a target person meeting a preset condition by recognizing picture content of the real persons in the multiple frames of interactive images, and determining the movement state information of the real person according to the target person.
- The method according to any one of claims 2 to 5, wherein the state information of the interactive object includes the gaze information of the real person, and determining the state information of the interactive object according to the multiple frames of interactive images includes: determining a gaze direction of the eyes of the real person through face recognition of the real person in the multiple frames of interactive images; and determining the gaze direction as the gaze information of the real person.
- The method according to claim 6, wherein determining the gaze direction of the eyes of the real person through face recognition of the real person in the multiple frames of interactive images includes: in a case where multiple real persons are detected in the real scene, performing face recognition on each frame of the multiple frames of interactive images to identify a key person; comparing eye image regions of the key person in consecutive frames of the multiple frames of interactive images to obtain a comparison result; determining the gaze direction of the eyes of the key person according to the comparison result; and determining the gaze direction of the eyes of the key person as the gaze direction of the eyes of the real person.
- The method according to claim 6, wherein determining the gaze direction of the eyes of the real person through face recognition of the real person in the multiple frames of interactive images includes: in a case where multiple real persons are detected in the real scene, performing face recognition on each frame of the multiple frames of interactive images to determine each real person; comparing the eye image regions of each real person in consecutive frames of the multiple frames of interactive images respectively, to obtain the gaze direction of the eyes of each real person; and, in a case where the gaze directions of the eyes of a preset number of real persons belong to the same direction, determining the gaze direction of the eyes of the preset number of real persons as the gaze direction of the eyes of the real person.
- The method according to claim 4, wherein the state information of the interactive object includes the movement state information of the real person, the posture information of the virtual object includes the body movement and the gaze direction of the virtual object, and the body movement of the virtual object includes a head movement of the virtual object; determining the posture information of the virtual object according to the state information of the interactive object includes: in a case where it is determined that the movement distance of the real person is less than or equal to a preset distance, determining to turn the head of the virtual object by a target angle to obtain the head movement of the virtual object, and determining to adjust the gaze direction of the virtual object to a target gaze direction; the target angle and the target gaze direction are determined according to the current body orientation of the virtual object, the movement distance and movement direction of the real person, and the current position of the real person.
- The method according to any one of claims 2 to 8, wherein the state information of the interactive object includes the body movement information of the real person, and the posture information of the virtual object includes the body movement of the virtual object; determining the posture information of the virtual object according to the state information of the interactive object includes: in a case where it is detected that the body movement information of the real person indicates that the real person performs a preset movement, determining that the virtual object displays a body movement corresponding to the preset movement.
- The method according to any one of claims 6 to 8, wherein the state information of the interactive object includes the gaze information of the real person, and the posture information of the virtual object includes the body movement of the virtual object; determining the posture information of the virtual object according to the state information of the interactive object includes: determining a gaze position of the real person on the display device according to the gaze direction; and determining that the virtual object displays a body movement corresponding to the gaze position.
- The method according to any one of claims 1 to 11, wherein the method further includes: controlling movement of the display device according to the state information of the interactive object.
- The method according to claim 12, wherein the real scene includes at least one display object; the state information of the interactive object includes the gaze information of the real person, and the posture information of the virtual object includes the body movement of the virtual object; controlling the movement of the display device according to the state information of the interactive object includes: in a case where the gaze information is the position direction of any one of the at least one display object, controlling the display screen of the display device to move to the position of the display object; the method further includes: controlling the virtual object to display a body movement corresponding to the display object.
- The method according to any one of claims 1 to 13, wherein respectively collecting the multiple frames of interactive images of the interactive object in the real scene and the display image of the target display object in the real scene includes: collecting, by a first image acquisition device of the display device, the multiple frames of interactive images of the interactive object in the real scene; and collecting, by a second image acquisition device of the display device, the display image of the display object in the real scene; wherein the first image acquisition device and the second image acquisition device are located on the same side of the display device, or are respectively located on two opposite or adjacent sides of the display device.
- The method according to any one of claims 1 to 14, wherein the display screen of the display device moves on a preset sliding track.
- The method according to any one of claims 1 to 15, wherein the display screen of the display device is a transparent display screen or a non-transparent display screen.
- A virtual object control apparatus, comprising: an acquisition part, configured to respectively collect multiple frames of interactive images of an interactive object in a real scene and a display image of a target display object in the real scene; a determination part, configured to determine state information of the interactive object in the real scene according to the multiple frames of interactive images, control posture information of a virtual object according to the state information of the interactive object, and determine virtual effect data of the target display object according to the display image; a rendering part, configured to render with the posture information of the virtual object and the virtual effect data to obtain a virtual effect image; and a display part, configured to display an augmented reality effect including the virtual effect image.
- A display device, comprising: a display screen, a camera, a memory, and a processor; the memory is configured to store an executable computer program; and the processor is configured to, when executing the executable computer program stored in the memory, implement the method according to any one of claims 1 to 16 in combination with the camera and the display screen.
- A computer-readable storage medium storing a computer program configured to cause a processor, when executing the program, to implement the method according to any one of claims 1 to 16.
- A computer program, comprising computer-readable code, wherein, when the computer-readable code runs in an electronic device, a processor in the electronic device, when executing the code, implements the method according to any one of claims 1 to 16.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020227027031A KR20220119494A (ko) | 2020-07-31 | 2021-05-24 | 가상 객체 제어 방법 및 장치, 기기, 컴퓨터 판독 가능 저장 매체 |
JP2021570511A JP2022545851A (ja) | 2020-07-31 | 2021-05-24 | 仮想対象制御方法及び装置、機器、コンピュータ可読記憶媒体 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010761734.7A CN111880659A (zh) | 2020-07-31 | 2020-07-31 | 虚拟人物控制方法及装置、设备、计算机可读存储介质 |
CN202010761734.7 | 2020-07-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022022028A1 true WO2022022028A1 (zh) | 2022-02-03 |
Family
ID=73204365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/095571 WO2022022028A1 (zh) | 2020-07-31 | 2021-05-24 | 虚拟对象控制方法及装置、设备、计算机可读存储介质 |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP2022545851A (zh) |
KR (1) | KR20220119494A (zh) |
CN (1) | CN111880659A (zh) |
WO (1) | WO2022022028A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116704843A (zh) * | 2023-06-07 | 2023-09-05 | 广西茜英信息技术有限公司 | 基于通信工程勘察设计的虚拟仿真实训平台 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111880659A (zh) * | 2020-07-31 | 2020-11-03 | 北京市商汤科技开发有限公司 | 虚拟人物控制方法及装置、设备、计算机可读存储介质 |
CN112416196B (zh) * | 2020-11-19 | 2022-08-30 | 腾讯科技(深圳)有限公司 | 虚拟对象的控制方法、装置、设备及计算机可读存储介质 |
CN114693890A (zh) * | 2020-12-31 | 2022-07-01 | 华为技术有限公司 | 一种增强现实交互方法及电子设备 |
CN112669422B (zh) * | 2021-01-07 | 2024-06-14 | 深圳追一科技有限公司 | 仿真3d数字人生成方法、装置、电子设备及存储介质 |
CN112379812B (zh) * | 2021-01-07 | 2021-04-23 | 深圳追一科技有限公司 | 仿真3d数字人交互方法、装置、电子设备及存储介质 |
CN113721804A (zh) * | 2021-08-20 | 2021-11-30 | 北京市商汤科技开发有限公司 | 一种显示方法、装置、电子设备及计算机可读存储介质 |
CN113900526A (zh) * | 2021-10-29 | 2022-01-07 | 深圳Tcl数字技术有限公司 | 三维人体形象展示控制方法、装置、存储介质及显示设备 |
CN115390678B (zh) * | 2022-10-27 | 2023-03-31 | 科大讯飞股份有限公司 | 虚拟人交互方法、装置、电子设备及存储介质 |
CN117456611B (zh) * | 2023-12-22 | 2024-03-29 | 拓世科技集团有限公司 | 一种基于人工智能的虚拟人物训练方法及系统 |
CN117727303A (zh) * | 2024-02-08 | 2024-03-19 | 翌东寰球(深圳)数字科技有限公司 | 一种音视频的生成方法、装置、设备及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120113223A1 (en) * | 2010-11-05 | 2012-05-10 | Microsoft Corporation | User Interaction in Augmented Reality |
CN103544636A (zh) * | 2013-11-08 | 2014-01-29 | 梁涛 | 基于虚拟商场的互动方法及设备 |
US20150097865A1 (en) * | 2013-10-08 | 2015-04-09 | Samsung Electronics Co., Ltd. | Method and computing device for providing augmented reality |
CN107992188A (zh) * | 2016-10-26 | 2018-05-04 | 宏达国际电子股份有限公司 | 虚拟现实交互方法、装置与系统 |
CN110716645A (zh) * | 2019-10-15 | 2020-01-21 | 北京市商汤科技开发有限公司 | 一种增强现实数据呈现方法、装置、电子设备及存储介质 |
CN111367402A (zh) * | 2018-12-26 | 2020-07-03 | 阿里巴巴集团控股有限公司 | 任务触发方法、交互设备及计算机设备 |
CN111880659A (zh) * | 2020-07-31 | 2020-11-03 | 北京市商汤科技开发有限公司 | 虚拟人物控制方法及装置、设备、计算机可读存储介质 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9741145B2 (en) * | 2012-06-29 | 2017-08-22 | Disney Enterprises, Inc. | Augmented reality simulation continuum |
CN111273772B (zh) * | 2020-01-17 | 2022-07-08 | 江苏艾佳家居用品有限公司 | 基于slam测绘方法的增强现实交互方法、装置 |
-
2020
- 2020-07-31 CN CN202010761734.7A patent/CN111880659A/zh not_active Withdrawn
-
2021
- 2021-05-24 KR KR1020227027031A patent/KR20220119494A/ko not_active Application Discontinuation
- 2021-05-24 JP JP2021570511A patent/JP2022545851A/ja not_active Withdrawn
- 2021-05-24 WO PCT/CN2021/095571 patent/WO2022022028A1/zh active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120113223A1 (en) * | 2010-11-05 | 2012-05-10 | Microsoft Corporation | User Interaction in Augmented Reality |
US20150097865A1 (en) * | 2013-10-08 | 2015-04-09 | Samsung Electronics Co., Ltd. | Method and computing device for providing augmented reality |
CN103544636A (zh) * | 2013-11-08 | 2014-01-29 | 梁涛 | 基于虚拟商场的互动方法及设备 |
CN107992188A (zh) * | 2016-10-26 | 2018-05-04 | 宏达国际电子股份有限公司 | 虚拟现实交互方法、装置与系统 |
CN111367402A (zh) * | 2018-12-26 | 2020-07-03 | 阿里巴巴集团控股有限公司 | 任务触发方法、交互设备及计算机设备 |
CN110716645A (zh) * | 2019-10-15 | 2020-01-21 | 北京市商汤科技开发有限公司 | 一种增强现实数据呈现方法、装置、电子设备及存储介质 |
CN111880659A (zh) * | 2020-07-31 | 2020-11-03 | 北京市商汤科技开发有限公司 | 虚拟人物控制方法及装置、设备、计算机可读存储介质 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116704843A (zh) * | 2023-06-07 | 2023-09-05 | 广西茜英信息技术有限公司 | 基于通信工程勘察设计的虚拟仿真实训平台 |
CN116704843B (zh) * | 2023-06-07 | 2024-02-23 | 广西茜英信息技术有限公司 | 基于通信工程勘察设计的虚拟仿真实训平台 |
Also Published As
Publication number | Publication date |
---|---|
JP2022545851A (ja) | 2022-11-01 |
KR20220119494A (ko) | 2022-08-29 |
CN111880659A (zh) | 2020-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022022028A1 (zh) | 虚拟对象控制方法及装置、设备、计算机可读存储介质 | |
US11747618B2 (en) | Systems and methods for sign language recognition | |
US11947729B2 (en) | Gesture recognition method and device, gesture control method and device and virtual reality apparatus | |
CN111226189A (zh) | 内容显示属性管理 | |
CN111897431B (zh) | 展示方法及装置、显示设备、计算机可读存储介质 | |
KR101563312B1 (ko) | 시선 기반 교육 콘텐츠 실행 시스템 | |
CN109219955A (zh) | 视频按入 | |
WO2013185714A1 (zh) | 增强现实中识别对象的方法及系统和计算机 | |
CN104536579A (zh) | 交互式三维实景与数字图像高速融合处理系统及处理方法 | |
US11532138B2 (en) | Augmented reality (AR) imprinting methods and systems | |
CN110717993B (zh) | 一种分体式ar眼镜系统的交互方法、系统及介质 | |
Lo et al. | Augmediated reality system based on 3D camera selfgesture sensing | |
Roccetti et al. | Day and night at the museum: intangible computer interfaces for public exhibitions | |
CN114296627A (zh) | 内容显示方法、装置、设备及存储介质 | |
Bai | Mobile augmented reality: Free-hand gesture-based interaction | |
KR20140136713A (ko) | 이미지를 활용하는 학습 시뮬레이션 모델 방법 및 장치 | |
US20240069642A1 (en) | Scissor hand gesture for a collaborative object | |
Eslami et al. | SignCol: Open-Source Software for Collecting Sign Language Gestures | |
KR20240009974A (ko) | 증강 현실 경험들을 위한 가상 가이드 피트니스 루틴들 | |
Shoaei Shirehjini | Smartphones as Visual Prosthesis | |
CN114332433A (zh) | 信息输出方法、装置、可读存储介质和电子设备 | |
Balani | Investigation of Interaction Metaphors for Augmented and Virtual Reality on Multi-Platforms for Medical Applications | |
CN116993949A (zh) | 虚拟环境的显示方法、装置、可穿戴电子设备及存储介质 | |
BR102015030766A2 (pt) | métodos associados em realidade virtual e realidade aumentada para construção de simuladores de baixo custo, atividades experimentais ou treinamento |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021570511 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21849584 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20227027031 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21849584 Country of ref document: EP Kind code of ref document: A1 |