CN111625101B - Display control method and device - Google Patents

Display control method and device

Info

Publication number
CN111625101B
CN111625101B (application CN202010494566.XA)
Authority
CN
China
Prior art keywords
display
target
face
relative
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010494566.XA
Other languages
Chinese (zh)
Other versions
CN111625101A (en
Inventor
揭志伟
孙红亮
王子彬
刘小兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202010494566.XA priority Critical patent/CN111625101B/en
Publication of CN111625101A publication Critical patent/CN111625101A/en
Application granted granted Critical
Publication of CN111625101B publication Critical patent/CN111625101B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a display control method and device, including: acquiring a face image of a target user; determining, based on the face image, relative orientation information of the face of the target user with respect to a display device; determining target display pose information of a target virtual scene based on the relative orientation information; and acquiring a display animation of the target virtual scene corresponding to the target display pose information, and playing the display animation on the display device.

Description

Display control method and device
Technical Field
The disclosure relates to the technical field of information processing, and in particular relates to a display control method and device.
Background
To improve the display effect in an exhibition hall, electronic screens are typically installed at certain positions in the hall and play landscape animations related to the hall. In the related art, a preset landscape animation is usually played in a loop. Some venues add interaction between the landscape animation and the user, but the typical interaction mode requires the user to switch the landscape animation manually by tapping the electronic screen, or to select manually from the playable landscape animations; this display mode is monotonous and the display effect is poor.
Disclosure of Invention
The embodiment of the disclosure at least provides a display control method and a display control device.
In a first aspect, an embodiment of the present disclosure provides a display control method, including:
Acquiring a face image of a target user;
determining, based on the face image, relative orientation information of the face of the target user with respect to a display device;
determining target display pose information of a target virtual scene based on the relative orientation information;
and acquiring a display animation of the target virtual scene corresponding to the target display pose information, and playing the display animation on the display device.
According to the above method, the selection of the display animation of the target virtual scene is controlled by the relative orientation information of the face of the target user with respect to the display device. By changing the orientation of the face relative to the display device, the target user can view display animations of the virtual scene under different display pose information, which increases the interaction between the target user and the venue, enriches the playback control methods, and improves the display effect.
In a possible implementation, the determining, based on the face image, relative orientation information of the face of the target user with respect to a display device includes:
determining, based on the face image, relative orientation information of the face of the target user with respect to an image acquisition device that acquires the face image;
and determining the relative orientation information of the face of the target user with respect to the display device based on the relative orientation information of the face with respect to the image acquisition device and the relative positional relationship between the image acquisition device and the display device.
In a possible implementation, before determining the target display pose information of the target virtual scene based on the relative orientation information, the method further includes:
extracting face attribute features from the acquired face image;
and selecting the target virtual scene from a plurality of candidate virtual scenes according to the extracted face attribute features.
In a possible implementation manner, the face attribute features include at least one of the following:
Gender, age, smile value, attractiveness score, mood, skin tone.
In a possible implementation, the acquiring the display animation of the target virtual scene corresponding to the target display pose information includes:
selecting, from display animations under a plurality of preset display pose information corresponding to the target virtual scene, the display animation of the target virtual scene that matches the relative orientation information of the face of the target user with respect to the display device.
In a possible embodiment, the method further comprises:
after detecting that the relative orientation information of the face of the target user with respect to the display device has changed, adjusting the display animation played on the display device according to the changed relative orientation information.
In a second aspect, an embodiment of the present disclosure further provides a display control apparatus, including:
The acquisition module is used for acquiring a face image of a target user;
the first determining module is used for determining, based on the face image, relative orientation information of the face of the target user with respect to the display device;
the second determining module is used for determining target display pose information of the target virtual scene based on the relative orientation information;
and the playing module is used for acquiring the display animation of the target virtual scene corresponding to the target display pose information and playing the display animation on the display device.
In a possible implementation, the first determining module is configured, when determining the relative orientation information of the face of the target user with respect to the display device based on the face image, to:
determine, based on the face image, relative orientation information of the face of the target user with respect to an image acquisition device that acquires the face image;
and determine the relative orientation information of the face of the target user with respect to the display device based on the relative orientation information of the face with respect to the image acquisition device and the relative positional relationship between the image acquisition device and the display device.
In a possible implementation, the second determining module is further configured, before determining the target display pose information of the target virtual scene based on the relative orientation information, to:
extract face attribute features from the acquired face image;
and select the target virtual scene from a plurality of candidate virtual scenes according to the extracted face attribute features.
In a possible implementation manner, the face attribute features include at least one of the following:
Gender, age, smile value, attractiveness score, mood, skin tone.
In a possible implementation, the playing module is configured, when acquiring the display animation of the target virtual scene corresponding to the target display pose information, to:
select, from display animations under a plurality of preset display pose information corresponding to the target virtual scene, the display animation of the target virtual scene that matches the relative orientation information of the face of the target user with respect to the display device.
In a possible implementation manner, the playing module is further configured to:
after detecting that the relative orientation information of the face of the target user with respect to the display device has changed, adjust the display animation played on the display device according to the changed relative orientation information.
In a third aspect, embodiments of the present disclosure further provide a computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect, or any of the possible implementations of the first aspect.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect, or any of the possible implementations of the first aspect.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. These drawings, which are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 shows a flow chart of a presentation control method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of the relative positions of an image capture device and a display device provided by embodiments of the present disclosure;
FIG. 3 illustrates a schematic diagram of the relative positions of another image capture device and a display device provided by embodiments of the present disclosure;
FIG. 4 is a schematic diagram of an architecture of a display control device according to an embodiment of the disclosure;
fig. 5 shows a schematic structural diagram of a computer device 500 provided by an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
In the related art, when a landscape animation is displayed, only a preset landscape animation is played; in this approach the displayed animation is easily ignored by the user, so the display effect is poor. Some venues increase the interaction between users and the landscape animation, but the interaction mode is single and the display effect remains poor.
Based on this, embodiments of the present disclosure provide a display control method and device, which control the selection of the display animation of the target virtual scene through the relative orientation information of the face of the target user with respect to the display device. By changing the orientation of the face relative to the display device, the target user can view display animations of the virtual scene under different display pose information, which increases the interaction between the target user and the venue, enriches the playback control methods, and improves the display effect.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the present embodiment, the display control method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the display control method provided in the embodiments of the present disclosure is generally a computer device having a certain computing capability, for example a terminal device, a server, or another processing device. The terminal device may be a user equipment (UE), a mobile device, a user terminal, a personal digital assistant (PDA), a computing device, or the like.
Referring to fig. 1, a flowchart of a display control method provided by an embodiment of the present disclosure is shown; the method includes steps 101 to 104:
Step 101, acquiring a face image of a target user.
The target user may be a user who enters a target area, and the target area may be a preset position area. The face image may be acquired by an image acquisition device, which may be connected to the electronic device executing this scheme; the connection may be wired or wireless, and the wireless connection may include a Bluetooth connection, a wireless local area network connection, and the like.
The installation position and orientation of the image acquisition device may be fixed, so that the position area covered by the images it acquires is fixed; this covered area includes the target area, or is exactly the target area.
In one possible implementation, the image acquisition device may acquire images in real time and transmit them to the electronic device executing this scheme, which then analyses whether the acquired images contain a target user.
In another possible implementation, the target area may be provided with an infrared detection device, through which it can be detected whether a target user is present in the target area. When the infrared detection device detects a target user in the target area, it may feed this back to the electronic device executing this scheme, and the electronic device then controls the image acquisition device to acquire images.
Here, after the infrared detection device detects that a target user is in the target area, the image acquisition device may acquire face images of the target user in real time.
Step 102, determining relative orientation information of the face of the target user with respect to the display device based on the face image.
In a specific implementation, when determining the relative orientation information of the face of the target user with respect to the display device based on the face image, the relative orientation information of the face with respect to the image acquisition device that acquires the face image may first be determined based on the face image; the relative orientation information of the face with respect to the display device may then be determined based on that information together with the relative positional relationship between the image acquisition device and the display device.
The relative orientation information of the face of the target user with respect to the image acquisition device is the angle information of the face of the target user in the face image. When determining this information based on the face image, the face image may be input into a trained first neural network to obtain the relative orientation information of the face with respect to the image acquisition device.
The first neural network is trained on sample face images carrying relative orientation labels. Specifically, a sample face image carrying a relative orientation label may be input into the first neural network, which outputs predicted relative orientation information; a loss value is then determined based on the predicted relative orientation information and the relative orientation label, and when the loss value is greater than the preset loss value, the network parameters of the first neural network are adjusted and the network is trained again.
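The adjust-and-retrain loop described above can be sketched as follows. This is a minimal illustration only: a one-dimensional linear model and a mean-squared-error loss stand in for the first neural network, whose architecture and loss function the disclosure does not specify.

```python
def train_orientation_model(samples, lr=0.1, loss_threshold=0.01, max_epochs=1000):
    """Fit a toy linear model y = w*x + b to (feature, orientation_label)
    pairs, adjusting parameters while the loss exceeds a preset threshold."""
    w, b = 0.0, 0.0
    loss = float("inf")
    for _ in range(max_epochs):
        preds = [w * x + b for x, _ in samples]
        # Mean squared error between predicted and labelled orientation.
        loss = sum((p - y) ** 2 for p, (_, y) in zip(preds, samples)) / len(samples)
        if loss <= loss_threshold:
            break  # loss below the preset value: stop training
        # Loss still above the preset value: adjust parameters, train again.
        grad_w = sum(2 * (p - y) * x for p, (x, y) in zip(preds, samples)) / len(samples)
        grad_b = sum(2 * (p - y) for p, (_, y) in zip(preds, samples)) / len(samples)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b, loss
```

The stopping condition mirrors the text: training continues only while the loss value remains above the preset loss value.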
The relative positional relationship between the image pickup device and the display device may be a relationship between mounting positions of the image pickup device and the display device, for example, the image pickup device may be directly above the display device, the image pickup device may be on the left side of the display device, the image pickup device may be on the right side of the display device, or the like.
The relative orientation information of the target user with respect to the image acquisition device and the relative orientation information of the target user with respect to the display device may be different. For example, if the image acquisition device is on the left side of the display device, installed for example as shown in fig. 2, and the target user faces the image acquisition device, then the relative orientation information of the target user with respect to the image acquisition device is 0 degrees while the relative orientation information with respect to the display device is 90 degrees.
The two pieces of relative orientation information may also be the same. For example, if the image acquisition device is directly above the display device, installed as shown in fig. 3, then the relative orientation information of the target user with respect to the image acquisition device and with respect to the display device are both 0 degrees.
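The two examples above amount to composing the face-to-camera orientation with the camera/display mounting relationship. The sketch below is a deliberate simplification: orientation is reduced to a single yaw angle in degrees and the mounting relationship to an angular offset, neither of which the disclosure fixes as a concrete representation.

```python
def face_orientation_to_display(face_to_camera_deg, camera_to_display_offset_deg):
    """Combine the face-to-camera orientation with the camera/display mounting
    offset to obtain face-to-display orientation, normalised to [-180, 180)."""
    angle = face_to_camera_deg + camera_to_display_offset_deg
    return (angle + 180) % 360 - 180

# Fig. 2 case: camera to the left of the display (90-degree offset); a user
# facing the camera (0 degrees) is at 90 degrees relative to the display.
print(face_orientation_to_display(0, 90))  # 90
# Fig. 3 case: camera directly above the display (no angular offset).
print(face_orientation_to_display(0, 0))   # 0
```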
Step 103, determining target display pose information of the target virtual scene based on the relative orientation information.
In one possible implementation, before the target display pose information of the target virtual scene is determined based on the relative orientation information, the target virtual scene may further be selected from a plurality of candidate virtual scenes according to the face attributes of the target user.
Specifically, face attribute features may be extracted from the acquired face image, and the target virtual scene may then be selected from the plurality of candidate virtual scenes according to the extracted face attribute features.
Wherein the face attribute features may include at least one of the following features:
Gender, age, smile value, attractiveness score, mood, skin tone.
When extracting the face attribute features from the acquired face image, the face image may be input into a trained second neural network to obtain the face attribute features of the target user. The second neural network is trained based on sample face images carrying face attribute feature labels; its training process is similar to that of the first neural network and is not repeated here.
When selecting the target virtual scene from the plurality of candidate virtual scenes based on the face attribute features, a target virtual scene matching the face attribute features may be looked up in a preset mapping library between face attribute features and virtual scenes.
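Such a preset mapping library can be sketched as a simple lookup table; the attribute keys and scene names below are purely illustrative assumptions, not values from the disclosure.

```python
# Hypothetical preset mapping between face attribute features and scenes.
SCENE_MAPPING = {
    ("female", "smiling"): "sunflower",
    ("male", "smiling"): "fireworks",
    ("female", "calm"): "lotus_pond",
}
DEFAULT_SCENE = "forest"

def select_target_scene(gender, mood):
    """Look up the candidate virtual scene matching the extracted attributes,
    falling back to a default scene when no entry matches."""
    return SCENE_MAPPING.get((gender, mood), DEFAULT_SCENE)
```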
In another possible implementation, the face image may be input into a trained third neural network to directly obtain the target virtual scene corresponding to the face image. The third neural network is trained based on sample face images carrying target virtual scene labels; its training process is similar to that of the first neural network and is not repeated here.
The target display pose information of the target virtual scene is the display angle of the target virtual scene; the display animations corresponding to the virtual scene differ under different display pose information.
Step 104, acquiring a display animation of the target virtual scene corresponding to the target display pose information, and playing the display animation on the display device.
In one possible implementation, when acquiring the display animation of the target virtual scene corresponding to the target display pose information, the display animation of the target virtual scene that matches the relative orientation information of the face of the target user with respect to the display device may be selected from the pre-stored display animations under the plurality of display pose information corresponding to the target virtual scene.
In a specific implementation, the display pose information of the target virtual scene includes the display angle of the target virtual scene; in practical applications, the display angle of the target virtual scene may be kept consistent with the relative orientation information of the face of the target user with respect to the display device.
For example, if the target virtual scene is a sunflower, the display animations under the plurality of display pose information corresponding to it are animations of the sunflower blooming at different display angles. If the relative orientation information of the face of the target user with respect to the display device is 0 degrees, that is, the target user directly faces the display device, the animation of the sunflower blooming at a 0-degree display angle may be displayed.
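Matching the relative orientation against the pre-stored display pose information can be sketched as a nearest-angle lookup; the preset angles and file names below are illustrative assumptions.

```python
# Pre-stored display animations for one target virtual scene, keyed by
# display angle in degrees (hypothetical assets).
ANIMATIONS = {
    0: "sunflower_0deg.mp4",
    30: "sunflower_30deg.mp4",
    60: "sunflower_60deg.mp4",
    90: "sunflower_90deg.mp4",
}

def select_animation(face_to_display_deg):
    """Return the pre-stored animation whose display angle is closest to the
    face's relative orientation with respect to the display device."""
    best_angle = min(ANIMATIONS, key=lambda a: abs(a - face_to_display_deg))
    return ANIMATIONS[best_angle]

# A user directly facing the display gets the 0-degree animation.
print(select_animation(0))  # sunflower_0deg.mp4
```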
In one possible implementation, after it is detected that the relative orientation information of the face of the target user with respect to the display device has changed, the display animation played on the display device may be adjusted according to the changed relative orientation information.
Specifically, the relative orientation information of the face of the target user with respect to the display device may be detected in real time, and the currently detected value compared with the previously detected value to determine whether it has changed. If it has changed, the target display pose information of the target virtual scene is re-determined according to the currently detected relative orientation information, the display animation of the target virtual scene corresponding to the re-determined target display pose information is acquired, and that display animation is played on the display device.
For example, if the currently detected relative orientation information of the face of the target user with respect to the display device is 90 degrees while the previously detected value was 60 degrees, the currently displayed animation at a 60-degree display angle may be updated to the animation at a 90-degree display angle.
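The re-detection logic above amounts to comparing each newly detected orientation with the previous one and switching the animation on a change. A minimal sketch, with a plain list standing in for the real-time detection stream:

```python
def monitor_orientation(orientation_stream, on_change):
    """Call on_change(angle) whenever the detected face-to-display
    orientation differs from the previously detected value."""
    last = None
    for angle in orientation_stream:
        if last is not None and angle != last:
            on_change(angle)  # re-determine pose and switch the animation
        last = angle

switches = []
monitor_orientation([60, 60, 90, 90], switches.append)
print(switches)  # [90] -- only the 60 -> 90 change triggers an adjustment
```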
In a possible implementation, the relative orientation information of the face of the target user with respect to the display device further includes the position of the face relative to the display device, for example whether the face is at the middle, left, or right of the display device.
When the display animation is played on the display device, the display position of the display animation on the display device may be determined according to the position information of the face of the target user in the face image, and the display animation may then be played at that display position.
When determining the display position according to the position information of the face in the face image, the display position may be determined based on a preset correspondence between face position information in the image and display positions on the display device.
Because the installation position and orientation of the image acquisition device are fixed, the correspondence between positions in the images it acquires and display positions on the display device can be preset. After the position information of the face of the target user in the face image is determined, the corresponding display position can be determined according to this correspondence.
For example, if, in the preset correspondence, the pixel (x, y) in an image acquired by the image acquisition device corresponds to the display position coordinates (a, b) on the display device, then when the face of the target user is located at (x, y) in the face image, the corresponding display animation can be displayed at position (a, b) on the display device.
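One simple form the preset correspondence could take is a linear scaling from image coordinates to screen coordinates, assuming the camera's view maps uniformly onto the screen area; the disclosure only requires that some correspondence be preset, so this is an illustrative choice.

```python
def image_to_display(x, y, image_size, screen_size):
    """Map a face position (x, y) in the captured image to display
    coordinates (a, b) by linear scaling."""
    img_w, img_h = image_size
    scr_w, scr_h = screen_size
    return x * scr_w / img_w, y * scr_h / img_h

# A face at the centre of a 1280x720 image maps to the centre of a
# 1920x1080 screen.
print(image_to_display(640, 360, (1280, 720), (1920, 1080)))  # (960.0, 540.0)
```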
In a possible implementation manner, when faces of a plurality of target users are detected in the face image, different display methods can be used, according to the display positions corresponding to the respective display animations, when the display device is controlled to play the display animations. This can be divided into the following two cases:
Case 1: the display positions corresponding to the plurality of display animations have no overlapping area.
In this case, when the display device is controlled to play the display animations, it may be controlled to synchronously display each display animation at its corresponding display position.
Case 2: overlapping areas exist among the display positions corresponding to the plurality of display animations.
In this case, the display device may be controlled to display the display animations sequentially, or one display animation may be selected from among them to play.
Specifically, display priorities of different display animations can be preset. When the display positions corresponding to the plurality of display animations have overlapping areas, the display order of the plurality of display animations can be determined according to their display priorities, and the animations can then be displayed in that order.
In another possible embodiment, when one display animation is to be selected from the respective display animations to play, it may be chosen at random.
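The two cases above can be sketched as follows. The rectangle representation of a display position and the `priority` field are hypothetical stand-ins for the "display position" and "display priority" described in the text:

```python
def rects_overlap(r1, r2):
    """Rectangles as (x, y, w, h); True if the two display regions intersect."""
    return not (r1[0] + r1[2] <= r2[0] or r2[0] + r2[2] <= r1[0] or
                r1[1] + r1[3] <= r2[1] or r2[1] + r2[3] <= r1[1])

def schedule_animations(anims):
    """anims: list of dicts with hypothetical 'rect' and 'priority' keys.
    Returns ('synchronous', anims) when no display regions overlap (Case 1),
    otherwise ('sequential', anims ordered by descending priority) (Case 2)."""
    overlapping = any(rects_overlap(a['rect'], b['rect'])
                      for i, a in enumerate(anims) for b in anims[i + 1:])
    if not overlapping:
        return 'synchronous', list(anims)
    return 'sequential', sorted(anims, key=lambda a: a['priority'], reverse=True)
```

The random-selection variant would simply replace the sorted order with `random.choice(anims)`.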
According to the above method, the selection of the display animation of the target virtual scene is controlled through the relative orientation information of the target user's face with respect to the display device. By changing the orientation of the face relative to the display device, the target user can cause the display of virtual-scene animations under different display pose information. This increases interaction between the target user and the venue, enriches the playback control methods, and improves the display effect.
It will be appreciated by those skilled in the art that, in the methods of the specific embodiments above, the written order of the steps does not imply a strict order of execution; the actual order of execution should be determined by the functions of the steps and their possible inherent logic.
Based on the same inventive concept, the embodiments of the present disclosure further provide a display control apparatus corresponding to the display control method. Since the principle by which the apparatus solves the problem is similar to that of the display control method in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.
Referring to fig. 4, an architecture diagram of a display control device according to an embodiment of the disclosure is provided, where the device includes: an acquisition module 401, a first determination module 402, a second determination module 403, and a playback module 404; wherein,
An acquisition module 401, configured to acquire a face image of a target user;
A first determining module 402, configured to determine, based on the face image, relative orientation information of a face of the target user relative to a display device;
A second determining module 403, configured to determine target display pose information of a target virtual scene based on the relative orientation information;
A playing module 404, configured to acquire a display animation of the target virtual scene corresponding to the target display pose information, and play the display animation on the display device.
In a possible implementation manner, the first determining module 402 is configured to, when determining, based on the face image, relative orientation information of a face of the target user with respect to a display device:
determining, based on the face image, relative orientation information of the face of the target user with respect to the image acquisition device that captures the face image;
and determining the relative orientation information of the face of the target user with respect to the display device based on the relative orientation information of the face with respect to the image acquisition device and the relative positional relationship between the image acquisition device and the display device.
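The two-step computation above — first the face's position relative to the camera, then conversion into the display's frame using the fixed camera-to-display relationship — can be sketched as follows. The offset values are hypothetical calibration data for illustration:

```python
def face_relative_to_display(face_rel_camera, camera_rel_display):
    """Compose the face's offset from the camera with the camera's offset
    from the display, both as (x, y, z) in a shared metric frame, to get
    the face's offset from the display."""
    return tuple(f + c for f, c in zip(face_rel_camera, camera_rel_display))

# Hypothetical setup: camera mounted 0.3 m above the display center,
# face detected 1.5 m in front of the camera, slightly right and below it.
face_pos = face_relative_to_display((0.1, -0.2, 1.5), (0.0, 0.3, 0.0))
```

A real installation would also account for the camera's rotation relative to the display (a full rigid transform rather than a pure translation); the sketch assumes the two are axis-aligned.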
In a possible implementation manner, the second determining module 403 is further configured to, before the target display pose information of the target virtual scene is determined based on the relative orientation information:
extracting face attribute features from the obtained face image;
and selecting the target virtual scene from a plurality of candidate virtual scenes according to the extracted face attribute features.
In a possible implementation manner, the face attribute features include at least one of the following:
Gender, age, smile score, attractiveness score, mood, skin tone.
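Attribute-based selection of the target virtual scene can be sketched as below. The attribute names mirror the list above, but the selection rules, scene names, and thresholds are purely illustrative assumptions:

```python
def select_virtual_scene(attrs, candidates, default='default'):
    """attrs: dict of extracted face attributes, e.g. {'age': 8, 'smile': 0.3}.
    candidates: dict mapping a scene name to a predicate over the attributes.
    Returns the first candidate whose predicate matches, else the default."""
    for name, predicate in candidates.items():
        if predicate(attrs):
            return name
    return default

# Hypothetical candidate virtual scenes and matching rules:
candidates = {
    'fireworks': lambda a: a.get('smile', 0) > 0.8,   # broadly smiling users
    'cartoon':   lambda a: a.get('age', 99) < 12,     # young children
}
scene = select_virtual_scene({'age': 8, 'smile': 0.3}, candidates)
```

In practice the mapping from attributes to candidate scenes would be configured per venue rather than hard-coded.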
In a possible implementation manner, the playing module 404 is configured to, when acquiring the display animation of the target virtual scene corresponding to the target display pose information:
select, from the display animations under a plurality of preset display pose information items corresponding to the target virtual scene, the display animation that matches the relative orientation information of the face of the target user with respect to the display device.
In a possible implementation manner, the playing module 404 is further configured to:
after detecting that the relative orientation information of the face of the target user with respect to the display device has changed, adjust the display animation played on the display device according to the changed relative orientation information.
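One way to realize this adjustment, assuming the preset display pose information is keyed by (yaw, pitch) angles (the original does not specify the matching metric), is to re-select the animation whose preset pose is nearest to the newly detected orientation:

```python
def nearest_pose_animation(orientation, pose_animations):
    """orientation: detected (yaw, pitch) of the face in degrees.
    pose_animations: dict mapping preset (yaw, pitch) poses to animation ids.
    Returns the animation whose preset pose is closest (squared distance)."""
    def sq_dist(pose):
        return ((pose[0] - orientation[0]) ** 2 +
                (pose[1] - orientation[1]) ** 2)
    best = min(pose_animations, key=sq_dist)
    return pose_animations[best]

# Hypothetical preset poses for one target virtual scene:
anims = {(0, 0): 'front.anim', (30, 0): 'left.anim', (-30, 0): 'right.anim'}
```

Each time the detected orientation changes, re-running this lookup and swapping in the returned animation realizes the adjustment described above.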
According to the above apparatus, the selection of the display animation of the target virtual scene is controlled through the relative orientation information of the target user's face with respect to the display device. By changing the orientation of the face relative to the display device, the target user can cause the display of virtual-scene animations under different display pose information. This increases interaction between the target user and the venue, enriches the playback control methods, and improves the display effect.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical concept, the embodiments of the present application further provide a computer device. Referring to fig. 5, a schematic diagram of a computer device 500 according to an embodiment of the present application is shown. The computer device includes a processor 501, a memory 502, and a bus 503. The memory 502 is configured to store execution instructions and includes an internal memory 5021 and an external memory 5022. The internal memory 5021 temporarily stores operation data in the processor 501 and data exchanged with the external memory 5022, such as a hard disk; the processor 501 exchanges data with the external memory 5022 through the internal memory 5021. When the computer device 500 runs, the processor 501 and the memory 502 communicate through the bus 503, so that the processor 501 executes the following instructions:
Acquiring a face image of a target user;
Determining relative orientation information of the face of the target user relative to the display device based on the face image;
determining target display pose information of a target virtual scene based on the relative orientation information;
acquiring a display animation of the target virtual scene corresponding to the target display pose information, and playing the display animation on the display device.
The disclosed embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the display control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the display control method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the steps of the display control method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The disclosed embodiments also provide a computer program which, when executed by a processor, implements any of the methods of the previous embodiments. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the systems and apparatuses described above may refer to the corresponding procedures in the foregoing method embodiments, and are not described here again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through indirect couplings or communication connections of some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solutions of the present disclosure, in essence, the part contributing to the prior art, or a part of the technical solutions may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions; the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features. Such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A display control method, characterized by comprising:
Acquiring a face image of a target user;
Determining relative orientation information of the face of the target user relative to the display device based on the face image;
determining target display pose information of a target virtual scene based on the relative orientation information;
acquiring a display animation of the target virtual scene corresponding to the target display pose information, and playing the display animation on the display device;
wherein, when faces of a plurality of target users are detected in the face image, the display device, when playing the display animations, determines the display position of each corresponding display animation based on the relative orientation information of each target user, and displays the display animations in different manners according to the display positions of the display animations;
and, in a case where the display positions of the plurality of display animations have an overlapping area, determining a display order of the plurality of display animations based on display priorities corresponding to the plurality of display animations, and displaying the plurality of display animations based on the display order.
2. The method of claim 1, wherein the determining the relative orientation information of the face of the target user with respect to the display device based on the face image comprises:
based on the face image, determining relative orientation information of the face of the target user with respect to an image acquisition device that captures the face image;
and determining the relative orientation information of the face of the target user with respect to the display device based on the relative orientation information of the face with respect to the image acquisition device and the relative positional relationship between the image acquisition device and the display device.
3. The method of claim 1, wherein, before the determining target display pose information of a target virtual scene based on the relative orientation information, the method further comprises:
extracting face attribute features from the obtained face image;
and selecting the target virtual scene from a plurality of candidate virtual scenes according to the extracted face attribute features.
4. A method according to claim 3, wherein the face attribute features include at least one of:
Gender, age, smile score, attractiveness score, mood, skin tone.
5. The method of claim 1, wherein the acquiring a display animation of the target virtual scene corresponding to the target display pose information comprises:
selecting, from the display animations under a plurality of preset display pose information items corresponding to the target virtual scene, the display animation that matches the relative orientation information of the face of the target user with respect to the display device.
6. The method according to claim 1, wherein the method further comprises:
after detecting that the relative orientation information of the face of the target user with respect to the display device has changed, adjusting the display animation played on the display device according to the changed relative orientation information.
7. A display control apparatus, comprising:
The acquisition module is used for acquiring a face image of a target user;
The first determining module is configured to determine, based on the face image, relative orientation information of the face of the target user with respect to the display device;
The second determining module is configured to determine target display pose information of the target virtual scene based on the relative orientation information;
The playing module is configured to acquire a display animation of the target virtual scene corresponding to the target display pose information and play the display animation on the display device; wherein, when faces of a plurality of target users are detected in the face image, the display device, when playing the display animations, determines the display position of each corresponding display animation based on the relative orientation information of each target user, and displays the display animations in different manners according to the display positions of the display animations; and, in a case where the display positions of the plurality of display animations have an overlapping area, determines a display order of the plurality of display animations based on display priorities corresponding to the plurality of display animations, and displays the plurality of display animations based on the display order.
8. The apparatus of claim 7, wherein the playing module, when acquiring the display animation of the target virtual scene corresponding to the target display pose information, is configured to:
select, from the display animations under a plurality of preset display pose information items corresponding to the target virtual scene, the display animation that matches the relative orientation information of the face of the target user with respect to the display device.
9. A computer device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, wherein, when the computer device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the display control method according to any one of claims 1 to 6.
10. A computer-readable storage medium, wherein the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the display control method according to any one of claims 1 to 6.
CN202010494566.XA 2020-06-03 2020-06-03 Display control method and device Active CN111625101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010494566.XA CN111625101B (en) 2020-06-03 2020-06-03 Display control method and device


Publications (2)

Publication Number Publication Date
CN111625101A CN111625101A (en) 2020-09-04
CN111625101B true CN111625101B (en) 2024-05-17

Family

ID=72260205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010494566.XA Active CN111625101B (en) 2020-06-03 2020-06-03 Display control method and device

Country Status (1)

Country Link
CN (1) CN111625101B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112244588A (en) * 2020-09-25 2021-01-22 广东全石石材供应链管理有限公司 Stone display method and device

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408800A (en) * 2008-11-14 2009-04-15 东南大学 Method for performing three-dimensional model display control by CCD camera
WO2010143359A1 (en) * 2009-06-10 2010-12-16 日本電気株式会社 Avatar display system, device, method, and program
CN103873941A (en) * 2012-12-17 2014-06-18 联想(北京)有限公司 Display method and electronic equipment
EP2811462A1 (en) * 2013-06-07 2014-12-10 Samsung Electronics Co., Ltd Method and device for providing information in view mode
CN107613203A (en) * 2017-09-22 2018-01-19 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108268227A (en) * 2017-01-04 2018-07-10 京东方科技集团股份有限公司 Show equipment
CN108510917A (en) * 2017-02-27 2018-09-07 北京康得新创科技股份有限公司 Event-handling method based on explaining device and explaining device
CN108509660A (en) * 2018-05-29 2018-09-07 维沃移动通信有限公司 A kind of broadcasting object recommendation method and terminal device
CN109615703A (en) * 2018-09-28 2019-04-12 阿里巴巴集团控股有限公司 Image presentation method, device and the equipment of augmented reality
CN109697623A (en) * 2017-10-23 2019-04-30 北京京东尚科信息技术有限公司 Method and apparatus for generating information
CN110124305A (en) * 2019-05-15 2019-08-16 网易(杭州)网络有限公司 Virtual scene method of adjustment, device, storage medium and mobile terminal
CN110555507A (en) * 2019-10-22 2019-12-10 深圳追一科技有限公司 Interaction method and device for virtual robot, electronic equipment and storage medium
CN110555876A (en) * 2018-05-30 2019-12-10 百度在线网络技术(北京)有限公司 Method and apparatus for determining position
CN110673810A (en) * 2019-09-27 2020-01-10 杭州鸿雁智能科技有限公司 Display device, display method and device thereof, storage medium and processor
CN110871447A (en) * 2018-08-31 2020-03-10 比亚迪股份有限公司 Vehicle-mounted robot and man-machine interaction method thereof
CN110968239A (en) * 2019-11-28 2020-04-07 北京市商汤科技开发有限公司 Control method, device and equipment for display object and storage medium
CN110996148A (en) * 2019-11-27 2020-04-10 重庆特斯联智慧科技股份有限公司 Scenic spot multimedia image flow playing system and method based on face recognition


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification;Mar Gonzalez-Franco et al.;《IEEE Transactions on Visualization and Computer Graphics》;20200213;全文 *
Interactive Virtual Product Display Technology Based on VRML; Gao Jianhong, Hu Zhihua, Sun Yong; Journal of Soochow University (Natural Science Edition); 2005-12-30 (No. 4); full text *
Design and Research of Product Display Based on Virtual Technology; Lin Xiangdong; Wanfang Database; 2009-04-30; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant