CN111643900B - Display screen control method and device, electronic equipment and storage medium - Google Patents

Display screen control method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN111643900B
CN111643900B (application CN202010515234.5A)
Authority
CN
China
Prior art keywords
park
target
user
virtual
special effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010515234.5A
Other languages
Chinese (zh)
Other versions
CN111643900A (en)
Inventor
潘思霁
揭志伟
李炳泽
张一�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202010515234.5A priority Critical patent/CN111643900B/en
Publication of CN111643900A publication Critical patent/CN111643900A/en
Application granted granted Critical
Publication of CN111643900B publication Critical patent/CN111643900B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178: Human faces, e.g. facial parts, sketches or expressions; estimating age from a face image; using age information for improving recognition
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a display screen control method and apparatus, an electronic device, and a storage medium. The method includes: after acquiring a scene image of an AR park captured by a target AR device, acquiring a face image of the target user corresponding to the target AR device; identifying user attribute features based on the face image; acquiring, based on the identified user attribute features, AR park special effect data matching those features; and controlling, based on the AR park special effect data, the target AR device to display the AR special effect corresponding to that data. In this way, different AR special effects can be displayed to different users for the same AR park, so that the AR special effects the park can present are richer.

Description

Display screen control method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of AR, and in particular, to a display screen control method, a display screen control device, an electronic device and a storage medium.
Background
As living standards improve, group outings such as class reunions and company team-building activities have become more common. In such activities, a physical amusement park often struggles to meet the play needs of every user in the group because of its limited grounds, limited attractions, and similar constraints.
In recent years, augmented reality (AR) technology has developed rapidly. For amusement parks, how to serve users better through AR technology, enrich the parks' project content, and improve the effective utilization of resources in all respects is a problem worth studying.
Disclosure of Invention
The embodiment of the disclosure at least provides a display screen control method, a display screen control device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a display screen control method, where the method includes:
after acquiring a scene image of an AR park captured by a target AR device, acquiring a face image of the target user corresponding to the target AR device;
identifying user attribute features based on the face image;
acquiring, based on the identified user attribute features, AR park special effect data matching the user attribute features;
and controlling, based on the AR park special effect data, the target AR device to display the AR special effect corresponding to the AR park special effect data.
With this method, different AR special effects can be displayed to different users for the same AR park, so that the AR special effects the park can present are richer. Moreover, because the determined AR special effect matches the user's attribute features, the special effect displayed to a user matches that user, which improves the user's acceptance of the displayed AR special effect and thereby the effective utilization of AR resources.
In a possible implementation manner, acquiring the face image of the target user includes:
controlling the target AR device to switch its camera, and capturing the face image of the target user.
In the embodiment of the disclosure, the target AR device already in the user's hands can be controlled to acquire the face image of the target user; this acquisition is relatively simple and requires no additional equipment.
In one possible implementation, identifying the user attribute features based on the face image includes:
extracting attribute features from the face image based on a trained feature extraction network, where the feature extraction network is trained on face image samples labeled with attribute features.
In one possible implementation, acquiring, based on the identified user attribute features, AR park special effect data matching the user attribute features includes:
determining, based on the identified user attribute features, at least one park theme type matching the user attribute features;
and, in response to a target park theme type selected by the user from the at least one park theme type, acquiring AR park special effect data matching the target park theme type.
In the embodiment of the disclosure, the park theme types determined to match the user attribute features may be several. To present the AR special effect the user most wants to watch, the determined park theme types may be shown to the user for selection; the park theme type the user selects is then taken as the target park theme type, and AR park special effect data matching the target park theme type is acquired.
In one possible implementation, the AR park special effect data includes display data of a virtual gate of a theme park;
after controlling the target AR device to display the virtual gate of the theme park, the method further includes:
in response to an opening trigger instruction of the target user for the virtual gate, acquiring the AR park project content behind the virtual gate;
and controlling, based on the AR park project content behind the virtual gate, the target AR device to display the AR special effect corresponding to that project content.
In the embodiment of the disclosure, letting the user preview the project content behind the gate before entering improves the user's play efficiency.
In one possible implementation, after controlling the target AR device to display the virtual gate of the theme park, the method further includes:
in response to a switching trigger instruction of the target user for the virtual gate, acquiring display data of other virtual gates matched with other AR park project content, where each virtual gate displays prompt information about the AR park project content behind it;
and controlling the target AR device to display the updated virtual gate, so that the user again selects whether to trigger opening.
In one possible implementation, the user attribute features include at least one of:
gender, age, facial attractiveness, expression.
In a second aspect, an embodiment of the present disclosure provides a display screen control apparatus, including:
a first obtaining unit, configured to acquire, after a scene image of an AR park captured by a target AR device is acquired, a face image of the target user corresponding to the target AR device;
an identification unit, configured to identify user attribute features based on the face image;
a second obtaining unit, configured to acquire, based on the identified user attribute features, AR park special effect data matching the user attribute features;
and a display unit, configured to control, based on the AR park special effect data, the target AR device to display the AR special effect corresponding to the AR park special effect data.
In a possible implementation manner, the first obtaining unit, when acquiring the face image of the target user, is configured to:
control the target AR device to switch its camera, and capture the face image of the target user.
In a possible implementation manner, the identification unit, when identifying the user attribute features based on the face image, is configured to:
extract attribute features from the face image based on a trained feature extraction network, where the feature extraction network is trained on face image samples labeled with attribute features.
In a possible implementation manner, the second obtaining unit, when acquiring AR park special effect data matching the identified user attribute features, is configured to:
determine, based on the identified user attribute features, at least one park theme type matching the user attribute features;
and, in response to a target park theme type selected by the user from the at least one park theme type, acquire AR park special effect data matching the target park theme type.
In one possible implementation, the AR park special effect data includes display data of a virtual gate of a theme park;
the second obtaining unit is further configured to, after the target AR device is controlled to display the virtual gate of the theme park, acquire the AR park project content behind the virtual gate in response to an opening trigger instruction of the target user for the virtual gate;
the display unit is further configured to control, based on the AR park project content behind the virtual gate, the target AR device to display the AR special effect corresponding to that project content.
In a possible implementation manner, the second obtaining unit is further configured to, after the target AR device is controlled to display the virtual gate of the theme park, acquire display data of other virtual gates matched with other AR park project content in response to a switching trigger instruction of the target user for the virtual gate, where each virtual gate displays prompt information about the AR park project content behind it;
the display unit is further configured to control the target AR device to display the updated virtual gate, so that the user again selects whether to trigger opening.
In one possible implementation, the user attribute features include at least one of:
gender, age, facial attractiveness, expression.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the display screen control method described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, performs the steps of the display screen control method described in the first aspect.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. These drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to illustrate the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; for a person of ordinary skill in the art, other related drawings may be obtained from these drawings without inventive effort.
Fig. 1 is a flow chart illustrating a method for controlling a display screen according to an embodiment of the disclosure;
fig. 2 is a flowchart illustrating another method for controlling a display screen according to an embodiment of the disclosure;
fig. 3 is a flowchart illustrating another method for controlling a display screen according to an embodiment of the disclosure;
fig. 4 is a flowchart illustrating another display screen control method according to an embodiment of the disclosure;
fig. 5 is a schematic structural diagram of a display screen control device according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" is used herein to describe only one relationship, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Research shows that AR technology can bring users a more vivid experience, and combining AR technology with a theme park can also bring users rich project content. For example, through AR technology different content can be displayed in the same theme park, which enriches the park's content; different AR special effects can be displayed to different users, which further meets the needs of different users; and, compared with a traditional theme park, the space the park occupies can be saved.
Based on the above study, the disclosure provides a display screen control method and apparatus, an electronic device, and a storage medium. After a scene image of an AR park captured by a target AR device is acquired, a face image of the target user corresponding to the target AR device is acquired, and user attribute features of the target user are identified based on the face image, so that relevant information about the target user is determined through those features. AR park special effect data matching the user attribute features is then acquired based on those features, and the target AR device is controlled, based on the AR park special effect data, to display the AR special effect corresponding to that data.
To facilitate understanding of the present embodiment, the display screen control method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the display screen control method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, specifically a terminal device, a server, or another processing device; for example, it may be a server connected to an AR device, where the AR device may include, for example, AR glasses, a tablet computer, a smartphone, or a smart wearable device with a display function and data processing capability, which is not limited in the embodiments of the present disclosure. In some possible implementations, the display screen control method may be implemented by a processor calling computer-readable instructions stored in a memory.
Fig. 1 is a flowchart of a display screen control method provided by an embodiment of the disclosure. As shown in Fig. 1, the method includes the following steps:
step 101, after obtaining a scene image of an AR park shot by target AR equipment, obtaining a face image of a target user corresponding to the target AR equipment.
Step 102, identifying user attribute features based on the face image.
Step 103, based on the identified user attribute characteristics, AR park special effect data matched with the user attribute characteristics is obtained.
And 104, controlling the target AR equipment to display the AR special effects corresponding to the AR park special effect data based on the AR park special effect data.
Specifically, when a user plays in an AR park, the user may capture a scene image of the AR park through an AR device and send it to a server, which treats that AR device as the target AR device. After obtaining the scene image sent by the target AR device, the server needs to obtain the face image of the target user corresponding to that device in order to show the user a matched AR special effect. For example, when the AR device is a mobile phone, the user may install the AR park's applet on the phone upon entering the park and capture a face image to upload to the server; the phone then sends captured scene images of the AR park to the server through the applet, and after receiving a scene image the server can determine the target user's face image through the correspondence between the phone and the face image. After acquiring the face image, the server identifies it and determines the corresponding user attribute features. Because the user attribute features corresponding to different users' face images differ, different AR park special effect data can be matched according to different user attribute features; for example, after a user attribute feature is determined, the AR park special effect data corresponding to it is determined according to a preset correspondence. The target AR device is then controlled, based on the obtained AR park special effect data, to display the corresponding AR special effect, at which point the user can see virtual animation displayed through AR technology. For the same AR park, different AR special effects can thus be displayed to different users, so the AR special effects the park can present are richer; and because the determined AR special effect matches the user attribute features, the effect displayed to a user matches that user, which improves the user's acceptance of the displayed AR special effects.
Take a marine-life AR park as an example. When the determined user attribute features are those of a girl under 13, the determined AR park special effect data may be a virtual animation of dolphins playing and calling in the sea; when the features are those of a boy under 13, the data may be a virtual animation of two mechanical sharks fighting; when the features are those of a 25-year-old man, the data may be a virtual animation of a great shark foraging in the sea, of the shark capturing prey, of an octopus fighting a shark, or the like; and when the features are those of a 25-year-old woman, the data may be a virtual animation of a dolphin performance. The AR park special effect data corresponding to different user attribute features can be set according to actual needs and is not specifically limited here.
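To make steps 101 to 104 concrete, the following is a minimal server-side sketch in Python using the marine-park example above. It is an illustrative assumption rather than the disclosed implementation: the helper names, data shapes, and rule thresholds are invented for this sketch.

```python
# Minimal sketch of steps 101-104 (illustrative assumption, not the disclosed
# implementation): identify the user's attribute features, match them to AR
# park special effect data via a preset correspondence, and return the effect
# that the target AR device should be controlled to display.
from dataclasses import dataclass

@dataclass
class UserAttributes:
    gender: str  # "female" or "male"
    age: int

def identify_attributes(face_image: bytes) -> UserAttributes:
    # Step 102 placeholder; see the feature extraction network sketch below.
    return UserAttributes(gender="female", age=10)

def match_effect_data(attrs: UserAttributes) -> str:
    # Step 103: preset correspondence from user attribute features to AR park
    # special effect data, using the marine-park rules from the text above.
    if attrs.age < 13:
        return ("dolphins_playing_and_calling" if attrs.gender == "female"
                else "mechanical_shark_fight")
    return ("dolphin_performance" if attrs.gender == "female"
            else "great_shark_foraging")

def handle_scene_image(scene_image: bytes, face_image: bytes) -> str:
    # Step 101 has already produced scene_image and face_image.
    attrs = identify_attributes(face_image)   # step 102
    effect = match_effect_data(attrs)         # step 103
    return effect                             # step 104: device renders this

print(handle_scene_image(b"scene", b"face"))  # dolphins_playing_and_calling
```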
In a possible implementation manner, when the face image of the target user is acquired, the target AR device may be controlled to switch its camera and capture the face image of the target user.
Specifically, the AR device has two kinds of cameras, one for photographing the AR park and one for photographing the user's face, and the two can be switched. For example, under normal conditions the target AR device uses the camera for photographing the AR park; after the scene image of the AR park is acquired, the device is controlled to switch to the camera for photographing the user's face, and once the face image is acquired it switches back to the camera for photographing the AR park. On a mobile phone, the rear camera photographs the AR park and the front camera photographs the face: normally the rear camera is in use; after a scene image of the AR park captured by the phone is acquired, the phone is controlled to switch from the rear camera to the front camera; the front camera starts working and captures the user's face image; and the phone then switches back from the front camera to the rear camera. In this way the target AR device already in the user's hands can be controlled to acquire the face image of the target user; the acquisition is relatively simple and requires no additional equipment.
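As a sketch of this camera-switching flow, the following assumes a hypothetical device-control API (the disclosure does not define one); only the rear/front switching behavior comes from the text above.

```python
# Hypothetical device-control API for the camera switch: the rear camera
# photographs the AR park scene, the front camera photographs the face.
class ARDevice:
    def __init__(self) -> None:
        self.active_camera = "rear"  # normally photographs the AR park

    def switch_camera(self, which: str) -> None:
        self.active_camera = which

    def capture(self) -> str:
        return f"{self.active_camera}-camera image"

def acquire_face_image(device: ARDevice) -> str:
    scene = device.capture()        # scene image of the AR park (rear camera)
    device.switch_camera("front")   # switch to the face-photographing camera
    face = device.capture()         # face image of the target user
    device.switch_camera("rear")    # switch back once the face image is taken
    return face

print(acquire_face_image(ARDevice()))  # front-camera image
```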
In one possible embodiment, when step 102 is performed, attribute features may be extracted from the face image based on a trained feature extraction network, where the feature extraction network is trained on face image samples labeled with attribute features.
Specifically, face images labeled with attribute features are obtained in advance and input into the feature extraction network as training samples until the network converges, at which point training is complete. After the feature extraction network is trained, an acquired face image is fed into the trained network as input, thereby obtaining the user attribute features corresponding to that face image.
Of course, the user attribute features may also be determined in other ways, for example by using an image detection model; the specific manner of determining the user attribute features is not limited here.
The specific feature extraction network used may likewise be chosen according to actual needs and is not specifically limited here.
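As one hedged illustration of such a feature extraction network, the sketch below uses PyTorch (an assumption; no framework is named in the disclosure) with a toy backbone and one classification head per attribute feature; the head layout and label sets are illustrative only.

```python
# Toy PyTorch sketch of a multi-attribute feature extraction network. The
# backbone depth, head layout, and label sets are assumptions; the disclosure
# only requires a network trained on face images labeled with attribute
# features.
import torch
import torch.nn as nn

class AttributeExtractor(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Shared convolutional backbone over the face image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One classification head per attribute feature.
        self.gender = nn.Linear(64, 2)      # female / male
        self.age = nn.Linear(64, 3)         # e.g. under 13 / 13-17 / 18+
        self.expression = nn.Linear(64, 4)  # e.g. neutral / happy / ...

    def forward(self, face: torch.Tensor) -> dict:
        feat = self.backbone(face)
        return {
            "gender": self.gender(feat).argmax(-1),
            "age": self.age(feat).argmax(-1),
            "expression": self.expression(feat).argmax(-1),
        }

# Inference on one 224x224 RGB face image (weights here are untrained).
attrs = AttributeExtractor()(torch.randn(1, 3, 224, 224))
```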
In a possible implementation manner, Fig. 2 is a flowchart of another display screen control method provided by an embodiment of the disclosure. As shown in Fig. 2, step 103 may be implemented by the following steps:
Step 201: determine, based on the identified user attribute features, at least one park theme type matching the user attribute features.
Step 202: in response to a target park theme type selected by the user from the at least one park theme type, acquire AR park special effect data matching the target park theme type.
Specifically, the user attribute features may include multiple categories, and different categories may correspond to different park theme types, so the park theme types determined to match the user attribute features may be several. For example, when the determined attribute features of a user include two categories, a corresponding park theme type may be established for each category, or one park theme type may be established using the two categories together. In this way several park theme types matching the user attribute features can be determined, each of which is related to the user. To present the AR special effect the user most wants to watch, the determined park theme types may be shown to the user for selection; the park theme type the user selects is then taken as the target park theme type, and AR park special effect data matching it is acquired. Through this approach the user can autonomously choose the AR special effect to watch, so the displayed AR special effect better meets the user's needs.
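The selection flow of steps 201 and 202 could look like the following sketch, in which the correspondence table and theme names are hypothetical.

```python
# Sketch of steps 201-202 under assumed data shapes: several park theme types
# may match one user's attribute features, and the user's pick becomes the
# target theme type. The table and theme names are hypothetical.
THEME_TYPES = {
    ("female", "under_13"): ["dolphin_world", "mermaid_castle"],
    ("male", "under_13"): ["shark_battle", "submarine_quest"],
}

def matching_theme_types(gender: str, age_group: str) -> list[str]:
    # Step 201: determine at least one park theme type matching the features.
    return THEME_TYPES.get((gender, age_group), ["default_ocean_tour"])

def target_theme_type(candidates: list[str], user_choice: int) -> str:
    # Step 202: the type the user selects from the candidates is the target
    # park theme type, whose AR park special effect data is then fetched.
    return candidates[user_choice]

themes = matching_theme_types("female", "under_13")
print(target_theme_type(themes, 1))  # mermaid_castle
```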
In a possible implementation manner, Fig. 3 is a flowchart of another display screen control method provided by an embodiment of the disclosure. As shown in Fig. 3, the AR park special effect data includes display data of a virtual gate of a theme park, and after the target AR device is controlled to display the virtual gate of the theme park, the method further includes the following steps:
step 301, responding to an opening trigger instruction of the target user for the virtual gate, and acquiring AR park project content in the virtual gate.
Step 302, based on the AR park project content in the virtual gate, controlling the target AR device to display an AR special effect corresponding to the AR park project content.
Specifically, after the AR park special effect data matching the user attribute features is determined, and before the target AR device is controlled to display the corresponding AR special effect, the virtual gate of the theme park is displayed to the user in a closed state. The user may then perform an opening operation on the virtual gate. After the user's opening trigger instruction for the virtual gate is acquired, the AR park project content behind the gate is acquired, so that the user can understand the virtual content to be seen after entering the AR park and choose whether or not to enter; the target AR device is then controlled, based on the AR park project content behind the virtual gate, to display the AR special effect corresponding to that content.
In a possible implementation manner, Fig. 4 is a flowchart of another display screen control method provided by an embodiment of the disclosure. As shown in Fig. 4, after the target AR device is controlled to display the virtual gate of the theme park, the method further includes:
step 401, responding to a switching trigger instruction of the target user for the virtual gate, and acquiring display data of other virtual gates matched with contents of other AR park items; and the virtual gate is displayed with AR park project content prompt information in the gate.
Step 402, controlling the target AR device to display the updated virtual door, so that the user again selects whether to trigger opening.
Specifically, after the AR park special effect data matching the user attribute features is determined, and before the target AR device is controlled to display the corresponding AR special effect, the virtual gate of the theme park is displayed to the user, with prompt information about the AR park project content behind the gate displayed on it. The user may switch the virtual gate according to their own preference, in which case display data of other virtual gates matched with other AR park project content is acquired and those gates are displayed to the user, each carrying the prompt information for the project content behind it; the user can then act according to the prompt information displayed on those gates. For example, suppose an AR park's virtual gates include virtual gate 1, virtual gate 2 and virtual gate 3, and virtual gate 1 is currently displayed together with the prompt information for the project content behind it. After a switching trigger instruction for the virtual gate is received, the display data of virtual gate 2 and virtual gate 3 is acquired and displayed, and the user may choose between them according to the prompt information; if the user selects virtual gate 2, the target AR device is controlled to display virtual gate 2, on which the prompt information for the project content behind it is likewise displayed. In this way the user can watch AR special effects related to the AR park project content before entering the park, which makes it convenient to decide whether to enter, improves the user's play efficiency, offers the user additional options, makes the park's projects richer, and helps meet users' varied needs.
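The gate interaction described above (an open trigger reveals the project content behind the current gate; a switch trigger shows the next gate with its prompt information) can be summarized in a small state-machine sketch; the gate names and contents here are hypothetical.

```python
# State-machine sketch of the virtual-gate interaction: a switch trigger shows
# the next gate with its project-content prompt, and an open trigger reveals
# the project content behind the current gate. Gate names and contents are
# hypothetical.
from itertools import cycle

GATES = {
    "virtual_gate_1": "coral reef tour",
    "virtual_gate_2": "deep-sea shark dive",
    "virtual_gate_3": "dolphin theatre",
}

class VirtualGateController:
    def __init__(self) -> None:
        self._order = cycle(GATES)        # iterates gate names in order
        self.current = next(self._order)  # virtual_gate_1 shown first

    def on_switch_trigger(self) -> str:
        # Display the next gate together with its in-gate content prompt.
        self.current = next(self._order)
        return f"{self.current}: {GATES[self.current]}"

    def on_open_trigger(self) -> str:
        # Fetch the AR park project content behind the currently shown gate.
        return GATES[self.current]

gate = VirtualGateController()
print(gate.on_switch_trigger())  # virtual_gate_2: deep-sea shark dive
print(gate.on_open_trigger())    # deep-sea shark dive
```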
In one possible embodiment, the user attribute features include at least one of:
gender, age, facial attractiveness, expression.
It should be noted that which attribute feature or features are used as the user attribute features can be set according to actual needs and is not specifically limited here.
Fig. 5 is a schematic structural diagram of a display screen control apparatus provided by an embodiment of the disclosure. As shown in Fig. 5, the apparatus includes:
a first obtaining unit 51, configured to acquire, after a scene image of an AR park captured by a target AR device is acquired, a face image of the target user corresponding to the target AR device;
an identification unit 52, configured to identify user attribute features based on the face image;
a second obtaining unit 53, configured to acquire, based on the identified user attribute features, AR park special effect data matching the user attribute features;
and a display unit 54, configured to control, based on the AR park special effect data, the target AR device to display the AR special effect corresponding to the AR park special effect data.
In a possible embodiment, the first obtaining unit 51, when acquiring the face image of the target user, is configured to:
control the target AR device to switch its camera, and capture the face image of the target user.
In a possible implementation manner, the identification unit 52, when identifying the user attribute features based on the face image, is configured to:
extract attribute features from the face image based on a trained feature extraction network, where the feature extraction network is trained on face image samples labeled with attribute features.
In a possible implementation manner, the second obtaining unit 53, when acquiring AR park special effect data matching the identified user attribute features, is configured to:
determine, based on the identified user attribute features, at least one park theme type matching the user attribute features;
and, in response to a target park theme type selected by the user from the at least one park theme type, acquire AR park special effect data matching the target park theme type.
In one possible implementation, the AR park special effect data includes display data of a virtual gate of a theme park;
the second obtaining unit 53 is further configured to, after the target AR device is controlled to display the virtual gate of the theme park, acquire the AR park project content behind the virtual gate in response to an opening trigger instruction of the target user for the virtual gate;
the display unit 54 is further configured to control, based on the AR park project content behind the virtual gate, the target AR device to display the AR special effect corresponding to that project content.
In a possible implementation manner, the second obtaining unit 53 is further configured to, after the target AR device is controlled to display the virtual gate of the theme park, acquire display data of other virtual gates matched with other AR park project content in response to a switching trigger instruction of the target user for the virtual gate, where each virtual gate displays prompt information about the AR park project content behind it;
the display unit 54 is further configured to control the target AR device to display the updated virtual gate, so that the user again selects whether to trigger opening.
In one possible implementation, the user attribute features include at least one of:
gender, age, facial attractiveness, expression.
Corresponding to the display screen control method in Fig. 1, Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. As shown in Fig. 6, the electronic device includes:
a processor 61, a memory 62, and a bus 63. The memory 62 is used to store execution instructions and includes an internal memory 621 and an external memory 622; the internal memory 621 temporarily stores operation data for the processor 61 and data exchanged with external memory 622 such as a hard disk, and the processor 61 exchanges data with the external memory 622 through the internal memory 621. When the electronic device runs, the processor 61 and the memory 62 communicate through the bus 63, causing the processor 61 to execute the following instructions: after acquiring a scene image of an AR park captured by a target AR device, acquire a face image of the target user corresponding to the target AR device; identify user attribute features based on the face image; acquire, based on the identified user attribute features, AR park special effect data matching the user attribute features; and control, based on the AR park special effect data, the target AR device to display the AR special effect corresponding to the AR park special effect data.
The disclosed embodiments also provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, performs the steps of the display screen control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the display screen control method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the steps of the display screen control method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The disclosed embodiments also provide a computer program which, when executed by a processor, implements any of the methods of the previous embodiments. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not described here again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the disclosure is not limited thereto. Although the disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the art may, within the technical scope disclosed herein, still modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions of some of their technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (9)

1. A display screen control method, the method comprising:
after a scene image of an AR park captured by a target AR device is acquired, acquiring a face image of a target user corresponding to the target AR device;
identifying user attribute features based on the face image;
acquiring, based on the identified user attribute features, AR park special effect data matching the user attribute features;
controlling, based on the AR park special effect data, the target AR device to display the AR special effect corresponding to the AR park special effect data;
the AR park special effect data comprises display data of virtual doors of the theme park, wherein the virtual doors are in a closed state; after controlling the target AR device to present the virtual gate of the theme park, further comprising: responding to an opening triggering instruction of the target user aiming at the virtual gate, and acquiring AR park item content in the virtual gate, wherein the AR park item content is used for judging whether to choose to enter the AR park or not;
responding to a switching trigger instruction of the target user for the virtual gate, and acquiring display data of other virtual gates matched with contents of other AR park items; the virtual gate is displayed with AR park project content prompt information in the gate; and controlling the target AR equipment to display the updated virtual door so as to enable the user to select whether to trigger opening again.
2. The method of claim 1, wherein acquiring the face image of the target user comprises:
and controlling the target AR equipment to switch the camera, and shooting and obtaining the face image of the target user.
3. The method according to claim 1 or 2, wherein identifying user attribute features based on the face image comprises:
and carrying out attribute feature extraction on the face image based on a trained feature extraction network, wherein the feature extraction network is obtained by training based on a face image sample marked with attribute features.
4. The method of claim 1 or 2, wherein obtaining AR park special effects data matching the user attribute feature based on the identified user attribute feature comprises:
determining, based on the identified user attribute features, at least one park theme type matching the user attribute features;
and, in response to a target park theme type selected by the user from the at least one park theme type, acquiring AR park special effect data matching the target park theme type.
5. The method of claim 1 or 2, further comprising, after controlling the target AR device to display the virtual gate of the theme park:
controlling, based on the AR park project content behind the virtual gate, the target AR device to display the AR special effect corresponding to that project content.
6. The method according to claim 1 or 2, wherein the user attribute features comprise at least one of:
gender, age, facial attractiveness, expression.
7. A display screen control apparatus, the apparatus comprising:
a first obtaining unit, configured to acquire, after a scene image of an AR park captured by a target AR device is acquired, a face image of a target user corresponding to the target AR device;
an identification unit, configured to identify user attribute features based on the face image;
a second obtaining unit, configured to acquire, based on the identified user attribute features, AR park special effect data matching the user attribute features;
a display unit, configured to control, based on the AR park special effect data, the target AR device to display the AR special effect corresponding to the AR park special effect data;
the AR park special effect data comprises display data of a virtual gate of a theme park, wherein the virtual gate is in a closed state; the second obtaining unit is further configured to, after the target AR device is controlled to display the virtual gate of the theme park, acquire the AR park project content behind the virtual gate in response to an opening trigger instruction of the target user for the virtual gate, the AR park project content being used by the user to decide whether to enter the AR park;
the second obtaining unit is further configured to, in response to a switching trigger instruction of the target user for the virtual gate, acquire display data of other virtual gates matched with other AR park project content, wherein each virtual gate displays prompt information about the AR park project content behind it;
the display unit is further configured to control the target AR device to display the updated virtual gate, so that the user again selects whether to trigger opening.
8. An electronic device, comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the display screen control method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the display screen control method according to any one of claims 1 to 6.
CN202010515234.5A 2020-06-08 2020-06-08 Display screen control method and device, electronic equipment and storage medium Active CN111643900B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010515234.5A CN111643900B (en) 2020-06-08 2020-06-08 Display screen control method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010515234.5A CN111643900B (en) 2020-06-08 2020-06-08 Display screen control method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111643900A CN111643900A (en) 2020-09-11
CN111643900B (en) 2023-11-28

Family

ID=72343880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010515234.5A Active CN111643900B (en) 2020-06-08 2020-06-08 Display screen control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111643900B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308979A (en) * 2020-10-30 2021-02-02 北京市商汤科技开发有限公司 Prompt message display method and device, computer equipment and storage medium
CN112714305A (en) * 2020-12-25 2021-04-27 北京市商汤科技开发有限公司 Presentation method, presentation device, presentation equipment and computer-readable storage medium
CN112925595A (en) * 2021-01-25 2021-06-08 北京达佳互联信息技术有限公司 Resource distribution method and device, electronic equipment and storage medium
CN113115099B (en) * 2021-05-14 2022-07-05 北京市商汤科技开发有限公司 Video recording method and device, electronic equipment and storage medium
CN113238657A (en) * 2021-06-03 2021-08-10 北京市商汤科技开发有限公司 Information display method and device, computer equipment and storage medium
CN113478485A (en) * 2021-07-06 2021-10-08 上海商汤智能科技有限公司 Robot, control method and device thereof, electronic device and storage medium
CN115016688A (en) * 2022-06-28 2022-09-06 维沃移动通信有限公司 Virtual information display method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140140837A (en) * 2013-05-30 2014-12-10 에스케이플래닛 주식회사 System for providing virtual fence service, method and apparatus for providing virtual fence service service in the system
CN108079577A (en) * 2018-01-05 2018-05-29 玛雅国际文化发展有限公司 The management system and management method of a kind of recreation ground
CN109876450A (en) * 2018-12-14 2019-06-14 深圳壹账通智能科技有限公司 Implementation method, server, computer equipment and storage medium based on AR game
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN110858134A (en) * 2018-08-22 2020-03-03 阿里巴巴集团控股有限公司 Data, display processing method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11032662B2 (en) * 2018-05-30 2021-06-08 Qualcomm Incorporated Adjusting audio characteristics for augmented reality
US10915740B2 (en) * 2018-07-28 2021-02-09 International Business Machines Corporation Facial mirroring in virtual and augmented reality

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140140837A (en) * 2013-05-30 2014-12-10 에스케이플래닛 주식회사 System for providing virtual fence service, method and apparatus for providing virtual fence service service in the system
CN108079577A (en) * 2018-01-05 2018-05-29 玛雅国际文化发展有限公司 The management system and management method of a kind of recreation ground
WO2019134308A1 (en) * 2018-01-05 2019-07-11 玛雅国际文化发展有限公司 Amusement park management system and management method
CN110858134A (en) * 2018-08-22 2020-03-03 阿里巴巴集团控股有限公司 Data, display processing method and device, electronic equipment and storage medium
CN109876450A (en) * 2018-12-14 2019-06-14 深圳壹账通智能科技有限公司 Implementation method, server, computer equipment and storage medium based on AR game
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111643900A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN111643900B (en) Display screen control method and device, electronic equipment and storage medium
CN112348969B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN110062271B (en) Scene switching method, device, terminal and storage medium
CN111556278B (en) Video processing method, video display device and storage medium
CN111640202B (en) AR scene special effect generation method and device
CN106161939B (en) Photo shooting method and terminal
WO2022116604A1 (en) Image captured image processing method and electronic device
US11778263B2 (en) Live streaming video interaction method and apparatus, and computer device
CN111694430A (en) AR scene picture presentation method and device, electronic equipment and storage medium
CN110545442B (en) Live broadcast interaction method and device, electronic equipment and readable storage medium
US20150002690A1 (en) Image processing method and apparatus, and electronic device
CN111651047B (en) Virtual object display method and device, electronic equipment and storage medium
CN111744202A (en) Method and device for loading virtual game, storage medium and electronic device
CN113453034B (en) Data display method, device, electronic equipment and computer readable storage medium
CN112532882B (en) Image display method and device
CN109670385A (en) The method and device that expression updates in a kind of application program
CN111638784A (en) Facial expression interaction method, interaction device and computer storage medium
CN106778627A (en) Detect method, device and the mobile terminal of face face value
CN113487709A (en) Special effect display method and device, computer equipment and storage medium
CN114598919B (en) Video processing method, device, computer equipment and storage medium
WO2020215776A1 (en) Multimedia data processing method and apparatus
US11889127B2 (en) Live video interaction method and apparatus, and computer device
CN111665942A (en) AR special effect triggering display method and device, electronic equipment and storage medium
CN112843695B (en) Method and device for shooting image, electronic equipment and storage medium
CN111625101B (en) Display control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant