CN117292310A - Virtual digital person application method - Google Patents

Virtual digital person application method

Info

Publication number
CN117292310A
Authority
CN
China
Prior art keywords
user
information
neural network
virtual digital
digital person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311073542.7A
Other languages
Chinese (zh)
Other versions
CN117292310B (en)
Inventor
吴建斌
周孝歌
林骞
张壹
陈延峰
李亮
邱政杰
蒋楷昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kongjie Lishui Digital Technology Co ltd
Original Assignee
Hangzhou Kongjie Vision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Kongjie Vision Technology Co ltd
Priority to CN202311073542.7A
Publication of CN117292310A
Application granted
Publication of CN117292310B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/907Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/02Reservations, e.g. for tickets, services or events
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions


Abstract

The invention provides a virtual digital person application method, which comprises the following steps: arranging a virtual digital person display device on a predetermined showcase; the virtual digital person judging, based on a deep-learning algorithm, whether a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action; when it is judged that the viewing user is about to perform a photographing action, the virtual digital person acquiring the facial information of the viewing user and monitoring in real time whether a flash fires when the viewing user photographs; when it is judged that a flash has fired, searching a large database for the corresponding user identity information based on the acquired facial information; and recording the viewing user's flash photographing of the exhibit into the user account information corresponding to that user identity information in the large database.

Description

Virtual digital person application method
Technical Field
The invention relates to the field of artificial intelligence, in particular to virtual digital persons, and more particularly to a virtual digital person application method.
Background
According to the "Virtual Digital Person White Paper (2020)" issued by the General Group of the China Artificial Intelligence Industry Development Alliance and its Digital Person Working Committee, a virtual digital person is defined as a "virtual person with a digitized appearance". Unlike robots, which have physical bodies, virtual digital persons depend on display devices for their existence. Notably, a virtual digital person preferably has three characteristics: first, an appearance, with specific features such as looks, gender and personality; second, behavior, with the ability to express itself through language, facial expressions and body movements; and third, thought, with the ability to recognize the external environment and interact with people.
The popularity of the "metaverse" concept has greatly driven the upgrading of the virtual digital person industry. Virtual digital persons are now widely used in culture, entertainment, services and other industries. One application scenario, for example, is the museum: virtual digital persons have been developed as 24-hour unmanned museum guides, and a number of museums have deployed virtual digital guides capable of interacting with users.
However, museum applications of virtual digital persons remain limited to this guiding function, which is relatively narrow. It is desirable to further expand the functionality of virtual digital persons in museum scenarios, for example to the protection of exhibits.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing a virtual digital person application method with an exhibit protection function in a museum application scenario.
According to the present invention, there is provided a virtual digital person application method including:
arranging the virtual digital person display device on a predetermined showcase;
the virtual digital person judging, based on a deep-learning algorithm, whether a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action, comprising:
acquiring an image;
training a first convolutional neural network, which contains a pooling layer, to obtain a first neural network model for judging photographing intention;
training the first convolutional neural network with its pooling layer removed, to obtain a second neural network model for judging photographing intention;
adding the pooling layer of the first neural network model (with its parameters kept unchanged) into the second neural network model to obtain a third neural network model; and
training the third neural network model again, wherein the first, second and third neural network models are trained on the same image data;
the virtual digital person judging, using the acquired image and based on the retrained third neural network model, whether a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action;
when it is judged that a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action, the virtual digital person acquiring the facial information of the viewing user and monitoring in real time whether a flash fires when the viewing user photographs;
when it is judged that a flash fires while the viewing user photographs, searching a large database for the corresponding user identity information based on the acquired facial information; and
recording the viewing user's flash photographing of the exhibit into the user account information corresponding to the user identity information in the large database.
Preferably, the virtual digital person display device is arranged, at a position visible to a viewing user, on a showcase whose exhibits must not be photographed with a flash.
Preferably, the virtual digital person display device is arranged at a position visible to a viewing user on a showcase of ancient books or ancient calligraphy and paintings.
Preferably, the large database contains historical user reservation information of the current museum, wherein the user reservation information includes personal information such as the user's facial information, identity document information and telephone number.
Preferably, the large database contains historical user reservation information of a plurality of museums that share user information, wherein the user reservation information includes personal information such as the user's facial information, identity document information and telephone number.
Preferably, the virtual digital person application method further comprises:
when it is judged that a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action, and after the virtual digital person acquires the facial information of the viewing user, searching the large database for the corresponding user identity information based on the acquired facial information;
checking whether flash photographing of an exhibit by the viewing user is recorded in the user account information corresponding to the found user identity information; and
if such flash photographing by the viewing user is recorded, the virtual digital person starting an interactive presentation with the viewing user so as to remind the user not to use the flash.
Preferably, the virtual digital person, while displaying video, reminds the user by voice not to use the flash, wherein the voice message contains the user's name.
Preferably, when the user tours via a networked museum guide device or museum application software on a mobile phone, the virtual digital person plays the voice message through the guide device or mobile phone, based on the user's login information on that device.
According to the invention, there is also provided a virtual digital person application method, comprising:
arranging the virtual digital person display device on a predetermined showcase;
the virtual digital person judging, based on a deep-learning algorithm, whether a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action, comprising:
acquiring an image;
training a first convolutional neural network, which contains a pooling layer, to obtain a first neural network model for judging photographing intention;
training the first convolutional neural network with its pooling layer removed, to obtain a second neural network model for judging photographing intention;
adding the pooling layer of the first neural network model (with its parameters kept unchanged) into the second neural network model to obtain a third neural network model; and
training the third neural network model again, wherein the first, second and third neural network models are trained on the same image data;
the virtual digital person judging, using the acquired image and based on the retrained third neural network model, whether a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action;
when it is judged that a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action, the virtual digital person acquiring the facial information of the viewing user;
searching a large database for the corresponding user identity information based on the acquired facial information;
checking whether flash photographing of an exhibit by the viewing user is recorded in the user account information corresponding to the found user identity information; and
if such flash photographing by the viewing user is recorded, the virtual digital person starting an interactive presentation with the viewing user so as to remind the user not to use the flash.
Preferably, the virtual digital person, while displaying video, reminds the user by voice not to use the flash, wherein the voice message contains the user's name.
By applying the virtual digital person to the protection of museum cultural relics and combining the virtual digital person with big data, the virtual digital person can give conventional guided tours while preventing damage to the cultural relics caused by improper visitor behavior.
Drawings
The invention will be more fully understood and its attendant advantages and features will be more readily understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically shows an overall flow chart of a virtual digital person application method according to a preferred embodiment of the invention.
Fig. 2 schematically shows an overall flow chart of a virtual digital person application method according to a second preferred embodiment of the invention.
It should be noted that the drawings are for illustrating the invention and are not to be construed as limiting the invention. Note that the drawings representing structures may not be drawn to scale. Also, in the drawings, the same or similar elements are denoted by the same or similar reference numerals.
Detailed Description
In order that the invention may be more readily understood, a detailed description of the invention is provided below along with specific embodiments and accompanying figures.
< first preferred embodiment >
Fig. 1 schematically shows an overall flow chart of a virtual digital person application method according to a first preferred embodiment of the invention. Specifically, as shown in fig. 1, the virtual digital person application method according to the first preferred embodiment of the present invention may include:
step S1: arranging the virtual digital person display device on a predetermined showcase;
for example, a virtual digital person display device is placed in a location that can be seen by a viewing user on a showcase that cannot be photographed using a flashing light. Specifically, for example, the virtual digital person display apparatus is arranged at a position that can be seen by a viewing user on a showcase of ancient books and ancient calligraphy and painting.
For example, a virtual digital person display device includes units such as a display and speakers.
Step S2: the virtual digital person judges, based on a deep-learning algorithm, whether a viewing user watching the exhibits in the predetermined showcase is about to perform a photographing action.
Before the steps of the invention are executed, the virtual digital person may be trained to improve the accuracy with which it judges whether a viewing user watching the exhibits in the predetermined showcase is about to perform a photographing action.
It will be appreciated that any suitable deep-learning algorithm may be employed to make this judgment. In a preferred embodiment, a convolutional neural network is used.
In a preferred embodiment of the present invention, step S2 specifically includes:
acquiring an image;
training a first convolutional neural network, which contains a pooling layer, to obtain a first neural network model for judging photographing intention;
training the first convolutional neural network with its pooling layer removed, to obtain a second neural network model for judging photographing intention;
adding the pooling layer of the first neural network model (with its parameters kept unchanged) into the second neural network model to obtain a third neural network model; and
training the third neural network model again, wherein the first, second and third neural network models are trained on the same image data.
The virtual digital person then judges, using the acquired image and based on the retrained third neural network model, whether a viewing user watching the exhibits in the predetermined showcase is about to perform a photographing action.
Preferably, the first neural network model has only a single pooling layer; for example, the first neural network model comprises, in order, a convolutional layer, a pooling layer and a fully-connected layer, while the second neural network model comprises several convolutional layers and a fully-connected layer, and the pooling layer of the first neural network model is inserted after the last convolutional layer of the second neural network model. Among final models trained on the same training data, the third neural network model obtained in this way achieves relatively optimal accuracy.
The inventors found that the accuracy of judging whether a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action is greatly improved by the third neural network model retrained on the relevant data.
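The layer-transplant procedure of step S2 can be sketched as follows. This is a minimal illustration in PyTorch, not the patent's actual implementation: the layer sizes, the 32×32 input, the two-class output and all helper names are assumptions. Note that a max-pooling layer has no trainable weights, so "parameters kept unchanged" amounts to reusing its kernel-size and stride configuration.

```python
import torch
import torch.nn as nn

class ModelOne(nn.Module):
    """First model: convolutional layer -> pooling layer -> fully-connected layer."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)            # the layer to be transplanted
        self.fc = nn.Linear(8 * 16 * 16, 2)    # assumes 32x32 input images
    def forward(self, x):
        return self.fc(self.pool(torch.relu(self.conv(x))).flatten(1))

class ModelTwo(nn.Module):
    """Second model: several convolutional layers, no pooling layer."""
    def __init__(self):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(8 * 32 * 32, 2)
    def forward(self, x):
        return self.fc(self.convs(x).flatten(1))

def build_model_three(m1: ModelOne, m2: ModelTwo) -> nn.Sequential:
    """Graft m1's pooling layer after m2's last convolutional layer."""
    return nn.Sequential(
        m2.convs,
        m1.pool,                       # pooling configuration kept unchanged
        nn.Flatten(),
        nn.Linear(8 * 16 * 16, 2),     # new head sized for the pooled features
    )

# The third model would then be retrained on the same image data as the others.
m3 = build_model_three(ModelOne(), ModelTwo())
print(tuple(m3(torch.randn(1, 3, 32, 32)).shape))   # (1, 2)
```

Transplanting the pooling layer halves the spatial resolution of the second model's feature map, so the classifier head must be re-sized before the retraining pass described above.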
Step S3: when it is judged that a viewing user watching the exhibits in the predetermined showcase is about to perform a photographing action, the virtual digital person acquires the facial information of the viewing user and monitors in real time whether a flash fires when the viewing user photographs.
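The real-time flash monitoring of step S3 could, for instance, be approximated by watching for a sudden jump in frame brightness on the monitoring camera feed. This is a simplified illustration, not the patent's stated method, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def flash_fired(prev_frame: np.ndarray, frame: np.ndarray,
                jump_threshold: float = 60.0) -> bool:
    """Return True if mean brightness jumps sharply between consecutive frames."""
    return float(frame.mean()) - float(prev_frame.mean()) > jump_threshold

# Usage: a dim scene followed by a near-white frame registers as a flash.
dark = np.full((120, 160), 40, dtype=np.uint8)     # dim grayscale frame
bright = np.full((120, 160), 220, dtype=np.uint8)  # frame lit by a flash
print(flash_fired(dark, bright))   # True: a 180-level jump exceeds the threshold
print(flash_fired(dark, dark))     # False: no brightness change
```

A production system would also require the spike to be short-lived (one or two frames) to avoid confusing a flash with, say, gallery lighting changes.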
Step S4: when it is judged that a flash fires while the viewing user photographs, the corresponding user identity information is searched for in a large database based on the acquired facial information.
the large database contains, for example, a history of user reservation information of the current museum, wherein the user reservation information includes personal information such as user face information, user certificate information, and user telephone numbers.
More preferably, the large database contains a history of user subscription information for a plurality of museums sharing user information, wherein the user subscription information includes personal information such as user face information, user credentials information, and user phone numbers.
Preferably, a large database may be stored in the cloud for common access and reading by multiple museums sharing user information.
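The identity lookup of step S4 can be sketched as a nearest-neighbour match of a captured face embedding against the reservation records. This is a hypothetical illustration only: the record fields, the tiny 3-dimensional embeddings and the distance threshold are all assumptions, not details from the patent.

```python
import numpy as np

# Assumed reservation records; real face embeddings would be high-dimensional.
RESERVATIONS = {
    "user_001": {"name": "Zhang San", "phone": "138****0001",
                 "face_embedding": np.array([0.1, 0.9, 0.2])},
    "user_002": {"name": "Li Si", "phone": "139****0002",
                 "face_embedding": np.array([0.8, 0.1, 0.5])},
}

def find_user(face_embedding: np.ndarray, max_distance: float = 0.6):
    """Return the id of the closest stored embedding, or None if none is near enough."""
    best_id, best_dist = None, max_distance
    for user_id, record in RESERVATIONS.items():
        dist = float(np.linalg.norm(record["face_embedding"] - face_embedding))
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id

print(find_user(np.array([0.12, 0.88, 0.22])))   # user_001 (closest match)
```

For cloud storage shared by multiple museums, the same lookup would run against a vector index rather than an in-memory dictionary.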
Step S5: the viewing user's flash photographing of the exhibit is recorded into the user account information corresponding to the user identity information in the large database.
In this embodiment, a viewing user's act of photographing with a flash where flash photography is forbidden can thus be recorded in the large database for subsequent use. More specifically, if flash photographing of an exhibit by the viewing user is recorded in the user account information, this indicates that the viewing user has photographed, with a flash, an exhibit that must not be photographed that way, and this non-compliant viewing behavior is recorded in the large database.
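The recording of step S5 might look like the following sketch. The account schema and field names are assumptions for illustration; the `has_flash_record` helper anticipates the check performed in the second embodiment.

```python
from datetime import datetime, timezone

# Assumed account store keyed by the user id found in step S4.
ACCOUNTS = {"user_001": {"name": "Zhang San", "flash_events": []}}

def record_flash_event(user_id: str, exhibit_id: str) -> None:
    """Append a flash-photographing record to the user's account entry."""
    ACCOUNTS[user_id]["flash_events"].append({
        "exhibit": exhibit_id,
        "time": datetime.now(timezone.utc).isoformat(),
    })

def has_flash_record(user_id: str) -> bool:
    """Check whether any prior flash-photographing violation is on record."""
    return bool(ACCOUNTS.get(user_id, {}).get("flash_events"))

record_flash_event("user_001", "ancient-scroll-17")
print(has_flash_record("user_001"))   # True
```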
< second preferred embodiment >
The second preferred embodiment may be implemented on the basis of the first preferred embodiment or may be implemented separately.
Fig. 2 schematically shows an overall flow chart of a virtual digital person application method according to a second preferred embodiment of the invention. Specifically, as shown in fig. 2, the virtual digital person application method according to the second preferred embodiment of the present invention may include:
step S10: arranging the virtual digital person display device on a predetermined showcase;
for example, a virtual digital person display device is placed in a location that can be seen by a viewing user on a showcase that cannot be photographed using a flashing light. Specifically, for example, the virtual digital person display apparatus is arranged at a position that can be seen by a viewing user on a showcase of ancient books and ancient calligraphy and painting.
Step S20: the virtual digital person judges, based on a deep-learning algorithm, whether a viewing user watching the exhibits in the predetermined showcase is about to perform a photographing action.
Before the steps of the invention are executed, the virtual digital person may be trained to improve the accuracy with which it judges whether a viewing user watching the exhibits in the predetermined showcase is about to perform a photographing action.
It will be appreciated that any suitable deep-learning algorithm may be employed to make this judgment. In a preferred embodiment, a convolutional neural network is used.
In a preferred embodiment of the present invention, step S20 specifically includes:
acquiring an image;
training a first convolutional neural network, which contains a pooling layer, to obtain a first neural network model for judging photographing intention;
training the first convolutional neural network with its pooling layer removed, to obtain a second neural network model for judging photographing intention;
adding the pooling layer of the first neural network model (with its parameters kept unchanged) into the second neural network model to obtain a third neural network model; and
training the third neural network model again, wherein the first, second and third neural network models are trained on the same image data.
The virtual digital person then judges, using the acquired image and based on the retrained third neural network model, whether a viewing user watching the exhibits in the predetermined showcase is about to perform a photographing action.
Preferably, the first neural network model has only a single pooling layer; for example, the first neural network model comprises, in order, a convolutional layer, a pooling layer and a fully-connected layer, while the second neural network model comprises several convolutional layers and a fully-connected layer, and the pooling layer of the first neural network model is inserted after the last convolutional layer of the second neural network model. Among final models trained on the same training data, the third neural network model obtained in this way achieves relatively optimal accuracy.
The inventors found that the accuracy of judging whether a viewing user watching an exhibit in the predetermined showcase is about to perform a photographing action is greatly improved by the third neural network model retrained on the relevant data.
Step S30: when it is judged that a viewing user watching the exhibits in the predetermined showcase is about to perform a photographing action, the virtual digital person acquires the facial information of the viewing user.
Step S40: the corresponding user identity information is searched for in a large database based on the acquired facial information.
the large database contains, for example, a history of user reservation information of the current museum, wherein the user reservation information includes personal information such as user face information, user certificate information, and user telephone numbers.
More preferably, the large database contains a history of user subscription information for a plurality of museums sharing user information, wherein the user subscription information includes personal information such as user face information, user credentials information, and user phone numbers.
Preferably, a large database may be stored in the cloud for common access and reading by multiple museums sharing user information.
Step S50: it is checked whether flash photographing of an exhibit by the viewing user is recorded in the user account information corresponding to the found user identity information.
Step S60: if such flash photographing by the viewing user is recorded, the virtual digital person starts an interactive presentation with the viewing user so as to remind the user not to use the flash.
Preferably, for example, in step S60, if flash photographing of an exhibit by the viewing user is recorded, a protective cover arranged on the showcase glass is deployed, and the cover is withdrawn a predetermined time (for example, ten seconds) after the user has been reminded not to use the flash. In this way, the exhibit is best protected.
In step S60, if the virtual digital person is not already in an interactive presentation with the viewing user, it starts one in order to remind the user not to use the flash; if, however, the virtual digital person is already in an interactive presentation with the viewing user, it pauses the current interaction to remind the user not to use the flash, and resumes the paused interaction after the reminder.
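The pause-and-resume behaviour of step S60 can be sketched as a small state machine. This is an illustrative sketch only; the state names and reminder wording are assumptions, not taken from the patent.

```python
class VirtualGuide:
    """Tracks whether the virtual digital person is idle, presenting, or reminding."""

    def __init__(self):
        self.state = "idle"          # "idle" or "presenting"
        self.paused_state = None

    def remind_no_flash(self, user_name: str) -> str:
        if self.state == "presenting":
            self.paused_state = self.state   # remember the interrupted interaction
        self.state = "reminding"
        return f"{user_name}, please do not use the flash; it damages the exhibit."

    def finish_reminder(self) -> None:
        # Resume the paused interaction, or return to idle.
        self.state = self.paused_state or "idle"
        self.paused_state = None

guide = VirtualGuide()
guide.state = "presenting"           # mid-presentation when a violation is found
msg = guide.remind_no_flash("Zhang San")
guide.finish_reminder()
print(guide.state)                   # "presenting": the tour resumes
```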
Specifically, for example, the virtual digital person displays video and reminds the user by voice not to use the flash, wherein the voice message contains the user's name so as to address a specific user and improve the effectiveness of the reminder.
For example, the virtual digital person says to the viewing user in a friendly manner: "× (user name), please do not use a flash to take pictures. The flash will damage the cultural relics."
Preferably, when the user tours via a museum guide device or museum application software on a mobile phone, the virtual digital person plays the voice message through the guide device or mobile phone, based on the user's login information on that device, so that other visitors are not disturbed and the user's information is not leaked.
A general announcement to all visitors often fails to draw the attention of a specific individual. In this preferred embodiment, a personalized reminder is issued to the specific user, which most effectively attracts that visitor's attention, protects the safety of the cultural relics and upholds viewing etiquette.
In summary, by applying the virtual digital person to the protection of museum cultural relics and combining the virtual digital person with big data, the virtual digital person can give conventional guided tours while preventing damage to the cultural relics caused by improper visitor behavior.
Furthermore, unless specifically stated otherwise, the terms "first", "second", "third" and the like herein are used for distinguishing between components, elements, steps, etc., and not for indicating a logical or sequential relationship between them.
It will be appreciated that although the invention has been described above in terms of preferred embodiments, these embodiments are not intended to limit the invention. Those skilled in the art may make many possible variations, modifications or equivalent substitutions without departing from the scope of the disclosed technology. Therefore, any simple modification, equivalent variation or refinement of the above embodiments in accordance with the technical substance of the present invention still falls within the scope of the technical solution of the present invention.

Claims (10)

1. A method of virtual digital person application, comprising:
arranging the virtual digital person display device on a predetermined showcase;
the virtual digital person determining, based on a deep learning algorithm, whether a viewing user viewing an exhibit in the predetermined showcase is about to perform a photographing action, comprising:
acquiring an image;
training a first convolutional neural network that contains a pooling layer to obtain a first neural network model for judging photographing intention;
training a first convolutional neural network that does not contain a pooling layer to obtain a second neural network model for judging photographing intention;
adding the pooling layers of the first neural network model into the second neural network model to obtain a third neural network model;
retraining the third neural network model, wherein the first neural network model, the second neural network model and the third neural network model are trained using the same image data;
the virtual digital person judging, using the acquired image and based on the retrained third neural network model, whether a viewing user viewing the exhibit in the predetermined showcase is about to perform a photographing action;
in the case that it is judged that a viewing user viewing the exhibit in the predetermined showcase is about to perform a photographing action, the virtual digital person acquiring facial information of the viewing user and monitoring in real time whether a flash lamp is activated when the viewing user takes a picture;
in the case that it is judged that the flash lamp is activated when the viewing user takes a picture, searching a large database for corresponding user identity information based on the acquired facial information;
and recording information that the viewing user photographed the exhibit with the flash lamp into user account information corresponding to the user identity information in the large database.
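The three-model construction recited in claim 1 can be illustrated structurally. The patent gives no implementation; in the hypothetical sketch below a network is reduced to an ordered list of layer names, and each pooling layer of the first (pooled) model is inserted into the second (pool-free) model at the position it occupied in the first, yielding the third model that is then retrained:

```python
# Hypothetical structural sketch: a model is an ordered list of layer names.
first_model = ["conv1", "pool1", "conv2", "pool2", "fc"]  # trained with pooling
second_model = ["conv1", "conv2", "fc"]                   # trained without pooling

def build_third_model(with_pool, without_pool):
    """Insert each pooling layer from the first model into a copy of the
    second model, at the index it occupied in the first model."""
    third = list(without_pool)
    for i, layer in enumerate(with_pool):
        if layer.startswith("pool"):
            third.insert(i, layer)
    return third

third_model = build_third_model(first_model, second_model)
print(third_model)  # ['conv1', 'pool1', 'conv2', 'pool2', 'fc']
```

In a real framework the same idea would mean transplanting the trained pooling stages into the pool-free network and then retraining the combined network on the same image data, as the claim requires.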
2. The method of claim 1, wherein the virtual digital person display device is arranged at a position visible to a viewing user on a showcase whose exhibits must not be photographed with a flash lamp.
3. The virtual digital person application method according to claim 1, wherein the virtual digital person display device is arranged at a position visible to a viewing user on a showcase of ancient books or ancient calligraphy and paintings.
4. The virtual digital person application method according to any one of claims 1 to 3, wherein the large database contains historical reservation information of users of the current museum, wherein the user reservation information comprises user facial information, user credential information and user telephone numbers.
5. The virtual digital person application method according to any one of claims 1 to 3, wherein the large database contains historical reservation information of users of a plurality of museums that share user information, wherein the user reservation information comprises user facial information, user credential information and user telephone numbers.
6. The virtual digital person application method according to claim 1 or 2, characterized by further comprising:
in the case that it is judged that a viewing user viewing the exhibit in the predetermined showcase is about to perform a photographing action, after the virtual digital person acquires the facial information of the viewing user, searching the large database for corresponding user identity information based on the acquired facial information;
checking whether information that the viewing user photographed an exhibit with a flash lamp is recorded in the user account information corresponding to the found user identity information;
and in the case that such flash photographing information is recorded, the virtual digital person starting an interactive presentation with the viewing user to remind the user not to use the flash lamp.
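The record check recited in claim 6 amounts to a keyed lookup in the user account records. The following hypothetical sketch (the dictionary, field names and function are invented for illustration, not part of the patent) shows the decision of whether the virtual digital person should start the personalised reminder:

```python
# Hypothetical sketch: a dict keyed by user identity stands in for the
# large database's user account records.
accounts = {
    "user_001": {"name": "Zhang", "flash_photo_events": ["2023-08-22 exhibit A"]},
    "user_002": {"name": "Li", "flash_photo_events": []},
}

def should_remind(user_id, db):
    """Return True when the account already records flash photography of
    an exhibit, triggering the virtual digital person's reminder."""
    record = db.get(user_id)
    return bool(record and record["flash_photo_events"])

print(should_remind("user_001", accounts))  # True
print(should_remind("user_002", accounts))  # False
```

Users with no prior flash record, or with no account at all, are not singled out for the interactive reminder.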
7. The method of claim 6, wherein the virtual digital person presents a video with voice information to remind the user not to use the flash lamp, wherein the voice information includes the user's name.
8. The method of claim 7, wherein, in the case that the user navigates through a networked museum guide device or museum application software on a mobile phone, the virtual digital person plays the voice information through the museum guide device or the mobile phone based on the user's login information on that device.
9. A method of virtual digital person application, comprising:
arranging the virtual digital person display device on a predetermined showcase;
the virtual digital person determining, based on a deep learning algorithm, whether a viewing user viewing an exhibit in the predetermined showcase is about to perform a photographing action, comprising:
acquiring an image;
training a first convolutional neural network that contains a pooling layer to obtain a first neural network model for judging photographing intention;
training a first convolutional neural network that does not contain a pooling layer to obtain a second neural network model for judging photographing intention;
adding the pooling layers of the first neural network model into the second neural network model to obtain a third neural network model;
retraining the third neural network model, wherein the first neural network model, the second neural network model and the third neural network model are trained using the same image data;
the virtual digital person judging, using the acquired image and based on the retrained third neural network model, whether a viewing user viewing the exhibit in the predetermined showcase is about to perform a photographing action;
in the case that it is judged that a viewing user viewing the exhibit in the predetermined showcase is about to perform a photographing action, the virtual digital person acquiring facial information of the viewing user;
searching a large database for corresponding user identity information based on the acquired facial information;
checking whether information that the viewing user photographed an exhibit with a flash lamp is recorded in user account information corresponding to the found user identity information;
and in the case that such flash photographing information is recorded, the virtual digital person starting an interactive presentation with the viewing user to remind the user not to use the flash lamp.
10. The method of claim 9, wherein the virtual digital person presents a video with voice information to remind the user not to use the flash lamp, wherein the voice information includes the user's name.
CN202311073542.7A 2023-08-22 2023-08-22 Virtual digital person application method Active CN117292310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311073542.7A CN117292310B (en) 2023-08-22 2023-08-22 Virtual digital person application method

Publications (2)

Publication Number Publication Date
CN117292310A true CN117292310A (en) 2023-12-26
CN117292310B CN117292310B (en) 2024-09-27

Family

ID=89252578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311073542.7A Active CN117292310B (en) 2023-08-22 2023-08-22 Virtual digital person application method

Country Status (1)

Country Link
CN (1) CN117292310B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650683A (en) * 2016-12-29 2017-05-10 佛山市幻云科技有限公司 Shooting detection system and method
CN108010078A (en) * 2017-11-29 2018-05-08 中国科学技术大学 A kind of grasping body detection method based on three-level convolutional neural networks
CN110619332A (en) * 2019-08-13 2019-12-27 中国科学院深圳先进技术研究院 Data processing method, device and equipment based on visual field inspection report
WO2021068356A1 (en) * 2019-10-10 2021-04-15 浙江大学 User-to-exhibit-distance-based cooperative interaction method and system for augmented reality museum
CN115511860A (en) * 2019-11-28 2022-12-23 深圳硅基智控科技有限公司 Tissue lesion identification method based on complementary attention mechanism


Similar Documents

Publication Publication Date Title
JP6349031B2 (en) Method and apparatus for recognition and verification of objects represented in images
CN111191640B (en) Three-dimensional scene presentation method, device and system
CN108600632B (en) Photographing prompting method, intelligent glasses and computer readable storage medium
WO2018000609A1 (en) Method for sharing 3d image in virtual reality system, and electronic device
WO2014035367A1 (en) Generating augmented reality exemplars
CN112348968B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN111654715A (en) Live video processing method and device, electronic equipment and storage medium
WO2018000608A1 (en) Method for sharing panoramic image in virtual reality system, and electronic device
McGrath Performing surveillance
CN111667590B (en) Interactive group photo method and device, electronic equipment and storage medium
CN117292310B (en) Virtual digital person application method
CN110516426A Identity authentication method, authentication terminal, device and readable storage medium
Blascovich et al. Immersive virtual environments and education simulations
CN112488650A (en) Conference atmosphere adjusting method, electronic equipment and related products
CN110866292B (en) Interface display method and device, terminal equipment and server
US20190012834A1 (en) Augmented Content System and Method
CN109510752A (en) Information displaying method and device
JP2019012509A (en) Program for providing virtual space with head-mounted display, method, and information processing apparatus for executing program
CN113900751A (en) Method, device, server and storage medium for synthesizing virtual image
von Mossner Larger than life: Endangered species across media in Louis Psihoyos’s racing extinction
KR20120031373A (en) Learning service system and method thereof
JP7069550B2 (en) Lecture video analyzer, lecture video analysis system, method and program
WO2018216213A1 (en) Computer system, pavilion content changing method and program
CN114339356B (en) Video recording method, device, equipment and storage medium
JP7559026B2 (en) VIRTUAL SPACE PROVIDING SYSTEM, VIRTUAL SPACE PROVIDING METHOD, AND PROGRAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240830

Address after: Room 29, Building 3, No. 50 Hubin Road, Hushan Township, Suichang County, Lishui City, Zhejiang Province 323308

Applicant after: Kongjie (Lishui) Digital Technology Co.,Ltd.

Country or region after: China

Address before: 310000, Room A3-3/101, Zhongchuangju, Fenghuang Creative Park, Wangjiangshan Road, Xihu District, Hangzhou City, Zhejiang Province, China Academy of Fine Arts Art Creative Town

Applicant before: Hangzhou Kongjie Vision Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant