Disclosure of Invention
The invention aims to overcome the above-mentioned defects of the prior art and provides a virtual digital person application method with an exhibit protection function for a museum application scenario.
According to the present invention, there is provided a virtual digital person application method including:
arranging the virtual digital person display device on a predetermined showcase;
the virtual digital person determining, based on a deep learning algorithm, whether a viewing user viewing an exhibit in the predetermined showcase is about to perform a photographing action, comprising:
acquiring an image;
training a first convolutional neural network comprising a pooling layer to obtain a first neural network model for judging photographing intention;
training a convolutional neural network which does not contain a pooling layer to obtain a second neural network model for judging photographing intention;
adding the pooling layer of the first neural network model (with its parameters kept unchanged during the adding) into the second neural network model to obtain a third neural network model;
training the third neural network model again, wherein the first neural network model, the second neural network model and the third neural network model are trained using the same image data;
the virtual digital person judging, using the acquired image and the retrained third neural network model, whether a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action;
when it is determined that a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action, the virtual digital person acquiring facial information of the viewing user and monitoring in real time whether a flash is activated when the viewing user takes a photograph;
when it is determined that the flash is activated while the viewing user takes the photograph, searching a large database for the corresponding user identity information based on the acquired facial information;
and recording, into the user account information corresponding to the user identity information in the large database, information that the viewing user has photographed the exhibit with a flash.
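Purely as a non-limiting illustration of how the steps above fit together, the following Python sketch strings them into one pass; the camera, intent model, face recognizer, flash monitor and database objects, and all of their method names, are hypothetical assumptions introduced for illustration rather than elements of the claimed method.

def protect_exhibit(camera, intent_model, face_recognizer, flash_monitor, database, exhibit_id):
    """Illustrative pass over the claimed steps; every dependency is an assumed stand-in."""
    image = camera.capture()                                  # acquire an image
    if not intent_model.about_to_photograph(image):           # retrained third neural network model
        return
    face = face_recognizer.extract(image)                     # facial information of the viewing user
    if not flash_monitor.flash_activated():                   # real-time monitoring of the flash
        return
    user_id = database.find_user_by_face(face)                # search the large database
    if user_id is not None:
        database.record_flash_incident(user_id, exhibit_id)   # record into the user account information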
Preferably, the virtual digital person display device is arranged, on a showcase whose exhibits must not be photographed with a flash, at a position visible to a viewing user.
Preferably, the virtual digital person display device is arranged, on a showcase for ancient books and ancient calligraphy and paintings, at a position visible to a viewing user.
Preferably, the large database contains a history of user reservation information for the current museum, wherein the user reservation information includes personal information such as user face information, user certificate information, and user phone numbers.
Preferably, the large database contains a history of user reservation information for a plurality of museums sharing user information, wherein the user reservation information includes personal information such as user face information, user certificate information, and user phone numbers.
Preferably, the virtual digital person application method further comprises:
when it is determined that a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action, after the virtual digital person acquires the facial information of the viewing user, searching the large database for the corresponding user identity information based on the acquired facial information;
checking whether information that the viewing user has photographed an exhibit with a flash is recorded in the user account information corresponding to the retrieved user identity information;
when such flash-photographing information is recorded, the virtual digital person starting an interactive presentation with the viewing user so as to remind the user not to use the flash.
Preferably, the virtual digital person displays a video and reminds the user by voice not to use the flash, wherein the voice information contains the name of the user.
Preferably, when the user tours the museum using museum application software on a networked museum guide device or mobile phone, the virtual digital person plays the voice information through the museum guide device or mobile phone based on the user's login information on the museum guide device or mobile phone.
According to the invention, there is also provided a virtual digital person application method, comprising:
arranging the virtual digital person display device on a predetermined showcase;
the virtual digital person determining, based on a deep learning algorithm, whether a viewing user viewing an exhibit in the predetermined showcase is about to perform a photographing action, comprising:
acquiring an image;
training a first convolutional neural network comprising a pooling layer to obtain a first neural network model for judging photographing intention;
training a convolutional neural network which does not contain a pooling layer to obtain a second neural network model for judging photographing intention;
adding the pooling layer of the first neural network model (with its parameters kept unchanged during the adding) into the second neural network model to obtain a third neural network model;
training the third neural network model again, wherein the first neural network model, the second neural network model and the third neural network model are trained using the same image data;
the virtual digital person judging, using the acquired image and the retrained third neural network model, whether a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action;
when it is determined that a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action, the virtual digital person acquiring facial information of the viewing user;
searching a large database for the corresponding user identity information based on the acquired facial information;
checking whether information that the viewing user has photographed an exhibit with a flash is recorded in the user account information corresponding to the retrieved user identity information;
when such flash-photographing information is recorded, the virtual digital person starting an interactive presentation with the viewing user so as to remind the user not to use the flash.
Preferably, the virtual digital person displays a video and reminds the user by voice not to use the flash, wherein the voice information contains the name of the user.
By applying the virtual digital person to the protection of cultural relics in a museum and combining the virtual digital person with big data, the virtual digital person can provide conventional exhibition guidance while preventing damage to cultural relics caused by improper behavior of viewers.
Detailed Description
In order that the invention may be more readily understood, a detailed description of the invention is provided below along with specific embodiments and accompanying figures.
< first preferred embodiment >
Fig. 1 schematically shows an overall flow chart of a virtual digital person application method according to a first preferred embodiment of the invention. Specifically, as shown in fig. 1, the virtual digital person application method according to the first preferred embodiment of the present invention may include:
Step S1: arranging the virtual digital person display device on a predetermined showcase;
for example, the virtual digital person display device is arranged, on a showcase whose exhibits must not be photographed with a flash, at a position visible to a viewing user. Specifically, for example, the virtual digital person display device is arranged at a position visible to a viewing user on a showcase for ancient books and ancient calligraphy and paintings.
For example, a virtual digital person display device includes units such as a display and speakers.
Step S2: the virtual digital person judges, based on a deep learning algorithm, whether a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action;
before the steps of the invention are executed, the virtual digital person may be trained so as to improve the accuracy with which it judges whether a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action.
It will be appreciated that a suitable deep learning algorithm may be employed to effect the determination of this step. In a preferred embodiment, convolutional neural networks may be employed.
In a preferred embodiment of the present invention, the step S2 specifically includes:
acquiring an image;
training a first convolutional neural network comprising a pooling layer to obtain a first neural network model for judging photographing intention;
training a convolutional neural network which does not contain a pooling layer to obtain a second neural network model for judging photographing intention;
adding the pooling layer of the first neural network model (with its parameters kept unchanged during the adding) into the second neural network model to obtain a third neural network model;
training the third neural network model again, wherein the first neural network model, the second neural network model and the third neural network model are trained using the same image data.
The virtual digital person then judges, using the acquired image and the retrained third neural network model, whether a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action.
Preferably, the first neural network model has only a single pooling layer; for example, the first neural network model comprises, in order, a convolutional layer, a pooling layer, and a fully connected layer; the second neural network model comprises a plurality of convolutional layers and a fully connected layer; and the pooling layer of the first neural network model is added after the last convolutional layer of the second neural network model. With the third neural network model obtained in this way, the accuracy of the final model trained on the same training data is relatively optimal.
The inventors found that the accuracy of determining whether a viewing user viewing an exhibit in the predetermined showcase is about to perform a photographing action is greatly improved by the third neural network model retrained on the relevant data.
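As a minimal, non-limiting sketch of the training procedure described above, the following PyTorch-style code shows how a pooling layer taken from the first model could be inserted after the last convolutional layer of the second model to form the third model, which is then trained again on the same image data; the layer sizes, the input resolution and the train helper are illustrative assumptions, not part of the claimed method.

import torch.nn as nn

# Placeholders for the training data and training loop; both are assumptions
# for illustration and are not specified by the method itself.
image_data = ...          # e.g. a DataLoader of labelled images of viewing users

def train(model, data):
    ...                   # standard supervised training loop (omitted)

# First neural network model: one convolutional layer, a single pooling layer,
# and a fully connected layer (assuming 3x224x224 input images and 2 classes:
# "about to photograph" / "not about to photograph").
first_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),                      # the single pooling layer
    nn.Flatten(),
    nn.Linear(16 * 112 * 112, 2),
)

# Second neural network model: several convolutional layers and a fully
# connected layer, with no pooling layer.
second_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 224 * 224, 2),
)

train(first_model, image_data)
train(second_model, image_data)

# Third neural network model: the pooling layer taken from the trained first
# model (a max-pooling layer has no learnable parameters, so "parameters kept
# unchanged" is satisfied trivially) is inserted after the last convolutional
# layer of the second model; the classifier is resized to the pooled feature map.
pooling_from_first = first_model[2]
third_model = nn.Sequential(
    second_model[0], second_model[1],                 # first conv block of the second model
    second_model[2], second_model[3],                 # last conv block of the second model
    pooling_from_first,
    nn.Flatten(),
    nn.Linear(16 * 112 * 112, 2),
)

train(third_model, image_data)                        # trained again on the same image data

In a deployment, the retrained third_model would then be queried with each newly acquired image, as described for step S2.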
Step S3: when it is determined that a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action, the virtual digital person acquires the facial information of the viewing user and monitors in real time whether a flash is activated when the viewing user takes a photograph;
Step S4: when it is determined that the flash is activated while the viewing user takes the photograph, searching the large database for the corresponding user identity information based on the acquired facial information;
the large database contains, for example, a history of user reservation information of the current museum, wherein the user reservation information includes personal information such as user face information, user certificate information, and user telephone numbers.
More preferably, the large database contains a history of user reservation information for a plurality of museums sharing user information, wherein the user reservation information includes personal information such as user face information, user certificate information, and user phone numbers.
Preferably, a large database may be stored in the cloud for common access and reading by multiple museums sharing user information.
Step S5: recording, into the user account information corresponding to the user identity information in the large database, information that the viewing user has photographed the exhibit with a flash.
In this embodiment, the behavior of a viewing user who flash-photographs an exhibit that must not be photographed with a flash may be recorded in the large database for subsequent use. More specifically, if information that the viewing user has flash-photographed an exhibit is recorded in the user account information, this indicates that the viewing user has photographed, using a flash, an exhibit that must not be photographed with a flash, and this non-compliant viewing behavior is recorded in the large database.
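By way of a non-limiting illustration of steps S4 and S5 (and of the check used later in step S50 of the second embodiment), the following sketch models the shared cloud database; the CloudDatabase class, its field names and the cosine-similarity matching are hypothetical assumptions, not a prescribed implementation.

from datetime import datetime, timezone

def cosine_similarity(a, b):
    """Similarity between two face embeddings (plain-Python stand-in)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class CloudDatabase:
    """Hypothetical wrapper for the shared large database stored in the cloud."""

    def __init__(self):
        # user_id -> account record built from the user's reservation information
        # (face embedding, certificate information, phone number, incidents, ...)
        self.accounts = {}

    def find_user_by_face(self, face_embedding, threshold=0.6):
        """Step S4: return the user_id whose reserved face information best matches, if any."""
        best_id, best_score = None, threshold
        for user_id, account in self.accounts.items():
            score = cosine_similarity(face_embedding, account["face_embedding"])
            if score > best_score:
                best_id, best_score = user_id, score
        return best_id

    def record_flash_incident(self, user_id, exhibit_id):
        """Step S5: record the flash-photographing information in the user's account."""
        incident = {"exhibit": exhibit_id,
                    "time": datetime.now(timezone.utc).isoformat()}
        self.accounts[user_id].setdefault("flash_incidents", []).append(incident)

    def has_flash_incident(self, user_id):
        """Step S50 of the second embodiment: check whether earlier incidents are recorded."""
        return bool(self.accounts.get(user_id, {}).get("flash_incidents"))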
< second preferred embodiment >
The second preferred embodiment may be implemented on the basis of the first preferred embodiment or may be implemented separately.
Fig. 2 schematically shows an overall flow chart of a virtual digital person application method according to a second preferred embodiment of the invention. Specifically, as shown in fig. 2, the virtual digital person application method according to the second preferred embodiment of the present invention may include:
Step S10: arranging the virtual digital person display device on a predetermined showcase;
for example, the virtual digital person display device is arranged, on a showcase whose exhibits must not be photographed with a flash, at a position visible to a viewing user. Specifically, for example, the virtual digital person display device is arranged at a position visible to a viewing user on a showcase for ancient books and ancient calligraphy and paintings.
Step S20: the virtual digital person judges, based on a deep learning algorithm, whether a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action;
before the steps of the invention are executed, the virtual digital person may be trained so as to improve the accuracy with which it judges whether a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action.
It will be appreciated that a suitable deep learning algorithm may be employed to effect the determination of this step. In a preferred embodiment, convolutional neural networks may be employed.
In a preferred embodiment of the present invention, step S20 specifically includes:
acquiring an image;
training a first convolutional neural network comprising a pooling layer to obtain a first neural network model for judging photographing intention;
training a convolutional neural network which does not contain a pooling layer to obtain a second neural network model for judging photographing intention;
adding the pooling layer of the first neural network model (with its parameters kept unchanged during the adding) into the second neural network model to obtain a third neural network model;
training the third neural network model again, wherein the first neural network model, the second neural network model and the third neural network model are trained using the same image data.
The virtual digital person then judges, using the acquired image and the retrained third neural network model, whether a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action.
Preferably, the first neural network model has only a single pooling layer; for example, the first neural network model comprises, in order, a convolutional layer, a pooling layer, and a fully connected layer; the second neural network model comprises a plurality of convolutional layers and a fully connected layer; and the pooling layer of the first neural network model is added after the last convolutional layer of the second neural network model. With the third neural network model obtained in this way, the accuracy of the final model trained on the same training data is relatively optimal.
The inventors found that the accuracy of determining whether a viewing user viewing an exhibit in the predetermined showcase is about to perform a photographing action is greatly improved by the third neural network model retrained on the relevant data.
Step S30: when it is determined that a viewing user viewing the exhibits in the predetermined showcase is about to perform a photographing action, the virtual digital person acquires the facial information of the viewing user;
Step S40: searching the large database for the corresponding user identity information based on the acquired facial information;
the large database contains, for example, a history of user reservation information of the current museum, wherein the user reservation information includes personal information such as user face information, user certificate information, and user telephone numbers.
More preferably, the large database contains a history of user reservation information for a plurality of museums sharing user information, wherein the user reservation information includes personal information such as user face information, user certificate information, and user phone numbers.
Preferably, a large database may be stored in the cloud for common access and reading by multiple museums sharing user information.
Step S50: checking whether information that the viewing user has photographed an exhibit with a flash is recorded in the user account information corresponding to the retrieved user identity information;
Step S60: when such flash-photographing information is recorded, the virtual digital person starts an interactive presentation with the viewing user so as to remind the user not to use the flash.
Preferably, for example, in step S60, when information that the viewing user has flash-photographed an exhibit is recorded, a protective cover arranged on the showcase glass is activated, and the protective cover is deactivated after a predetermined time (for example, ten seconds) has elapsed since the user was reminded not to use the flash. In this way, the exhibit can be protected to the greatest extent.
In step S60, if the virtual digital person is not already in an interactive presentation with the viewing user, the virtual digital person starts an interactive presentation with the viewing user to remind the user not to use the flash; if, however, the virtual digital person is already in an interactive presentation with the viewing user, the virtual digital person pauses the current interactive operation to remind the user not to use the flash, and resumes the paused interactive operation after the reminder.
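The start-or-pause behaviour described above, together with the optional protective cover and the personalised voice reminder, can be illustrated by the following sketch; the display and cover interfaces, their method names and the ten-second figure follow the examples in the text but are otherwise hypothetical assumptions.

import time

class VirtualDigitalPerson:
    """Sketch of the reminder flow; display, cover and their methods are assumed interfaces."""

    def __init__(self, display, showcase_cover=None):
        self.display = display                  # display/speaker unit on the showcase
        self.showcase_cover = showcase_cover    # optional protective cover on the glass
        self.interacting = False                # True while an interactive presentation runs

    def remind_no_flash(self, user_name, cover_seconds=10):
        was_interacting = self.interacting
        if was_interacting:
            self.display.pause_presentation()   # pause the ongoing interaction
        else:
            self.display.start_presentation()   # start an interactive presentation
            self.interacting = True

        if self.showcase_cover is not None:
            self.showcase_cover.activate()      # optional protection of the exhibit (step S60 variant)

        # Voice reminder addressed to the specific user by name.
        self.display.speak(f"{user_name}, please do not use a flash to take pictures. "
                           "The flash will damage the cultural relics.")

        if self.showcase_cover is not None:
            time.sleep(cover_seconds)           # e.g. ten seconds after the reminder
            self.showcase_cover.deactivate()

        if was_interacting:
            self.display.resume_presentation()  # resume the paused interaction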
Specifically, for example, the virtual digital person plays a video and reminds the user by voice not to use the flash, wherein the voice information contains the name of the user so that the reminder is addressed to the specific user, improving its effectiveness.
For example, the virtual digital person says to the viewing user in a friendly manner: "× (user name), please do not use a flash to take pictures. The flash will damage the cultural relics."
Preferably, when the user tours the museum using museum application software on a networked museum guide device or mobile phone, the virtual digital person plays the voice information through the museum guide device or mobile phone based on the user's login information on that device, so that other users are not disturbed and leakage of the user's information is avoided.
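A minimal sketch of this routing decision is given below, assuming a hypothetical registry of logged-in device sessions; none of these names is prescribed by the method.

def deliver_reminder(user_id, message, device_sessions, showcase_speaker):
    """Play the reminder privately on the user's own guide device or phone when the user
    is logged in there; otherwise fall back to the showcase's own speaker. The session
    registry and its play_audio/speak methods are illustrative assumptions."""
    session = device_sessions.get(user_id)   # login information on a guide device or phone app
    if session is not None:
        session.play_audio(message)          # private playback avoids disturbing other visitors
    else:
        showcase_speaker.speak(message)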
A general prompt addressed to all viewers often fails to draw the attention of a specific individual. In the preferred embodiment of the invention, an individualized prompt is issued to the specific user, which most effectively attracts the attention of that viewer, effectively protects the safety of the cultural relics, and maintains viewing standards.
In summary, by applying the virtual digital person to the protection of museum cultural relics and combining the virtual digital person with big data, the virtual digital person can provide conventional exhibition guidance while preventing damage to cultural relics caused by improper behavior of viewers.
Furthermore, unless specifically stated otherwise, the terms "first," "second," "third," and the like in the description herein, are used for distinguishing between various components, elements, steps, etc. in the description, and not for indicating a logical or sequential relationship between various components, elements, steps, etc.
It will be appreciated that although the invention has been described above in terms of preferred embodiments, these embodiments are not intended to limit the invention. Anyone skilled in the art may make possible variations and modifications, or equivalent substitutions, to the disclosed technology without departing from its scope. Therefore, any simple modification, equivalent variation or refinement of the above embodiments made in accordance with the technical substance of the present invention still falls within the scope of the technical solution of the present invention.