CN117679027A - Personality analysis method based on meta universe and artificial intelligence - Google Patents

Personality analysis method based on meta universe and artificial intelligence

Info

Publication number
CN117679027A
CN117679027A · Application CN202211389395.XA
Authority
CN
China
Prior art keywords
personality
user
predicted
meta
universe
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211389395.XA
Other languages
Chinese (zh)
Inventor
裴永植
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020220112115A external-priority patent/KR20230071053A/en
Application filed by Individual filed Critical Individual
Publication of CN117679027A publication Critical patent/CN117679027A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The personality analysis method based on the metaverse and artificial intelligence includes the following steps: acquiring at least one of voice information and visual information of a user; analyzing the user's personality using at least one of the voice information, visual information, and biometric information; generating user data including the result of the user's personality analysis; and generating an avatar of the user using the per-user data.

Description

Personality analysis method based on meta universe and artificial intelligence
Technical Field
The invention relates to a personality analysis method, in particular to a personality analysis method based on meta universe and artificial intelligence.
Background
"Metaverse" is a compound of "meta" (transcending, virtual) and "universe," denoting a three-dimensional virtual world that connects real life with legally recognized activities such as occupation, finance, and learning. In other words, the metaverse is a concept one stage beyond Virtual Reality (VR): through an avatar, users can not only play games or experience virtual reality, but also carry out social and cultural activities as in real life.
The non-profit technology research community Acceleration Studies Foundation (ASF) classifies the metaverse into four types: augmented reality, lifelogging, mirror worlds, and virtual worlds.
Augmented reality refers to an environment in which virtual objects, rendered in 2D or 3D over real space, appear to overlap it and interact with it. It has the characteristics of reducing people's resistance to the virtual world and heightening immersion. For example, when a user points a terminal camera at a site of which only traces remain today, seeing a digitally reconstructed past building overlaid on the terminal screen is an instance of augmented reality.
Lifelogging is a technology for capturing, storing, and describing the everyday experiences and information of things and people. Users can capture every moment of daily life as text, images, sound, etc., store the contents on a server, organize them, and share them with other users. Pairing sportswear with a network-connectable MP3 player to store and share running distance, calories burned, and selected music is an example of lifelogging.
The mirror world refers to a virtual world that reflects the real world as directly and realistically as possible while expanding it with information. A representative example is Google Earth. Google Earth collects satellite photos of the entire world and updates them periodically, directly reflecting the ever-changing appearance of the real world. As technology develops, the mirror world will approach the real world ever more closely and become a major investment area for virtual reality. By viewing such a mirror world, users can obtain information about the real world.
The virtual world is an alternative world, similar to or completely different from reality, built from digital data. Its characteristic feature is that users carry out activities resembling the economic and social activities of the real world through avatars. "Virtual world" is an umbrella term for communities represented in three-dimensional computer-graphics environments, from online role-playing games to life-style virtual worlds such as sandbox games, ZEPETO, and Roblox.
[Prior Art Literature]
[Patent Literature]
(Patent Document 0001) Korean Laid-Open Patent Publication No. 10-2019-0108523 (2019.09.24.)
Disclosure of Invention
In real life, personality tests are administered by human examiners, and they are hard to conduct unless the participants meet face-to-face. There is therefore a need for a comparatively simple, metaverse-based method of performing personality tests.
An object of the present invention is to provide a personality analysis method based on the metaverse and artificial intelligence that obtains voice information and/or visual information of a user, analyzes the user's personality with artificial intelligence to generate per-user data, and uses that data to generate an avatar of the user that is active in the metaverse.
Another object of the present invention is to provide a personality analysis method based on the metaverse and artificial intelligence that can update the personality in real time based on the utterances or behaviors, within the metaverse, of an avatar generated using the initial user data.
Another object of the present invention is to provide a personality analysis method based on the metaverse and artificial intelligence in which an avatar reflecting the user's personality can be used in various fields such as prospects, learning, communication, corporate personnel/recruiting, travel recommendation, and product recommendation.
However, the problems to be solved by the present invention are not limited thereto, and various extensions are possible without departing from the spirit and scope of the present invention.
A personality analysis method based on the metaverse and artificial intelligence, performed in a personality analysis system according to one embodiment of the present invention, includes the steps of: acquiring at least one of the user's voice information, visual information, and biological information such as brain waves and DNA; analyzing the user's personality using at least one of the voice information, visual information, brain waves, DNA, etc.; generating per-user data including the result of the user's personality analysis; and generating an avatar of the user using the per-user data.
The step of analyzing the user's personality may include the steps of: analyzing the voice information; predicting the user's personality based on the analysis result of the voice information; and recommending a personality for the user based on the prediction.
The step of analyzing the voice information may include: removing noise from the voice information and normalizing it; generating, based on the voice information, voice data that can be marked on a time axis (timeline); and generating, based on the voice information, a voice style including at least one of speech speed (tempo), pause time (idle time), tone color (tone), and volume.
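The voice-style step above can be illustrated with a minimal sketch. The feature set (RMS volume, pause ratio as an idle-time proxy, zero-crossing rate as a crude tone proxy) and all thresholds are assumptions for illustration; a real pipeline would use a speech-analysis toolkit.

```python
import numpy as np

def extract_voice_style(samples, sample_rate, frame_ms=25, silence_rms=0.02):
    """Extract a coarse voice-style profile from a mono waveform.

    Hypothetical features standing in for the patent's tempo/idle-time/
    tone/volume style; frame size and silence threshold are assumed values.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))      # per-frame loudness
    voiced = rms >= silence_rms                    # active-speech frames
    pause_ratio = 1.0 - voiced.mean()              # share of idle time
    # Zero-crossing rate of voiced frames as a crude tone/pitch proxy.
    zc = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    tone = float(zc[voiced].mean()) if voiced.any() else 0.0
    return {
        "volume": float(rms[voiced].mean()) if voiced.any() else 0.0,
        "pause_ratio": float(pause_ratio),
        "tone": tone,
    }
```

Such a per-frame profile can also be attached to the timeline, which is what makes the "markable" voice data of the claim possible.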
In the step of predicting the user's personality based on the analysis result of the voice information, the personality may be predicted from the voice data: the voice data is input, and the user's personality may be predicted by a first engine trained on learning voice data classified by personality category.
In the step of predicting the user's personality based on the analysis result of the voice information, the personality may be predicted from the voice style: the average value of the voice style over a specific, easily readable portion is input, and the user's personality may be predicted by a second engine trained on learning voice styles classified by personality category.
In the step of predicting the user's personality based on the analysis result of the voice information, the personality may be predicted from the voice style: the voice style marked at intervals on the timeline is input, and the user's personality may be predicted by a third engine trained on each piece of marked data or on the continuous-pattern data of the marked sections.
In the step of predicting the user's personality based on the analysis result of the voice information, the personality may be predicted from both the voice data and the voice style, by statistically processing the result predicted from the voice data together with the result predicted from the voice style.
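One way to realize the "statistical processing" of two engines' results is a weighted average of their per-category scores; the weighting scheme below is an assumption, not the patent's prescribed method.

```python
def combine_predictions(pred_by_data, pred_by_style, weight_data=0.5):
    """Combine two engines' personality-category scores into one prediction.

    Each argument maps category name -> score; `weight_data` (assumed 0.5)
    sets the relative trust in the voice-data engine versus the style engine.
    Returns the winning category and the normalized combined scores.
    """
    categories = set(pred_by_data) | set(pred_by_style)
    combined = {
        c: weight_data * pred_by_data.get(c, 0.0)
           + (1 - weight_data) * pred_by_style.get(c, 0.0)
        for c in categories
    }
    total = sum(combined.values()) or 1.0
    combined = {c: v / total for c, v in combined.items()}  # renormalize
    best = max(combined, key=combined.get)
    return best, combined
```

The same helper would serve the visual branch, combining the face-data and face-shape engine outputs.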
The step of analyzing the user's personality may include the steps of: analyzing the visual information; predicting the user's personality based on the analysis result of the visual information; and recommending a personality for the user based on the prediction.
The step of analyzing the visual information may include the steps of: recognizing the user's face in the visual information and extracting it; generating, based on the visual information, face data that covers a portion or the whole of a specific image and can be marked on a time axis (timeline); and generating, based on the visual information, a face shape including at least one of the shape and size of the face, the shape and position of the eyes, the nose, the mouth, the ears, and the eyebrows, the expression, and the hairstyle.
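A small building block for the face-shape step is normalizing landmark positions to the detected face box, so shapes are comparable across image sizes. Landmark names and the box format here are illustrative assumptions, not the patent's format.

```python
def normalize_landmarks(landmarks, face_box):
    """Express facial landmarks relative to the detected face bounding box.

    `landmarks` maps names like 'left_eye' to (x, y) pixel coordinates, and
    `face_box` is (x, y, width, height). Output coordinates lie in [0, 1],
    giving a size-invariant description of eye/nose/mouth positions.
    """
    x0, y0, w, h = face_box
    if w <= 0 or h <= 0:
        raise ValueError("face box must have positive size")
    return {name: ((x - x0) / w, (y - y0) / h)
            for name, (x, y) in landmarks.items()}
```

In a full pipeline a face detector would supply `face_box` and a landmark model the raw points; the normalized dictionary would then feed the face-shape engines described below.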
In the step of predicting the user's personality based on the analysis result of the visual information, the personality may be predicted from the face data: the face data is input, and the user's personality may be predicted by a fourth engine trained on learning face data classified by personality category.
In the step of predicting the user's personality based on the analysis result of the visual information, the personality may be predicted from the face shape: a 2D or 3D face model generated from the face shape of one easily readable scene of the specific image is input, and the user's personality may be predicted by a fifth engine trained on learning face shapes classified by personality category.
In the step of predicting the user's personality based on the analysis result of the visual information, the personality may be predicted from the face shape: a 2D or 3D face model generated from the face shape marked over time for the whole of the specific image is input, and the user's personality may be predicted by a sixth engine trained on learning face shapes classified by personality category.
In the step of predicting the user's personality based on the analysis result of the visual information, the personality may be predicted from both the face data and the face shape, by statistically processing the result predicted from the face data together with the result predicted from the face shape.
The step of analyzing the user's personality may include the steps of: analyzing the biological information; predicting the user's personality based on the analysis result of the biological information; and recommending a personality for the user based on the prediction.
The step of analyzing the biological information may include a brain-wave analysis step of obtaining, for a specific time, brain waves responding to a plurality of stimuli of at least one of sight, hearing, taste, touch, and smell, and a DNA information analysis step of analyzing inherent DNA information by comparing the structural features of each DNA element with an analyzed data-information map.
The step of predicting the user's personality based on the analysis results of the biological information may include a step of predicting the personality by comparative analysis with results predicted from accumulated biological analyses.
Based on the personality prediction, the step of recommending a personality for the user may include the steps of guiding the finally recommended personality through single or comprehensive analysis of the personality result values predicted by one or more of the analyses, and querying whether it suits the situation for which the avatar is generated.
The metaverse- and artificial-intelligence-based personality analysis method may further include a step of updating the per-user data, including the result of the user's personality analysis, in real time or periodically by reflecting the actions, dialogues, utterances, or texts of the avatar generated within the metaverse.
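The real-time/periodic update step can be sketched as an exponential moving average that blends newly observed avatar-behavior scores into the stored personality data. The blending rule and the trait names are assumptions standing in for the patent's unspecified update mechanism.

```python
def update_personality(current, observed, alpha=0.2):
    """Blend observed avatar-behavior scores into stored personality data.

    `current` and `observed` map trait name -> score in [0, 1]; `alpha`
    (assumed 0.2) controls how quickly new behavior overrides old data.
    Traits seen for the first time are taken at their observed value.
    """
    updated = dict(current)
    for trait, score in observed.items():
        old = updated.get(trait, score)   # first observation: adopt as-is
        updated[trait] = (1 - alpha) * old + alpha * score
    return updated
```

Calling this on every batch of avatar events (or on a timer) realizes "real time or periodically": traits drift toward recent behavior without discarding the initial screening result.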
From a multimodal behavioral point of view, the metaverse- and artificial-intelligence-based personality analysis method may further include a step of analyzing at least one of the interest areas and tendencies of the user behind the avatar, by simultaneously considering the words, dialogues, or texts used by the avatar generated within the metaverse and the avatar's actions, including movements of its hands, head, and line of sight.
The generated avatar may have artificial-intelligence properties and may act autonomously, interacting with other avatars even when the user is not connected. In addition, the avatar can move to other places or be copied depending on the situation, and can be used for various experiments or for building metaverses (virtualized and digital-twinned worlds).
The disclosed technology may have the following effects. However, this does not mean that a specific embodiment must include all of the following effects or only the following effects, and the scope of the claims of the disclosed technology should not be construed as being limited thereby.
According to the metaverse- and artificial-intelligence-based personality analysis method provided by the embodiment of the present invention, the user's voice information and/or visual information can be obtained and the user's personality analyzed with artificial intelligence to generate per-user data, and an avatar of the user that is active in the metaverse can be generated from that data.
According to the metaverse- and artificial-intelligence-based personality analysis method of the embodiment of the present invention described above, the personality can be updated in real time based on the utterances or behaviors, within the metaverse, of the avatar generated using the initial user data.
According to the metaverse- and artificial-intelligence-based personality analysis method of the embodiment of the present invention, the avatar reflecting the user's personality can be used in various fields, such as training the artificial intelligence for personality analysis, providing communication services, and enterprise recruiting processes.
Drawings
FIG. 1 is a sequence diagram of a meta-universe and artificial intelligence based personality analysis method in accordance with one embodiment of the present invention.
Fig. 2 schematically illustrates an overall process for generating an avatar according to one embodiment of the present invention.
FIG. 3 illustrates a process of analyzing speech information according to one embodiment of the invention.
Fig. 4 illustrates a process of analyzing personality using voice information according to one embodiment of the present invention.
Fig. 5 shows a process of analyzing visual information according to an embodiment of the present invention.
Fig. 6 illustrates a process of analyzing personality using visual information according to one embodiment of the present invention.
Description of the reference numerals
200: artificial intelligence engine
Detailed Description
The invention is capable of many modifications and embodiments and its specific embodiments are described in detail below with reference to the drawings.
However, the present invention should not be construed as being limited to the specific embodiments thereof, but should be construed to include all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.
The terms first, second, etc. are used to describe various components, but the components are not limited by these terms. The term is merely intended to distinguish one component from another. For example, a first component may be termed a second component, and, similarly, a second component may be termed a first component, without departing from the scope of the present disclosure.
When a component is referred to as being "connected" or "coupled" to another component, it can be directly connected or coupled to the other component, or intervening components may be present. Conversely, when a component is referred to as being "directly connected" or "directly coupled" to another component, it is to be understood that no intervening components are present.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The singular reference includes the plural reference unless the context clearly differs. In this application, the terms "comprises" and "comprising" and the like are to be construed as specifying the presence of the stated features, numbers, steps, acts, components, elements or combinations thereof, as referred to in the specification, without precluding the presence or addition of one or more other features or numbers, steps, acts, components, elements or combinations thereof.
Unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Those terms defined in commonly used dictionaries should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hereinafter, preferred embodiments of the present invention will be described in detail and explicitly with reference to the accompanying drawings so that a person having ordinary skill in the art to which the present invention pertains can easily practice the present invention.
Fig. 1 is a sequence diagram of a meta-universe and artificial intelligence based personality analysis method performed in a personality analysis system according to one embodiment of the present invention.
Referring to fig. 1, in the personality analysis system, at least one of voice information and visual information is obtained in step S110. The voice information may be obtained through an ontology (natural-language machine learning) based voice recognition technique, and the visual information may include, without limitation: text information, such as the user's handwriting or model names displayed on objects in the user's hands; facial information, such as the user's looks, expressions, and changes in shape at ordinary times and during strenuous activity; gesture information of the user; and object information, such as the clothing worn by the user; in other words, any information that can be judged visually. In the obtaining step, besides the voice information and visual information, various information obtainable with appropriate sensors may be used, including brain waves and biological information such as DNA. Exchanged conversations, recorded information, patterns, etc. may also be used as analysis and learning information.
In the personality analysis system, in step S130, the user's personality is analyzed, as a simple screening step, using at least one of the voice information, visual information, and biological information. Specifically, at least one of the user's voice information, visual information, and biometric information may be analyzed, and the user's personality may be analyzed from the analyzed data together. The analysis of biological information may obtain brain waves responding to stimuli of at least one of sight, hearing, taste, touch, and smell over a specific time, and perform brain-wave analysis. Further, the analysis of biological information can analyze inherent DNA information by comparing the structural features of each DNA element with an analyzed data-information map. In the case of biological information, the user's personality may be predicted through comparative analysis with results predicted from accumulated biological analyses.
In the personality analysis system, the user personality first analyzed in step S130 constitutes the first personality data, and based thereon, a personality may be recommended to the user. For example, when generating the first personality data, the user's occupation or personality may be analyzed from the standpoint of physiognomy through artificial-intelligence learning. In other words, with an artificial intelligence trained on users' faces or voices, it can be inferred from the user's voice or facial information whether the user's occupation is, for example, a company president or an athlete, and first personality data about the personality can likewise be produced. This first personality data may later be updated based on the actions or utterances of the user's avatar to generate second personality data.
In the personality analysis system, in step S150, per-user data (user data) including the personality analysis result of the user from the simple screening step is generated. Each user's data may include, for example, a user name (ID), a password, and the user's personality analysis result data. Rather than containing the user's raw voice data or facial image data itself, each user's data may contain data resulting from analyzing the user's voice or facial images.
The user data here is the user's personal data obtained provisionally in the step preceding the generation of the second personality data, which is obtained after analyzing the actions or utterances of the avatar. Each user's data may be updated along the time axis. For example, the data first analyzed in the simple screening step from the user's voice or facial appearance may subsequently be updated continuously along the time axis.
In the personality analysis system, an avatar of the user is generated using the user data in step S170. Based on each user's data, the appearance, clothing, etc. of the avatar can be decided, and the user can live in the metaverse through the generated avatar.
In the personality analysis system, in step S190, the user's personality is analyzed in real time through the avatar, and the artificial intelligence is trained. In other words, the user's personality analyzed by the artificial intelligence in step S130 is not fixed. The user's personality is first screened simply from minimal voice information and/or visual information to generate initial data; thereafter, the actions or utterances of the user's avatar in the metaverse are reflected, and the user's personality data can be updated in real time or periodically.
As an example of reflecting the avatar's actions, the user's personality data may be updated according to whether the avatar moves to a specific place and whether it runs or walks there. The personality data may likewise be updated according to whether the avatar takes a straight path or a detour when moving to a specific place.
As an example of reflecting the avatar's utterances, after the words (subjects, objects, adverbs, particles, verbs) used periodically or repeatedly in the avatar's utterances, dialogues, or texts are extracted through a repetition check, the interest areas and tendencies of the user behind the avatar may be analyzed.
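The repetition check above can be sketched as a word-frequency count over the avatar's utterances. Whitespace tokenization, lowercasing, and the stopword filter are simplifying assumptions; a real system would tag parts of speech first, as the text implies.

```python
from collections import Counter

def repeated_words(utterances, min_count=2, stopwords=frozenset()):
    """Return words an avatar uses repeatedly across utterances.

    A simple repetition check: count whitespace-separated, lowercased
    tokens across all utterances, drop stopwords, and keep tokens seen at
    least `min_count` times, ordered from most to least frequent.
    """
    counts = Counter(
        word
        for text in utterances
        for word in text.lower().split()
        if word not in stopwords
    )
    return [w for w, c in counts.most_common() if c >= min_count]
```

The surviving words would then be mapped to interest areas (e.g., a cluster of outdoor terms suggesting a tendency toward outdoor activity).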
From a multimodal behavioral point of view, the words, dialogues, or texts used by the avatar and the avatar's actions (movements of the hands, head, and line of sight) can be considered simultaneously to analyze the interest areas and tendencies of the user behind the avatar. For example, when the avatar's hands or head shake or move more actively while the avatar uses a specific word, the user's interest areas and tendencies may be analyzed by considering whether the avatar's line of sight moves to a specific position or object in the virtual space, whether the gaze lingers, the duration of a specific action, and so on.
On the other hand, since the personality data is updated from the actions or utterances of the avatar rather than those of the user, a user who wants to know his or her real personality can act and speak within the metaverse exactly as he or she would in reality. Conversely, in order to generate an avatar having a desired personality, the user may act or speak within the metaverse differently from his or her usual behavior, and by reflecting this, the personality can be updated toward the one the user desires.
Through the analyzed per-user data or the user's avatar, communication services can be provided by matching the user within the metaverse with other users who have a similar personality, with users who have an opposite personality, and so on.
In addition, as part of an enterprise's recruiting process, personality checks may be performed with the analyzed per-user data or the user's avatar. For example, after an enterprise sets up a business environment and builds a metaverse according to those settings, it can be checked whether the user's avatar adapts well to the business environment set by the enterprise and carries out the work. For example, when the business-space settings are changed so that the work environment of the enterprise where the user's avatar is working shifts from a wide space to a narrow one (e.g., a crowded space or a space without daylight), it can be checked whether the user's work progress declines. Alternatively, when the avatar of a user whose job is developer is assigned to the sales department instead of the development department, the user's adaptation to that work may be checked.
In addition, for example, for a candidate being considered for hiring, it is possible to simulate how an artificial-intelligence avatar reflecting the candidate's personality behaves in the virtual business environment, thereby verifying the expected effect of hiring.
In addition, for example, when the avatar of a candidate performs work in each department of the enterprise within the metaverse, the work results can be evaluated from the viewpoint of personnel assessment, thereby verifying the expected effect of hiring.
In addition, for example, avatars of people already employed by the enterprise, or of candidates to be hired, can be placed in a virtual team executing a new project of the enterprise, and the new project can be carried out in the metaverse, thereby verifying the project's prospects and seeing what synergies would arise.
Compared with taking a personality test or going through a recruiting process in reality, the user need not take the test in person, so an efficient personality test or recruiting process can be realized; from the enterprise's standpoint, this can also contribute to ESG (Environmental, Social and Governance) management.
When creating the avatar, the analyzed per-user data is reflected, but whether the user data itself is disclosed to others can be determined by the user's choice.
The personality analysis method according to one embodiment of the present invention may be used to analyze not only the personality of a user (i.e., a person) but also the aptitude of animals or the suitability of things.
The foregoing steps S130, S150, S170 and S190 may be performed by an artificial intelligence engine of the personality analysis system, and detailed operations will be described later with reference to fig. 2 to 6.
Fig. 2 schematically illustrates an overall process for generating an avatar according to one embodiment of the present invention.
The personality analysis method according to one embodiment of the present invention may serve users as a personality solution for the metaverse. The metaverse personality solution is online-based: it can provide a personality analysis service while the user is connected in real time, and when an online connection is unavailable it can also provide the service offline based on previously stored data. The offline data may later be synchronized with the online data, so that the results of the offline analysis are also reflected online.
The metaverse may be a virtual world replicating the real world, or a digital twin operating like reality. A world may also be created not to faithfully reflect reality but according to a special purpose or imagination.
A user may connect to the metaverse through a VR device such as a head-mounted display (HMD), or through various other means such as an application or browser on a smartphone, PC, etc.
As shown in fig. 2, when a user connects to the metaverse and logs in, a registration process 203 for generating an avatar is performed. The registration process 203 may consist of a Basic Mode 210 and an Expert Mode 291: the Basic Mode 210 generates an avatar through a simple personality screening using the voice information 205 and/or the face information 207, while the Expert Mode 291 serves as a precision check, analyzing the personality more precisely through various tests such as aptitude tests, the MLST learning-strategy test, the MBTI, the Enneagram (nine personality types) test, and bioinformatic analysis (brain waves, DNA). Whether to run the Expert Mode may be decided by the user's choice.
Personality analysis may be performed by means of the artificial intelligence engine 200.
In the basic mode 210, the voice information 205 and/or the face information 207 are analyzed 211, 213, and the personality is analyzed 231, 233, 235 by means of an artificial-intelligence program; as a result, a recommendation 250 is made for the user's personality and a recommendation 270 is made for an avatar. The details are described later.
In the expert mode 291, personality analysis 293 is performed by an expert personality program, or personality analysis 295 is performed through an actual personality test; according to the result, a personality recommendation 297 is made and an avatar 270 is recommended. The details are described later.
In other words, the artificial intelligence engine 200 analyzes the personality of the user using the voice information 205 and/or the face information 207, and then recommends an avatar. The user may adopt the avatar 270 recommended by the artificial intelligence engine 200 directly as his or her own avatar, or may customize it.
Once the avatar is generated, the user enters the metaverse and lives within it through his or her avatar.
FIG. 3 illustrates a process of analyzing speech information according to one embodiment of the invention.
Referring to fig. 3, noise is first removed (Noise Cancel) from the voice information 205, and normalization (Normalize) 211-1 is then performed. In other words, all sound other than the user's voice is removed from the obtained voice information 205, and the speech segment is scaled to full amplitude.
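The noise-removal and normalization step 211-1 can be sketched as follows. This is a deliberately crude stand-in (a fixed-threshold noise gate plus peak normalization); the actual preprocessing is not specified beyond the description above:

```python
import numpy as np

def preprocess(signal: np.ndarray, noise_floor: float = 0.02) -> np.ndarray:
    """Sketch of step 211-1: samples below an assumed noise floor are
    zeroed (a crude noise gate), then the waveform is peak-normalized
    so the speech segment spans the full amplitude range."""
    gated = np.where(np.abs(signal) < noise_floor, 0.0, signal)
    peak = np.max(np.abs(gated))
    return gated / peak if peak > 0 else gated
```

A production system would use spectral noise suppression rather than a raw amplitude gate; the shape of the step (suppress, then rescale) is the point here.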
Next, based on the Voice information 205, voice Data (Voice Data) 211-3, which can be marked on a time axis (timeline), is generated. The voice data may be provided as input data for personality prediction, as data marked on the time axis with reference to the length of the entire voice, and may be a part of a specific area of the voice information 205 or the entire area of the voice information 205. The Voice Data (Voice Data) 211-3, which may be marked on the time axis (timeline), may distinguish, for example, by Voice pattern recognition, whether the user is a section speaking quickly or slowly, or a section speaking emphasized. The voice data 211-3, which may be marked on the time axis (timeline), may be automatically analyzed and marked, rather than manually analyzing which voice pattern corresponds to which voice pattern at which point in time. The case of a slow speech after a fast speech can be determined as a section in which the speech is emphasized. Further, even if the user speaks slowly after speaking quickly, the section may not emphasize the speech, and thus, it is possible to determine whether the section emphasizes the speech by the user, considering whether the tone or tone of the user rises, the line of sight of the user, or the like.
From a multimodal point of view, not only the user's utterances but also action data such as the presence or absence of hand movements and the gaze position may be marked within specific sections on the time axis (timeline).
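The fast/slow/emphasis marking heuristic described above might look like the following sketch, where segment boundaries, a syllable-rate measure, and a pitch-rise flag are assumed to come from earlier analysis:

```python
def mark_timeline(segments, fast=5.0):
    """Hypothetical timeline marker. Each segment is a tuple
    (start_sec, syllables_per_sec, pitch_rise). A segment is marked
    'emphasis' when it is slow, follows a fast segment, and the pitch
    rises -- the heuristic described in the text. Otherwise it is
    marked simply 'fast' or 'slow'. The rate threshold is arbitrary."""
    marks = []
    for i, (t, rate, pitch_rise) in enumerate(segments):
        label = "fast" if rate >= fast else "slow"
        if (label == "slow" and i > 0
                and segments[i - 1][1] >= fast and pitch_rise):
            label = "emphasis"
        marks.append((t, label))
    return marks
```

The pitch-rise flag stands in for the extra cues (tone, gaze) the text says are needed to avoid false emphasis marks.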
Next, based on the voice information 205, a Voice Style 211-5 is generated that includes at least one of speech speed (voice tempo), pause time between utterances (voice idle time), tone, and volume. The voice style may be used as input data for personality prediction, either directly or in the form of timeline marks.
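A minimal sketch of extracting two of the listed voice-style features (volume and idle time) from a raw waveform is shown below; the frame size and silence threshold are arbitrary assumptions, and tempo/tone extraction would require pitch tracking not shown here:

```python
import numpy as np

def voice_style(signal, sr, frame=1024, silence=0.05):
    """Sketch of Voice Style (211-5) extraction under simplified,
    assumed definitions: volume as the mean frame RMS, and voice idle
    time as the total duration of frames whose RMS falls below a
    silence threshold."""
    frames = [signal[i:i + frame]
              for i in range(0, len(signal) - frame + 1, frame)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    idle_sec = np.sum(rms < silence) * frame / sr
    return {"volume": float(np.mean(rms)), "voice_idle_time": float(idle_sec)}
```

Both outputs are scalars, matching the later use of an average voice style as engine input; a timeline-marked variant would instead keep the per-frame values.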
Fig. 4 illustrates a process of analyzing personality using voice information according to one embodiment of the present invention.
Referring to fig. 4, the prediction of the personality 231-3 may be performed based on at least one of the voice data 211-3 and the voice style 211-5, and the personality 231-3 may be predicted and learned in real time by the machine learning (or deep learning) engine 231-5. The voice personality data set 231-7 includes at least one of data classifying personality according to voice data and data classifying personality according to voice style.
When the personality of the user is predicted 231-3 based on the voice data 211-3, the voice data 211-3 is input to a first engine trained on learning voice data classified by personality type, and the engine predicts the user's personality 231-3. For example, similarity may be measured for each personality type, and the type with the highest similarity may be predicted as the personality result recommended to the user. Based on the voice data 211-3, the predicted personality 250 is recommended to the user, along with an avatar 270 corresponding to it.
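The similarity-based prediction described for the first engine could, under the assumption that each personality type is represented by a feature centroid learned from labeled voice data, be sketched as a nearest-centroid classifier, with cosine similarity standing in for the unspecified similarity measure:

```python
import numpy as np

def predict_personality(features, centroids):
    """Hypothetical 'first engine' inference step: a feature vector
    extracted from voice data is compared against per-personality-type
    centroids learned from labeled training data, and the most similar
    type is returned as the recommendation."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {ptype: cos(features, c) for ptype, c in centroids.items()}
    return max(scores, key=scores.get), scores
```

Returning the full score dictionary alongside the winner leaves room for the statistical combination with other engines described later.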
When the personality of the user is predicted 231-3 based on the voice style 211-5, the average value of the voice style 211-5 over a specific, easily readable portion is input to a second engine trained on learning voice styles classified by personality type, and the engine predicts the user's personality 231-3. For example, similarity may be measured for each personality type, and the type with the highest similarity may be predicted as the personality result recommended to the user. Based on the voice style 211-5, the predicted personality 250 is recommended to the user, along with an avatar 270 corresponding to it.
Alternatively, when the personality 231-3 of the user is predicted based on the voice style 211-5, the voice styles 211-5 marked at certain time intervals may be input, and the user's personality 231-3 may be predicted by a third engine that learns from each item of marked data or from the continuous-pattern data of the marked sections. For example, similarity may be determined for each personality type and the measurements over all marks aggregated, or the continuous-pattern classification may be predicted as the personality result recommended to the user.
When the personality of the user is predicted 231-3 based on both the voice data 211-3 and the voice style 211-5, the result predicted by the first engine may be statistically combined with the result predicted by the second engine, or with the result predicted by the third engine, to predict the personality finally recommended to the user.
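The statistical combination of two engines' results is not specified in detail; one plausible reading is a weighted average of per-type scores, as in this sketch (equal weights assumed):

```python
def combine(engine_a, engine_b, weight_a=0.5):
    """Hypothetical final statistical step: two engines each return a
    score per personality type; a weighted average of the two score
    maps selects the personality ultimately recommended to the user."""
    types = set(engine_a) | set(engine_b)
    merged = {t: weight_a * engine_a.get(t, 0.0)
                 + (1 - weight_a) * engine_b.get(t, 0.0)
              for t in types}
    return max(merged, key=merged.get), merged
```

The same function works for combining the first engine with either the second or the third engine, since all that is assumed is a per-type score map.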
Fig. 5 shows a process of analyzing visual information according to an embodiment of the present invention.
Referring to fig. 5, first, the user's face 270 is recognized in the visual information (Face Recognition) and extracted (Crop). When two or more people appear in the visual information, the people other than the subject may be deleted, and the portions other than the face may also be deleted.
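Assuming face bounding boxes are already available from some detector, the extraction step that keeps only the subject's face might be sketched as follows; treating the largest box as the subject is an assumption, not something the description states:

```python
import numpy as np

def crop_main_face(image: np.ndarray, boxes):
    """Sketch of the face-extraction step: given face bounding boxes
    (x, y, w, h) from any detector, keep only the largest face --
    assumed here to be the subject -- and crop the image to it,
    discarding other people and the background."""
    if not boxes:
        return None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
    return image[y:y + h, x:x + w]
```

In practice the subject would more likely be identified by login identity or proximity to the camera rather than box area alone.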
Next, based on the visual information, face Data (Face Data) 213-3, which can be marked on a time axis (time), of one scene (stills) or the entire specific image (Video) as a specific image is generated.
Next, based on the visual information, a Face Shape (Face Style) 213-5 is generated that includes at least one of face shape and size (Face Shape/Size), eye shape and position (Eyes Shape/Pts), nose shape and position (Nose Shape/Pts), mouth shape and position (Mouth Shape/Pts), ear shape and position (Ears Shape/Pts), eyebrow shape and position (Eyebrows Shape/Pts), expression (Facial Expression), and hairstyle (Hair Style). In addition, the face shape may include individually identifying information such as glasses, earrings, or a beard. The face shape may be used as input data for personality prediction, either directly or in the form of timeline marks.
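The listed face-shape attributes could be collected in a simple container such as the hypothetical dataclass below; the field names and types paraphrase the description and are not taken from the actual system:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FaceStyle:
    """Hypothetical container for the Face Shape (213-5) features.
    Positions are (x, y) points; shapes, expression, and hairstyle are
    left as free-form labels; accessories covers glasses, earrings,
    beard, and similar identifying items."""
    face_shape: Optional[str] = None
    face_size: Optional[Tuple[int, int]] = None
    eyes_pts: Optional[Tuple[int, int]] = None
    nose_pts: Optional[Tuple[int, int]] = None
    mouth_pts: Optional[Tuple[int, int]] = None
    ears_pts: Optional[Tuple[int, int]] = None
    eyebrows_pts: Optional[Tuple[int, int]] = None
    expression: Optional[str] = None
    hair_style: Optional[str] = None
    accessories: Tuple[str, ...] = ()
```

Since the specification requires only "at least one" of the attributes, every field defaults to empty, so a partially populated record is valid input for prediction.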
Fig. 6 illustrates a process of analyzing personality using visual information according to one embodiment of the present invention.
Referring to fig. 6, the prediction of the personality may be performed based on at least one of the face data 213-3 and the face shape 213-5; the personality is predicted by the machine learning (or deep learning) engine 233-5 and learned 233-3 in real time. The face personality data set 233-7 includes at least one of data classifying personality according to face data and data classifying personality according to face shape.
When the personality of the user is predicted 233-3 based on the face data 213-3, the face data 213-3 is input to a fourth engine trained on learning face data classified by personality type, and the engine predicts the user's personality. For example, similarity may be measured for each personality type, and the type with the highest similarity may be predicted as the personality result recommended to the user. Based on the face data 213-3, the predicted personality 250 is recommended to the user, along with an avatar 270 corresponding to it.
When the personality of the user is predicted 233-3 based on the face shape 213-5, a 2D or 3D face model generated from the face shape of one easily readable scene of the video is input, and the user's personality 233-3 may be predicted by a fifth engine trained on learning face shapes classified by personality type. For example, similarity may be measured for each personality type, and the type with the highest similarity may be predicted as the personality result recommended to the user. Based on the face shape 213-5, the predicted personality 250 is recommended to the user, along with an avatar 270 corresponding to it.
Alternatively, when the personality of the user is predicted 233-3 based on the face shape 213-5, a 2D or 3D face model generated from the face shape time-marked over the entire video is input, and the user's personality 233-3 is predicted by a sixth engine trained on learning face shapes classified by personality type. For example, mark-by-mark similarity may be determined and final statistics taken to predict the personality result recommended to the user.
When the personality of the user is predicted 233-3 based on both the face data 213-3 and the face shape 213-5, the result predicted by the fourth engine may be statistically combined with the result predicted by the fifth engine, or with the result predicted by the sixth engine, to predict the personality finally recommended to the user.
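For the timeline-marked variant (the sixth engine), aggregating per-frame predictions over an entire video into one final recommendation could be as simple as a frequency tally, as in this sketch; the actual statistics used are not disclosed:

```python
from collections import Counter

def aggregate_marks(per_frame_predictions):
    """Hypothetical aggregation for timeline-marked predictions: tally
    the personality type predicted at each marked frame of the video
    and return the most frequent type as the final recommendation."""
    counts = Counter(per_frame_predictions)
    return counts.most_common(1)[0][0], dict(counts)
```

A confidence-weighted tally would be a natural refinement if each per-frame prediction also carried a similarity score.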
Although the present invention has been described above with reference to the accompanying drawings and embodiments, the scope of the present invention is not limited to those drawings or embodiments; it will be understood that the invention may be variously modified and altered by persons skilled in the relevant art without departing from the spirit and scope of the present invention as set forth in the claims.

Claims (15)

1. A personality analysis method based on the metaverse and artificial intelligence, performed in a personality analysis system, characterized by comprising the following steps:
acquiring at least one of voice information, visual information, and biometric information of a user;
analyzing the personality of the user using at least one of the voice information, the visual information, and the biometric information;
generating user data including a personality analysis result of the user; and
generating an avatar of the user using the respective user data.
2. The metaverse and artificial intelligence based personality analysis method of claim 1, wherein the step of analyzing the user's personality comprises the steps of:
analyzing the voice information;
predicting the personality of the user based on the analysis result of the voice information; and
recommending a personality to the user based on the prediction.
3. The metaverse and artificial intelligence based personality analysis method of claim 2, wherein the step of analyzing the voice information comprises the steps of:
removing noise from the voice information and normalizing the voice information;
generating voice data markable on a time axis based on the voice information; and
generating, based on the voice information, a voice style including at least one of speech speed, pause time, tone, and volume.
4. The metaverse and artificial intelligence based personality analysis method of claim 3, wherein,
in the step of predicting the personality of the user based on the analysis result of the voice information,
the personality of the user is predicted based on the voice data, and
the voice data is input to a first engine that learns from learning voice data classified by personality type, which predicts the personality of the user.
5. The metaverse and artificial intelligence based personality analysis method of claim 3, wherein,
in the step of predicting the personality of the user based on the analysis result of the voice information,
the personality of the user is predicted based on the voice style, and
the average value of the voice style over a specific, easily readable portion is input to a second engine that learns from learning voice styles classified by personality type, which predicts the personality of the user.
6. The metaverse and artificial intelligence based personality analysis method of claim 3, wherein,
in the step of predicting the personality of the user based on the analysis result of the voice information,
the personality of the user is predicted based on the voice style, and
the voice style marked at certain time intervals is input to a third engine that learns from each item of marked data or from the continuous-pattern data of the marked sections, which predicts the personality of the user.
7. The metaverse and artificial intelligence based personality analysis method of claim 3, wherein,
in the step of predicting the personality of the user based on the analysis result of the voice information,
the personality of the user is predicted based on the voice data and the voice style, and
the personality of the user is predicted by statistically processing the result predicted based on the voice data and the result predicted based on the voice style.
8. The metaverse and artificial intelligence based personality analysis method of claim 1, wherein the step of analyzing the user's personality comprises the steps of:
analyzing the visual information;
predicting the personality of the user based on the analysis result of the visual information; and
recommending a personality to the user based on the prediction.
9. The metaverse and artificial intelligence based personality analysis method of claim 8, wherein the step of analyzing the visual information comprises the steps of:
recognizing the face of the user in the visual information and extracting the face of the user;
generating, based on the visual information, face data markable on a time axis, covering one scene of a given video or the entire video; and
generating, based on the visual information, a face shape including at least one of the shape and size of the face, the shape and position of the eyes, the shape and position of the nose, the shape and position of the mouth, the shape and position of the ears, the shape and position of the eyebrows, an expression, and a hairstyle.
10. The metaverse and artificial intelligence based personality analysis method of claim 9, wherein,
in the step of predicting the personality of the user based on the analysis result of the visual information,
the personality of the user is predicted based on the face data, and
the face data is input to a fourth engine that learns from learning face data classified by personality type, which predicts the personality of the user.
11. The metaverse and artificial intelligence based personality analysis method of claim 9, wherein,
in the step of predicting the personality of the user based on the analysis result of the visual information,
the personality of the user is predicted based on the face shape, and
a 2D or 3D face model generated from the face shape of one easily readable scene of the video is input to a fifth engine that learns from learning face shapes classified by personality type, which predicts the personality of the user.
12. The metaverse and artificial intelligence based personality analysis method of claim 9, wherein,
in the step of predicting the personality of the user based on the analysis result of the visual information,
the personality of the user is predicted based on the face shape, and
a 2D or 3D face model generated from the face shape time-marked over the entire video is input to a sixth engine that learns from learning face shapes classified by personality type, which predicts the personality of the user.
13. The metaverse and artificial intelligence based personality analysis method of claim 9, wherein,
in the step of predicting the personality of the user based on the analysis result of the visual information,
the personality of the user is predicted based on the face data and the face shape, and
the personality of the user is predicted by statistically processing the result predicted based on the face data and the result predicted based on the face shape.
14. The metaverse and artificial intelligence based personality analysis method of claim 1, characterized by
further comprising a step of updating each item of user data, including the user's personality analysis result, in real time or periodically by reflecting the actions, dialogue, utterances, or text of the avatar generated within the metaverse.
15. The metaverse and artificial intelligence based personality analysis method of claim 1, characterized by
further comprising a step of analyzing, from a multimodal action-pattern point of view, at least one of the interest areas and tendencies of the user using the avatar, by simultaneously considering the words, dialogue, or text used by the avatar generated within the metaverse and the actions of the avatar, including movements of the avatar's hands, head, and line of sight.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0112115 2022-09-05
KR1020220112115A KR20230071053A (en) 2021-11-15 2022-09-05 Method for analyzing personality or aptitude based on metaverse

Publications (1)

Publication Number Publication Date
CN117679027A 2024-03-12

Family

ID=90127196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211389395.XA Pending CN117679027A (en) 2022-09-05 2022-11-08 Personality analysis method based on meta universe and artificial intelligence

Country Status (1)

Country Link
CN (1) CN117679027A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination