CN108776932A - Method, storage medium and server for determining a customer's investment type - Google Patents
Method, storage medium and server for determining a customer's investment type
- Publication number
- CN108776932A CN108776932A CN201810495325.XA CN201810495325A CN108776932A CN 108776932 A CN108776932 A CN 108776932A CN 201810495325 A CN201810495325 A CN 201810495325A CN 108776932 A CN108776932 A CN 108776932A
- Authority
- CN
- China
- Prior art keywords
- personality
- user
- scoring
- audio
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/06—Asset management; Financial planning or analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
Abstract
The present invention provides a method, storage medium and server for determining a customer's investment type, including: obtaining an audio-video file of a user recorded during a face-to-face review; separating video data and audio data from the audio-video file; extracting characteristic images of the user from the video data and speech features of the user from the audio data; inputting the extracted characteristic images of the user into an image personality model to obtain a first personality score for the user's characteristic images; inputting the extracted speech features of the user into an audio personality model to obtain a second personality score for the user's speech features; fusing the first personality score with the second personality score and determining a comprehensive personality score for the user from the fusion result; and assessing the user's investment type according to the determined comprehensive personality score. The present invention saves the labor of manual review and improves the efficiency of review and assessment.
Description
Technical field
The present invention relates to the field of information monitoring, and in particular to a method, storage medium and server for determining a customer's investment type.
Background art
A user handling bank securities business is usually required to undergo a face-to-face review so that the user's risk tolerance can be assessed. Existing bank reviews typically analyze the user's risk tolerance and personality by means of paper questionnaires and self-assessment. An experienced reviewer may also assess the user's personality by observing the user's expressions, as an additional reference.
In practice, analyzing a user's investment risk preference and personality from the questionnaire answers, the self-assessment and the reviewer's experience relies mainly on the subjective judgment of the user and the reviewer and lacks objective auxiliary judgment, which easily leads to inaccurate assessment; the authenticity and accuracy of the assessment are low, which affects the efficiency of review and assessment.
In summary, existing credit review relies on manual assessment, which involves a heavy workload and low efficiency; moreover, manual review is highly subjective, lacks objective auxiliary judgment, and yields low assessment accuracy.
Summary of the invention
Embodiments of the present invention provide a method, storage medium and server for determining a customer's investment type, to solve the problems that existing credit review relies on manual assessment with a heavy workload and low efficiency, and that manual review is highly subjective, lacks objective auxiliary judgment, and has low assessment accuracy.
A first aspect of the embodiments of the present invention provides a method for determining a customer's investment type, including:
obtaining an audio-video file of a user recorded during a face-to-face review;
separating video data and audio data from the audio-video file;
extracting characteristic images of the user from the video data, and extracting speech features of the user from the audio data;
inputting the extracted characteristic images of the user into an image personality model to obtain a first personality score for the user's characteristic images;
inputting the extracted speech features of the user into an audio personality model to obtain a second personality score for the user's speech features;
fusing the first personality score with the second personality score, and determining a comprehensive personality score for the user from the fusion result;
assessing the user's investment type according to the determined comprehensive personality score.
A second aspect of the embodiments of the present invention provides a server, including a memory and a processor, the memory storing a computer program runnable on the processor, the processor implementing the following steps when executing the computer program:
obtaining an audio-video file of a user recorded during a face-to-face review;
separating video data and audio data from the audio-video file;
extracting characteristic images of the user from the video data, and extracting speech features of the user from the audio data;
inputting the extracted characteristic images of the user into an image personality model to obtain a first personality score for the user's characteristic images;
inputting the extracted speech features of the user into an audio personality model to obtain a second personality score for the user's speech features;
fusing the first personality score with the second personality score, and determining a comprehensive personality score for the user from the fusion result;
assessing the user's investment type according to the determined comprehensive personality score.
A third aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:
obtaining an audio-video file of a user recorded during a face-to-face review;
separating video data and audio data from the audio-video file;
extracting characteristic images of the user from the video data, and extracting speech features of the user from the audio data;
inputting the extracted characteristic images of the user into an image personality model to obtain a first personality score for the user's characteristic images;
inputting the extracted speech features of the user into an audio personality model to obtain a second personality score for the user's speech features;
fusing the first personality score with the second personality score, and determining a comprehensive personality score for the user from the fusion result;
assessing the user's investment type according to the determined comprehensive personality score.
In the embodiments of the present invention, an audio-video file of the user recorded during the face-to-face review is obtained; video data and audio data are separated from the audio-video file; characteristic images of the user are extracted from the video data and speech features of the user from the audio data; the extracted characteristic images of the user are input into the image personality model to obtain a first personality score for the user's characteristic images; the extracted speech features of the user are input into the audio personality model to obtain a second personality score for the user's speech features; the first personality score is fused with the second personality score and the user's comprehensive personality score is determined from the fusion result; and the user's investment type is assessed objectively according to the determined comprehensive personality score. This avoids the accuracy of the review assessment being affected by subjective judgment, improves the accuracy of the review assessment, saves the labor of manual review, and improves the efficiency of review and assessment.
Description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flowchart of the method for determining a customer's investment type provided by an embodiment of the present invention;
Fig. 2 is a detailed flowchart of step S102 of the method for determining a customer's investment type provided by an embodiment of the present invention;
Fig. 3 is a detailed flowchart of training the image personality model in the method for determining a customer's investment type provided by an embodiment of the present invention;
Fig. 4 is a detailed flowchart of training the audio personality model in the method for determining a customer's investment type provided by an embodiment of the present invention;
Fig. 5 is a detailed flowchart of step S106 of the method for determining a customer's investment type provided by an embodiment of the present invention;
Fig. 6 is a block diagram of the device for determining a customer's investment type provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the server provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the purposes, features and advantages of the present invention more apparent and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 shows the implementation flow of the method for determining a customer's investment type provided by an embodiment of the present invention. The flow includes steps S101 to S106, whose implementation principles are as follows:
S101: Obtain an audio-video file of the user recorded during the face-to-face review.
Specifically, while the user undergoes the face-to-face review, a camera records video and a sound pickup records the review conversation, so the audio-video file contains not only video data of the user during the review but also audio data of the user during the review. In the embodiments of the present invention, the face-to-face review refers to the exchange between the user and the reviewer while the user's credit application is being audited; of course, this scheme is not limited to credit applications and can also be used for other matters that require a face-to-face audit.
S102: Separate video data and audio data from the audio-video file.
Under normal circumstances, a recording device captures video and audio simultaneously, and the recorded file it outputs combines audio and video in one. Therefore, in order to analyze the video and the audio separately, the video data and audio data need to be separated from the audio-video file.
In one embodiment of the present invention, as shown in Fig. 2, S102 specifically includes:
A1: Demultiplex the audio-video file to obtain a video stream file and an audio stream file. Demultiplexing means splitting the video stream and the audio stream out of the audio-video file. Data structures are constructed for the demultiplexed data, one for audio data and one for video data; the audio stream identified during demultiplexing is stored in an audio file, and the identified video stream is stored in a video file.
A2: Decode the video stream file, and filter the decoded video stream to remove noise, obtaining the video data. Filtering the decoded video stream effectively removes noise and improves image clarity. Specifically, since image noise is close to a Gaussian distribution, a Gaussian filter is applied to the video stream to remove the noise.
A3: Decode the audio stream file, and filter the decoded audio stream to remove noise, obtaining the audio data. Specifically, format conversion and compression coding are applied to the audio stream file, and a decoder is called to decode and filter it; the filtering removes the noise, yielding the audio data.
In the embodiments of the present invention, processing the separated video stream file and audio stream file to remove noise effectively improves the clarity of the video images and of the audio, avoiding noise interference and thereby improving the accuracy of the review assessment.
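Step A2 above names Gaussian filtering as the denoising method. As an illustrative sketch only (the patent specifies neither kernel size nor sigma; both values below are assumptions), a one-dimensional Gaussian filter over a row of pixel values can look like this:

```python
import math

def gaussian_kernel(radius, sigma):
    """Build a normalized 1-D Gaussian kernel of length 2*radius + 1."""
    raw = [math.exp(-(x * x) / (2 * sigma * sigma))
           for x in range(-radius, radius + 1)]
    total = sum(raw)
    return [v / total for v in raw]          # weights sum to 1

def gaussian_filter_1d(signal, radius=2, sigma=1.0):
    """Denoise one row of pixel values by convolving with the kernel.

    Edges are handled by clamping to the nearest valid sample, so the
    output has the same length as the input."""
    kernel = gaussian_kernel(radius, sigma)
    out = []
    n = len(signal)
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), n - 1)   # clamp at the borders
            acc += w * signal[j]
        out.append(acc)
    return out

noisy_row = [10, 10, 10, 60, 10, 10, 10]     # a single-pixel noise spike
smoothed = gaussian_filter_1d(noisy_row)
```

In practice this would be applied in two dimensions to each decoded frame; the one-dimensional form above only illustrates why the spike is attenuated while the flat regions are preserved.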
S103: Extract characteristic images of the user from the video data, and extract speech features of the user from the audio data.
In the embodiments of the present invention, a specified number of video frame images are extracted from the video data as the user's characteristic images, where a characteristic image may be a facial expression image of the user.
Further, the specified number of video frame images extracted from the video data are fed into an image classifier to obtain the user's characteristic images. The image classifier is trained in advance to classify face images. Specifically, from sample videos in which the user's emotional state is known, a set number of video frame images is extracted at a set frame rate as sample key frame images, and the sample key frame images are labeled with the emotional state of the user in the known video, e.g. angry, happy, normal or excited. The key frame images are processed with pooling and a standard L2 regularization term to obtain sample video features of a specified dimension (e.g. 512), which are compared with the known user's personality; this adjustment loop continues until the output sample video features are consistent with the emotional states of the input sample key frame images, at which point training is complete. In the embodiment of the present invention, the specified number of video frame images extracted from the video are fed into the trained image classifier, and the output video frame images serve as the user's characteristic images.
Optionally, a specified number of video frame images are chosen from the video data as key frame images; the user's face is detected in the key frame images, and facial landmark points are located on the face; face modeling is performed on the key frame images according to the located landmarks to generate face models corresponding to the key frame images; the generated face models are compared with pre-established expression models to determine the expressions corresponding to the key frame images; and the key frame images are labeled with the determined expressions to serve as the user's characteristic images.
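The key-frame extraction described above, a specified number of frames taken at a set frame rate, can be sketched as a simple index-selection step. The frame rates and counts here are illustrative assumptions, not values from the patent:

```python
def key_frame_indices(total_frames, video_fps, sample_fps, max_frames):
    """Pick frame indices at a fixed sampling rate, capped at max_frames.

    E.g. a 30 fps video sampled at 1 fps yields every 30th frame."""
    step = max(1, round(video_fps / sample_fps))
    indices = list(range(0, total_frames, step))
    return indices[:max_frames]

# 10 seconds of 30 fps video, sampled at 1 frame per second, at most 8 frames
frames = key_frame_indices(total_frames=300, video_fps=30.0,
                           sample_fps=1.0, max_frames=8)
```

The selected frames would then be passed to the face detector or image classifier described in the text.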
Specifically, the speech features extracted from the audio data include the sound intensity, loudness and pitch. In physics, the average acoustic energy passing per unit time through a unit area perpendicular to the direction of sound wave propagation is called the sound intensity. Human perception of sound strength is not directly proportional to the sound intensity but to its logarithm, so the sound intensity is generally expressed as a sound intensity level. Loudness is a subjective psychological quantity describing how strong a sound feels to a human listener. In general, at the same frequency, the stronger the sound intensity, the greater the loudness; but loudness is also related to frequency, and the same sound intensity at different frequencies may produce different loudness. Pitch is likewise a subjective psychological quantity: the human auditory system's perception of the height of the sound frequency. A specified number of speech features can be extracted from the audio data using methods such as Mel-frequency cepstral coefficients (MFCC), linear prediction cepstral coefficients (LPCC) or the Multimedia Content Description Interface (MPEG-7). In the embodiments of the present invention, openSMILE may be used to extract the user's speech features from the audio data.
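The logarithmic relationship described above is exactly what the sound intensity level expresses. A minimal sketch, assuming the conventional reference intensity of 1e-12 W/m² (the patent does not state a reference value):

```python
import math

REFERENCE_INTENSITY = 1e-12   # W/m^2, conventional threshold of hearing

def sound_intensity_level(intensity):
    """Sound intensity level in decibels: L = 10 * log10(I / I0)."""
    return 10.0 * math.log10(intensity / REFERENCE_INTENSITY)

# Doubling the intensity raises the level by about 3 dB, not by a factor
# of two, matching the text's point that perception tracks the logarithm.
quiet = sound_intensity_level(1e-6)    # 60 dB
loud = sound_intensity_level(2e-6)     # roughly 63 dB
```

Loudness and pitch, being subjective quantities, have no equally simple formula; they are what the MFCC-style features stand in for.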
S104: Input the extracted characteristic images of the user into the image personality model to obtain a first personality score for the user's characteristic images.
The image personality model is trained in advance to score the user's likely personality according to the user's characteristic images (e.g. expression images). In the embodiment of the present invention, to improve the accuracy of the scoring, the user's personality is scored along multiple dimensions; specifically, the image personality model scores the user's Big Five personality traits. The five factors of this personality structure are referred to as the "Big Five", emphasizing the breadth of each dimension in the personality model. The five dimension factors are Neuroticism (N), Extraversion (E), Openness to experience (O), Agreeableness (A) and Conscientiousness (C).
In one embodiment of the present invention, Fig. 3 shows the implementation flow of training the image personality model, detailed as follows:
B1: Obtain sample characteristic images carrying personality labels. That is, a personality label identifies the personality of the person to whom the sample characteristic image corresponds. The personality label includes the scores of the five Big Five traits and is used to verify the model's output while the image personality model is being trained, so that the model parameters of the image personality model can be adjusted. Specifically, sample videos carrying personality labels are collected, i.e. videos in which the (Big Five) personality of the person is known, and the sample videos are labeled manually. The personality label of a sample characteristic image separated from a sample video is the same as the personality label of that sample video, and the label includes at least one of the Big Five traits. Further, the personality label includes not only the Big Five traits but also their scores.
B2: Initialize the image personality model, and train it with the personality-labeled sample characteristic images until the difference between the sample first personality score output by the image personality model and the preset first personality score of the sample characteristic image is no greater than the preset error threshold, at which point training is complete. Specifically, the current parameters of the image personality model are determined to be the optimal model parameters, completing the training of the image personality model.
In the embodiments of the present invention, the extracted characteristic images of the user are input into the trained image personality model to obtain a first personality score for the user's characteristic images. Specifically, the image personality model outputs a five-dimensional vector containing the five trait scores. By training the image personality model with personality-labeled sample characteristic images and then feeding the video data from the user's face-to-face review into the trained image personality model, the scores of the five Big Five traits are obtained, so that the personality suggested by the user's facial expressions during the review is objectively quantified as scores.
S105: Input the extracted speech features of the user into the audio personality model to obtain a second personality score for the user's speech features.
The audio personality model is trained in advance to score the user's likely personality according to the user's speech features.
In one embodiment of the present invention, Fig. 4 shows the implementation flow of training the audio personality model in the method for determining a customer's investment type, detailed as follows:
C1: Obtain sample speech features carrying personality labels. That is, a personality label identifies the personality of the person to whom the sample speech feature corresponds. The personality label includes the scores of the five Big Five traits and is used to verify the model's output while the audio personality model is being trained, so that the model parameters of the audio personality model can be adjusted. Specifically, sample videos carrying personality labels are collected, i.e. videos in which the (Big Five) personality of the person is known, and the sample videos are labeled manually. The personality label of a sample speech feature separated from a sample video is the same as the personality label of that sample video, and the label includes at least one of the Big Five traits. Further, the personality label includes not only the Big Five traits but also their scores.
C2: Initialize the audio personality model, and train it with the personality-labeled sample speech features until the difference between the sample second personality score output by the audio personality model and the preset second personality score of the sample speech feature is no greater than the set error threshold, at which point training is complete. Specifically, the current parameters of the audio personality model are determined to be the optimal model parameters, completing the training of the audio personality model.
In the embodiments of the present invention, the extracted speech features of the user are input into the trained audio personality model to obtain a second personality score for the user's speech features. Specifically, the second personality score includes scores in five dimensions, each dimension corresponding to one Big Five trait. By training the audio personality model with personality-labeled sample speech features and then feeding the speech features from the user's face-to-face review into the trained audio personality model, the scores of the five Big Five traits are obtained, so that the personality suggested by the user's voice during the review is objectively quantified as scores.
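The stop criterion shared by steps B2 and C2 (train until the model's score differs from the labeled score by no more than an error threshold) can be sketched as follows. The toy linear model, learning rate and threshold are illustrative assumptions; the patent does not specify the model architecture or training algorithm:

```python
BIG_FIVE = ("N", "E", "O", "A", "C")

def converged(predicted, labeled, eps):
    """B2/C2 stop criterion: every trait score within eps of its label."""
    return all(abs(p - l) <= eps for p, l in zip(predicted, labeled))

def train(feature, label, lr=0.1, eps=0.05, max_iters=1000):
    """Fit one weight per trait so weight * feature approximates the label.

    Stands in for the personality model; training stops as soon as the
    output scores are within eps of the labeled scores."""
    weights = [0.0] * len(label)
    for _ in range(max_iters):
        predicted = [w * feature for w in weights]
        if converged(predicted, label, eps):
            break
        # gradient step on squared error, per trait
        weights = [w - lr * 2 * (w * feature - l) * feature
                   for w, l in zip(weights, label)]
    return weights

label = [0.6, 0.8, 0.5, 0.7, 0.4]          # labeled Big Five scores
weights = train(feature=1.0, label=label)
scores = dict(zip(BIG_FIVE, (round(w, 2) for w in weights)))
```

The point of the sketch is the convergence test, not the model: whatever architecture is used, the current parameters are frozen as the "optimal model parameters" once the threshold is met.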
S106: Fuse the first personality score with the second personality score, and determine the user's comprehensive personality score from the fusion result.
Here, the comprehensive personality score is a Big Five personality score. In the embodiments of the present invention, fusion means combining the first personality score with the second personality score to determine the user's personality. Specifically, when a characteristic image is extracted, the timestamp corresponding to that image is obtained; likewise, when a speech feature is extracted, the timestamp corresponding to that feature is obtained. The first and second personality scores with identical timestamps are fused to determine the user's comprehensive personality score. Here, the comprehensive personality score may be the average, trait by trait, of the five trait scores in the first personality score and the corresponding five trait scores in the second personality score, and the trait with the highest score after fusion is determined to be the user's dominant personality.
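The timestamp-matched fusion just described can be sketched as follows; the trait ordering and the example scores are illustrative assumptions:

```python
BIG_FIVE = ("N", "E", "O", "A", "C")

def fuse_scores(image_scores, audio_scores):
    """Average first and second personality scores that share a timestamp,
    then pick the highest-scoring trait as the user's dominant personality.

    Both inputs map timestamp -> five trait scores (N, E, O, A, C)."""
    fused = {}
    for ts in image_scores.keys() & audio_scores.keys():
        fused[ts] = [(a + b) / 2
                     for a, b in zip(image_scores[ts], audio_scores[ts])]
    # comprehensive score: per-trait mean over all fused timestamps
    comprehensive = [sum(vals[d] for vals in fused.values()) / len(fused)
                     for d in range(5)]
    dominant = BIG_FIVE[comprehensive.index(max(comprehensive))]
    return comprehensive, dominant

image = {1.0: [0.2, 0.8, 0.5, 0.6, 0.4], 2.0: [0.3, 0.7, 0.4, 0.5, 0.6]}
audio = {1.0: [0.4, 0.6, 0.5, 0.8, 0.2], 2.0: [0.1, 0.9, 0.6, 0.7, 0.4]}
scores, dominant = fuse_scores(image, audio)
```

Timestamps present in only one modality are dropped here; the patent does not say how unmatched scores are handled, so that choice is an assumption.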
As an embodiment of the present invention, Fig. 5 shows the specific implementation flow of step S106 of the method for determining a customer investment type provided by an embodiment of the present invention, detailed as follows:
D1: Obtain the first personality scores of multiple feature images. The video stream file separated from one audio-video file contains more than one feature image.
D2: Obtain the second personality scores of multiple speech-feature segments. The audio stream file separated from one audio-video file contains more than one segment of speech features.
D3: Establish a first personality score matrix based on the Big Five according to the first personality scores of the multiple feature images.
D4: Establish a second personality score matrix based on the Big Five according to the second personality scores of the multiple speech-feature segments.
D5: Fuse the first personality score matrix with the second personality score matrix, and determine the comprehensive personality score of the user according to the fusion result.
Specifically, in an embodiment of the present invention, the step of fusing the first personality score matrix with the second personality score matrix and determining the comprehensive personality score of the user according to the fusion result includes:
Define the first personality score matrix Si: Si = (si1, si2, si3, si4, si5)^T, where Si denotes the i-th first personality score and si1–si5 denote the video scores of the five traits in the i-th video datum.
Define the second personality score matrix Yj: Yj = (yj1, yj2, yj3, yj4, yj5)^T, where Yj denotes the j-th second personality score and yj1–yj5 denote the audio scores of the five traits in the j-th audio datum.
The comprehensive personality score matrix PersonalityMatrix of the user is determined according to the following formula (1):
where p is the number of first personality score matrices, q is the number of second personality score matrices, and i, j, p, q are positive integers. In an embodiment of the present invention, one feature image corresponds to one first personality score and one segment of speech features corresponds to one second personality score; in the comprehensive personality score matrix, the trait with the highest score is the user's dominant personality trait.
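Formula (1) is published only as an image and is not reproduced in this text. Consistent with the per-trait averaging of video and audio scores described above, a plausible reconstruction (an assumption, not the published formula) is:

```latex
\mathrm{PersonalityMatrix}
  = \frac{1}{2}\left(\frac{1}{p}\sum_{i=1}^{p} S_i
                   + \frac{1}{q}\sum_{j=1}^{q} Y_j\right)
```

That is, the p first personality score matrices are averaged, the q second personality score matrices are averaged, and the two modality means are averaged again, yielding a single 5x1 Big Five score vector whose largest entry gives the dominant trait.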
Illustratively, the Big Five traits are Si1, Si2, Si3, Si4 and Si5, where Si1 is neuroticism (N), Si2 is extraversion (E), Si3 is openness to experience (O), Si4 is agreeableness (A) and Si5 is conscientiousness (C); Sim is the score of each trait calculated from the i-th video/audio datum, where m is a positive integer and 1 ≤ m ≤ 5. For example, suppose there are ten first personality score matrices and second personality score matrices:
1: [1, 3, 4, 2, 1];
2: [3, 4, 5, 6, 2];
……
10: [3, 3, 1, 4, 5].
Given p first personality score matrices and q second personality score matrices, calculation according to the above formula for the comprehensive personality score matrix PersonalityMatrix yields a personality score matrix fusing the video data and audio data. If the calculation outputs PersonalityMatrix = [2, 4, 5, 1, 3], then, since this matrix corresponds to [N, E, O, A, C], the trait with the highest score is output, namely O, and the user's dominant personality trait is determined to be openness to experience. Further, the trait with the second-highest score, E, is output, and the user's secondary personality trait is determined to be extraversion.
In the embodiment of the present invention, fusing the first personality score with the second personality score further improves the accuracy of the evaluation of the user's personality.
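Under the assumption that formula (1) averages the p video score vectors and the q audio score vectors per trait and then averages the two modality means, step D5 can be sketched as follows (the data values are illustrative):

```python
# Sketch of step D5 under the assumed fusion rule
# PersonalityMatrix = ((1/p) * sum(S_i) + (1/q) * sum(Y_j)) / 2.

TRAITS = ["N", "E", "O", "A", "C"]

def personality_matrix(first, second):
    """first: p video score vectors; second: q audio score vectors."""
    p, q = len(first), len(second)
    mean_s = [sum(v[m] for v in first) / p for m in range(5)]
    mean_y = [sum(v[m] for v in second) / q for m in range(5)]
    return [(s + y) / 2 for s, y in zip(mean_s, mean_y)]

def ranked_traits(matrix):
    # Highest score -> dominant trait, second highest -> secondary trait.
    order = sorted(range(5), key=lambda m: matrix[m], reverse=True)
    return TRAITS[order[0]], TRAITS[order[1]]

first = [[1, 3, 4, 2, 1], [3, 4, 5, 6, 2]]   # p = 2 video score vectors
second = [[2, 4, 4, 2, 3], [3, 2, 5, 4, 2]]  # q = 2 audio score vectors
pm = personality_matrix(first, second)
dominant, secondary = ranked_traits(pm)
print(dominant, secondary)
```

The ranked output gives both the dominant and the secondary trait, matching the dominant/secondary determination described for the fused matrix.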
S107: Assess the investment type of the user according to the determined comprehensive personality score of the user.
Specifically, the personality of the user is determined according to the determined comprehensive personality score, the user's risk tolerance is assessed based on the determined personality, and the investment type of the user is thereby determined, so that investment products matching that risk tolerance can be recommended.
In an embodiment of the present invention, the investment types include conservative, steady, balanced, growth and aggressive. The results of the video-feature and audio-feature analysis yield the user's Big Five personality, which is classified into one of the conservative, steady, balanced, growth and aggressive types. Specifically, a Big Five-to-investment-type lookup table is established in advance; the user's Big Five personality, obtained by fusing the first personality score with the second personality score, is classified according to the lookup table to obtain an "accurate portrait" for investment and financing, i.e. the risk tolerance, and a corresponding investment product is recommended from the system according to that risk tolerance. Illustratively, neuroticism (N) corresponds to the growth type, extraversion (E) to the balanced type, openness to experience (O) to the aggressive type, agreeableness (A) to the conservative type, and conscientiousness (C) to the steady type. Further, to improve the accuracy of investment-product recommendation, the five traits are combined pairwise, and the lookup table further includes the investment types corresponding to the pairwise combinations of the five traits. Specifically, pairwise combination of the five traits produces ten combinations; an investment type is set in advance for each of the ten combinations and stored in the Big Five-to-investment-type lookup table.
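The lookup table above can be sketched as a simple mapping. The single-trait entries follow the illustrative correspondence in the text; the ten pairwise entries are placeholders, since the patent does not publish the actual pairwise mappings:

```python
# Sketch of the Big Five -> investment-type lookup table described above.
from itertools import combinations

SINGLE = {
    "N": "growth",       # neuroticism
    "E": "balanced",     # extraversion
    "O": "aggressive",   # openness to experience
    "A": "conservative", # agreeableness
    "C": "steady",       # conscientiousness
}

# Pairwise combination of the five traits yields ten table entries; here
# each pair maps to a placeholder label derived from its two members.
PAIRWISE = {
    frozenset(pair): f"{SINGLE[pair[0]]}/{SINGLE[pair[1]]}"
    for pair in combinations("NEOAC", 2)
}

def investment_type(traits):
    """traits: a single dominant trait, or a (dominant, secondary) pair."""
    if len(traits) == 1:
        return SINGLE[traits[0]]
    return PAIRWISE[frozenset(traits)]

print(len(PAIRWISE))          # ten pairwise combinations
print(investment_type("O"))
```

A dominant-plus-secondary lookup, e.g. `investment_type(("O", "E"))`, would then select one of the ten pairwise entries.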
In the embodiment of the present invention, an audio-video file of the user during the interview is obtained; video data and audio data are separated from the audio-video file; a feature image of the user is extracted from the video data and speech features of the user are extracted from the audio data; the extracted feature image of the user is input into an image personality model to obtain a first personality score for the feature image of the user; the extracted speech features of the user are input into an audio personality model to obtain a second personality score for the speech features of the user; the first personality score is fused with the second personality score, and the comprehensive personality score of the user is determined according to the fusion result; and the investment type of the user is objectively assessed according to the determined comprehensive personality score. This avoids the accuracy of the review assessment being affected by subjective judgement, improves the accuracy of the review assessment, saves the manpower of manual review, and improves the efficiency of the review assessment.
It should be understood that the magnitude of the sequence numbers of the steps in the above embodiments does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation process of the embodiments of the present invention.
Corresponding to the method for determining a customer investment type described in the foregoing embodiments, Fig. 6 shows a structural diagram of an apparatus for determining a customer investment type provided by an embodiment of the present application; for convenience of description, only the parts related to the embodiment of the present application are shown.
Referring to Fig. 6, the apparatus for determining a customer investment type includes: a file obtaining unit 61, a data separating unit 62, a feature obtaining unit 63, a first scoring unit 64, a second scoring unit 65, a comprehensive scoring unit 66 and an investment type determining unit 67, wherein:
the file obtaining unit 61 is configured to obtain an audio-video file of the user during the interview;
the data separating unit 62 is configured to separate video data and audio data from the audio-video file;
the feature obtaining unit 63 is configured to extract a feature image of the user from the video data and to extract speech features of the user from the audio data;
the first scoring unit 64 is configured to input the extracted feature image of the user into an image personality model to obtain a first personality score for the feature image of the user;
the second scoring unit 65 is configured to input the extracted speech features of the user into an audio personality model to obtain a second personality score for the speech features of the user;
the comprehensive scoring unit 66 is configured to fuse the first personality score with the second personality score and determine the comprehensive personality score of the user according to the fusion result;
the investment type determining unit 67 is configured to assess the investment type of the user according to the determined comprehensive personality score of the user.
Optionally, the data separating unit 62 includes:
a demultiplexing module, configured to demultiplex the audio-video file to obtain a video stream file and an audio stream file;
a video data obtaining module, configured to decode the video stream file and to filter and denoise the decoded video stream file to obtain the video data;
an audio data obtaining module, configured to decode the audio stream file and to filter and denoise the decoded audio stream file to obtain the audio data.
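One way the demultiplex/decode/denoise pipeline might be realized is with ffmpeg; the commands below are a sketch under that assumption (the patent names no tool), using ffmpeg's `hqdn3d` video denoise filter and `afftdn` audio denoise filter:

```python
# Sketch of the data-separating pipeline as ffmpeg command lines.
# The tool choice and filter names are assumptions; the patent only
# specifies demultiplex -> decode -> filter/denoise for each stream.

def video_cmd(src, dst):
    # Drop the audio stream (-an), decode and denoise the video (hqdn3d).
    return ["ffmpeg", "-i", src, "-an", "-vf", "hqdn3d", dst]

def audio_cmd(src, dst):
    # Drop the video stream (-vn), decode and denoise the audio (afftdn).
    return ["ffmpeg", "-i", src, "-vn", "-af", "afftdn", dst]

# The commands would be executed e.g. via
#   subprocess.run(video_cmd("interview.mp4", "video.mp4"), check=True)
#   subprocess.run(audio_cmd("interview.mp4", "audio.wav"), check=True)
print(" ".join(video_cmd("interview.mp4", "video.mp4")))
```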
Optionally, the apparatus for determining a customer investment type further includes:
a sample image obtaining unit, configured to obtain sample feature images, the sample feature images carrying personality annotations;
a first training unit, configured to initialize the image personality model and train the image personality model on the sample feature images annotated with personalities until the difference between the sample first personality score output by the image personality model and the preset first personality score given by the personality annotation is no greater than a preset error threshold, whereupon training is complete.
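The threshold-based stopping rule for training can be sketched as below; the one-parameter linear "model" and gradient step are illustrative stand-ins for the unspecified personality-model architecture:

```python
# Sketch of the training loop: train on annotated samples until the
# model's output score differs from the annotated score by no more
# than a preset error threshold, then declare training complete.

def train(samples, threshold=0.05, lr=0.1, max_epochs=1000):
    """samples: list of (feature, annotated_score) pairs."""
    w = 0.0  # initialize the personality model
    for _ in range(max_epochs):
        max_err = 0.0
        for x, target in samples:
            pred = w * x
            err = pred - target
            w -= lr * err * x       # gradient step on squared error
            max_err = max(max_err, abs(err))
        if max_err <= threshold:    # difference within preset threshold
            return w                # training complete
    return w

w = train([(1.0, 2.0), (2.0, 4.0)])
print(round(w, 2))
```

The same loop structure applies to the audio personality model of the second training unit, with speech-feature samples in place of image samples.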
Optionally, the apparatus for determining a customer investment type further includes:
a sample speech obtaining unit, configured to obtain sample speech features, the sample speech features carrying personality annotations;
a second training unit, configured to initialize the audio personality model and train the audio personality model on the sample speech features annotated with personalities until the difference between the sample second personality score output by the audio personality model and the preset second personality score of the sample speech features is no greater than a set error threshold, whereupon training is complete.
Optionally, the comprehensive scoring unit 66 further includes:
a first personality scoring module, configured to obtain the first personality scores of multiple feature images;
a second personality scoring module, configured to obtain the second personality scores of multiple speech-feature segments;
a first score matrix obtaining module, configured to establish a first personality score matrix based on the Big Five according to the first personality scores of the multiple feature images;
a second score matrix obtaining module, configured to establish a second personality score matrix based on the Big Five according to the second personality scores of the multiple speech-feature segments;
a comprehensive score obtaining module, configured to fuse the first personality score matrix with the second personality score matrix and determine the comprehensive personality score of the user according to the fusion result.
Optionally, the comprehensive score obtaining module is specifically configured to:
define the first personality score matrix Si: Si = (si1, si2, si3, si4, si5)^T, where Si denotes the i-th first personality score and si1–si5 denote the video scores of the five traits in the i-th video datum;
define the second personality score matrix Yj: Yj = (yj1, yj2, yj3, yj4, yj5)^T, where Yj denotes the j-th second personality score and yj1–yj5 denote the audio scores of the five traits in the j-th audio datum;
determine the comprehensive personality score matrix PersonalityMatrix of the user according to the following formula:
where p is the number of first personality score matrices and q is the number of second personality score matrices.
In the embodiment of the present invention, an audio-video file of the user during the interview is obtained; video data and audio data are separated from the audio-video file; a feature image of the user is extracted from the video data and speech features of the user are extracted from the audio data; the extracted feature image of the user is input into an image personality model to obtain a first personality score for the feature image of the user; the extracted speech features of the user are input into an audio personality model to obtain a second personality score for the speech features of the user; the first personality score is fused with the second personality score, and the comprehensive personality score of the user is determined according to the fusion result; and the investment type of the user is objectively assessed according to the determined comprehensive personality score. This avoids the accuracy of the review assessment being affected by subjective judgement, improves the accuracy of the review assessment, saves the manpower of manual review, and improves the efficiency of the review assessment.
Fig. 7 is a schematic diagram of a server provided by an embodiment of the present invention. As shown in Fig. 7, the server 7 of this embodiment includes: a processor 70, a memory 71, and a computer program 72, such as a customer investment type determination program, stored in the memory 71 and executable on the processor 70. When executing the computer program 72, the processor 70 implements the steps in each of the above embodiments of the method for determining a customer investment type, for example steps 101 to 107 shown in Fig. 1; alternatively, when executing the computer program 72, the processor 70 implements the functions of the modules/units in each of the above apparatus embodiments, for example the functions of modules 61 to 67 shown in Fig. 6.
Illustratively, the computer program 72 may be divided into one or more modules/units, which are stored in the memory 71 and executed by the processor 70 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution process of the computer program 72 in the server 7.
The server 7 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The server may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art will understand that Fig. 7 is merely an example of the server 7 and does not constitute a limitation on the server 7; it may include more or fewer components than shown, combine certain components, or include different components. For example, the server may further include input/output devices, network access devices, a bus, and so on.
The processor 70 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may be an internal storage unit of the server 7, such as a hard disk or memory of the server 7. The memory 71 may also be an external storage device of the server 7, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the server 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the server 7. The memory 71 is used to store the computer program and the other programs and data required by the server, and may also be used to temporarily store data that has been output or will be output.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the above embodiment methods by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, etc. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be added to or removed from as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and should all be included within the protection scope of the present invention.
Claims (10)
1. A method for determining a customer investment type, characterized by comprising:
obtaining an audio-video file of a user during an interview;
separating video data and audio data from the audio-video file;
extracting a feature image of the user from the video data, and extracting speech features of the user from the audio data;
inputting the extracted feature image of the user into an image personality model to obtain a first personality score for the feature image of the user;
inputting the extracted speech features of the user into an audio personality model to obtain a second personality score for the speech features of the user;
fusing the first personality score with the second personality score, and determining a comprehensive personality score of the user according to the fusion result;
assessing the investment type of the user according to the determined comprehensive personality score of the user.
2. The method according to claim 1, characterized in that the step of separating video data and audio data from the audio-video file comprises:
demultiplexing the audio-video file to obtain a video stream file and an audio stream file;
decoding the video stream file, and filtering and denoising the decoded video stream file to obtain the video data;
decoding the audio stream file, and filtering and denoising the decoded audio stream file to obtain the audio data.
3. The method according to claim 1, characterized in that, before the step of inputting the extracted feature image of the user into the image personality model to obtain the first personality score for the feature image of the user, the method comprises:
obtaining sample feature images, the sample feature images carrying personality annotations;
initializing the image personality model, and training the image personality model on the sample feature images annotated with personalities until the difference between the sample first personality score output by the image personality model and the preset first personality score given by the personality annotation is no greater than a preset error threshold, whereupon training is complete.
4. The method according to claim 1, characterized in that, before the step of inputting the extracted speech features of the user into the audio personality model to obtain the second personality score for the speech features of the user, the method comprises:
obtaining sample speech features, the sample speech features carrying personality annotations;
initializing the audio personality model, and training the audio personality model on the sample speech features annotated with personalities until the difference between the sample second personality score output by the audio personality model and the preset second personality score of the sample speech features is no greater than a set error threshold, whereupon training is complete.
5. The method according to any one of claims 1 to 4, characterized in that the step of fusing the first personality score with the second personality score and determining the comprehensive personality score of the user according to the fusion result comprises:
obtaining the first personality scores of multiple feature images;
obtaining the second personality scores of multiple speech-feature segments;
establishing a first personality score matrix based on the Big Five according to the first personality scores of the multiple feature images;
establishing a second personality score matrix based on the Big Five according to the second personality scores of the multiple speech-feature segments;
fusing the first personality score matrix with the second personality score matrix, and determining the comprehensive personality score of the user according to the fusion result.
6. The method according to claim 5, characterized in that the step of fusing the first personality score matrix with the second personality score matrix and determining the comprehensive personality score of the user according to the fusion result comprises:
defining the first personality score matrix Si: Si = (si1, si2, si3, si4, si5)^T, where Si denotes the i-th first personality score and si1–si5 denote the video scores of the five traits in the i-th video datum;
defining the second personality score matrix Yj: Yj = (yj1, yj2, yj3, yj4, yj5)^T, where Yj denotes the j-th second personality score and yj1–yj5 denote the audio scores of the five traits in the j-th audio datum;
determining the comprehensive personality score matrix PersonalityMatrix of the user according to the following formula:
where p is the number of first personality score matrices and q is the number of second personality score matrices.
7. A computer-readable storage medium storing a computer program, characterized in that, when executed by a processor, the computer program implements the steps of the method for determining a customer investment type according to any one of claims 1 to 6.
8. A server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the following steps when executing the computer program:
obtaining an audio-video file of a user during an interview;
separating video data and audio data from the audio-video file;
extracting a feature image of the user from the video data, and extracting speech features of the user from the audio data;
inputting the extracted feature image of the user into an image personality model to obtain a first personality score for the feature image of the user;
inputting the extracted speech features of the user into an audio personality model to obtain a second personality score for the speech features of the user;
fusing the first personality score with the second personality score, and determining a comprehensive personality score of the user according to the fusion result;
assessing the investment type of the user according to the determined comprehensive personality score of the user.
9. The server according to claim 8, characterized in that the step of separating video data and audio data from the audio-video file comprises:
demultiplexing the audio-video file to obtain a video stream file and an audio stream file;
decoding the video stream file, and filtering and denoising the decoded video stream file to obtain the video data;
decoding the audio stream file, and filtering and denoising the decoded audio stream file to obtain the audio data.
10. The server according to any one of claims 8 to 9, characterized in that the step of fusing the first personality score with the second personality score and determining the comprehensive personality score of the user according to the fusion result comprises:
obtaining the first personality scores of multiple feature images;
obtaining the second personality scores of multiple speech-feature segments;
establishing a first personality score matrix based on the Big Five according to the first personality scores of the multiple feature images;
establishing a second personality score matrix based on the Big Five according to the second personality scores of the multiple speech-feature segments;
fusing the first personality score matrix with the second personality score matrix, and determining the comprehensive personality score of the user according to the fusion result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810495325.XA CN108776932A (en) | 2018-05-22 | 2018-05-22 | Determination method, storage medium and the server of customer investment type |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108776932A true CN108776932A (en) | 2018-11-09 |
Family
ID=64027445
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810495325.XA Pending CN108776932A (en) | 2018-05-22 | 2018-05-22 | Determination method, storage medium and the server of customer investment type |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108776932A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140172751A1 (en) * | 2012-12-15 | 2014-06-19 | Greenwood Research, Llc | Method, system and software for social-financial investment risk avoidance, opportunity identification, and data visualization |
CN107798607A (en) * | 2017-07-25 | 2018-03-13 | 上海壹账通金融科技有限公司 | Asset Allocation strategy acquisition methods, device, computer equipment and storage medium |
CN107977864A (en) * | 2017-12-07 | 2018-05-01 | 北京贝塔智投科技有限公司 | A kind of customer insight method and system suitable for financial scenario |
CN108038413A (en) * | 2017-11-02 | 2018-05-15 | 平安科技(深圳)有限公司 | Cheat probability analysis method, apparatus and storage medium |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109766419A (en) * | 2018-12-14 | 2019-05-17 | 深圳壹账通智能科技有限公司 | Products Show method, apparatus, equipment and storage medium based on speech analysis |
CN109729383A (en) * | 2019-01-04 | 2019-05-07 | 深圳壹账通智能科技有限公司 | Double record video quality detection methods, device, computer equipment and storage medium |
CN110008376A (en) * | 2019-03-22 | 2019-07-12 | 广州新视展投资咨询有限公司 | User's portrait vector generation method and device |
CN110147926A (en) * | 2019-04-12 | 2019-08-20 | 深圳壹账通智能科技有限公司 | Risk level calculation method, storage medium and terminal device for a service type |
CN110111810A (en) * | 2019-04-29 | 2019-08-09 | 华院数据技术(上海)有限公司 | Voice personality prediction method based on convolutional neural network |
CN110111810B (en) * | 2019-04-29 | 2020-12-18 | 华院数据技术(上海)有限公司 | Voice personality prediction method based on convolutional neural network |
CN110825824A (en) * | 2019-10-16 | 2020-02-21 | 天津大学 | User relation portrayal method based on semantic visual/non-visual user character expression |
CN117217807A (en) * | 2023-11-08 | 2023-12-12 | 四川智筹科技有限公司 | Bad asset valuation algorithm based on multi-mode high-dimensional characteristics |
CN117217807B (en) * | 2023-11-08 | 2024-01-26 | 四川智筹科技有限公司 | Bad asset estimation method based on multi-mode high-dimensional characteristics |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108776932A (en) | Determination method, storage medium and the server of customer investment type | |
CN110147726B (en) | Service quality inspection method and device, storage medium and electronic device | |
Stappen et al. | The MuSe 2021 multimodal sentiment analysis challenge: sentiment, emotion, physiological-emotion, and stress | |
Li et al. | An automated assessment framework for atypical prosody and stereotyped idiosyncratic phrases related to autism spectrum disorder | |
US20150142446A1 (en) | Credit Risk Decision Management System And Method Using Voice Analytics | |
CN108665159A (en) | Risk assessment method, device, terminal device and storage medium | |
CN110443692A (en) | Enterprise credit authorization method, apparatus, device and computer-readable storage medium | |
CN104538035B (en) | Speaker recognition method and system based on Fisher supervectors | |
CN113628627B (en) | Electric power industry customer service quality inspection system based on structured voice analysis | |
CN108615532B (en) | Classification method and device applied to sound scene | |
CN102623009A (en) | Automatic abnormal emotion detection and extraction method and system based on short-time analysis | |
CN108091326A (en) | Voiceprint recognition method and system based on linear regression | |
Koutras et al. | Predicting audio-visual salient events based on visual, audio and text modalities for movie summarization | |
CN110046345A (en) | Data extraction method and device | |
GB2521050A (en) | Credit risk decision management system and method using voice analytics | |
CN115713257A (en) | Anchor expressive force evaluation method and device based on multi-mode fusion and computing equipment | |
CN104167211B (en) | Multi-source scene sound abstracting method based on hierarchical event detection and context model | |
Bhandari et al. | Implementation of transformer-based deep learning architecture for the development of surface roughness classifier using sound and cutting force signals | |
Song et al. | A compact and discriminative feature based on auditory summary statistics for acoustic scene classification | |
Singh | A text independent speaker identification system using ANN, RNN, and CNN classification technique | |
CN110232927A (en) | Anti-spoofing method and apparatus for speaker verification | |
CN105427171A (en) | Data processing method of Internet lending platform rating | |
Pathonsuwan et al. | RS-MSConvNet: A novel end-to-end pathological voice detection model | |
CN115423600B (en) | Data screening method, device, medium and electronic equipment | |
Warule et al. | Hilbert-Huang Transform-Based Time-Frequency Analysis of Speech Signals for the Identification of Common Cold |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 1257319; Country of ref document: HK |
| SE01 | Entry into force of request for substantive examination | |
| AD01 | Patent right deemed abandoned | Effective date of abandoning: 20231208 |