CN110121026A - Intelligent shooting device and scene generation method based on biometric recognition - Google Patents

Intelligent shooting device and scene generation method based on biometric recognition

Info

Publication number
CN110121026A
CN110121026A (application CN201910333424.2A)
Authority
CN
China
Prior art keywords
target object
scene
information
biometric
biometric information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910333424.2A
Other languages
Chinese (zh)
Inventor
陶旖婷
周凡贻
彭植远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd
Priority to CN201910333424.2A, patent CN110121026A (en)
Publication of CN110121026A (en)
Priority to PCT/CN2019/105237, patent WO2020215590A1 (en)
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

The application provides an intelligent shooting device and a scene generation method based on biometric recognition. The intelligent shooting device obtains biometric information of a target object, identifies emotional-change information of the target object according to the biometric information, obtains a corresponding scene control strategy according to the emotional-change information, and generates a shooting scene that matches the emotional-change information according to the scene control strategy. The application can make full use of multiple kinds of human biometric information to perform a comprehensive emotion analysis of the target object, obtain accurate emotional-change information, and adjust the shooting scene so that it exactly matches that information, improving the user experience.

Description

Intelligent shooting device and scene generation method based on biometric recognition
Technical field
This application relates to the field of information processing, and in particular to a scene generation method based on biometric recognition and to an intelligent shooting device applying that scene generation method.
Background technique
With the rapid development of personal mobile terminal technology, the shooting functions of smart devices such as mobile phones and tablet computers have become increasingly sophisticated; selfie applications and live-streaming platforms have also pushed shooting techniques toward a personalized development trend.
In some shooting functions, the prior art merely superimposes specific content onto the image after recognizing a face; when a change of facial expression is detected, the specific content follows that change to reflect the user's current emotional state. Such shooting, however, cannot enhance the atmosphere of the background and the surrounding space according to a person's dynamic or latent emotional changes. It is confined to the specific content in the current picture, so the mood can be reflected only on that specific content, while the atmosphere of the background and the real space in the picture cannot be enhanced. In addition, the judgment of the person's current emotional state is obtained simply from the facial expression alone.
In view of these shortcomings of the prior art, the present inventors propose, through in-depth research, an intelligent shooting device and a scene generation method based on biometric recognition.
Summary of the invention
The purpose of the application is to provide an intelligent shooting device and a scene generation method based on biometric recognition that can make full use of multiple kinds of human biometric information to perform a comprehensive emotion analysis of a target object, obtain accurate emotional-change information, and adjust the shooting scene so that it exactly matches the emotional-change information, improving the user experience.
To solve the above technical problems, the application provides a scene generation method based on biometric recognition. In one embodiment, the scene generation method based on biometric recognition comprises the steps of:
an intelligent shooting device obtaining biometric information of a target object;
identifying emotional-change information of the target object according to the biometric information;
obtaining a corresponding scene control strategy according to the emotional-change information;
generating, according to the scene control strategy, a shooting scene that matches the emotional-change information.
In one embodiment, before the step of the intelligent shooting device obtaining the biometric information of the target object, the method further includes:
collecting multiple kinds of data about the facial expressions that different age segments and gender segments exhibit under different moods, and pre-establishing an emotion feature library of users with facial expression as one kind of biometric information, so as to store the biometric information of each age segment and gender segment under different moods.
In one embodiment, the step of the intelligent shooting device obtaining the biometric information of the target object comprises:
obtaining the facial biometric information of the target object through a camera;
retrieving from the emotion feature library the data, including age segment and gender segment, that corresponds to the acquired facial biometric information, so as to judge the age and gender of the target object and then match and determine the age-gender group to which the target object belongs.
In one embodiment, the step of the intelligent shooting device obtaining the biometric information of the target object specifically includes:
the intelligent shooting device obtaining the real-time heart rate of the target object;
processing the real-time heart rate for use as biometric information.
In one embodiment, the step of the intelligent shooting device obtaining the real-time heart rate of the target object specifically includes:
obtaining the real-time heart rate of the target object using bioelectric sensing technology, and making a preliminary judgment of the emotional-change information of the target object according to the real-time heart rate.
In one embodiment, the intelligent shooting device obtains the real-time heart rate of the target object through a built-in sensor and/or from an external sensor connected over a network.
In one embodiment, the step of the intelligent shooting device obtaining the biometric information of the target object further includes:
the intelligent shooting device obtaining the voice information of the target object;
using the real-time heart rate of the target object and the voice information as biometric information.
In one embodiment, the step of obtaining the voice information of the target object specifically includes:
obtaining the voice information through interaction between a neural-network-based voice assistant and the target object, so as to further judge the emotional-change information from the target object's voice, speech rate and/or intonation.
In one embodiment, the step of the intelligent shooting device obtaining the biometric information of the target object further includes:
the intelligent shooting device obtaining a facial feature image of the target object through a camera;
using the facial feature image, real-time heart rate and voice information of the target object as the biometric information.
In one embodiment, the step of identifying the emotional-change information of the target object according to the biometric information specifically includes:
obtaining the matching emotional-change information from a preset emotion feature library according to the biometric information formed by the facial feature image and its muscle changes, the real-time heart rate and the voice information.
In one embodiment, the step of generating, according to the scene control strategy, the shooting scene that matches the emotional-change information specifically includes:
adjusting the shooting scene in real time using AR show technology according to the age-gender group of the target object and the emotional-change information corresponding to the scene control strategy, so that the finally generated shooting scene matches the emotional-change information.
In one embodiment, the step of adjusting the shooting scene in real time using AR show technology specifically includes:
adjusting the animated avatar sticker in the shooting scene in real time using AR show technology, for example to a shooting scene with a sobbing animated avatar corresponding to the target object's sadness, or with a flame animated avatar corresponding to the target object's anger.
In one embodiment, the step of generating, according to the scene control strategy, the shooting scene that matches the emotional-change information further includes:
using augmented reality technology to enhance the background atmosphere of the current space according to the surroundings of the target object and the emotional-change information.
To solve the above technical problems, the application also provides an intelligent shooting device. In one embodiment, the intelligent shooting device includes a processor configured to execute a computer program so as to implement the scene generation method based on biometric recognition described above.
With the intelligent shooting device and its scene generation method based on biometric recognition provided by the application, the intelligent shooting device obtains the biometric information of a target object, identifies the emotional-change information of the target object according to the biometric information, obtains the corresponding scene control strategy according to the emotional-change information, and generates the shooting scene that matches the emotional-change information according to the scene control strategy. The application can make full use of multiple kinds of human biometric information to perform a comprehensive emotion analysis of the target object, obtain accurate emotional-change information, and adjust the shooting scene so that it exactly matches that information, improving the user experience.
The above description is only an overview of the technical solution of the application. In order to better understand the technical means of the application so that it can be implemented in accordance with the contents of the specification, and to make the above and other objects, features and advantages of the application clearer and more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the scene generation method based on biometric recognition of the application;
Fig. 2, Fig. 3 and Fig. 4 are effect diagrams of the scene generation method based on biometric recognition of the application;
Fig. 5 is a module diagram of an embodiment of the intelligent shooting device of the application.
Specific embodiment
To further explain the technical means adopted by the application to achieve its intended purpose and their effects, the application is described in detail below in conjunction with the accompanying drawings and preferred embodiments.
Through the description of the specific embodiments, the technical means adopted by the application to achieve the intended purpose and their effects can be understood more deeply and concretely. The accompanying drawings, however, are provided only for reference and description and are not intended to limit the application.
Referring to Fig. 1, which is a schematic flowchart of an embodiment of the scene generation method based on biometric recognition of the application, the scene generation method of this embodiment can be applied to intelligent shooting devices such as mobile phones, laptops, tablet computers or wearable devices.
It should be noted that, as shown in Fig. 1, the scene generation method based on biometric recognition described in this embodiment can include, but is not limited to, the following steps.
Step S101: the intelligent shooting device obtains the biometric information of a target object.
Step S102: the emotional-change information of the target object is identified according to the biometric information.
Step S103: a corresponding scene control strategy is obtained according to the emotional-change information.
Step S104: a shooting scene that matches the emotional-change information is generated according to the scene control strategy.
In this embodiment, before the step S101 of the intelligent shooting device obtaining the biometric information of the target object, the method can also include: collecting multiple kinds of data about the facial expressions that different age segments and gender segments exhibit under different moods, and pre-establishing an emotion feature library of users with facial expression as one kind of biometric information, so as to store the biometric information of each age segment and gender segment under different moods.
Specifically, for example, the library records and stores data on how the mouth, eyebrows, eyes and facial muscles of users in different groups change under various moods; under a happy state, for instance, the user's mouth is slightly parted and the corners of the mouth are raised.
Furthermore, this embodiment can establish the mapping relations between the biometric information of user expressions and the emotion feature library, completing the establishment of the user emotion feature library. A minimal data-structure sketch of such a library follows.
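As an illustration only, the following Python sketch shows one way such an emotion feature library could be organized. The class names, group labels and numeric values are assumptions made for the example; the patent does not specify a concrete schema.

```python
# Minimal sketch of the emotion feature library described above.
# All names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class GroupKey:
    age_segment: str   # e.g. "child", "youth", "middle-aged", "senior"
    gender: str        # e.g. "female", "male"
    emotion: str       # e.g. "happy", "sad", "angry"

@dataclass
class FacialFeatures:
    mouth_open: float        # normalized mouth opening, 0..1
    mouth_corner_up: float   # positive when the corners of the mouth are raised
    brow_raise: float        # positive when the eyebrows are lifted
    eye_openness: float      # normalized eye opening, 0..1

# The library maps (age segment, gender, mood) to reference facial features;
# "happy", for instance, is stored as a slightly parted mouth with raised corners.
emotion_library: dict[GroupKey, FacialFeatures] = {
    GroupKey("child", "male", "happy"): FacialFeatures(0.3, 0.8, 0.2, 0.7),
    GroupKey("child", "male", "sad"): FacialFeatures(0.1, -0.6, -0.3, 0.4),
    # ... entries for the remaining age/gender/mood combinations
}
```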
Then, the step S101 of the intelligent shooting device obtaining the biometric information of the target object in this embodiment may include: obtaining the facial biometric information of the target object through a camera; and retrieving from the emotion feature library the data, including age segment and gender segment, that corresponds to the acquired facial biometric information, so as to judge the age and gender of the target object and then match and determine the age-gender group to which the target object belongs.
The age-gender group to which the target object belongs in this embodiment can specifically be a group such as female/male children, female/male young people, female/male middle-aged people, or female/male elderly people.
It should be noted that the step S101 of the intelligent shooting device obtaining the biometric information of the target object in this embodiment can specifically include: the intelligent shooting device obtaining the real-time heart rate of the target object, and processing the real-time heart rate for use as biometric information.
Furthermore, the step of processing the real-time heart rate for use as biometric information in this embodiment can specifically include: obtaining the real-time heart rate of the target object using bioelectric sensing technology, and making a preliminary judgment of the emotional-change information of the target object according to the real-time heart rate.
It should be noted that the intelligent shooting device described in this embodiment obtains the real-time heart rate of the target object through a built-in sensor and/or from an external sensor connected over a network, where the network can be a 3G, 4G or 5G communication network, a Bluetooth network, or another Internet-of-Things network, without limitation.
For example, this embodiment can obtain the real-time heart rate over the network from a connected wearable device. The wearable device can detect the heart rate through the direct impedance of the human body, finally reading the heart rate data through a bio-electrical impedance sensor, and thus make a preliminary judgment on the current mood of the target object; a rapid heartbeat, for example, suggests tension, excitement or anger. A simple sketch of such a preliminary judgment follows.
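By way of illustration, the preliminary judgment could be as simple as the following sketch; the beats-per-minute thresholds are assumptions, not values taken from the patent.

```python
# Illustrative preliminary mood judgment from a real-time heart rate reading.
# Thresholds are assumed for the example.
def preliminary_emotion_from_heart_rate(bpm: float) -> str:
    """Map a real-time heart rate (beats per minute) to a coarse state."""
    if bpm >= 100:
        return "aroused"   # rapid heartbeat: tense, excited or angry
    if bpm <= 60:
        return "calm"
    return "neutral"
```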
It should be noted that the step S101 of the intelligent shooting device obtaining the biometric information of the target object in this embodiment can specifically also include: the intelligent shooting device obtaining the voice information of the target object, and using the real-time heart rate of the target object and the voice information as biometric information.
For example, when using the device, the target object can directly speak the phrase "I have been in a good mood lately"; this embodiment can then capture that utterance through the microphone as biometric information.
Correspondingly, the step S102 of identifying the emotional-change information of the target object according to the biometric information in this embodiment can specifically include: obtaining the voice information through interaction between a neural-network-based voice assistant and the target object, so as to further judge the emotional-change information from the target object's voice, speech rate and/or intonation, as sketched below.
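A minimal sketch of that judgment is given below; the crude energy and zero-crossing features stand in for the neural-network analysis the embodiment attributes to the voice assistant, and the thresholds are assumptions.

```python
import numpy as np

def speech_features(samples: np.ndarray) -> tuple[float, float]:
    """Crude prosodic features from mono audio normalized to [-1, 1]:
    mean energy (loudness) and zero-crossing rate (a rough pitch proxy)."""
    energy = float(np.mean(samples ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(samples))) > 0))
    return energy, zcr

def emotion_from_speech(samples: np.ndarray) -> str:
    """Stand-in for the voice assistant's neural-network mood analysis."""
    energy, zcr = speech_features(samples)
    if energy > 0.1 and zcr > 0.2:
        return "excited"   # loud, high-pitched speech
    if energy < 0.01:
        return "sad"       # quiet, flat speech
    return "neutral"
```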
It should be noted that the step S101 of the intelligent shooting device obtaining the biometric information of the target object in this embodiment can specifically include: the intelligent shooting device obtaining a facial feature image of the target object through a camera, and using the facial feature image, real-time heart rate and voice information of the target object as the biometric information.
Correspondingly, the step S102 of identifying the emotional-change information of the target object according to the biometric information in this embodiment can specifically include: obtaining the matching emotional-change information from the preset emotion feature library according to the biometric information formed by the facial feature image and its muscle changes, the real-time heart rate and the voice information. One way to realize this matching is sketched below.
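Reusing the GroupKey/FacialFeatures sketch above, the matching step could be a nearest-neighbour lookup like the following; the patent does not specify the matching algorithm, so this is an assumption.

```python
# Sketch: match observed biometric information against the emotion feature
# library for the target object's age-gender group (nearest neighbour).
def match_emotion(observed: FacialFeatures, age_segment: str, gender: str,
                  heart_rate_hint: str, voice_hint: str) -> str:
    candidates = {k.emotion: v for k, v in emotion_library.items()
                  if k.age_segment == age_segment and k.gender == gender}

    def dist(a: FacialFeatures, b: FacialFeatures) -> float:
        return ((a.mouth_open - b.mouth_open) ** 2
                + (a.mouth_corner_up - b.mouth_corner_up) ** 2
                + (a.brow_raise - b.brow_raise) ** 2
                + (a.eye_openness - b.eye_openness) ** 2)

    best = min(candidates, key=lambda e: dist(observed, candidates[e]))
    # The heart-rate and voice hints confirm or refine the facial match.
    if best == "happy" and heart_rate_hint == "aroused" and voice_hint == "excited":
        return "excited"
    return best
```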
It is worth noting that the step S104 of generating, according to the scene control strategy, the shooting scene that matches the emotional-change information in this embodiment can specifically include: adjusting the shooting scene in real time using AR show (audio-visual image processing software) technology according to the age-gender group of the target object and the emotional-change information corresponding to the scene control strategy, so that the finally generated shooting scene matches the emotional-change information.
Specifically, the step of adjusting the shooting scene in real time using AR show technology in this embodiment can include: adjusting the animated avatar sticker in the shooting scene in real time using AR show technology, for example to a shooting scene with a sobbing animated avatar corresponding to the target object's sadness, or with a flame animated avatar corresponding to the target object's anger. One way to express this rule is sketched below.
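A sketch of the sticker rule follows; the asset names are invented for the example.

```python
# Illustrative scene-control rule: pick the animated avatar sticker that
# reacts to the matched mood (asset names are assumptions).
STICKER_BY_EMOTION = {
    "sad": "sobbing_avatar.webp",    # sobbing animated avatar
    "angry": "flame_avatar.webp",    # flame animated avatar
    "happy": "smiling_avatar.webp",
}

def adjust_scene(emotion: str) -> str:
    """Return the sticker to composite into the shooting scene."""
    return STICKER_BY_EMOTION.get(emotion, "neutral_avatar.webp")
```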
It should be particularly noted that the step of generating, according to the scene control strategy, the shooting scene that matches the emotional-change information in this embodiment can also include: using augmented reality technology to enhance the background atmosphere of the current space according to the surroundings of the target object and the emotional-change information.
Specifically, this embodiment can also use augmented reality technology to enhance the background atmosphere of the current space according to the target object's surroundings and current mood state. If the current target object is a young girl whose mood is happy, the current background can be enhanced with a pink, cartoon-like and joyful atmosphere while light, cheerful nursery rhymes are played; if the current target object is an elderly man whose mood is sad, the background is enhanced with a sad, gloomy atmosphere and a melancholy melody, such as an erhu piece, is played. A sketch of such a rule follows.
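The following sketch expresses those two examples as a rule; the palette, overlay and music assets are illustrative assumptions.

```python
# Illustrative background-atmosphere rule keyed by group and mood.
def atmosphere_for(age_segment: str, gender: str, emotion: str) -> dict:
    if age_segment == "child" and gender == "female" and emotion == "happy":
        return {"palette": "pink", "overlay": "cartoon_balloons",
                "music": "cheerful_nursery_rhyme.mp3"}
    if age_segment == "senior" and gender == "male" and emotion == "sad":
        return {"palette": "grey", "overlay": "dim_vignette",
                "music": "erhu_melody.mp3"}
    return {"palette": "neutral", "overlay": None, "music": None}
```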
It should be noted that the pictures and videos finally shot by the scene generation method based on biometric recognition of this embodiment can be sent to third parties for communication and interaction through third-party apps, and can also be stored locally or in the cloud.
In one embodiment, taking a mobile phone as the intelligent shooting device, the method may include the following steps.
Step 1: start;
Step 2: the mobile phone establishes an emotion feature library in the background, classified by the user's age segment, gender segment, facial expression and so on, recording how the mouth, eyebrows, eyes and facial muscles of users in different groups change under various moods; under a happy state, for example, the user's mouth is slightly parted and the corners of the mouth are raised;
Step 3: establish the mapping relations between facial feature images and the emotion feature library;
Step 4: complete the establishment of the emotion feature library;
Step 5: open the mobile phone's camera and collect the facial feature image of the target object;
Step 6: identify the facial feature points of the current target object;
Step 7: consult the emotion feature library and judge data such as the gender and age of the target object in the current picture;
Step 8: according to the gender and age of the target object, match it to the corresponding group, such as female/male children, female/male young people, female/male middle-aged people, female/male elderly people, and so on;
Step 9: obtain the real-time heart rate from the wearable device and its sensor connected to the mobile phone over the network, and make a preliminary judgment on the mood of the current target object; a rapid heartbeat, for example, suggests tension, excitement or anger. The wearable device can detect the heart rate through the direct impedance of the human body with a bio-electrical impedance sensor;
Step 10: the intelligent voice assistant communicates with the target object; the voice assistant can use neural-network technology to further judge the current mood state from the voice, speech rate and intonation of the target object's dialogue;
Step 11: match against the background emotion feature library using the facial muscle changes of the target object, combined with the real-time heart rate data and the speech rate and intonation of the dialogue voice;
Step 12: combining the group and the current mood state of the target object, use AR show technology so that the animated avatar sticker reacts to the currently matched mood change; for example, if the target object's current mood is sadness, the animated avatar cries, and if the current mood is anger, the animated avatar shows flames;
Step 13: use augmented reality technology to enhance the background atmosphere of the current space according to the target object's surroundings and current mood state. (A combined sketch of steps 5-13 follows below.)
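As promised above, the following sketch wires steps 5-13 together using the earlier fragments. detect_facial_features, match_age_gender_group and render are hypothetical stubs for the camera, recognition and AR-compositing layers, not a real API.

```python
# End-to-end sketch of steps 5-13 (stubbed helpers are assumptions).
def detect_facial_features(frame) -> FacialFeatures:
    # Facial feature-point extraction (steps 5-6) would go here; stubbed.
    return FacialFeatures(0.3, 0.8, 0.2, 0.7)

def match_age_gender_group(observed: FacialFeatures) -> tuple[str, str]:
    # Age/gender matching against the library (steps 7-8); stubbed.
    return ("child", "male")

def render(frame, sticker: str, atmosphere: dict):
    # AR compositing of the sticker and atmosphere onto the frame; stubbed.
    return frame

def shoot_with_emotion_scene(frame, heart_rate_bpm: float, speech_samples):
    observed = detect_facial_features(frame)                        # steps 5-6
    age_segment, gender = match_age_gender_group(observed)          # steps 7-8
    hr_hint = preliminary_emotion_from_heart_rate(heart_rate_bpm)   # step 9
    voice_hint = emotion_from_speech(speech_samples)                # step 10
    emotion = match_emotion(observed, age_segment, gender,
                            hr_hint, voice_hint)                    # step 11
    sticker = adjust_scene(emotion)                                 # step 12
    atmosphere = atmosphere_for(age_segment, gender, emotion)       # step 13
    return render(frame, sticker, atmosphere)
```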
Furthermore, referring to Fig. 1 together with Figs. 2-4, the scene generation method based on biometric recognition of this embodiment may include the following operating steps.
Preparation step: pre-store the corresponding emotion feature library;
First step: open the camera, select the AR CAM mode, identify data such as the age and gender of the user in the current picture, and match the group corresponding to the person in the current picture; Fig. 2 takes a male child as an example;
Second step: make a preliminary judgment of the user's current emotion by monitoring the user's real-time heart rate;
Third step: the intelligent voice assistant communicates with the user, and the user's current mood is judged further from the voice, speech rate and intonation of the user's dialogue;
Fourth step: judge the mood state of the user in the current picture from the changes of the user's facial muscles, combined with the real-time heart rate and the speech rate and intonation of the dialogue;
Fifth step: using AR show technology, the animated avatar sticker reacts to the currently matched mood change; the effect is shown in Fig. 3;
Sixth step: using augmented reality technology, enhance the atmosphere of the current space according to the user's group and current mood. In Fig. 4, for example, the subject is a male child whose mood is happy, so the atmosphere of the current background is enhanced with cartoons and joy, and light, cheerful nursery rhymes are played.
The embodiments of the application can enhance the fun of shooting, make the camera more intelligent and better at understanding a person's mood, and improve the user experience.
Referring next to Fig. 5, the intelligent shooting device of this embodiment can be a mobile phone, laptop, tablet computer or wearable device.
The intelligent shooting device can specifically include a processor 51, and the processor 51 is configured to execute a computer program to implement the scene generation method based on biometric recognition described with reference to Fig. 1 and its embodiments.
Specifically, the processor 51 is configured to obtain the biometric information of a target object.
The processor 51 is configured to identify the emotional-change information of the target object according to the biometric information.
The processor 51 is configured to obtain a corresponding scene control strategy according to the emotional-change information.
The processor 51 is configured to generate, according to the scene control strategy, the shooting scene that matches the emotional-change information.
It should be noted that the processor 51 of this embodiment being configured to obtain the biometric information of the target object can specifically include: the processor 51 being configured to obtain the real-time heart rate of the target object and to process the real-time heart rate for use as biometric information.
Furthermore, the processor 51 of this embodiment being configured to process the real-time heart rate for use as biometric information can specifically include: the processor 51 being configured to obtain the real-time heart rate of the target object using bioelectric sensing technology.
It should be noted that the intelligent shooting device described in this embodiment obtains the real-time heart rate of the target object through a built-in sensor and/or from an external sensor connected over a network, where the network can be a 3G, 4G or 5G communication network, a Bluetooth network, or another Internet-of-Things network, without limitation.
It should be noted that the processor 51 of this embodiment being configured to obtain the biometric information of the target object can specifically also include: the processor 51 being configured to obtain the voice information of the target object and to use the real-time heart rate of the target object and the voice information as biometric information.
For example, when using the device, the target object can directly speak the phrase "I have been in a good mood lately"; this embodiment can then capture that utterance through the microphone as biometric information.
Correspondingly, the processor 51 of this embodiment being configured to identify the emotional-change information of the target object according to the biometric information can specifically include: the processor 51 being configured to obtain the voice information by interacting with the target object through a voice assistant.
It should be noted that the processor 51 of this embodiment being configured to obtain the biometric information of the target object can specifically include: the processor 51 being configured to obtain a facial feature image of the target object through a camera and to use the facial feature image, real-time heart rate and voice information of the target object as the biometric information.
Correspondingly, the processor 51 of this embodiment being configured to identify the emotional-change information of the target object according to the biometric information can specifically include: the processor 51 being configured to obtain the matching emotional-change information from the preset emotion feature library according to the biometric information formed by the facial feature image and its muscle changes, the real-time heart rate and the voice information.
It is worth noting that the processor 51 of this embodiment being configured to generate, according to the scene control strategy, the shooting scene that matches the emotional-change information can specifically include: the processor 51 being configured to adjust the shooting scene in real time using AR show technology according to the scene control strategy, so that the finally generated shooting scene matches the emotional-change information.
It is worth noting that this embodiment can also use augmented reality technology to enhance the background atmosphere of the current space according to the target object's surroundings and current mood state. If the current target object is a young girl whose mood is happy, the current background can be enhanced with a pink, cartoon-like and joyful atmosphere while light, cheerful nursery rhymes are played; if the current target object is an elderly man whose mood is sad, the background is enhanced with a sad, gloomy atmosphere and a melancholy melody, such as an erhu piece, is played.
It should be noted that the pictures and videos finally shot by the scene generation method based on biometric recognition of this embodiment can be sent to third parties for communication and interaction through third-party apps, and can also be stored locally or in the cloud.
In addition, the application also provides a storage medium storing a computer program which, when executed by a processor, implements the scene generation method based on biometric recognition described in the above embodiments.
The above are only preferred embodiments of the application and do not limit the application in any form. Although the application has been disclosed above through preferred embodiments, they are not intended to limit it. Any person skilled in the art who, without departing from the scope of the technical solution of the application, makes slight changes or modifications using the technical contents disclosed above produces equivalent embodiments of equivalent variation; any simple amendment, equivalent change or modification made to the above embodiments according to the technical essence of the application still falls within the scope of the technical solution of the application.

Claims (14)

1. A scene generation method based on biometric recognition, characterized in that the scene generation method based on biometric recognition comprises the steps of:
an intelligent shooting device obtaining biometric information of a target object;
identifying emotional-change information of the target object according to the biometric information;
obtaining a corresponding scene control strategy according to the emotional-change information;
generating, according to the scene control strategy, a shooting scene that matches the emotional-change information.
2. The scene generation method based on biometric recognition according to claim 1, characterized in that before the step of the intelligent shooting device obtaining the biometric information of the target object, the method further comprises:
collecting multiple kinds of data about the facial expressions that different age segments and gender segments exhibit under different moods, and pre-establishing an emotion feature library of users with facial expression as one kind of biometric information, so as to store the biometric information of each age segment and gender segment under different moods.
3. The scene generation method based on biometric recognition according to claim 2, characterized in that the step of the intelligent shooting device obtaining the biometric information of the target object comprises:
obtaining facial biometric information of the target object through a camera;
retrieving from the emotion feature library the data, including age segment and gender segment, that corresponds to the acquired facial biometric information, so as to judge the age and gender of the target object and then match and determine the age-gender group to which the target object belongs.
4. The scene generation method based on biometric recognition according to any one of claims 1-3, characterized in that the step of the intelligent shooting device obtaining the biometric information of the target object specifically comprises:
the intelligent shooting device obtaining a real-time heart rate of the target object;
processing the real-time heart rate for use as biometric information.
5. The scene generation method based on biometric recognition according to claim 4, characterized in that the step of the intelligent shooting device obtaining the real-time heart rate of the target object specifically comprises:
obtaining the real-time heart rate of the target object using bioelectric sensing technology, and making a preliminary judgment of the emotional-change information of the target object according to the real-time heart rate.
6. The scene generation method based on biometric recognition according to claim 5, characterized in that the intelligent shooting device obtains the real-time heart rate of the target object through a built-in sensor and/or from an external sensor connected over a network.
7. The scene generation method based on biometric recognition according to any one of claims 1-3, characterized in that the step of the intelligent shooting device obtaining the biometric information of the target object further comprises:
the intelligent shooting device obtaining voice information of the target object;
using the real-time heart rate of the target object and the voice information as biometric information.
8. The scene generation method based on biometric recognition according to claim 7, characterized in that the step of obtaining the voice information of the target object specifically comprises:
obtaining the voice information through interaction between a neural-network-based voice assistant and the target object, so as to further judge the emotional-change information from the target object's voice, speech rate and/or intonation.
9. The scene generation method based on biometric recognition according to any one of claims 1-3, characterized in that the step of the intelligent shooting device obtaining the biometric information of the target object further comprises:
the intelligent shooting device obtaining a facial feature image of the target object through a camera;
using the facial feature image, real-time heart rate and voice information of the target object as the biometric information.
10. The scene generation method based on biometric recognition according to claim 9, characterized in that the step of identifying the emotional-change information of the target object according to the biometric information specifically comprises:
obtaining the matching emotional-change information from a preset emotion feature library according to the biometric information formed by the facial feature image and its muscle changes, the real-time heart rate and the voice information.
11. The scene generation method based on biometric recognition according to any one of claims 1-3, characterized in that the step of generating, according to the scene control strategy, the shooting scene that matches the emotional-change information specifically comprises:
adjusting the shooting scene in real time using AR show technology according to the age-gender group of the target object and the emotional-change information corresponding to the scene control strategy, so that the finally generated shooting scene matches the emotional-change information.
12. The scene generation method based on biometric recognition according to claim 11, characterized in that the step of adjusting the shooting scene in real time using AR show technology specifically comprises:
adjusting the animated avatar sticker in the shooting scene in real time using AR show technology, to a shooting scene with a sobbing animated avatar corresponding to the target object's sadness or with a flame animated avatar corresponding to the target object's anger.
13. The scene generation method based on biometric recognition according to any one of claims 1-3, characterized in that the step of generating, according to the scene control strategy, the shooting scene that matches the emotional-change information further comprises:
using augmented reality technology to enhance the background atmosphere of the current space according to the surroundings of the target object and the emotional-change information.
14. An intelligent shooting device, characterized in that the intelligent shooting device comprises a processor configured to execute a computer program so as to implement the scene generation method based on biometric recognition according to any one of claims 1-13.
CN201910333424.2A 2019-04-24 2019-04-24 Intelligent shooting device and scene generation method based on biometric recognition Pending CN110121026A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910333424.2A CN110121026A (en) 2019-04-24 2019-04-24 Intelligent shooting device and scene generation method based on biometric recognition
PCT/CN2019/105237 WO2020215590A1 (en) 2019-04-24 2019-09-10 Intelligent shooting device and biometric recognition-based scene generation method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910333424.2A CN110121026A (en) 2019-04-24 2019-04-24 Intelligent shooting device and scene generation method based on biometric recognition

Publications (1)

Publication Number Publication Date
CN110121026A (en) 2019-08-13

Family

ID=67521351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910333424.2A Pending CN110121026A (en) Intelligent shooting device and scene generation method based on biometric recognition

Country Status (2)

Country Link
CN (1) CN110121026A (en)
WO (1) WO2020215590A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114666493B (en) * 2021-12-22 2024-01-26 杭州易现先进科技有限公司 AR sightseeing service system and terminal


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010181461A (en) * 2009-02-03 2010-08-19 Olympus Corp Digital photograph frame, information processing system, program, and information storage medium
CN105615902A (en) * 2014-11-06 2016-06-01 北京三星通信技术研究有限公司 Emotion monitoring method and device
CN105249975A (en) * 2015-11-10 2016-01-20 广景视睿科技(深圳)有限公司 Method and system for conditioning mood state
US20170351330A1 (en) * 2016-06-06 2017-12-07 John C. Gordon Communicating Information Via A Computer-Implemented Agent
CN110121026A (en) * 2019-04-24 2019-08-13 Shenzhen Transsion Holdings Co Ltd Intelligent shooting device and scene generation method based on biometric recognition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091153A (en) * 2014-07-03 2014-10-08 苏州工业职业技术学院 Emotion judgment method applied to chatting robot
CN106341608A (en) * 2016-10-28 2017-01-18 维沃移动通信有限公司 Emotion based shooting method and mobile terminal
CN108307037A (en) * 2017-12-15 2018-07-20 努比亚技术有限公司 Terminal control method, terminal and computer readable storage medium
CN108764010A (en) * 2018-03-23 2018-11-06 姜涵予 Emotional state determines method and device
CN109240488A (en) * 2018-07-27 2019-01-18 重庆柚瓣家科技有限公司 A kind of implementation method of AI scene engine of positioning
CN109240786A (en) * 2018-09-04 2019-01-18 广东小天才科技有限公司 A kind of subject replacement method and electronic equipment

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020215590A1 (en) * 2019-04-24 2020-10-29 深圳传音控股股份有限公司 Intelligent shooting device and biometric recognition-based scene generation method thereof
CN112561113A (en) * 2019-09-25 2021-03-26 华为技术有限公司 Dangerous scene early warning method and terminal equipment
CN111225151A (en) * 2020-01-20 2020-06-02 深圳传音控股股份有限公司 Intelligent terminal, shooting control method and computer-readable storage medium
CN111225151B (en) * 2020-01-20 2024-04-30 深圳传音控股股份有限公司 Intelligent terminal, shooting control method and computer readable storage medium
CN111214691A (en) * 2020-03-09 2020-06-02 中国美术学院 Intelligent aromatherapy machine
CN113780546A (en) * 2020-05-21 2021-12-10 华为技术有限公司 Method for evaluating female emotion and related device and equipment
CN112843731A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Shooting method, device, equipment and storage medium
CN112843709A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Shooting method, device, equipment and storage medium
CN112843709B (en) * 2020-12-31 2023-05-26 上海米哈游天命科技有限公司 Shooting method, shooting device, shooting equipment and storage medium
CN112947764A (en) * 2021-04-06 2021-06-11 首都医科大学附属北京同仁医院 Scene matching method and device for relieving emotion
CN112947764B (en) * 2021-04-06 2022-04-01 首都医科大学附属北京同仁医院 Scene matching method and device for relieving emotion

Also Published As

Publication number Publication date
WO2020215590A1 (en) 2020-10-29

Similar Documents

Publication Publication Date Title
CN110121026A (en) Intelligent shooting device and scene generation method based on biometric recognition
US11783524B2 (en) Producing realistic talking face with expression using images text and voice
JP2024028390A (en) An electronic device that generates an image including a 3D avatar that reflects facial movements using a 3D avatar that corresponds to the face.
CN110390705B (en) Method and device for generating virtual image
CN105378742B (en) The biometric identity managed
US9031293B2 (en) Multi-modal sensor based emotion recognition and emotional interface
WO2016177290A1 (en) Method and system for generating and using expression for virtual image created through free combination
US11017575B2 (en) Method and system for generating data to provide an animated visual representation
TWI255141B (en) Method and system for real-time interactive video
CN106648071A (en) Social implementation system for virtual reality
CN110418095B (en) Virtual scene processing method and device, electronic equipment and storage medium
KR20060064553A (en) Method, apparatus, and computer program for processing image
CN102470273A (en) Visual representation expression based on player expression
KR20130032620A (en) Method and apparatus for providing moving picture using 3d user avatar
CN113760101B (en) Virtual character control method and device, computer equipment and storage medium
KR101913811B1 (en) A method for analysing face information, and an appratus for analysing face information to present faces, identify mental status or compensate it
US11127181B2 (en) Avatar facial expression generating system and method of avatar facial expression generation
WO2010133661A1 (en) Identifying facial expressions in acquired digital images
CN108052250A (en) Virtual idol deductive data processing method and system based on multi-modal interaction
CN106649712B (en) Method and device for inputting expression information
JP2005346471A (en) Information processing method and apparatus
JP2023103335A (en) Computer program, server device, terminal device, and display method
EP3809236A1 (en) Avatar facial expression generating system and method of avatar facial expression generation
KR20180118669A (en) Intelligent chat based on digital communication network
JP5349238B2 (en) Facial expression recognition device, interpersonal emotion estimation device, facial expression recognition method, interpersonal emotion estimation method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190813