CN105183147A - Head-mounted smart device and method thereof for modeling three-dimensional virtual limb - Google Patents


Info

Publication number
CN105183147A
CN105183147A (application CN201510468561.9A)
Authority
CN
China
Prior art keywords
user
head-mounted smart device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510468561.9A
Other languages
Chinese (zh)
Inventor
刘俊峰
姬正桥
刘兆龙
黄思宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vision (beijing) Technology Co Ltd
Original Assignee
Vision (beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vision (beijing) Technology Co Ltd filed Critical Vision (beijing) Technology Co Ltd
Priority: CN201510468561.9A
Publication: CN105183147A
Legal status: Pending


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a head-mounted smart device and a method thereof for modeling a three-dimensional virtual limb. The head-mounted smart device comprises: a body; a user motion capturer disposed at a lower part of the body, for capturing limb movements of a user to generate a set of three-dimensional images of the limb movements that continuously change over time; a user motion modeler connected with the user motion capturer, for modeling the three-dimensional virtual limb synchronously moving with the user limb according to the three-dimensional images of the limb movements; and a screen connected to the user motion modeler, for instantly displaying the three-dimensional virtual limb to the user. According to the head-mounted smart device and the method thereof for modeling a three-dimensional virtual limb provided by the present invention, the user can use the head-mounted smart device to interact with other users in a virtual scene, which is more intelligent and more convenient.

Description

Head-mounted smart device and method thereof for modeling a three-dimensional virtual limb
Technical field
The present invention relates to the field of intelligent wearables, and in particular to a head-mounted smart device and a method of modeling a three-dimensional virtual limb with such a device.
Background technology
In recent years, a large number of wearable smart devices have emerged, such as Lenovo glasses and Google Glass, which allow users to access the Internet anytime and anywhere without needing one or both hands to operate them. However, existing head-mounted helmet glasses merely provide a display function: they can neither monitor the user's state of use nor perceive the user's actions. As a result, existing wearable devices cannot provide more intelligent services, and the user experience is poor.
Therefore, there is an urgent need in the art for a head-mounted smart device that can interact with other users and provide more attentive services according to the user's needs.
Summary of the invention
The present invention provides a head-mounted smart device and a method of modeling a three-dimensional virtual limb with it. A user motion capturer captures the user's limb movements and produces a set of three-dimensional limb-motion images that change continuously over time, and a user motion modeler models from these images a three-dimensional virtual limb that moves in synchrony with the user's limb, presented in the virtual scene the user is watching for the user's own viewing; alternatively, three-dimensional limb data received from other users are modeled into three-dimensional characters with whom the user can interact in the virtual scene shown on the display screen. A user expression collector gathers the user's facial expression information to infer the user's mood, so that services can be pushed more attentively, adding interaction modes and improving the user experience. A built-in camera obtains the user's eye information, allowing automatic adjustment of the image position for the clearest picture; the user's fatigue index can also be detected and corresponding operations carried out according to the result, and the user's identity can be recognized so that different services can be provided to different users in a more targeted way. This solves the problems that existing wearable devices interact little with users, are not highly intelligent, and give a poor user experience.
A head-mounted smart device of the present invention comprises: a body; a user motion capturer, disposed at a lower part of the body, for capturing the user's limb movements to produce a set of three-dimensional limb-motion images that change continuously over time; a user motion modeler, connected with the user motion capturer, for modeling from the three-dimensional limb-motion images a three-dimensional virtual limb that moves in synchrony with the user's limb; and a display screen, connected with the user motion modeler, for instantly displaying the three-dimensional virtual limb to the user.
A method of the present invention for modeling a three-dimensional virtual limb with a head-mounted smart device comprises: capturing the user's limb movements to produce a set of continuous three-dimensional limb-motion images; modeling from the three-dimensional limb-motion images a three-dimensional virtual limb that moves in synchrony with the user's limb; and displaying the three-dimensional virtual limb to the user in real time through a display screen.
The present invention provides a head-mounted smart device and a method of modeling a three-dimensional virtual limb with it. The user motion capturer captures the user's limb movements and produces a set of three-dimensional limb-motion images that change continuously over time, and the user motion modeler models from these images a three-dimensional virtual limb that moves in synchrony with the user's limb, presented in the virtual scene the user is watching for the user's own viewing; alternatively, three-dimensional limb data received from other users are modeled into three-dimensional characters with whom the user can interact in the virtual scene shown on the display screen. A user expression collector gathers the user's facial expression information to infer the user's mood, so that services can be pushed more attentively, adding interaction modes and improving the user experience. A built-in camera obtains the user's eye information, allowing automatic adjustment of the image position for the clearest picture; the user's fatigue index can also be detected and corresponding operations carried out according to the result, and the user's identity can be recognized so that different services can be provided to different users in a more targeted way. The camera can also detect the surrounding environment and road conditions and share traffic information in real time; sensors such as cameras can be provided in the head-mounted smart device for video recording, photography and AR applications, and for recognizing the user's hands, limbs or face (including facial features) for system interaction.
It should be understood that the above general description and the following embodiments are merely exemplary and illustrative, and cannot limit the scope claimed by the present invention.
Brief description of the drawings
The appended drawings below form part of the specification of the present invention; they depict example embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
Fig. 1 is a structural block diagram of embodiment one of a head-mounted smart device provided by an embodiment of the present invention;
Fig. 2 is an application schematic diagram of the head-mounted smart device shown in Fig. 1;
Fig. 3 is a structural block diagram of embodiment two of a head-mounted smart device provided by an embodiment of the present invention;
Fig. 4 is a structural block diagram of embodiment three of a head-mounted smart device provided by an embodiment of the present invention;
Fig. 5 is a structural block diagram of embodiment four of a head-mounted smart device provided by an embodiment of the present invention;
Fig. 6 is an application schematic diagram of the head-mounted smart device shown in Fig. 5;
Fig. 7 is a structural block diagram of embodiment five of a head-mounted smart device provided by an embodiment of the present invention;
Fig. 8 is an application schematic diagram of embodiment one of the head-mounted smart device shown in Fig. 7;
Fig. 9 is an application schematic diagram of embodiment two of the head-mounted smart device shown in Fig. 7;
Fig. 10 is a flow chart of embodiment one of a method for modeling a three-dimensional virtual limb with a head-mounted smart device provided by an embodiment of the present invention;
Fig. 11 is a flow chart of embodiment two of a method for modeling a three-dimensional virtual limb with a head-mounted smart device provided by an embodiment of the present invention.
Symbol description:
10 body; 20 user motion capturer
30 user motion modeler; 40 display screen
50 receiver; 60 viewing-angle converter
70 transmitter; 80 user expression collector
90 data processor; 100 on-board processing center
110 built-in camera module
111 infrared camera module; 112 infrared light source
S101–S105 steps
Embodiment
To make the objects, technical solutions and advantages of the embodiments of the present invention clearly understood, the spirit of the disclosed content is explained in detail below with the accompanying drawings and description. After understanding the embodiments of the present invention, any person skilled in the art can change and modify the technology taught herein without departing from the spirit and scope of the content of the present invention.
The schematic descriptions herein serve to explain the present invention and are not to be construed as limiting it. In addition, the same or similar reference numerals are used in the drawings and embodiments to denote the same or similar parts.
The terms "first", "second", etc. used herein do not especially denote order or sequence, nor are they used to limit the present invention; they only serve to distinguish elements or operations described with the same technical term.
Directional terms used herein, such as up, down, left, right, front or rear, refer only to the directions in the accompanying drawings; they are used for illustration and not to limit this creation.
The terms "comprise", "include", "have", "contain", etc. used herein are open terms, meaning including but not limited to.
The term "and/or" used herein includes any or all combinations of the things described.
Terms such as "roughly" and "about" used herein modify any quantity that may vary slightly or contain error, where such slight variation or error does not change its essence. In general, the range of slight variation or error modified by such terms may be 20% in some embodiments, 10% in some embodiments, and 5% or another value in other embodiments. Those skilled in the art will understand that the aforementioned values can be adjusted according to actual demand and are not limiting.
Some of the words used to describe the application are discussed below or elsewhere in this specification to provide those skilled in the art with additional guidance in the description of the application.
Fig. 1 is a structural block diagram of embodiment one of a head-mounted smart device provided by an embodiment of the present invention. As shown in Fig. 1, the head-mounted smart device comprises a body 10, a user motion capturer 20, a user motion modeler 30 and a display screen 40. The user motion capturer 20 is disposed at the lower part of the body 10 and captures the user's limb movements to produce a set of three-dimensional limb-motion images that change continuously over time. The user motion modeler 30 is connected with the user motion capturer 20 and models, from the three-dimensional limb-motion images, a three-dimensional virtual limb that moves in synchrony with the user's limb. The display screen 40 is connected with the user motion modeler 30 and instantly displays the three-dimensional virtual limb to the user. In an embodiment of the present invention, the user motion capturer 20 is a three-dimensional camera module, which may include a time-of-flight (TOF) depth camera, a structured-light camera or a stereo dual camera.
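The patent does not specify implementations for these components. As a minimal illustrative sketch (all class and function names are hypothetical, and the "modeling" step is a pass-through placeholder), the capture → model → display pipeline might be wired together like this:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class DepthFrame:
    """One time-stamped 3-D image from the motion capturer (e.g. a TOF camera)."""
    timestamp: float
    joints: List[Point3D]  # detected joint positions in camera space

class MotionModeler:
    """Builds a virtual limb pose that follows the captured joints."""
    def model(self, frame: DepthFrame) -> List[Point3D]:
        # A real modeler would fit a skeleton; here we simply pass joints through.
        return list(frame.joints)

class Display:
    """Stands in for the display screen: remembers the last pose it was shown."""
    def __init__(self) -> None:
        self.last_pose: List[Point3D] = []
    def show(self, pose: List[Point3D]) -> None:
        self.last_pose = pose

# Wire the three components together in the order the patent describes:
# capturer output -> modeler -> display, frame by frame.
capturer_output = [DepthFrame(0.0, [(0.1, 0.2, 0.5)]),
                   DepthFrame(0.033, [(0.1, 0.25, 0.5)])]
modeler, display = MotionModeler(), Display()
for frame in capturer_output:
    display.show(modeler.model(frame))
```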
Fig. 2 is an application schematic diagram of the head-mounted smart device shown in Fig. 1. Referring to Figs. 1 and 2, the body 10 may be a housing, a pair of glasses or a helmet; the present invention is not limited in this respect. The user motion capturer captures the user's limb movements and produces a set of three-dimensional limb-motion images that change continuously over time; the user motion modeler models from these images a three-dimensional virtual limb that moves in synchrony with the user's limb (as shown in Fig. 2), which is presented in the virtual scene the user is watching for the user's own viewing. For example, with the head-mounted smart device of the present invention, a user can carry out various operations in a virtual wilderness: whatever action the user performs, the three-dimensional virtual limb performs the corresponding operation. Like a character in the game CS, the user can see himself reaching out, kicking or squatting down; if he looks up, his limbs leave the capture range of the user motion capturer 20, so he no longer sees his own limbs but instead sees the space above. This gives the user an immersive visual and tactile experience and improves the user experience.
Fig. 3 is a structural block diagram of embodiment two of a head-mounted smart device provided by an embodiment of the present invention. As shown in Fig. 3, the head-mounted smart device further comprises a receiver 50 connected with the display screen 40. The receiver 50 receives in real time the three-dimensional limb data produced by the head-mounted smart devices of other users, which are modeled into three-dimensional characters so that the user can interact with the other users in the scene shown on the display screen 40.
Referring to Fig. 3, the user can interact with other users through the head-mounted smart device. For example, the user can spar on a virtual sea surface with the three-dimensional characters of other users (in reality the user may be alone in his own room, performing fighting actions against other users thousands of miles away). The movements of the three-dimensional virtual limb are synchronized with the user's own movements, as if the user had suddenly become a martial-arts master in a swordsman TV drama. This gives the user an immersive visual and tactile experience, and the user experience is high.
Fig. 4 is a structural block diagram of embodiment three of a head-mounted smart device provided by an embodiment of the present invention. As shown in Fig. 4, the head-mounted smart device further comprises a viewing-angle converter 60 and a transmitter 70. The viewing-angle converter 60 is connected with the user motion modeler 30 and converts the three-dimensional virtual limb into a corrected three-dimensional virtual limb as watched from another user's viewing angle. The transmitter 70 is connected with the viewing-angle converter 60 and sends the corrected three-dimensional limb data to the head-mounted smart devices of other users, so that those devices can model the corrected data into a three-dimensional character.
Referring to Fig. 4, the angle at which the user's own head-mounted smart device observes his limb movements differs from the angle at which other users' head-mounted smart devices observe them, just as the way a person observes his own body differs from the way others observe it: one can only observe part of one's own body, while others can observe the whole of it from different angles. Therefore, before the user's three-dimensional limb data are sent to other users, the data need to be corrected, so that the three-dimensional limb the other users receive is a three-dimensional character observed from their own angle, making interaction between users more vivid. The above is just one specific embodiment of the present invention; in other embodiments, the correction of the three-dimensional limb may also be done by the receiving side, or a server may automatically correct the three-dimensional limb according to the position changes of both users, in which case the viewing-angle converter may be arranged in the server. The present invention is not limited in this respect.
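The patent gives no formula for this viewing-angle correction. As one hedged illustration (the yaw-only rigid transform and parameter names below are assumptions, not the patent's method), limb points could be re-expressed in another user's frame like this:

```python
import math
from typing import List, Tuple

Point3D = Tuple[float, float, float]

def convert_viewpoint(points: List[Point3D],
                      yaw_deg: float,
                      translation: Point3D) -> List[Point3D]:
    """Rotate limb points about the vertical (y) axis by yaw_deg, then
    translate, approximating how the limb looks from another user's angle."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    tx, ty, tz = translation
    out = []
    for x, y, z in points:
        xr = c * x + s * z    # yaw rotation about the y axis
        zr = -s * x + c * z
        out.append((xr + tx, y + ty, zr + tz))
    return out

# A point one metre to the sender's right, seen from a viewer rotated 90°:
corrected = convert_viewpoint([(1.0, 0.0, 0.0)], 90.0, (0.0, 0.0, 0.0))
```

A full implementation would use the receiver's actual pose (rotation and position relative to the sender), but the structure — rotate, then translate, per point — is the same.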
Fig. 5 is a structural block diagram of embodiment four of a head-mounted smart device provided by an embodiment of the present invention. As shown in Fig. 5, the head-mounted smart device further comprises a user expression collector 80, a data processor 90 and an on-board processing center 100. The user expression collector 80 is arranged inside or outside the body 10 and collects the user's facial expression information. The data processor 90 is connected with the user expression collector 80 and generates the user's mood information according to the facial expression information. The on-board processing center 100 is connected with the data processor 90, the transmitter 70 and the display screen 40; it adjusts the content presented on the display screen 40 according to the user's mood information and sends the user's mood information to other users through the transmitter 70.
Referring to Fig. 5, the body 10 may be a housing, a pair of glasses or a helmet (as shown in Fig. 7); the present invention is not limited in this respect. The user expression collector 80 collects the user's facial expression information. For convenience of explanation, suppose the body 10 is a pair of glasses and the user expression collector 80 is a group of facial pressure sensing units scattered in the soft layer (for example a foam layer) where the glasses contact the eye sockets. Each facial pressure sensing unit senses the pressure changes caused by facial muscle movement, producing a set of pressure-change data with one value per sensing unit. The data processor 90 compares the pressure-change data with the pressure data stored in an expression library to infer the user's expression; in other words, from the pressure-change data the data processor 90 can infer changes in the user's facial expression, and from these the user's mood and changes of mood. The on-board processing center 100 adjusts the content presented on the display screen 40 according to the user's mood information, for example partly or entirely changing the content the display screen plays, and sends the user's mood information to other users through the transmitter 70, letting them also know the user's present mood in real time. This realizes interaction between users and improves the user experience. In one specific embodiment of the present invention, the data processor 90 may first compare the values output by the pressure sensing units with one another and then compare the comparison result with the corresponding data in the expression library to infer the user's expression. In this way the head-mounted smart device can adapt to different users, avoiding measurement errors caused by different users producing different pressures.
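A minimal sketch of the expression-library comparison described above, assuming a nearest-pattern match over per-sensor pressure values (the library contents, sensor count and squared-difference metric are illustrative assumptions, not taken from the patent):

```python
from typing import Dict, List

def infer_expression(pressures: List[float],
                     library: Dict[str, List[float]]) -> str:
    """Return the expression whose stored pressure pattern is closest
    (by sum of squared differences) to the measured sensor readings."""
    def dist(a: List[float], b: List[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda name: dist(pressures, library[name]))

# Hypothetical calibration data: one value per facial pressure sensing unit.
library = {
    "neutral": [1.0, 1.0, 1.0, 1.0],
    "smile":   [1.4, 0.8, 1.4, 0.8],
    "frown":   [0.7, 1.3, 0.7, 1.3],
}
```

The per-user comparison variant the patent mentions could be approximated by normalising each reading against the user's own baseline before matching, so that absolute pressure differences between users do not bias the result.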
Fig. 6 is an application schematic diagram of the head-mounted smart device shown in Fig. 5. As shown in Fig. 6, the user expression collector 80 may be a camera arranged at the bottom of the body 10 to take pictures of the user's mouth, and the data processor 90 generates the user's mood information according to the mouth pictures.
In an embodiment of the present invention, the camera is an infrared camera or a special infrared camera, each provided with an infrared emission light source; a special infrared camera can photograph both visible light and infrared light. The camera is usually arranged below the body 10; when there are multiple cameras, they may be placed symmetrically or asymmetrically. With the camera, various interaction modes involving the lips, nostrils, tongue and teeth can be added. The camera's shooting angle can be any angle such as 60°, 90° or 120°, and its observable range can be any position of the face or only part of it. Accordingly, the camera may be placed visibly on the lower outside of the body 10 or hidden invisibly inside it. In addition, the camera may be movable or fixed: when movable, it can be adjusted to any angle, automatically or manually, and may then photograph part or all of the face; when fixed, it may form any angle such as 5° or 10° with the vertical plane, and its position and angle cannot be changed. In a specific embodiment of the present invention, the camera can also give a danger warning when there is a potential safety hazard, for example providing manhole-cover information or theft information; the present invention is not limited in this respect.
Fig. 7 is a structural block diagram of embodiment five of a head-mounted smart device provided by an embodiment of the present invention. As shown in Fig. 7, the head-mounted smart device further comprises a built-in camera module 110 connected with the data processor 90 for photographing the user's eye information. In an embodiment of the present invention, the built-in camera module 110 may comprise an infrared camera module 111 and an infrared light source 112 that provides infrared light for the shooting of the infrared camera module 111 (as shown in Figs. 8 and 9). Infrared light is invisible, so it does not affect the user's normal viewing of the display screen, improving the user experience.
Fig. 8 is an application schematic diagram of embodiment one of the head-mounted smart device shown in Fig. 7, and Fig. 9 is an application schematic diagram of embodiment two. The built-in camera module 110 can observe the human eye directly (as shown in Fig. 8) or through an infrared semi-transparent lens (as shown in Fig. 9). An infrared semi-transparent lens reflects infrared light while letting visible light pass through, so it does not affect the user's normal observation of the display screen while ensuring that the infrared camera module 111 has an excellent shooting angle. In a specific embodiment of the present invention, the eye information includes the interpupillary distance, and the data processor 90 adjusts the image position according to the interpupillary distance. The interpupillary distance can be adjusted mechanically or by algorithm: mechanical adjustment is done manually or automatically, with the user choosing the angle that gives the clearest and most comfortable picture; algorithmic adjustment observes the human eyes through the built-in camera module 110, which allows user behavior analysis and automatic detection of the interpupillary distance, and adjusts the image position accordingly.
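The patent leaves the algorithmic image-position adjustment unspecified. One illustrative sketch, assuming the goal is to shift each eye's image horizontally so its centre sits under the measured pupil (the parameter names and the symmetric-split assumption are hypothetical):

```python
from typing import Tuple

def image_offsets(measured_ipd_mm: float,
                  lens_spacing_mm: float,
                  px_per_mm: float) -> Tuple[float, float]:
    """Compute horizontal pixel shifts for the left and right eye images.

    If the user's interpupillary distance is wider than the display's
    default lens spacing, each image moves outward by half the difference;
    if narrower, each moves inward. Returns (left_shift, right_shift),
    where negative means 'toward the left of the screen'.
    """
    half_delta_mm = (measured_ipd_mm - lens_spacing_mm) / 2.0
    shift_px = half_delta_mm * px_per_mm
    return (-shift_px, +shift_px)

# A 64 mm IPD on a display built for 60 mm, at 10 px/mm:
offsets = image_offsets(64.0, 60.0, 10.0)
```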
In a specific embodiment of the present invention, the eye information includes the user's blink frequency, and the data processor 90 also judges the user's fatigue index according to the blink frequency. That is, the built-in camera module 110 detects the number of blinks per unit time to judge the fatigue index of the eyes, and a danger warning can be given when there is a potential safety hazard, giving the user a better experience.
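As a hedged sketch of the blink-based fatigue judgment (the 60-second window and the "normal" blink rate of 17 per minute below are assumed values, not taken from the patent):

```python
from typing import List

def fatigue_index(blink_timestamps: List[float],
                  window_s: float = 60.0,
                  normal_rate: float = 17.0) -> float:
    """Blinks per minute over the most recent window, relative to a typical
    resting rate; values well above 1.0 would suggest eye fatigue."""
    if not blink_timestamps:
        return 0.0
    t_end = blink_timestamps[-1]
    recent = [t for t in blink_timestamps if t >= t_end - window_s]
    rate_per_min = len(recent) * 60.0 / window_s
    return rate_per_min / normal_rate

# 34 blinks within one minute -> double the assumed resting rate.
blinks = [float(t) for t in range(34)]
```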
In another specific embodiment of the present invention, the eye information includes the user's iris, and the data processor 90 also recognizes the user's identity from the iris. The built-in camera module 110 detects the iris to identify the user. This suits situations where several people share the same head-mounted smart device: identifying the user allows personalized services to be provided for different users and also prevents others from using the device without permission, since, by configuration, some or all functions can be locked according to the identification result.
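The patent does not describe the matching step. Iris recognition is commonly done by comparing binary iris codes with a normalized Hamming distance; the sketch below uses that standard technique purely as an illustration (the 0.32 threshold and the toy 8-bit codes are assumptions — real iris codes are far longer):

```python
from typing import Dict, List, Optional

def identify_user(iris_code: List[int],
                  enrolled: Dict[str, List[int]],
                  max_hamming: float = 0.32) -> Optional[str]:
    """Match a binary iris code against enrolled users by normalized
    Hamming distance; return None if nobody is close enough."""
    def hd(a: List[int], b: List[int]) -> float:
        return sum(x != y for x, y in zip(a, b)) / len(a)
    best = min(enrolled, key=lambda name: hd(iris_code, enrolled[name]))
    return best if hd(iris_code, enrolled[best]) <= max_hamming else None

# Hypothetical enrolled codes for two users sharing one device.
enrolled = {
    "alice": [0, 1, 1, 0, 1, 0, 0, 1],
    "bob":   [1, 0, 0, 1, 0, 1, 1, 0],
}
```

On a match, the device could unlock that user's personalized profile; on `None`, it could lock some or all functions as the embodiment describes.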
In another specific embodiment of the present invention, the eye information includes the user's pupil viewing angle, and the data processor 90 also tracks the user's viewpoint according to the pupil viewing angle, so as to refine the display effect on the display screen 40 around the user's viewpoint. That is, the built-in camera module 110 can track the user's direction of observation, and extra rendering effort is devoted to the place the user is looking at. For example, if the user looks toward the upper-left corner of the display screen, that corner is rendered with emphasis and displayed in high definition while the rest of the screen is displayed normally, hitting the target precisely and improving the user experience.
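A toy sketch of the viewpoint-centred rendering idea above, assuming the screen is divided into tiles and tiles near the tracked gaze point get high-definition treatment (the tile layout and radius are illustrative assumptions):

```python
from typing import Dict, List, Tuple

Tile = Tuple[float, float]

def render_quality(tile_centers: List[Tile],
                   gaze: Tile,
                   fovea_radius: float) -> Dict[Tile, str]:
    """Assign 'high' quality to tiles whose centre lies within
    fovea_radius of the tracked viewpoint, and 'normal' elsewhere."""
    def d2(a: Tile, b: Tile) -> float:
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return {c: ("high" if d2(c, gaze) <= fovea_radius ** 2 else "normal")
            for c in tile_centers}

# Gaze near the upper-left tile: only that tile is rendered in high definition.
quality = render_quality([(0.0, 0.0), (10.0, 10.0)], (1.0, 1.0), 3.0)
```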
In another specific embodiment of the present invention, the eye information includes pictures of the user's eyes, and the data processor 90 also generates the user's mood information from them. For example, when the user smiles the eyes narrow to a line, and when frightened they open wide; from this the user's mood can be judged and personalized services pushed to the user.
Fig. 10 is a flow chart of embodiment one of a method for modeling a three-dimensional virtual limb with a head-mounted smart device provided by an embodiment of the present invention. As shown in Fig. 10, the method comprises:
S101: capturing the user's limb movements to produce a set of continuous three-dimensional limb-motion images;
S102: modeling from the three-dimensional limb-motion images a three-dimensional virtual limb that moves in synchrony with the user's limb; and
S103: displaying the three-dimensional virtual limb to the user in real time through a display screen.
Referring to Fig. 10, the user motion capturer captures the user's limb movements and produces a set of three-dimensional limb-motion images that change continuously over time; the user motion modeler models from these images a three-dimensional virtual limb that moves in synchrony with the user's limb, which is presented in the virtual scene the user is watching for the user's own viewing. For example, with the head-mounted smart device of the present invention, the user can carry out various operations in a virtual wilderness, such as running, squatting down or reaching out (in reality the user may just perform the corresponding actions in his bedroom): whatever action the user performs, the three-dimensional virtual limb performs the corresponding operation. This gives the user an immersive visual and tactile experience and improves the user experience.
Fig. 11 is a flow chart of embodiment two of a method for modeling a three-dimensional virtual limb with a head-mounted smart device provided by an embodiment of the present invention. As shown in Fig. 11, the method further comprises:
S104: sending the three-dimensional limb data in real time to the head-mounted smart devices of other users; and
S105: receiving in real time the three-dimensional limb data produced by the head-mounted smart devices of other users, and modeling the data into three-dimensional characters, so that the user can interact with the three-dimensional characters of the other users in the scene shown on the display screen.
Referring to Fig. 11, the user can interact with other users through the head-mounted smart device. For example, the user can spar on a virtual sea surface with the three-dimensional characters of other users (in reality the user may be alone in his own room, performing fighting actions), the movements of the three-dimensional virtual limb being synchronized with the user's own movements, as if the user had suddenly become a martial-arts master in a swordsman TV drama. This gives the user an immersive visual and tactile experience, and the user experience is high.
The present invention provides a head-mounted smart device and a method thereof for modeling a three-dimensional virtual limb. The user motion capturer captures the user's limb movements to produce a set of three-dimensional limb-motion images that vary continuously over time; the user motion modeler models from these images a three-dimensional virtual limb that moves in synchrony with the user's limbs and displays it in the virtual scene for the user's own viewing. Alternatively, the device receives the three-dimensional limb data of other users and models it into three-dimensional characters, so that the user can interact with those characters in the virtual scene shown on the display screen. A user expression collector gathers the user's facial expression information to infer the user's mood, enabling more considerate service recommendations and adding a further mode of interaction. A built-in camera captures the user's eye information: the image position can be adjusted automatically to obtain the clearest picture, the user's fatigue index can be detected and acted upon, and the user's identity can be recognized so that different users receive more targeted services. A camera can also detect road conditions in the environment and share traffic information in real time. Sensors such as cameras can be provided in the head-mounted smart device for video recording, photography, and AR applications, and for recognizing the user's hands, limbs, or face (including facial features) for system interaction.
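One of the eye-information uses mentioned above, judging the user's fatigue from blink frequency, can be illustrated as follows. This is a hedged sketch: the resting blink rate and the window length are illustrative assumptions, as the patent does not specify how the fatigue index is computed.

```python
from typing import List


def fatigue_index(blink_timestamps: List[float], window_s: float = 60.0) -> float:
    """Return a crude fatigue index from blink frequency.

    Counts blinks inside the most recent window and compares the
    resulting blinks-per-minute rate against an assumed resting rate
    (~17 blinks/min); a sustained rise above it suggests fatigue.
    Returns 0.0 when at or below the resting rate.
    """
    if not blink_timestamps:
        return 0.0
    t_end = blink_timestamps[-1]
    recent = [t for t in blink_timestamps if t_end - t <= window_s]
    blinks_per_min = len(recent) * 60.0 / window_s
    resting_rate = 17.0  # assumed baseline, blinks per minute
    return max(0.0, (blinks_per_min - resting_rate) / resting_rate)
```

The data processor could map this index to actions such as dimming the display or suggesting a rest break once it exceeds a chosen threshold.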
The present invention also offers at least the following beneficial effects:
1) The present invention can perform gesture recognition: the user action capturer recognizes gesture operations performed by the user at any position, adding interaction modes and convenience;
2) The user action capturer of the present invention can also recognize the user's limbs and detect limb movements, further increasing interaction modes and convenience;
3) The camera of the present invention can recognize facial expressions; by collecting the user's expressions, the device infers the user's mood and can thus provide more considerate, targeted push services;
4) The present invention can judge the user's motion state by collecting ground reference objects and estimating the user's displacement from the apparent size of those objects, enabling more accurate positioning and distance measurement;
5) The present invention can detect environmental road conditions and the like, and share traffic information in real time;
6) The camera of the present invention can issue danger warnings when safety hazards are present, for example information about manhole covers or theft;
7) The present invention can insert a ground-plan image below the field of view, facilitating comprehensive observation of the environment and enabling a picture-in-picture effect;
8) The present invention can restore human limbs and faces to standard 3D models for network transmission;
9) The present invention can reconstruct images through inverse inference from video captures of limbs and faces;
10) By sensing the surrounding environment, the present invention can judge the scene the user is in and automatically adjust display brightness, sound, and the like, giving the user a better experience.
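Beneficial effect 4) relies on the fact that, under a pinhole camera model, the apparent size of a ground reference object is inversely proportional to its distance, so the change in apparent size between two frames gives the user's displacement along the line of sight. A sketch under that assumption (the focal length and object width below are illustrative values, not from the patent):

```python
def distance_to_reference(real_width_m: float,
                          pixel_width: float,
                          focal_px: float) -> float:
    """Pinhole model: distance = f * W / w, with f in pixels,
    W the real object width in meters, w its width in pixels."""
    return focal_px * real_width_m / pixel_width


def displacement(real_width_m: float,
                 pixel_width_before: float,
                 pixel_width_after: float,
                 focal_px: float) -> float:
    """User displacement toward a ground reference object, inferred
    from the change in its apparent (pixel) width between two frames.
    Positive means the user moved closer."""
    d0 = distance_to_reference(real_width_m, pixel_width_before, focal_px)
    d1 = distance_to_reference(real_width_m, pixel_width_after, focal_px)
    return d0 - d1
```

This only requires the reference object's real size to be known or assumed (e.g. a standard manhole cover); accumulating such per-object estimates over time yields the positioning and distance collection described above.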
The above embodiments of the present invention may be implemented in various hardware, software code, or combinations of both. For example, an embodiment of the invention may be program code executed in a digital signal processor (DSP) to perform the method described above. The invention may also involve functions performed by a computer processor, digital signal processor, microprocessor, or field-programmable gate array (FPGA). Such a processor can be configured according to the present invention to perform particular tasks by executing machine-readable software code or firmware code that defines the particular methods disclosed herein. The software or firmware code may be developed in different programming languages and in different formats or styles, and may also be compiled for different target platforms. Regardless of code format, language, and style, software code and other types of configuration code that perform tasks according to the present invention do not depart from the spirit and scope of the invention.
The foregoing is merely an illustrative embodiment of the present invention. Any equivalent variations and modifications made by those skilled in the art without departing from the concept and principles of the present invention shall fall within the scope of protection of the invention.

Claims (17)

1. A head-mounted smart device, characterized in that the head-mounted smart device comprises:
a body (10);
a user motion capturer (20), disposed at a lower part of the body (10), for capturing limb movements of a user to produce a set of three-dimensional limb-motion images that vary continuously over time;
a user motion modeler (30), connected with the user motion capturer (20), for modeling, from the three-dimensional limb-motion images, a three-dimensional virtual limb that moves in synchrony with the user's limbs; and
a display screen (40), connected with the user motion modeler (30), for instantly displaying the three-dimensional virtual limb to the user.
2. The head-mounted smart device according to claim 1, characterized in that the head-mounted smart device further comprises:
a receiver (50), connected with the display screen (40), for receiving in real time the three-dimensional limb data generated by the head-mounted smart devices of other users, which data is modeled into three-dimensional characters so that the user can interact with the other users in the scene shown on the display screen (40).
3. The head-mounted smart device according to claim 1, characterized in that the head-mounted smart device further comprises:
a view-angle converter (60), connected with the user motion modeler (30), for converting the three-dimensional virtual limb into a corrected three-dimensional virtual limb as viewed from another user's perspective; and
a transmitter (70), connected with the view-angle converter (60), for sending the corrected three-dimensional limb data to the head-mounted smart devices of other users, so that those devices model the corrected three-dimensional limb data into a three-dimensional character.
4. The head-mounted smart device according to claim 3, characterized in that the head-mounted smart device further comprises:
a user expression collector (80), disposed inside or outside the body (10), for collecting facial expression information of the user;
a data processor (90), connected with the user expression collector (80), for generating user mood information from the facial expression information; and
a local processing center (100), connected with the data processor (90), the transmitter (70), and the display screen (40), for adjusting the content presented on the display screen (40) according to the user mood information and for sending the user mood information to other users through the transmitter (70).
5. The head-mounted smart device according to claim 4, characterized in that the user expression collector (80) comprises a plurality of facial pressure sensing units for collecting facial-muscle pressure change information of the user.
6. The head-mounted smart device according to claim 4, characterized in that the user expression collector (80) is a camera for photographing the user's mouth.
7. The head-mounted smart device according to claim 6, characterized in that the camera is an infrared camera or a dedicated infrared camera.
8. The head-mounted smart device according to claim 4, characterized in that the head-mounted smart device further comprises:
a built-in camera module (110), connected with the data processor (90), for capturing eye information of the user.
9. The head-mounted smart device according to claim 8, characterized in that the eye information comprises an interpupillary distance, and the data processor (90) is further configured to adjust the image position according to the interpupillary distance.
10. The head-mounted smart device according to claim 8, characterized in that the eye information comprises the user's blink frequency, and the data processor (90) is further configured to judge the user's fatigue index according to the blink frequency.
11. The head-mounted smart device according to claim 8, characterized in that the eye information comprises the user's iris, and the data processor (90) is further configured to recognize the user's identity information from the iris.
12. The head-mounted smart device according to claim 8, characterized in that the eye information comprises the user's pupil view angle, and the data processor (90) is further configured to track the user's viewpoint according to the pupil view angle, so as to perform enhanced display-effect processing on the display screen (40) centered on the user's viewpoint.
13. The head-mounted smart device according to claim 8, characterized in that the eye information comprises a picture of the user's eyes, and the data processor (90) is further configured to generate user mood information from the eye picture.
14. The head-mounted smart device according to claim 8, characterized in that the built-in camera module (110) comprises an infrared camera module (111) and an infrared light source (112) that provides infrared light for the infrared camera module (111).
15. The head-mounted smart device according to claim 1, characterized in that the user motion capturer (20) is a three-dimensional camera module.
16. A method of modeling a three-dimensional virtual limb with a head-mounted smart device, characterized in that the method comprises:
capturing limb movements of a user to produce a set of continuous three-dimensional limb-motion images;
modeling, from the three-dimensional limb-motion images, a three-dimensional virtual limb that moves in synchrony with the user's limbs; and
displaying the three-dimensional virtual limb to the user in real time through a display screen.
17. The method of modeling a three-dimensional virtual limb with a head-mounted smart device according to claim 16, characterized in that the method further comprises:
sending the three-dimensional limb data in real time to the head-mounted smart devices of other users; and
receiving in real time the three-dimensional limb data generated by the head-mounted smart devices of other users, and modeling that data into three-dimensional characters, so that the user can interact with the three-dimensional characters of the other users in the scene shown on the display screen.
CN201510468561.9A 2015-08-03 2015-08-03 Head-mounted smart device and method thereof for modeling three-dimensional virtual limb Pending CN105183147A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510468561.9A CN105183147A (en) 2015-08-03 2015-08-03 Head-mounted smart device and method thereof for modeling three-dimensional virtual limb


Publications (1)

Publication Number Publication Date
CN105183147A true CN105183147A (en) 2015-12-23

Family

ID=54905274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510468561.9A Pending CN105183147A (en) 2015-08-03 2015-08-03 Head-mounted smart device and method thereof for modeling three-dimensional virtual limb

Country Status (1)

Country Link
CN (1) CN105183147A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256673A (en) * 2008-03-18 2008-09-03 中国计量学院 Method for tracing arm motion in real time video tracking system
CN102541259A (en) * 2011-12-26 2012-07-04 鸿富锦精密工业(深圳)有限公司 Electronic equipment and method for same to provide mood service according to facial expression
CN103366618A (en) * 2013-07-18 2013-10-23 梁亚楠 Scene device for Chinese learning training based on artificial intelligence and virtual reality
CN104011788A (en) * 2011-10-28 2014-08-27 奇跃公司 System And Method For Augmented And Virtual Reality
CN104076510A (en) * 2013-03-27 2014-10-01 聚晶半导体股份有限公司 Method of adaptively adjusting head-mounted display and head-mounted display
CN104090371A (en) * 2014-06-19 2014-10-08 京东方科技集团股份有限公司 3D glasses and 3D display system
CN104407766A (en) * 2014-08-28 2015-03-11 联想(北京)有限公司 Information processing method and wearable electronic equipment
US20150116199A1 (en) * 2013-10-25 2015-04-30 Quanta Computer Inc. Head mounted display and imaging method thereof
US20150177842A1 (en) * 2013-12-23 2015-06-25 Yuliya Rudenko 3D Gesture Based User Authorization and Device Control Methods


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAO LI,ET AL: "Facial Performance Sensing Head-Mounted Display", 《ACM TRANSACTIONS ON GRAPHICS》 *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108604291A (en) * 2016-01-13 2018-09-28 Fove股份有限公司 Expression identification system, expression discrimination method and expression identification program
CN105739688A (en) * 2016-01-21 2016-07-06 北京光年无限科技有限公司 Man-machine interaction method and device based on emotion system, and man-machine interaction system
CN105867617A (en) * 2016-03-25 2016-08-17 京东方科技集团股份有限公司 Augmented reality device and system and image processing method and device
CN105867617B (en) * 2016-03-25 2018-12-25 京东方科技集团股份有限公司 Augmented reality equipment, system, image processing method and device
US10665021B2 (en) 2016-03-25 2020-05-26 Boe Technology Group Co., Ltd. Augmented reality apparatus and system, as well as image processing method and device
CN105912102A (en) * 2016-03-31 2016-08-31 联想(北京)有限公司 Information processing method and electronic equipment
CN105912102B (en) * 2016-03-31 2019-02-05 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN106128174A (en) * 2016-08-18 2016-11-16 四川以太原力科技有限公司 Limbs teaching method based on virtual reality and teaching system
CN106097787A (en) * 2016-08-18 2016-11-09 四川以太原力科技有限公司 Limbs teaching method based on virtual reality and teaching system
WO2018054056A1 (en) * 2016-09-26 2018-03-29 惠州Tcl移动通信有限公司 Interactive exercise method and smart head-mounted device
CN106502388B (en) * 2016-09-26 2020-06-02 惠州Tcl移动通信有限公司 Interactive motion method and head-mounted intelligent equipment
CN106502388A (en) * 2016-09-26 2017-03-15 惠州Tcl移动通信有限公司 A kind of interactive movement technique and head-wearing type intelligent equipment
US11783632B2 (en) 2016-11-29 2023-10-10 Advanced New Technologies Co., Ltd. Service control and user identity authentication based on virtual reality
US11348369B2 (en) 2016-11-29 2022-05-31 Advanced New Technologies Co., Ltd. Service control and user identity authentication based on virtual reality
CN106445176A (en) * 2016-12-06 2017-02-22 腾讯科技(深圳)有限公司 Man-machine interaction system and interaction method based on virtual reality technique
CN106774935A (en) * 2017-01-09 2017-05-31 京东方科技集团股份有限公司 A kind of display device
CN106774935B (en) * 2017-01-09 2020-03-31 京东方科技集团股份有限公司 Display device
CN107123157A (en) * 2017-03-14 2017-09-01 华南理工大学 A kind of 3 d modeling system and method based on motion capture
CN107122642A (en) * 2017-03-15 2017-09-01 阿里巴巴集团控股有限公司 Identity identifying method and device based on reality environment
KR102151898B1 (en) 2017-03-15 2020-09-03 알리바바 그룹 홀딩 리미티드 Identity authentication method and device based on virtual reality environment
EP3528156A4 (en) * 2017-03-15 2019-10-30 Alibaba Group Holding Limited Virtual reality environment-based identity authentication method and apparatus
KR20190076990A (en) * 2017-03-15 2019-07-02 알리바바 그룹 홀딩 리미티드 Virtual Reality Environment-based Identity Authentication Method and Apparatus
US10846388B2 (en) 2017-03-15 2020-11-24 Advanced New Technologies Co., Ltd. Virtual reality environment-based identity authentication method and apparatus
WO2018166456A1 (en) * 2017-03-15 2018-09-20 阿里巴巴集团控股有限公司 Virtual reality environment-based identity authentication method and apparatus
CN107595301A (en) * 2017-08-25 2018-01-19 英华达(上海)科技有限公司 Intelligent glasses and the method based on Emotion identification PUSH message
CN107608513A (en) * 2017-09-18 2018-01-19 联想(北京)有限公司 A kind of Wearable and data processing method
CN108670275A (en) * 2018-05-22 2018-10-19 Oppo广东移动通信有限公司 Signal processing method and related product
CN109407825A (en) * 2018-08-30 2019-03-01 百度在线网络技术(北京)有限公司 Interactive approach and device based on virtual objects
US11768379B2 (en) 2020-03-17 2023-09-26 Apple Inc. Electronic device with facial sensors
CN112416125A (en) * 2020-11-17 2021-02-26 青岛小鸟看看科技有限公司 VR head-mounted all-in-one machine
US11941167B2 (en) 2020-11-17 2024-03-26 Qingdao Pico Technology Co., Ltd Head-mounted VR all-in-one machine
CN115047979A (en) * 2022-08-15 2022-09-13 歌尔股份有限公司 Head-mounted display equipment control system and interaction method
CN115047979B (en) * 2022-08-15 2022-11-01 歌尔股份有限公司 Head-mounted display equipment control system and interaction method
CN116382487A (en) * 2023-05-09 2023-07-04 北京维艾狄尔信息科技有限公司 Interaction system is felt to wall body that runs accompany
CN116382487B (en) * 2023-05-09 2023-12-12 北京维艾狄尔信息科技有限公司 Interaction system is felt to wall body that runs accompany

Similar Documents

Publication Publication Date Title
CN105183147A (en) Head-mounted smart device and method thereof for modeling three-dimensional virtual limb
CN205103761U (en) Head -wearing type intelligent device
US11238568B2 (en) Method and system for reconstructing obstructed face portions for virtual reality environment
US9842433B2 (en) Method, apparatus, and smart wearable device for fusing augmented reality and virtual reality
CN109086726B (en) Local image identification method and system based on AR intelligent glasses
CN106873778B (en) Application operation control method and device and virtual reality equipment
CN106462233B (en) The method and apparatus attracted for showing equipment viewer's sight
CN106462733B (en) A kind of method and calculating equipment for line-of-sight detection calibration
US20120200667A1 (en) Systems and methods to facilitate interactions with virtual content
JP7423683B2 (en) image display system
JP2012141965A (en) Scene profiles for non-tactile user interfaces
KR101892735B1 (en) Apparatus and Method for Intuitive Interaction
CN102981616A (en) Identification method and identification system and computer capable of enhancing reality objects
US11422626B2 (en) Information processing device, and information processing method, for outputting sensory stimulation to a user
CN111007939A (en) Virtual reality system space positioning method based on depth perception
CN107708819A (en) Response formula animation for virtual reality
EP3659153A1 (en) Head-mountable apparatus and methods
CN114026606A (en) Fast hand meshing for dynamic occlusion
JP6775669B2 (en) Information processing device
EP3493541B1 (en) Selecting an omnidirectional image for display
US10642349B2 (en) Information processing apparatus
WO2018173206A1 (en) Information processing device
US11579690B2 (en) Gaze tracking apparatus and systems
WO2023237023A1 (en) Image processing method and apparatus, storage medium, and head-mounted display device
WO2019041352A1 (en) Panum's area measurement method and apparatus, and wearable display device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151223