CN110930843A - Control method for simulating eye action and simulated eye - Google Patents


Info

Publication number
CN110930843A
CN110930843A CN201911042744.9A
Authority
CN
China
Prior art keywords
instruction
state
sensing information
eyelid
eyeball
Prior art date
Legal status
Pending
Application number
CN201911042744.9A
Other languages
Chinese (zh)
Inventor
姚琤
白涛
刘子玉
杨威威
江波
李硕
Current Assignee
Hangzhou Mengqi Education Consulting Co Ltd
Original Assignee
Hangzhou Mengqi Education Consulting Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Mengqi Education Consulting Co Ltd filed Critical Hangzhou Mengqi Education Consulting Co Ltd
Priority to CN201911042744.9A priority Critical patent/CN110930843A/en
Publication of CN110930843A publication Critical patent/CN110930843A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Medicinal Chemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Algebra (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Medical Informatics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a control method for simulated eye actions and a simulated eye. The simulated eye comprises an eyelid component and an eyeball component, and the method comprises: acquiring first sensing information, and generating a first instruction and a second instruction according to the first sensing information; the eyelid component operating in a first state corresponding to the first instruction; the eyeball component operating in a second state corresponding to the second instruction; and the operation of the first state and the second state forming a first action. Because the simulated eye performs sensor-driven state changes component by component, the problems that simulated eyes offer only a single, fixed mode of control across application scenarios and that the user experience is not rich are solved; the simulated eye can satisfy more scenarios with interaction requirements and enriches the range of products offering different user experiences.

Description

Control method for simulating eye action and simulated eye
Technical Field
The invention relates to the field of automatic control, and in particular to a control method for simulated eye actions and a simulated eye.
Background
It is often said that the eyes are the windows of the soul. This is because the eyes can convey specific emotions, and these emotions in turn convey information in person-to-person communication. Because people read the information others convey from changes in their eyes, eyes that perform specific actions shape the impression of a living being, and objects fitted with physical eyes are more readily perceived by people as "living bodies". Such "living bodies" have the potential to convey emotion, that is, to perform certain information-expressing activities in specific situations.
In the creation of non-physical images such as graphic works and animation, there are many examples of using changes in eye movement to portray living beings and convey information. However, these images exist only in a virtual world and change according to predetermined motion scripts; there is no interactive mechanism in the process, and the user has no sense of participation. Software applications that do support interaction and involve eye movement are mostly simple motion simulations and lack the "feedback ability" that eyes have when conveying information such as emotion.
In physical products that simulate eye movement, the movement is usually realized through a predetermined motion script, or the user controls the eye movement directly. Because the control logic involves no interaction under specific conditions, the relationship between the user and the eye-simulating entity is reduced to that between a person and a product. The "living body" image of the entity is greatly weakened, information cannot be conveyed through simulated emotion, the choice of usage scenarios for such products is more limited, and they cannot adapt to the changing usage scenarios of more activities, which affects the user experience of these products and the expansion of product categories.
For the problems in the related art that simulated eyes offer only a single, fixed mode of control across application scenarios and that the user experience is not rich, no effective solution has yet been proposed.
Disclosure of Invention
To address the problems in the related art that simulated eyes offer only a single, fixed mode of control across application scenarios and that the user experience is not rich, the invention provides a control method for simulated eye actions and a simulated eye, so as to at least solve these problems.
According to an aspect of the present invention, there is provided a control method for simulated eye actions, the simulated eye comprising an eyelid component and an eyeball component, the method comprising:
acquiring first sensing information, and generating a first instruction and a second instruction according to the first sensing information;
the eyelid component operating in a first state corresponding to the first instruction;
the eyeball component operating in a second state corresponding to the second instruction;
the operation of the first state and the second state forming a first action.
In one embodiment, the eyeball component further comprises a pupil component, and the method comprises:
generating a third instruction according to the first sensing information;
the pupil component operating in a third state corresponding to the third instruction;
the operation of the first state, the second state, and the third state forming a second action.
In one embodiment, after the eyeball component operates in the second state corresponding to the second instruction, the method comprises:
acquiring electric quantity information of the system, and generating a fourth instruction and a fifth instruction according to the electric quantity information;
the eyelid component operating in a fourth state corresponding to the fourth instruction;
the eyeball component operating in a fifth state corresponding to the fifth instruction;
the operation of the fourth state and the fifth state forming a third action.
In one embodiment, after the eyeball component operates in the second state corresponding to the second instruction, the method comprises:
acquiring second sensing information, and determining change information between the first sensing information and the second sensing information within a preset first time period;
generating a sixth instruction, a seventh instruction, an eighth instruction and a ninth instruction according to the change information;
the sixth instruction instructing the eyelid component to keep operating in the first state;
the seventh instruction instructing the eyelid component to transition from operating in the first state to a sixth state corresponding to the seventh instruction;
the eighth instruction instructing the eyeball component to keep operating in the second state;
the ninth instruction instructing the eyelid component to transition from operating in the first state to a seventh state corresponding to the ninth instruction.
In one embodiment, acquiring the first sensing information and generating the first instruction and the second instruction according to the first sensing information comprises:
acquiring the first sensing information from an acceleration sensor, and generating the first instruction and the second instruction according to the first sensing information.
According to another aspect of the present invention, there is also provided a simulated eye, comprising a sensor, a driving device, a processor, an eyelid component and an eyeball component:
the sensor acquires first sensing information and sends the first sensing information to the processor;
the processor generates a first instruction and a second instruction according to the first sensing information, and sends the first instruction and the second instruction to the driving device;
the driving device drives the eyelid component to operate in a first state according to the first instruction;
the driving device drives the eyeball component to operate in a second state according to the second instruction, wherein the operation of the first state and the second state forms a first action.
In one embodiment, the driving device further comprises a first driving unit and a second driving unit;
the first driving unit drives the eyelid component to operate in the first state according to the first instruction;
the second driving unit drives the eyeball component to operate in the second state according to the second instruction.
In one embodiment, the eyeball component further comprises a pupil component and a display component, the display component being arranged on the pupil component:
the processor generates a third instruction according to the first sensing information;
the display component of the pupil component operates in a third state corresponding to the third instruction, and the operation of the first state, the second state and the third state forms a second action.
In one embodiment, the simulated eye further comprises an electric quantity detection device:
the electric quantity detection device acquires electric quantity information of the system, and the processor generates a fourth instruction and a fifth instruction according to the electric quantity information;
the driving device drives the eyelid component to operate in a fourth state according to the fourth instruction;
the driving device drives the eyeball component to operate in a fifth state according to the fifth instruction;
the operation of the fourth state and the fifth state forms a third action.
In one embodiment, after the sensor acquires the first sensing information, the sensor acquires second sensing information and sends it to the processor, and the processor determines change information between the first sensing information and the second sensing information within a preset first time period;
the processor generates a sixth instruction, a seventh instruction, an eighth instruction and a ninth instruction according to the change information;
the driving device keeps the eyelid component operating in the first state according to the sixth instruction;
the driving device drives the eyelid component to transition from operating in the first state to a sixth state corresponding to the seventh instruction according to the seventh instruction;
the driving device keeps the eyeball component operating in the second state according to the eighth instruction;
the driving device drives the eyelid component to transition from operating in the first state to a seventh state corresponding to the ninth instruction according to the ninth instruction.
By the present invention, a control method for simulated eye actions and a simulated eye are provided. The simulated eye comprises an eyelid component and an eyeball component, and the method comprises: acquiring first sensing information, and generating a first instruction and a second instruction according to the first sensing information; the eyelid component operating in a first state corresponding to the first instruction; the eyeball component operating in a second state corresponding to the second instruction; and the operation of the first state and the second state forming a first action. Because the simulated eye performs sensor-driven state changes component by component, the problems that simulated eyes offer only a single, fixed mode of control across application scenarios and that the user experience is not rich are solved; the simulated eye can satisfy more scenarios with interaction requirements and enriches the range of products offering different user experiences.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention without limiting it. In the drawings:
FIG. 1 is a first structural block diagram of a simulated eye according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a simulated eye according to an embodiment of the invention;
FIG. 3 is a second structural block diagram of a simulated eye according to an embodiment of the invention;
FIG. 4 is a third structural block diagram of a simulated eye according to an embodiment of the invention;
FIG. 5 is a control flow diagram of a simulated eye movement implementation according to an embodiment of the invention;
FIG. 6 is a fourth structural block diagram of a simulated eye according to an embodiment of the invention;
FIG. 7 is a first flowchart of a control method for simulated eye actions according to an embodiment of the invention;
FIG. 8 is a second flowchart of a control method for simulated eye actions according to an embodiment of the invention;
FIG. 9 is a first schematic diagram of a simulated eye application scenario according to an embodiment of the invention;
FIG. 10 is a first schematic diagram of a state change of a simulated eye according to an embodiment of the invention;
FIG. 11 is a second schematic diagram of a simulated eye application scenario according to an embodiment of the invention;
FIG. 12 is a second schematic diagram of a state change of a simulated eye according to an embodiment of the invention.
Detailed Description
The invention will be described in detail below with reference to the accompanying drawings and in conjunction with embodiments. It should be noted that the embodiments in the present application, and the features in those embodiments, may be combined with each other in the absence of conflict.
An embodiment of the present invention provides a simulated eye. FIG. 1 is a first structural block diagram of a simulated eye according to an embodiment of the present invention. As shown in FIG. 1, the simulated eye 10 includes a sensor 11, a driving device 12, a processor 13, an eyelid component 14, and an eyeball component 15.
The sensor 11 acquires first sensing information and sends it to the processor 13. The sensor 11 acquires information about the environment in which the simulated eye 10 is located and dynamic information about the product carrying the simulated eye 10. For example, the sensor 11 may be an acceleration sensor or an attitude sensor that acquires dynamic information of the simulated eye 10, or an image sensor that acquires visible-light or infrared image information, which can be used to determine changes in the environment around the simulated eye 10, for example whether a person is gazing at or blocking the simulated eye 10.
The processor 13 generates a first instruction and a second instruction according to the first sensing information and sends them to the driving device 12; that is, the processor 13 generates control instructions for the different components of the simulated eye 10, and the operation of each component is powered by the driving device 12. The driving device 12 drives the eyelid component 14 to operate in a first state according to the first instruction, and drives the eyeball component 15 to operate in a second state according to the second instruction, the operation of the first state and the second state forming a first action. Through their relative positions, the eyelid component 14 and the eyeball component 15 can simulate a real eye: the different states of each component can be defined by differences in the spatial positions and sizes of the eyelid component 14 and the eyeball component 15, the different states over time form the state changes of each component, and the state changes of the various components constitute the different actions of the simulated eye 10.
FIG. 2 is a schematic diagram showing states of a simulated eye according to an embodiment of the present invention. As shown in FIG. 2, the states of the eyelid component 14 include a state in which the eyeball component 15 behind it is completely hidden from view, a state in which the eyeball component 15 is completely exposed, and all positions between these two states. The state change of the eyelid component 14 includes maintaining a state as described above and transitioning to an arbitrary state. When an action is generated, the current state can be kept unchanged, or a corresponding positional state change can be produced according to the signal instruction.
The states of the eyeball component 15, taking the pupil mark on the eyeball component 15 as the coordinate point, include all positions within the region in which the eyeball component 15 is most visible. The state change of the eyeball component likewise includes maintaining a state as described above and transitioning to an arbitrary state. When an action is generated, the current state can be kept unchanged, or a corresponding spatial position change can be produced according to the signal instruction.
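One plausible way to parameterize these states in code is sketched below: the eyelid as a normalized openness value and the eyeball as a gaze coordinate with the pupil mark as origin, plus a transition helper that interpolates between two states (t = 0 corresponds to maintaining the current state). The normalized ranges and field names are assumptions made for illustration, not the patent's definitions.

from dataclasses import dataclass

@dataclass
class EyelidState:
    openness: float  # 0.0 = eyeball component fully hidden, 1.0 = fully exposed

@dataclass
class EyeballState:
    x: float  # horizontal gaze position in [-1.0, 1.0]
    y: float  # vertical gaze position in [-1.0, 1.0]

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def transition(current, target, t):
    """Interpolate between two eyeball states; t in [0, 1] is the elapsed
    fraction of the transition, so t = 0 maintains the current state."""
    return EyeballState(
        x=clamp(current.x + (target.x - current.x) * t, -1.0, 1.0),
        y=clamp(current.y + (target.y - current.y) * t, -1.0, 1.0),
    )

start = EyeballState(x=1.0, y=0.0)     # right limit
target = EyeballState(x=0.0, y=0.5)    # upper middle
print(transition(start, target, 0.5))  # halfway through the movement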
The sensor 11 acquires the sensing information of the simulated eye 10, the processor 13 generates different control instructions accordingly, and the different components of the simulated eye 10 are controlled through the driving device 12, so that different states are formed and different eye actions are formed by combining those states. This solves the problems that simulated eyes offer only a single, fixed mode of control across application scenarios and that the user experience is not rich; the simulated eye can satisfy more scenarios with interaction requirements and enriches the range of products offering different user experiences.
In one embodiment, FIG. 3 is a second structural block diagram of a simulated eye according to an embodiment of the invention. As shown in FIG. 3, the driving device 12 further includes a first driving unit 21 and a second driving unit 22. The first driving unit 21 drives the eyelid component 14 to operate in the first state according to the first instruction, and the second driving unit 22 drives the eyeball component 15 to operate in the second state according to the second instruction. The eyelid component 14 and the eyeball component 15 are power-controlled independently to produce their respective state changes; in the expressive form of the simulated eye 10, because the two components are controlled independently, their state changes occur simultaneously and combine to form eye actions that people can understand.
In one embodiment, FIG. 4 is a third structural block diagram of a simulated eye according to an embodiment of the invention. As shown in FIG. 4, the eyeball component 15 further includes a pupil component 31, and the processor 13 generates a third instruction according to the first sensing information. The pupil component 31 operates in a third state corresponding to the third instruction, and the operation of the first state, the second state and the third state forms a second action. The pupil component 31 can display differences in the size and surface pattern of the pupil: its states include every size between filling the entire maximum visible region of the eyeball component 15 when enlarged and vanishing from that region when shrunk, as well as different patterns and light-and-dark variations of the pupil surface. For example, the pupil component 31 may be realized by an image display panel. The state change of the pupil component 31 includes maintaining a state as described above and transitioning to an arbitrary state. When the simulated eye 10 generates an action, the current state can be kept unchanged, or a corresponding change can be produced according to the instruction sent by the processor 13; the simultaneous, combined state changes of the three components of the simulated eye 10 form richer eye actions.
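For the display-panel pupil described above, a state record might carry a size scale, a surface pattern and a brightness level, as in the sketch below; the field names, value ranges and the render_pupil stand-in are assumptions for illustration, not the patent's implementation.

from dataclasses import dataclass

@dataclass
class PupilState:
    scale: float       # 0.0 = shrunk until invisible, 1.0 = fills the visible region
    pattern: str       # identifier of the surface pattern drawn on the panel
    brightness: float  # 0.0 = dark, 1.0 = bright

def render_pupil(state: PupilState) -> None:
    """Stand-in for driving the image display panel."""
    print("pattern=%s scale=%.2f brightness=%.2f"
          % (state.pattern, state.scale, state.brightness))

render_pupil(PupilState(scale=0.6, pattern="round", brightness=0.8))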
In the embodiment of the present invention, FIG. 5 is a control flow diagram for realizing the simulated eye actions. As shown in FIG. 5, the simulated eye 10 may be composed of two components, the eyelid component 14 and the eyeball component 15, or of three components, the eyelid component 14, the eyeball component 15 and the pupil component 31. The spatial positional relationships of the simulated eye 10 correspond to those of the corresponding parts of a real eye, and each component can receive power control independently to produce its own state changes; in the expressive form, the state changes of the components occur simultaneously and combine into eye actions that people can understand. The core of the flow is that the sensors 11 (signal sensors 1 to N) collect sensing signals, the signals are processed by the processor 13 and converted into independent signal instructions, and the driving device 12 controls each component of the simulated eye 10 to make the corresponding state changes; the combination of these state changes is expressed as a variety of eye actions.
In one embodiment, after the sensor 11 acquires the first sensing information, the sensor 11 acquires second sensing information and sends it to the processor 13, and the processor 13 determines the change information between the first sensing information and the second sensing information within a preset first time period. The processor 13 generates a sixth instruction, a seventh instruction, an eighth instruction and a ninth instruction according to the change information. The driving device 12 keeps the eyelid component 14 operating in the first state according to the sixth instruction; drives the eyelid component 14 to transition from the first state to a sixth state corresponding to the seventh instruction according to the seventh instruction; keeps the eyeball component 15 operating in the second state according to the eighth instruction; and drives the eyelid component 14 to transition from the first state to a seventh state corresponding to the ninth instruction according to the ninth instruction. The sensor 11 collects the sensing information of the simulated eye 10 multiple times so that the components complete their state changes in different time periods. Here, the state of each component of the simulated eye 10 refers to the differences in spatial position, size and surface pattern that affect the final expressive form, and a state change refers to the change of a component's spatial position, size and expressed pattern in the time dimension. State changes fall into two types: "maintaining the current state unchanged" and "transitioning from the current state to another state". "Maintaining the current state unchanged" includes both holding a state that could otherwise transition and the visually apparent state of a component that cannot transition at all; for example, when the pupil component 31 on the eyeball component 15 is formed by a fixed pattern and cannot change size, this also belongs to "maintaining the current state unchanged". "Transitioning from the current state to another state" is a transition process whose start and end points in time are the current state and the next state. For example, when the eyelid component 14 transitions from a state in which a large area of the eyeball component is exposed to a state in which the eyeball component is completely covered, the state change of closing the eyelid component 14 is completed; when the eyeball component 15 and the pupil component 31 meanwhile undergo state changes not limited to "maintaining the current state", the whole process completes the action of "closing the eye".
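The two-sample logic can be sketched as follows: the sensing information is read twice within the preset first time period, the difference between the readings is taken as the change information, and each component is then told either to maintain its state or to transition. The threshold, the sleep-based timing and the rule deciding which of the seventh/ninth instructions fires are all assumptions for illustration; the patent only enumerates the four instructions.

import time

CHANGE_THRESHOLD = 0.5  # hypothetical, in sensor units
FIRST_PERIOD_S = 0.2    # stands in for the preset first time period

def read_sensor() -> float:
    """Stand-in for the real sensing-information read."""
    return 0.0

def run_window() -> dict:
    first = read_sensor()
    time.sleep(FIRST_PERIOD_S)
    second = read_sensor()
    change = second - first  # the "change information"

    if abs(change) < CHANGE_THRESHOLD:
        # Sixth and eighth instructions: both components hold their states.
        return {"eyelid": ("maintain", "first state"),
                "eyeball": ("maintain", "second state")}
    # Seventh or ninth instruction for the eyelid, chosen here by the sign of
    # the change (an assumed rule); eighth instruction for the eyeball.
    target = "sixth state" if change > 0 else "seventh state"
    return {"eyelid": ("transition", target),
            "eyeball": ("maintain", "second state")}

print(run_window())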
In one embodiment, FIG. 6 is a fourth structural block diagram of a simulated eye according to an embodiment of the invention. As shown in FIG. 6, the simulated eye 10 further includes an electric quantity detection device 51 and a battery 52:
The electric quantity detection device 51 acquires the electric quantity information of the system's battery 52, and the processor 13 generates a fourth instruction and a fifth instruction according to the electric quantity information. The driving device 12 drives the eyelid component 14 to operate in a fourth state according to the fourth instruction, and drives the eyeball component 15 to operate in a fifth state according to the fifth instruction; the operation of the fourth state and the fifth state forms a third action. The electric quantity information includes information indicating sufficient charge, low charge and exhausted charge: for example, when the charge is exhausted, the simulated eye 10 performs an eye-closing action through the eyelid component 14 and the eyeball component 15. By performing such actions, the user can be reminded to charge the simulated eye 10 in time.
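A minimal sketch of this battery-driven behavior is given below, bucketing the state of charge into sufficient / low / exhausted and triggering the eye-closing third action in the exhausted case. The thresholds and function names are assumptions, not values from the patent.

def classify_charge(level: float) -> str:
    """level is the battery state of charge in [0.0, 1.0]."""
    if level > 0.3:
        return "sufficient"
    if level > 0.05:
        return "low"
    return "exhausted"

def on_battery_reading(level: float) -> None:
    status = classify_charge(level)
    if status == "exhausted":
        # Fourth and fifth instructions: the eyelid closes over the eyeball,
        # and together the two states form the eye-closing third action.
        print("eyelid -> closed, eyeball -> centered")
    elif status == "low":
        print("perform the low-battery prompt action")

on_battery_reading(0.02)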
In one embodiment of the present invention, the simulated eye 10 comprises the eyelid component 14 and the eyeball component 15, and a control method for simulated eye actions is provided. FIG. 7 is a first flowchart of a control method for simulated eye actions according to an embodiment of the present invention. As shown in FIG. 7, the method includes the following steps:
Step S601: acquiring first sensing information, and generating a first instruction and a second instruction according to the first sensing information;
Step S602: the eyelid component 14 operates in a first state corresponding to the first instruction, and the eyeball component 15 operates in a second state corresponding to the second instruction; the operation of the first state and the second state forms a first action.
Through steps S601 and S602, the simulated eye 10 is controlled component by component to perform sensor-driven state changes, and action simulation is realized by combining the state changes of the components. The simulated eye 10 therefore has more possibilities of variation when simulating eye actions, and the simulated information conveys more complex emotions, meeting more scenarios with interaction requirements in practical applications and providing important support for expanding the product categories of related products in corresponding scenarios.
In one embodiment, the simulated eye 10 further includes the pupil component 31. FIG. 8 is a second flowchart of a control method for simulated eye actions according to an embodiment of the present invention. As shown in FIG. 8, the method includes the following steps:
Step S701: acquiring first sensing information, and generating a first instruction, a second instruction and a third instruction according to the first sensing information;
Step S702: the eyelid component 14 operates in a first state corresponding to the first instruction, and the eyeball component 15 operates in a second state corresponding to the second instruction, the first state and the second state forming a first action; the pupil component operates in a third state corresponding to the third instruction, and the operation of the first state, the second state and the third state forms a second action.
Through steps S701 and S702, the simultaneous, combined state changes of the three components of the simulated eye 10 form richer eye actions.
The present invention is described in detail below with reference to specific practical application examples.
FIG. 9 is a first schematic diagram of a simulated eye application scenario according to an embodiment of the present invention, and FIG. 10 is a first schematic diagram of a state change of a simulated eye according to an embodiment of the present invention. As shown in FIG. 9 and FIG. 10, the simulated eye 10 is mounted on the back of a backpack 80 and includes an acceleration sensor 81, the eyeball component 15, the processor 13, three steering engines 82, and upper and lower eyelid components 14. The circuit of the processor 13 controls the three steering engines 82, and through the coordinated movement of the mechanical structure the eyeball component 15 and the eyelid components 14 of the simulated eye 10 produce the corresponding action changes and action switching.
At the moment the user lifts the backpack, the acceleration sensor 81 acquires the acceleration information of the backpack, and the simulated eye 10 performs an action full of panic. The action lasts 6 seconds and is realized as follows:
Step 1: the eyelid components 14 open fully over 400 ms, followed by a 500 ms delay;
Step 2: the eyelid components 14 and the eyeball component 15 move simultaneously for 200 ms, followed by a 300 ms delay;
Step 3: the eyeball component 15 sweeps from left to right over 200 ms, followed by a 700 ms delay;
Step 4: the eyelid components 14 go from fully open to closed over 200 ms, followed by a 200 ms delay;
Step 5: the eyelid components 14 open over 400 ms, followed by a 600 ms delay;
Step 6: the eyeball component 15 turns from the right limit to the upper middle over 400 ms, followed by a 600 ms delay;
Step 7: the eyeball component 15 returns from the upper middle to the center over 200 ms, followed by a 200 ms delay;
Step 8: the eyelid components 14 close over 200 ms, followed by a 500 ms delay.
What the panic expression of the simulated eye 10 conveys is this: suddenly lifted out of a stable environment, it is filled with panic, quickly looks around for the cause of being lifted, and gradually settles down once it has adapted to the current situation.
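The eight steps above amount to a timed script: each step moves one or both components for a stated duration and then waits. A minimal data-driven player is sketched below; the move() function stands in for the actual steering-engine commands, and the state names merely paraphrase the steps, so both are assumptions rather than the patent's code.

import time

PANIC_SCRIPT = [
    # (components, target state, move duration ms, delay ms)
    (("eyelid",),           "fully open",                  400, 500),
    (("eyelid", "eyeball"), "simultaneous move",           200, 300),
    (("eyeball",),          "sweep left to right",         200, 700),
    (("eyelid",),           "fully open to closed",        200, 200),
    (("eyelid",),           "open",                        400, 600),
    (("eyeball",),          "right limit to upper middle", 400, 600),
    (("eyeball",),          "upper middle to center",      200, 200),
    (("eyelid",),           "closed",                      200, 500),
]

def move(components, state, duration_ms):
    """Stand-in for driving the steering engines over duration_ms."""
    print("%s -> %s over %d ms" % ("+".join(components), state, duration_ms))

def play(script):
    for components, state, move_ms, delay_ms in script:
        move(components, state, move_ms)
        time.sleep((move_ms + delay_ms) / 1000.0)

play(PANIC_SCRIPT)  # totals about 5.8 s, matching the stated 6-second action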
In another embodiment, FIG. 11 is a second schematic diagram of a simulated eye application scenario according to an embodiment of the present invention, and FIG. 12 is a second schematic diagram of a state change of a simulated eye according to an embodiment of the present invention. As shown in FIG. 11 and FIG. 12, the simulated eye 10 further includes the electric quantity detection device 51 and the battery 52. When the electric quantity detection device 51 detects that the charge of the battery 52 is below a preset threshold, the simulated eye 10 performs a low-battery prompt action. The action lasts 6 seconds and is realized as follows:
Step 1: the eyelid components 14 open over 200 ms;
Step 2: the eyeball component 15 turns to the middle right over 1200 ms;
Step 3: the eyeball component 15 swings from the middle right back to the center over 900 ms;
Step 4: the eyeball component 15 turns from the center to the lower left over 600 ms;
Step 5: the eyeball component 15 turns from the lower left to the middle right over 500 ms;
Step 6: the eyeball component 15 turns from the middle right to the upper middle over 400 ms;
Step 7: the eyelid components 14 and the eyeball component 15 move simultaneously for 400 ms, followed by an 800 ms delay.
What the low-battery expression of the simulated eye 10 conveys is this: having accompanied its owner for a while, it has played itself to happy exhaustion and "faints"; at this moment it needs the owner's attention and care, that is, to be helped to store up charge as soon as possible and return to its former state.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM), and direct Rambus DRAM (DRDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above examples express only several embodiments of the present application, and while their description is comparatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the scope of protection of this patent shall be subject to the appended claims.

Claims (10)

1. A control method for simulated eye actions, the simulated eye comprising an eyelid component and an eyeball component, the method comprising:
acquiring first sensing information, and generating a first instruction and a second instruction according to the first sensing information;
the eyelid component operating in a first state corresponding to the first instruction;
the eyeball component operating in a second state corresponding to the second instruction;
the operation of the first state and the second state forming a first action.
2. The method of claim 1, wherein the eyeball component further comprises a pupil component, the method comprising:
generating a third instruction according to the first sensing information;
the pupil component operating in a third state corresponding to the third instruction;
the operation of the first state, the second state, and the third state forming a second action.
3. The method of claim 1, wherein after the eyeball component operates in the second state corresponding to the second instruction, the method comprises:
acquiring electric quantity information of the system, and generating a fourth instruction and a fifth instruction according to the electric quantity information;
the eyelid component operating in a fourth state corresponding to the fourth instruction;
the eyeball component operating in a fifth state corresponding to the fifth instruction;
the operation of the fourth state and the fifth state forming a third action.
4. The method of claim 1, wherein after the eyeball component operates in the second state corresponding to the second instruction, the method comprises:
acquiring second sensing information, and determining change information between the first sensing information and the second sensing information within a preset first time period;
generating a sixth instruction, a seventh instruction, an eighth instruction and a ninth instruction according to the change information;
the sixth instruction instructing the eyelid component to keep operating in the first state;
the seventh instruction instructing the eyelid component to transition from operating in the first state to a sixth state corresponding to the seventh instruction;
the eighth instruction instructing the eyeball component to keep operating in the second state;
the ninth instruction instructing the eyelid component to transition from operating in the first state to a seventh state corresponding to the ninth instruction.
5. The method of any one of claims 1 to 4, wherein acquiring the first sensing information and generating the first instruction and the second instruction according to the first sensing information comprises:
acquiring the first sensing information from an acceleration sensor, and generating the first instruction and the second instruction according to the first sensing information.
6. A simulated eye, comprising a sensor, a driving device, a processor, an eyelid component and an eyeball component:
the sensor acquires first sensing information and sends the first sensing information to the processor;
the processor generates a first instruction and a second instruction according to the first sensing information, and sends the first instruction and the second instruction to the driving device;
the driving device drives the eyelid component to operate in a first state according to the first instruction;
the driving device drives the eyeball component to operate in a second state according to the second instruction, wherein the operation of the first state and the second state forms a first action.
7. The simulated eye of claim 6, wherein the driving device further comprises a first driving unit and a second driving unit;
the first driving unit drives the eyelid component to operate in the first state according to the first instruction;
the second driving unit drives the eyeball component to operate in the second state according to the second instruction.
8. The simulated eye of claim 6, wherein the eyeball component further comprises a pupil component:
the processor generates a third instruction according to the first sensing information;
the pupil component operates in a third state corresponding to the third instruction, and the operation of the first state, the second state and the third state forms a second action.
9. The simulated eye of claim 6, further comprising an electric quantity detection device:
the electric quantity detection device acquires electric quantity information of the system, and the processor generates a fourth instruction and a fifth instruction according to the electric quantity information;
the driving device drives the eyelid component to operate in a fourth state according to the fourth instruction;
the driving device drives the eyeball component to operate in a fifth state according to the fifth instruction;
the operation of the fourth state and the fifth state forms a third action.
10. The simulated eye of claim 6, wherein
after the sensor acquires the first sensing information, the sensor acquires second sensing information and sends the second sensing information to the processor, and the processor determines change information between the first sensing information and the second sensing information within a preset first time period;
the processor generates a sixth instruction, a seventh instruction, an eighth instruction and a ninth instruction according to the change information;
the driving device keeps the eyelid component operating in the first state according to the sixth instruction;
the driving device drives the eyelid component to transition from operating in the first state to a sixth state corresponding to the seventh instruction according to the seventh instruction;
the driving device keeps the eyeball component operating in the second state according to the eighth instruction;
the driving device drives the eyelid component to transition from operating in the first state to a seventh state corresponding to the ninth instruction according to the ninth instruction.
CN201911042744.9A 2019-10-30 2019-10-30 Control method for simulating eye action and simulated eye Pending CN110930843A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911042744.9A CN110930843A (en) 2019-10-30 2019-10-30 Control method for simulating eye action and simulated eye

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911042744.9A CN110930843A (en) 2019-10-30 2019-10-30 Control method for simulating eye action and simulated eye

Publications (1)

Publication Number Publication Date
CN110930843A (en) 2020-03-27

Family

ID=69849838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911042744.9A Pending CN110930843A (en) 2019-10-30 2019-10-30 Control method for simulating eye action and simulated eye

Country Status (1)

Country Link
CN (1) CN110930843A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1335800A * 1999-10-29 2002-02-13 Sony Corp (索尼公司) Robot system, robot device, and its cover
CN2613893Y * 2003-02-24 2004-04-28 Lite-On Technology Corp (光宝科技股份有限公司) Electricity charger capable of indicating charging state
US20070099538A1 * 2005-10-31 2007-05-03 Les Friedland Toy doll
CN101474481A * 2009-01-12 2009-07-08 University of Science and Technology Beijing (北京科技大学) Emotional robot system
CN102698442A * 2012-05-23 2012-10-03 Shanghai Jiao Tong University (上海交通大学) Artificial eye for medical simulator
US20150165336A1 * 2013-12-12 2015-06-18 Beatbots, LLC Robot
CN103677276A * 2013-12-31 2014-03-26 Guangzhou Video-Star Electronics Co., Ltd. (广州视声电子科技有限公司) Action sensing device
TW201733651A * 2016-03-17 2017-10-01 乙太光电科技有限公司 Simulated eyeball apparatus
CN109526208A * 2016-07-11 2019-03-26 Groove X, Inc. (Groove X株式会社) Autonomously acting robot whose activity amount is controlled
CN206934741U * 2017-06-05 2018-01-30 Alpha Group Co., Ltd. (奥飞娱乐股份有限公司) Induction toy
WO2019050034A1 * 2017-09-11 2019-03-14 Groove X, Inc. (Groove X株式会社) Autonomously acting robot staring at partner

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
白石, "让机器人动情" ("Letting Robots Show Emotion"), 《知识就是力量》 (Knowledge is Power) *

Similar Documents

Publication Publication Date Title
US10755465B2 System for neurobehavioural animation
US6657628B1 (en) Method and apparatus for specification, control and modulation of social primitives in animated characters
Vernon et al. A roadmap for cognitive development in humanoid robots
Shapiro Building a character animation system
JP7177497B2 (en) Autonomous action robot that looks at the opponent
Sagar et al. Creating connection with autonomous facial animation
CN108942919A (en) A kind of exchange method and system based on visual human
KR102400398B1 (en) Animated Character Head Systems and Methods
CN109086860A (en) A kind of exchange method and system based on visual human
CN104268921A (en) 3D face expression control method and system
KR100639068B1 (en) apparatus and method of emotional expression for a robot
CN114519758A (en) Method and device for driving virtual image and server
CN110930843A (en) Control method for simulating eye action and simulated eye
CN112596611A (en) Virtual reality role synchronous control method and control device based on somatosensory positioning
KR20200019296A (en) Apparatus and method for generating recognition model of facial expression and computer recordable medium storing computer program thereof
Oh et al. Automatic emotional expression of a face robot by using a reactive behavior decision model
Sebanz The emergence of self: Sensing agency through joint action
JP7128591B2 (en) Shooting system, shooting method, shooting program, and stuffed animal
Lee et al. Development of therapeutic expression for a cat robot in the treatment of autism spectrum disorders
CN111667906A (en) Eyeball structure virtual teaching system and digital model establishing method thereof
Park et al. An efficient virtual patient image model: interview training in pharmacy
JP3519537B2 (en) Body motion generation device and body motion control method
Yin et al. Research on interactive design of social interaction training app for children with autistic spectrum disorder (asd) based on multi-modal interaction
Cuayáhuitl et al. Training an interactive humanoid robot using multimodal deep reinforcement learning
JP2024062266A (en) EMPATHY ASSISTANCE SYSTEM, EMPATHY ASSISTANCE DEVICE, AND EMPATHY ASSISTANCE PROGRAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200327