CN117234325A - Image processing method, device, storage medium and head display equipment


Publication number
CN117234325A
Authority
CN
China
Prior art keywords
eye
image
user
display device
head
Prior art date
Legal status
Pending
Application number
CN202210639122.XA
Other languages
Chinese (zh)
Inventor
陈才
李伟哲
潘定龙
Current Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202210639122.XA
Priority to PCT/CN2023/098980 (WO2023237023A1)
Publication of CN117234325A

Classifications

    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06V 40/18: Eye characteristics, e.g. of the iris

Abstract

The embodiment of the application discloses an image processing method, an image processing device, a storage medium, and a head display device. The method comprises the following steps: acquiring, through an eye tracking module, an eye image of a user wearing the head display device at a predetermined frequency; extracting eye features of the user from the eye image; performing stylized rendering based on the eye features and a predetermined style image to generate a stylized eye image; and displaying the stylized eye image on an external display module. According to the technical scheme provided by the embodiment of the application, the influence of the head display device on the interaction between the wearing user and common users can be reduced, and personalized visual output for the user can be realized.

Description

Image processing method, device, storage medium and head display equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, a storage medium, and a head display device.
Background
With the development of VR (Virtual Reality) and AR (Augmented Reality) technologies, the use of head-mounted display devices, i.e., head display devices, is becoming more and more widespread.
Typically, a user obtains VR or AR content, such as VR video content or game content, by wearing a head display device. However, because the head display device shields the eyes of the wearing user, interaction between the wearing user and common users not wearing a head display device is affected.
Therefore, how to reduce the influence of the head display device on the interaction between the wearing user and the common user becomes a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a storage medium and head display equipment, which can reduce the influence of the head display equipment on the interaction between a wearing user and a common user. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides an image processing method, which is applied to a head display device, where the head display device includes an external display module and an eye tracking module, and the method includes:
acquiring an eye image of a user wearing the head display device at a predetermined frequency through the eye movement tracking module;
extracting an eye feature of the user from the eye image;
performing stylized rendering based on the eye features and the predetermined style image to generate a stylized eye image;
displaying the stylized eye image on the external display module.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to a head display device, where the head display device includes an external display module and an eye tracking module, and the apparatus includes:
an image acquisition module for acquiring, through the eye tracking module, an eye image of a user wearing the head display device at a predetermined frequency;
a feature extraction module for extracting an eye feature of the user from the eye image;
a stylized rendering module for performing stylized rendering based on the eye features and a predetermined style image to generate a stylized eye image;
and a display module for displaying the stylized eye image on the external display module.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the steps of the method described above.
In a fourth aspect, an embodiment of the present application provides a head display apparatus, including:
a processor and a memory;
an eye tracking module, which is in communication connection with the processor, and is used for acquiring an eye image of a user wearing the head display device at a preset frequency;
the processor is configured to: extracting eye features of the user from the eye image, performing stylized rendering based on the eye features and a predetermined style image, and generating a stylized eye image;
and an external display module, communicatively connected to the processor, for displaying the stylized eye image.
In a fifth aspect, an embodiment of the present application provides a head display apparatus, including: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of the method described above.
The technical solutions provided by the embodiments of the application have at least the following beneficial effects:
on the one hand, an eye image of the user wearing the head display device is obtained through the eye tracking module, stylized-rendered, and then displayed on the external display module, so that common users can interact more richly with the wearing user through the external display screen, which reduces the influence of the head display device on the interaction between the wearing user and common users; on the other hand, since the stylized rendering is based on the wearing user's own eye features, personalized visual output for the user can be realized.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the embodiments or in the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the application, and that a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 shows a schematic diagram of an application scenario of an image processing method according to an embodiment of the present application;
fig. 2 illustrates a schematic structural diagram of a head display device according to an embodiment of the present application;
FIG. 3 illustrates a flow diagram of an image processing method provided in accordance with some embodiments of the application;
FIG. 4 illustrates a schematic diagram after stylized rendering provided in accordance with some embodiments of the application;
FIG. 5 shows a flow diagram of an image processing method provided in accordance with further embodiments of the present application;
FIG. 6 is a flow chart of an image processing method according to other embodiments of the present application;
FIG. 7 illustrates a schematic view of an eye image acquired by an eye tracking module provided in accordance with some embodiments of the application;
FIG. 8 illustrates a schematic diagram of identifying eye shapes provided in accordance with some embodiments of the application;
FIG. 9 illustrates a schematic diagram of gaze point locations provided in accordance with some embodiments of the present application;
fig. 10 is a schematic diagram showing the structure of an image processing apparatus according to an embodiment of the present application;
fig. 11 shows a schematic structural diagram of a head display device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of another head display device according to an embodiment of the present application;
Fig. 13 is a schematic structural view of still another head display device according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of an eye tracking module according to an embodiment of the present application;
fig. 15 is a schematic structural view of still another head display device according to an embodiment of the present application;
fig. 16 shows a schematic structural diagram of another head display device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
First, terms related to the embodiments of the present application are schematically explained and described.
Head display equipment: a head-mounted display device such as AR glasses.
Stylized rendering: may also be referred to as style migration; a rendering process that transfers the style of a style image onto the image to be rendered.
Type of eye movement behavior: the type of eye movement of a user wearing the head display device. Eye movement behavior types may include eye closure, blinking, and eye opening; eye opening may further include eye movements such as looking down, looking up, looking left, looking right, looking forward, and squinting, and, in combination with a structured light sensor, eye gazing, eyebrow raising, and the like.
Eye movement special effect: eye special effects corresponding to the type of eye movement behavior.
The following describes in detail the technical scheme of the image processing method according to the embodiment of the present application with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of an application scenario of an image processing method according to an embodiment of the present application.
Referring to fig. 1, the image processing method may be applied to a wearable head display device 100. The head display device 100 includes an external display module 110 disposed on the outside of the head display device 100. The external display module 110 may be a display screen 110, which may be flat or curved, and one or more display areas may be formed on the display screen 110 of the head display device 100; the areas may be rectangular, circular, or other shapes. The display screen 110 may be an OLED (Organic Light-Emitting Diode) screen, an LCD (Liquid Crystal Display), a Micro LED (Micro Light-Emitting Diode) screen, or a naked-eye 3D screen, where the technical implementations of the naked-eye 3D screen include light barriers and lenticular lenses. Alternatively, the display screen 110 may be a touch screen supporting touch operations, through which more interactive functions can be implemented.
It should be noted that the logic of the image processing method according to the embodiment of the present application may be implemented by software or an application program installed in a memory used with the head display device, or by a program written into a corresponding component inside the head display device.
Fig. 2 illustrates a schematic structural diagram of a head display device according to an embodiment of the present application.
Referring to fig. 2, the head display device 200 includes a sensor 205, a processor 210 and a memory 215, an external display module 220 and an internal display module 225. In addition, the head display device 200 may further include other modules or units at appropriate points, such as a power supply, a support structure, or an input/output unit.
The sensor 205 is configured to collect data related to the user wearing the head display device 200 or to the environment surrounding the head display device 200. The sensor 205 may include a plurality of sensors, which may be located on the head display device 200 or outside it, and which send the collected raw data or processed data to the processor 210 of the head display device 200 in a wired or wireless manner. The plurality of sensors may include an eye tracking module for collecting eye features of the user, such as a MEMS (Micro-Electro-Mechanical System) sensor or an event camera. The eye tracking module comprises a transmitting end and a receiving end: if the eye tracking module is a MEMS sensor, the transmitting end may be an infrared laser transmitter or another laser transmitter of a suitable color, and the receiving end is a receiver matching the light source, optionally an infrared laser receiver; if the eye tracking module is an event camera, the transmitting end may be an infrared transmitter and the receiving end is the event camera.
Taking an infrared transmitting end as an example, the infrared transmitting end emits infrared light at a predetermined frequency to illuminate the eyes of the user wearing the head display device 200, and the infrared receiving end captures the light reflected by the eyes to generate an eye image of the user. Using an infrared transmitting end and receiving end eliminates interference from the visible-light band, and because infrared light is invisible, eye tracking also works in dark environments.
Further, in an example embodiment, the sensor 205 further comprises a three-dimensional image sensor for acquiring three-dimensional eye images, i.e., eye images with z-axis information; it may be a structured light sensor or a binocular stereo depth camera. Taking a structured light sensor as an example: the structured light sensor comprises a transmitting end and a receiving end; the transmitting end emits speckle laser, which is replicated and diffused by a diffractive optical element (DOE, Diffractive Optical Element) to form a surface dot matrix; the projected dot matrix is reflected by the 3D contour of the eye and received by the receiving end; the receiving end performs feature comparison on the received reflected signals to obtain a three-dimensional contour near the eyes. Further, an eye image with 3D information can be generated by feature-point matching and reconstruction.
In addition, the sensor 205 may further include: a physiological data collection sensor for collecting physiological data of the user, such as a sensor for body temperature, heart beat, blood pressure or blood sugar, the physiological data collection sensor being located at a portion of the outer frame of the head display device in contact with the skin of the user; sensors for sensing the movement trend of the head display device 200, such as an accelerometer, a magnetometer, an IMU (Inertial Measurement Unit ) sensor, etc.; sensors for sensing the peripheral environment of the head display device 200, such as cameras, depth cameras, millimeter wave, ultrasonic wave, and the like.
The processor 210 and the memory 215 store and process the signals of the respective sensors. The processor 210 and the memory 215 may be located in the head display device 200, or part or all of their operations may be located in the cloud, communicatively connected to the head display device 200 in a wired or wireless manner.
The external display module 220 is configured to display the stylized-rendered eye image of the user wearing the head display device 200. The external display module 220 may be a display screen, and the content displayed on it may be a still picture or a dynamic video; if the external display module 220 includes two display screens, the two screens can display the same content or different content.
The internal display module 225 is used to display AR content or VR content, such as VR video content or VR game content, for viewing by a user wearing the head-mounted device 200.
Further, in an example embodiment, the sensor 205 includes an eye tracking module, such as an infrared camera, through which eye images of the user wearing the head display device 200 are acquired at a predetermined frequency; eye features of the user are extracted from the eye image; stylized rendering is performed based on the eye features and a predetermined style image to obtain a stylized eye image; and the stylized eye image, for example eyes and eyebrows rendered in a cartoon style, is displayed on the external display module 220. According to the technical scheme provided by the embodiment of the application, users outside the head display device can interact more richly with the wearing user through the external display screen, reducing the influence of the head display device on the interaction between the wearing user and common users.
Fig. 3 illustrates a flow diagram of an image processing method provided in accordance with some embodiments of the present application. The execution subject of the image processing method may be a computing device having a computing processing function, such as the processor of the head display device described above. The image processing method includes steps S310 to S340, and the image processing method in the exemplary embodiment is described in detail below with reference to the drawings.
Referring to fig. 3, in step S310, an eye image of a user wearing the head-mounted display device is acquired at a predetermined frequency by an eye-movement tracking module.
In an example embodiment, the head-display device includes an eye-movement tracking module, the eye-movement tracking module being an infrared eye-movement tracking module, the infrared eye-movement tracking module including an infrared emission end and an infrared receiving end, the infrared emission end emitting infrared light at a predetermined frequency to illuminate an eye of a user wearing the head-display device; and receiving the reflected light of the eyes of the user through the infrared receiving end to generate an eye image of the user.
In an example embodiment, the predetermined frequency may be 50 Hz or another suitable frequency. Further, the predetermined frequency may be adaptively adjusted for different scenarios: for example, it may be increased in a scenario where the wearing user is interacting with a common user, and reduced in a scenario where the wearing user is not interacting. The value of the predetermined frequency may also be set according to the processing performance of the head display device.
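A minimal sketch of such an adaptive policy is shown below; the function name, rates, and interaction signal are illustrative assumptions, not specified by the application:

```python
def capture_frequency_hz(is_interacting: bool, base_hz: int = 50, idle_hz: int = 10) -> int:
    # Assumed policy: sample the eye tracker faster while the wearing user is
    # interacting with someone, and slower otherwise to save power and compute.
    return base_hz if is_interacting else idle_hz
```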
It should be noted that, although the infrared eye tracking module is described as an example, it should be understood by those skilled in the art that the eye tracking module may be other suitable optical sensors, such as a near infrared sensor, an RGBD camera with depth information, or a camera with a light source.
In step S320, an eye feature of the user is extracted from the eye image.
In example embodiments, the eye features may include one or more of gaze point location features, pupil features, eyebrow features, and eye shape features. The gaze point location feature may represent the current gaze direction of the user; pupil features include the pupil diameter, and if the detected pupil is smaller than a certain threshold, the eye is judged to be closed; eye shape features represent the eye shapes of different users.
Further, the eye features of the user, such as the pupil state and the gaze point position, are extracted from the eye image by a feature extraction operation. For example, taking a Kalman filtering algorithm as the filtering algorithm, features of the Purkinje image of the eye are extracted and tracked through the Kalman filter to obtain the eye features of the user.
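A minimal sketch of one such filtering step follows: a constant-velocity Kalman filter smoothing a detected pupil (or Purkinje-reflection) center across frames. The state model, noise values, and 50 Hz interval are illustrative assumptions; the application does not specify the filter design.

```python
import numpy as np

class PupilKalman:
    """Constant-velocity Kalman filter over a 2D pupil center (assumed model)."""

    def __init__(self, dt: float = 1 / 50):  # 50 Hz predetermined frequency
        self.x = np.zeros(4)              # state: [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # constant-velocity transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # only position is measured
        self.Q = np.eye(4) * 1e-3         # process noise (assumed)
        self.R = np.eye(2) * 1e-1         # measurement noise (assumed)

    def step(self, measured_xy):
        # Predict, then correct with the pupil center found in this frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        innovation = np.asarray(measured_xy, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]  # smoothed pupil center
```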
Further, taking a structured light extraction algorithm as an example: after the transmitting end emits speckle laser with a specific code, the laser is reflected by the object surface; after the receiving end receives the reflected signal, filtering and other signal processing are applied to form a speckle pattern (containing information such as spatial phase and light intensity); local or global feature comparison is then performed to locate the feature points in the eye image and/or to compare them with one or more previous eye images, thereby obtaining the contour near the eyes and its changes.
It should be noted that, although feature extraction is described by taking the filtering algorithm and the structured light extraction algorithm as examples, those skilled in the art should understand that the feature extraction operation may use other suitable feature extraction models, such as an object detection model or a recurrent neural network model, which also falls within the scope of the embodiments of the present application.
In step S330, a stylized rendering is performed based on the eye feature and the predetermined style image, and a stylized eye image is generated.
In an example embodiment, stylized rendering refers to a rendering process that transfers the style of a style image onto the image to be rendered. For example, the style of a cartoon eye image is migrated to the eye image of the user wearing the head display device. The styles of the predetermined style image include, but are not limited to: cartoon style, anthropomorphic style, pictorial style, exaggerated style, and the like. The cartoon style may include the style of cartoon eyes, and the anthropomorphic style may include the style of a virtual character's eyes. The predetermined style image to be used may be set in advance, for example, in response to a style selection operation by the user.
In some embodiments, the stylized eye image is generated by adjusting and rendering the predetermined style image based on eye features of an eye image of a user wearing the head-mounted device, which may include one or more of gaze point location features, pupil features, iris location features, eyebrow features, eyelid features, and eye shape. For example, the eye feature includes an eye shape and a pupil diameter, and the predetermined style image is adjusted and rendered based on the eye shape and the pupil diameter of the eye image of the user wearing the head-mounted display device, to generate the stylized eye image.
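The following sketch illustrates the "adjust and render" idea: extracted eye features are mapped to simple transform parameters for a predetermined style image. The feature fields, the reference pupil size, and the 0.5 openness ratio are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class EyeFeatures:
    eye_width: float       # px, from the eye selection box
    eye_height: float      # px
    pupil_diameter: float  # px

def style_render_params(f: EyeFeatures, ref_pupil_px: float = 30.0) -> dict:
    # Eye openness: how tall the eye is relative to its width, where an
    # assumed height/width ratio of 0.5 counts as fully open.
    openness = min(f.eye_height / max(f.eye_width, 1e-6) / 0.5, 1.0)
    return {
        "vertical_scale": openness,                      # squash a half-closed style eye
        "pupil_scale": f.pupil_diameter / ref_pupil_px,  # dilate/contract the style pupil
    }
```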
Further, in other embodiments, the ocular image may be stylized rendered by a stylized transformation neural network model for implementing multiple styles of transformations of the ocular image. For example, a pre-selected predetermined style image and an acquired eye image of a user are input to a style conversion neural network, and the style of the style image is transferred to the eye image of the user to generate a stylized eye image.
In an example embodiment, the style transformation neural network includes a style inference network for acquiring corresponding style features from the style image and a style migration network for delivering the acquired style features to the eye image.
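One common way such a pair of networks is realized is adaptive instance normalization (AdaIN), where channel statistics inferred from the style image are transferred onto the eye-image features; the application does not specify the architecture, so the following PyTorch sketch is an assumption, not the method's actual network:

```python
import torch

def mean_std(feat: torch.Tensor, eps: float = 1e-5):
    # Channel-wise mean/std over spatial positions, returned as (B, C, 1, 1).
    b, c = feat.shape[:2]
    flat = feat.view(b, c, -1)
    mean = flat.mean(dim=2).view(b, c, 1, 1)
    std = (flat.var(dim=2) + eps).sqrt().view(b, c, 1, 1)
    return mean, std

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor) -> torch.Tensor:
    # Normalise the eye-image (content) features, then re-scale them with the
    # style features' statistics, i.e. deliver the style onto the eye image.
    c_mean, c_std = mean_std(content_feat)
    s_mean, s_std = mean_std(style_feat)
    return (content_feat - c_mean) / c_std * s_std + s_mean
```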
In step S340, the generated stylized eye image is displayed on the external display module.
In an example embodiment, after the stylized rendered eye image is acquired, the stylized rendered eye image is displayed on an external display module, such as an external display screen. FIG. 4 illustrates a schematic diagram after stylized rendering provided in accordance with some embodiments of the application. Referring to fig. 4, the external display module is an external display screen on which 6 stylized rendered eye images are displayed.
According to the technical scheme in the example embodiment of fig. 3, on the one hand, an eye image of the user wearing the head display device is obtained through the eye tracking module, stylized-rendered, and displayed on the external display module, so that external persons can interact more richly with the wearing user through the external display screen, reducing the influence of the head display device on the interaction between the wearing user and common users; on the other hand, since stylized rendering is based on the wearing user's own eye features, personalized visual output can be realized.
Furthermore, in an example embodiment, the eye features include the eye angle of the user, and the image processing method further includes: collecting an external image of the head display device through an image acquisition module; if the external image includes an external person, identifying the face position of the external person; and dynamically adjusting the eye angle of the user based on the face position of the external person. For example, an external image of the head display device is acquired by a camera, the position of an external person in the external image is identified, the face position of the external person is determined, and the eye angle of the user wearing the head display device is dynamically adjusted based on that face position. For external persons at different angles, 2D-rendered eye images carrying eye angle information may be displayed; if the display screen is a 3D display screen, such as a naked-eye 3D display screen, 3D special-effect eye images at different angles can be rendered.
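A hedged sketch of the angle computation: the detected face center in the external camera image is mapped to yaw/pitch for the rendered eyes, assuming a pinhole camera aligned with the wearer's forward direction. The field of view and function names are assumptions.

```python
import math

def eye_angle_towards(face_center_px, image_size_px, horizontal_fov_deg: float = 90.0):
    # Map the detected face position in the external camera image to yaw/pitch
    # angles for the rendered eyes (pinhole model; FOV and alignment assumed).
    w, h = image_size_px
    fx = (w / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)  # focal length, px
    dx = face_center_px[0] - w / 2
    dy = face_center_px[1] - h / 2
    yaw = math.degrees(math.atan2(dx, fx))
    pitch = -math.degrees(math.atan2(dy, fx))  # image y grows downward
    return yaw, pitch
```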
According to the technical scheme of this embodiment, the eye angle of the user wearing the head display device is adjusted based on the facial position of the external person, so the displayed eye angle follows the external environment and the external person can intuitively tell whether the wearing user is communicating with them.
Furthermore, in an example embodiment, the head display device further comprises a physiological data acquisition sensor, including sensors measuring physiological data such as body temperature, heartbeat, blood pressure, or blood glucose. The physiological data acquisition sensor is located at a portion of the exterior of the head display device that is in contact with the skin of the user, for example on the outer frame; the physiological data acquisition sensor may also be a wearable device external to the head display device, such as a bracelet. The image processing method further includes: collecting physiological data of the user through the physiological data acquisition sensor; extracting physiological features of the user from the collected physiological data; and adjusting and rendering the predetermined style image based on the eye features and the physiological features.
For example, if the physiological data includes heartbeat data, the physiological features include the heartbeat rate, and the predetermined style image may be adjusted and rendered according to the heartbeat data and the eye features; for instance, if the heartbeat is fast, the eyes in the predetermined style image may be rendered wide open.
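A simple assumed mapping from heart rate to how wide the stylized eyes are opened might look like this; the constants are illustrative, not from the application:

```python
def eye_openness_scale(heart_rate_bpm: float, rest_bpm: float = 70.0, max_bpm: float = 140.0) -> float:
    # Assumed mapping: a faster heartbeat widens the stylized eyes.
    t = (heart_rate_bpm - rest_bpm) / (max_bpm - rest_bpm)
    t = max(0.0, min(1.0, t))          # clamp to [0, 1]
    return 1.0 + 0.4 * t               # up to 40% wider (illustrative constant)
```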
According to the technical scheme in the embodiment, personalized visual output of the user can be further realized by combining physiological characteristics and eye characteristics to perform stylized rendering.
Further, in an example embodiment, the head display apparatus further includes a motion sensor, which may include an accelerometer, a magnetometer, an IMU sensor, and the like; the image processing method further comprises: acquiring motion data of the head display device through the motion sensor; determining the position offset of the head display device based on the motion data; and adjusting and rendering the predetermined style image based on the eye features and the position offset.
For example, if a certain position deviation between the head display device and the eyes is determined from the collected motion data, the predetermined style image is adjusted and rendered in combination with the position deviation and the eye features. For instance, if the user takes off the head display device and puts it on again, there is a certain positional shift relative to the previous eye position, and the position of the rendered stylized image is adjusted based on this shift.
According to the technical scheme in this embodiment, the predetermined style image is adjusted and rendered in combination with the motion data of the head display device, so the stability and consistency of the stylized image displayed on the external display module are not affected even if the head display device shifts on the wearer's head or is repeatedly taken off and put back on.
Fig. 5 shows a flow diagram of an image processing method according to further embodiments of the present application.
Referring to fig. 5, in step S510, an eye feature of a user is extracted from an eye image of the user wearing the head-display device.
In the exemplary embodiment, the implementation process and implementation effect of step S510 and step S320 are substantially similar, and will not be described herein.
In step S520, the type of eye movement behavior of the user is determined based on the eye characteristics of the user.
In an example embodiment, the eye movement behavior types include a blink type and an open eye type, and the eye movement behavior type of the user may be determined through a neural network model, for example, the neural network model is set as a classification network model, eye features of the user are input to the classification network model, and the eye movement behavior type of the user is determined.
Further, in some example embodiments, the eye features include pupil features, and closed-eye images among the eye images are determined based on the pupil features of the wearing user; the closed-eye images among the plurality of eye images acquired within a predetermined time are counted to obtain the number of closed-eye images; if the number of closed-eye images is greater than or equal to a predetermined threshold, the eye movement behavior type of the user is determined to be the closed-eye type; if the number is smaller than the predetermined threshold, the eye movement behavior type is determined to be the open-eye type. Further, if the eye movement behavior type is the closed-eye type, the next eye image is acquired; if the next eye image is an open-eye image, the eye movement behavior type of the user is determined to be the blink type. That is, eye closure is confirmed once the number of consecutive closed-eye images reaches the threshold, the eyes are considered closed until the next open-eye image is acquired, and the closure followed by reopening together forms a blink.
For example, let the predetermined time be 1 s, and record the number of closed-eye images in the buffer as LD. When a closed-eye image appears for the first time, LD is 1; each new closed-eye image adds 1 to LD. The user is determined to be in the closed-eye state when LD exceeds or reaches a certain threshold, for example LD > 50 or LD = 50. Note that LD records the number of consecutive closed-eye images; if the eyes open in between, LD is cleared.
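The LD counter amounts to a small per-frame state machine. A minimal sketch under the assumptions above (50-frame threshold, one call per acquired eye image) follows; the class and method names are illustrative:

```python
class BlinkDetector:
    """Per-frame sketch of the LD counter described above (values assumed)."""

    def __init__(self, threshold: int = 50):  # e.g. 50 frames at 50 Hz ~ 1 s
        self.ld = 0                 # consecutive closed-eye images
        self.threshold = threshold
        self.eyes_closed = False    # closed-eye state confirmed

    def on_frame(self, is_closed_eye_image: bool):
        if is_closed_eye_image:
            self.ld += 1
            if self.ld >= self.threshold:
                self.eyes_closed = True   # closed-eye type confirmed
            return None
        # An open-eye image arrived: a confirmed closure becomes a blink.
        blink_frames = self.ld if self.eyes_closed else 0
        self.ld = 0                 # LD is cleared whenever the eyes open
        self.eyes_closed = False
        return blink_frames or None  # blink duration in frames, if any
```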
In step S530, the predetermined style image is adjusted and rendered based on the eye movement behavior type and the eye feature, and a stylized eye image is generated.
In an example embodiment, if the user eye movement behavior type is a blink type, determining a blink duration for the user based on the number of closed-eye images; and adjusting and rendering the images with the preset styles based on the blink duration and the eye characteristics to generate blink animation. If the eye movement behavior type of the user is the eye opening type, determining the gaze point position of the user; determining an eye movement direction of the user based on the gaze point position of the user; based on the eye movement direction and the pupil diameter of the user, the predetermined style image is adjusted and rendered, and a stylized eye image is generated.
In an example embodiment, eye movement directions include, but are not limited to, looking down, looking up, looking left, looking right, looking forward, and the like. Fig. 4 shows stylized eye images corresponding to different eye movement directions.
In step S540, the stylized rendered eye image is displayed on the external display module.
In the exemplary embodiment, the implementation process and implementation effect of step S540 and step S340 are substantially similar, and will not be described herein.
According to the technical scheme in the example embodiment of fig. 5, the eye images of the user wearing the head display device are acquired through the eye tracking module, and the predetermined style image is adjusted and rendered according to the eye movement behavior type and the eye features before being displayed on the external display module; stylized eye images corresponding to different eye movement behavior types can thus be displayed, so that external persons can interact more richly with the wearing user through the external display screen.
Further, in an example embodiment, the image processing method further includes: determining the eye movement special effect corresponding to the eye movement behavior type according to a preset correspondence, where the preset correspondence is the correspondence between eye movement behavior types and eye movement special effects, and adjusting and rendering the predetermined style image based on the eye features, the eye movement behavior type, and the corresponding eye movement special effect. For example, the correspondence between eye movement behavior types and eye movement special effects is stored in a database in advance; if the eye movement behavior type is the blink type, the corresponding blink special effect is obtained, and the predetermined style image is adjusted and rendered based on the eye image and the blink special effect.
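Such a preset correspondence can be as simple as a lookup table keyed by eye movement behavior type; the keys and effect names below are illustrative assumptions, not effects named in the application:

```python
# Preset correspondence between eye movement behavior types and eye movement
# special effects; the entries are illustrative assumptions.
EYE_EFFECTS = {
    "blink": "sparkle_blink",
    "look_left": "comet_trail",
    "squint": "exaggerated_squint",
}

def effect_for(behavior_type: str, default: str = "none") -> str:
    # Fall back to a neutral effect when the type has no preset entry.
    return EYE_EFFECTS.get(behavior_type, default)
```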
According to the technical scheme in the above-mentioned example embodiment, by combining the eye movement special effects corresponding to the eye movement behavior types, the expression enhancement of the eye image can be realized, and the exaggerated eye special effects are output.
Fig. 6 shows a flow diagram of an image processing method according to further embodiments of the present application.
Referring to fig. 6, in step S605, the last acquired eye image is acquired.
In an example embodiment, the last acquired eye image is acquired by an eye tracking module acquiring an eye image of a user wearing the head-mounted device at a predetermined frequency, e.g., 50 Hz. The number of closed-eye images LD is stored in the buffer memory of the memory, and is 0 in the initial state. Fig. 7 illustrates a schematic view of an eye image acquired by an eye tracking module provided in accordance with some embodiments of the application.
In step S610, it is determined whether the acquired eye image is a closed-eye image.
In an example embodiment, whether the acquired eye image is a closed-eye image is determined based on pupil data of the eye image, such as the pupil diameter. If the acquired eye image is a closed-eye image, that is, no pupil data is acquired, 1 is added to the number of closed-eye images and the process returns to step S605 to acquire the next eye image; if the acquired eye image is not a closed-eye image, the process proceeds to step S615.
In step S615, it is determined whether the number of closed-eye images LD is greater than or equal to a predetermined number.
In an exemplary embodiment, if the number of eye-closing images LD is greater than or equal to a predetermined number, for example 50, it indicates that the user has closed eyes for a period of time, i.e., the user has blinked, proceeding to step S620; if the number of closed-eye images LD is smaller than the predetermined number, the process proceeds to step S635.
It should be noted that, the predetermined number may be adjusted according to the hardware processing capability of the head display device, for example, if the hardware processing capability is strong, the predetermined number may be set to be large; if the hardware processing power is weak, the predetermined number may be set smaller.
In step S620, an eye-closing period is calculated, and a stylized blink animation is generated.
In an example embodiment, the eye-closing time period is calculated based on the number of eye-closing images LD, and the eye images are stylized and rendered according to different eye shapes identified in the eye images in the blinking process, so that a stylized blinking animation is generated.
In step S625, a blink animation is displayed through the external display module.
In an example embodiment, a blink animation is displayed on an external display screen of the head display device, thereby outputting a blink special effect close to a real eye state.
In step S630, LD is set to 0.
In an example embodiment, the number of closed-eye images LD stored in the cache of the memory is set to 0.
In step S635, an eye shape is identified.
In an example embodiment, an eye shape is identified from an eye image of a user, for example, eye shape features of the user are extracted from the eye image by a feature extraction operation. Fig. 8 illustrates a schematic diagram of identifying eye shapes provided in accordance with some embodiments of the application. Referring to fig. 8, an eye shape, an iris shape, and a pupil shape are represented by three selection boxes, respectively, wherein an oval box represents an eye selection box of the eye shape, a large circle box represents an iris selection box of the iris shape, and a small circle box represents a pupil selection box of the pupil shape.
In step S640, the gaze point position and the pupil diameter are calculated.
In an example embodiment, the gaze point location of the user and the pupil diameter are calculated based on the eye shape features of the wearing user. Referring to fig. 8 and 9, the gaze point position and the pupil diameter of the user are calculated based on the three selection frames of the eye shape in fig. 8, the pupil diameter is calculated based on the size of the pupil selection frame, and the gaze point position of the user is determined based on the position of the pupil selection frame in the eye selection frame. For example, the pitch angle pitch of the pupil selection box of fig. 9 is determined to be 15 degrees and the heading angle yaw is determined to be 9 degrees.
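A hedged sketch of this computation: the pupil selection box's offset within the eye selection box is mapped to pitch/yaw, and the pupil diameter is taken from the box size. The linear mapping and the +/-30 degree range are assumptions; the application gives example values (pitch 15 degrees, yaw 9 degrees) but not the mapping itself.

```python
def gaze_and_pupil(pupil_box, eye_box, max_angle_deg: float = 30.0):
    # Boxes are (x, y, width, height) in image pixels.
    px, py, pw, ph = pupil_box
    ex, ey, ew, eh = eye_box
    pupil_center = (px + pw / 2, py + ph / 2)
    eye_center = (ex + ew / 2, ey + eh / 2)
    # Normalised offset of the pupil inside the eye box, mapped linearly to an
    # assumed +/-30 degree gaze range.
    yaw = (pupil_center[0] - eye_center[0]) / (ew / 2) * max_angle_deg
    pitch = -(pupil_center[1] - eye_center[1]) / (eh / 2) * max_angle_deg
    pupil_diameter = (pw + ph) / 2  # pupil size from its selection box
    return pitch, yaw, pupil_diameter
```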
In step S645, stylized rendering is performed.
In an example embodiment, the predetermined style image is adjusted and rendered based on eye features of an eye image of a user wearing the head-mounted device. For example, the eye feature includes an eye shape, a gaze point position, and a pupil diameter, and the predetermined style image is adjusted and rendered based on the eye shape, the gaze point position, and the pupil diameter of the eye image of the user wearing the head-mounted display device.
In step S650, a stylized rendered eye image is output.
According to the technical scheme in the example embodiment of fig. 6, by identifying the eye features of the user wearing the head display device, such as the eye shape, the gaze point position, and the pupil diameter, and performing stylized rendering based on these eye features and the style image, personalized visual output can be achieved; the effect presented for each user is different, realizing a personalized display unique to every user.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 10 is a schematic diagram showing the structure of an image processing apparatus according to an exemplary embodiment of the present application.
Referring to fig. 10, the image processing apparatus 1000 may be implemented as all or a part of an apparatus by software, hardware, or a combination of both, and the image processing apparatus 1000 is applied to a head-display device including an external display module and an eye-tracking module. The image processing apparatus 1000 includes an image acquisition module 1010, a feature extraction module 1020, a stylized rendering module 1030, and a display module 1040. Wherein:
an image acquisition module 1010 for acquiring, by the eye-movement tracking module, an eye image of a user wearing the head-display device at a predetermined frequency;
a feature extraction module 1020 for extracting an eye feature of the user from the eye image;
a stylized rendering module 1030 configured to perform stylized rendering based on the eye feature and the predetermined style image, to generate a stylized eye image;
a display module 1040 for displaying the stylized eye image on the external display module.
In some example embodiments, based on the above-described aspects, the head display device further includes a physiological data acquisition sensor, and the apparatus 1000 further includes:
the physiological data acquisition module is used for acquiring physiological data of the user through the physiological data acquisition sensor;
A physiological characteristic extraction module for extracting physiological characteristics of the user from the collected physiological data of the user,
the stylized rendering module 1030 is further configured to:
the predetermined style image is adjusted and rendered based on the ocular feature and the physiological feature.
In some example embodiments, based on the above-described aspects, the head display apparatus further includes: motion sensor, the device 1000 further comprising:
the motion data acquisition module is used for acquiring motion data of the head display equipment through the motion sensor;
a position deviation determining module for determining the position deviation of the head display device based on the motion data of the head display device,
the stylized rendering module 1030 is further configured to:
and adjusting and rendering the predetermined style image based on the eye feature and the position offset.
In some example embodiments, based on the above scheme, the stylized rendering module 1030 includes: a style migration unit for adjusting and rendering the predetermined style image based on the eye features.
In some example embodiments, based on the above-described scheme, the style migration unit includes:
An eye behavior type determining unit configured to determine an eye movement behavior type of the user based on the eye feature;
and the adjusting and rendering unit is used for adjusting and rendering the image of the preset style based on the eye movement behavior type and the eye characteristics.
In some example embodiments, based on the above-described aspects, the ocular feature includes a pupil feature, and the eye behavior type determination unit is configured to:
determining whether the ocular image is a closed-eye image based on the pupil characteristics;
counting the eye-closing images in a plurality of eye images acquired in a preset time to obtain the number of the eye-closing images;
if the number of the eye-closing images is greater than or equal to a preset threshold value, determining that the eye movement behavior type of the user is an eye-closing type;
and if the number of the closed-eye images is smaller than the preset threshold value, determining that the eye movement behavior type of the user is an open-eye type.
In some example embodiments, based on the above-described scheme, the eye behavior type determining unit is further configured to:
if the eye movement behavior type of the user is a closed eye type, acquiring a next eye image;
and if the next eye image is an open eye image, determining that the eye movement behavior type of the user is a blink type.
In some example embodiments, based on the above-described scheme, the adjusting and rendering unit is configured to:
if the eye movement behavior type of the user is a blink type, determining blink duration of the user based on the eye-closing image quantity;
adjusting and rendering the predetermined style image based on the blink duration and the eye features, generating blink animation,
the display module 1040 is configured to:
displaying the blink animation on the external display module.
In some example embodiments, based on the above-described scheme, the apparatus 1000 further includes:
the gaze point determining module is used for determining the gaze point position of the user based on the eye feature if the eye movement behavior type of the user is an open eye type;
and the eye movement direction determining module is used for determining the eye movement direction of the user based on the gaze point position of the user.
In some example embodiments, based on the above-described scheme, the adjusting and rendering unit is configured to:
determining a pupil diameter of the eye image based on the pupil characteristics;
the predetermined style image is adjusted and rendered based on the eye movement direction and the pupil diameter.
In some example embodiments, based on the above-described aspects, the apparatus further includes:
a feature determination module, configured to determine an eye movement special effect corresponding to the eye movement behavior type according to a preset correspondence between the eye movement behavior type and the eye movement special effect,
the adjustment and rendering unit is configured to:
and adjusting and rendering the image of the preset style based on the eye feature, the eye movement behavior type and the corresponding eye movement special effect.
In some example embodiments, based on the above, the eye tracking module includes a transmitting end and a receiving end, and the image acquisition module 1010 is configured to:
emitting incident light at a predetermined frequency through the emitting end to illuminate eyes of a user wearing the head display device;
and receiving the reflected light of the eyes of the user through a receiving end to generate an eye image of the user.
In some example embodiments, based on the above, the head display device further includes an image acquisition module, the eye feature includes an eye angle of the user, and the apparatus further includes:
the external image acquisition module is used for acquiring external images of the head display device through the image acquisition module;
The face position identification module is used for identifying the face position of the external person if the external person is included in the external image;
and the position adjustment module is used for adjusting the eye angle of the user based on the face position of the external person.
It should be noted that, when the image processing apparatus provided in the foregoing embodiment executes the image processing method, the division into the above functional modules is only an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In addition, the image processing apparatus and the image processing method provided in the foregoing embodiments belong to the same concept; the implementation process and effects are described in detail in the method embodiments and are not repeated here.
Fig. 11 shows a schematic structural diagram of a head display device according to an embodiment of the present application.
Referring to fig. 11, the head display apparatus 1100 includes: a processor 1110, memory, an eye-tracking module 1120, and an external display module 1130. Wherein,
an eye tracking module 1120 communicatively coupled to the processor 1110 for acquiring an eye image of a user wearing the head-mounted device at a predetermined frequency;
The processor 1110 is configured to: extracting eye features of the user from the eye image, performing stylized rendering based on the eye features and a predetermined style image, and generating a stylized eye image;
an external display module 1130, in communication with the processor 1110, is configured to display the stylized eye image.
In some example embodiments, referring to fig. 12, the head display device 1100 further includes:
a physiological data acquisition sensor 1210, communicatively coupled to the processor 1110, for acquiring physiological data of the user,
the processor 1110 is further configured to: and extracting physiological characteristics of the user from the acquired physiological data of the user, and adjusting and rendering the image of the preset style based on the eye characteristics and the physiological characteristics.
In some example embodiments, based on the above-described aspects, the physiological data acquisition sensor is located at a portion of the outer frame of the head display device that is in contact with the skin of the user.
In some example embodiments, referring to fig. 13, the head display device 1100 further includes:
the motion sensor 1310 is in communication connection with the processor and is used for acquiring motion data of the head display device;
The processor 1110 is further configured to: determine the position offset of the head display device based on the motion data of the head display device, and adjust and render the predetermined style image based on the eye features and the position offset.
In some example embodiments, referring to fig. 14, the eye tracking module 1120 includes:
a transmitting end 1410 for emitting incident light at a predetermined frequency to illuminate eyes of a user wearing the head display device;
and a receiving end 1420, configured to receive the reflected light of the eyes of the user and generate an eye image of the user.
In some example embodiments, referring to fig. 15, the head display device 1100 further includes:
an external image acquisition module 1510, communicatively connected to the processor 1110, for acquiring external images of the head display device;
the processor 1110 is further configured to: if the external image comprises an external person, identifying the face position of the external person; the eye angle of the user is dynamically adjusted based on the facial position of the external person.
In some example embodiments, based on the above-described scheme, the processor 1110, when performing the stylized rendering based on the eye feature and the predetermined style image, specifically performs the following operations:
Adjusting and rendering the predetermined style image based on the ocular feature; or,
based on the ocular feature, a style of the predetermined style image is migrated to the ocular image.
In some example embodiments, based on the above-described scheme, the processor 1110, when performing the adjusting and rendering of the predetermined style image based on the eye feature, specifically performs the following operations:
determining an eye movement behavior type of the user based on the eye feature;
and adjusting and rendering the predetermined style image based on the eye movement behavior type and the eye features.
In some example embodiments, based on the above-described aspects, the ocular feature includes a pupil feature, and the processor 1110, when executing the determining the type of eye movement behavior of the user based on the ocular feature, specifically performs the following operations:
determining whether the ocular image is a closed-eye image based on the pupil characteristics;
counting the eye-closing images in a plurality of eye images acquired in a preset time to obtain the number of the eye-closing images;
if the number of the eye-closing images is greater than or equal to a preset threshold value, determining that the eye movement behavior type of the user is an eye-closing type;
And if the number of the closed-eye images is smaller than the preset threshold value, determining that the eye movement behavior type of the user is an open-eye type.
In some example embodiments, based on the above, the processor 1110 is further configured to:
if the eye movement behavior type of the user is a closed eye type, acquiring a next eye image;
and if the next eye image is an open eye image, determining that the eye movement behavior type of the user is a blink type.
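Putting the three determinations together, a minimal sketch of the classification might look as follows: count the closed-eye images in a window, compare against the threshold, and promote the closed-eye type to the blink type when the next eye image is open. The window length, the threshold, and the boolean stand-in for the pupil feature are all assumptions of the sketch.

```python
THRESHOLD = 3  # assumed minimum closed-eye images for the closed-eye type

def is_closed(pupil_visible: bool) -> bool:
    # The application infers closure from pupil features; a boolean stands in here.
    return not pupil_visible

def classify(window_pupil_visible: list, next_pupil_visible: bool) -> str:
    closed_count = sum(is_closed(v) for v in window_pupil_visible)
    if closed_count >= THRESHOLD:
        # Closed-eye type; check the next eye image to detect a blink.
        return "blink" if next_pupil_visible else "closed"
    return "open"

print(classify([False, False, False, True, True], True))  # 3 closed, then open -> blink
print(classify([True, True, True, True, True], True))     # 0 closed -> open
```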
In some example embodiments, based on the above-described scheme, the processor 1110, when performing the adjusting and rendering of the predetermined style image based on the eye movement behavior type and the eye feature, specifically performs the following operations:
if the eye movement behavior type of the user is the blink type, determining a blink duration of the user based on the number of eye-closing images;
adjusting and rendering the predetermined style image based on the blink duration and the eye features to generate a blink animation,
the displaying the stylized eye image on the external display module includes:
displaying the blink animation on the external display module.
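One plausible way to derive the blink animation, sketched below: the number of closed-eye images divided by the acquisition frequency gives the blink duration, which fixes how many animation frames interpolate the eyelid openness. The frame rates and the triangular easing are illustrative choices, not requirements of this application.

```python
def blink_keyframes(closed_image_count: int, acquisition_hz: float = 30.0,
                    animation_fps: float = 60.0) -> list:
    """Eyelid openness keyframes (1.0 open -> 0.0 closed -> 1.0 open)."""
    duration_s = closed_image_count / acquisition_hz  # blink duration from the count
    n = max(2, int(duration_s * animation_fps))
    # Triangular profile: openness eases 1 -> 0 -> 1 over the blink.
    return [1.0 - min(i, n - i) / (n / 2) for i in range(n + 1)]

print(blink_keyframes(3))  # ~0.1 s blink at the assumed 30 Hz acquisition
```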
In some example embodiments, based on the above, the processor 1110 is further configured to:
If the eye movement behavior type of the user is an open eye type, determining the gaze point position of the user based on the eye feature;
based on the gaze point position of the user, an eye movement direction of the user is determined.
In some example embodiments, based on the above-described scheme, the processor 1110, when performing the adjusting and rendering of the predetermined style image based on the eye movement behavior type and the eye feature, specifically performs the following operations:
determining a pupil diameter of the eye image based on the pupil characteristics;
the predetermined style image is adjusted and rendered based on the eye movement direction and the pupil diameter.
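A sketch combining the two signals above: the eye movement direction follows from the gaze point relative to the screen center, and the rendered pupil is scaled by the measured diameter against a nominal value. The screen size and the 4 mm nominal diameter are assumptions of the sketch.

```python
import math

def eye_movement_direction(gaze_xy, screen_wh=(1920, 1080)) -> float:
    """Direction of gaze from screen center, in degrees (screen y axis points down)."""
    dx = gaze_xy[0] - screen_wh[0] / 2
    dy = gaze_xy[1] - screen_wh[1] / 2
    return math.degrees(math.atan2(dy, dx))

def pupil_scale(pupil_diameter_mm: float, nominal_mm: float = 4.0) -> float:
    """Scale factor for the rendered pupil relative to a nominal diameter."""
    return pupil_diameter_mm / nominal_mm

print(eye_movement_direction((1600, 300)))  # gaze toward the upper right
print(pupil_scale(5.2))                     # dilated pupil -> enlarge rendered pupil
```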
In some example embodiments, based on the above, the processor 1110 is further configured to:
determining the eye movement special effect corresponding to the eye movement behavior type according to the eye movement behavior type and a preset correspondence, wherein the preset correspondence is the correspondence between eye movement behavior types and eye movement special effects,
the adjusting and rendering the predetermined style image based on the eye movement behavior type and the eye feature includes:
and adjusting and rendering the predetermined style image based on the eye features, the eye movement behavior type, and the corresponding eye movement special effect.
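The preset correspondence is naturally expressed as a lookup table. In the sketch below the effect names are invented examples, since the application does not enumerate particular special effects.

```python
# Hypothetical preset correspondence: behavior type -> eye movement special effect.
PRESET_CORRESPONDENCE = {
    "blink": "sparkle_burst",
    "closed": "sleepy_zzz",
    "open": "ambient_glow",
}

def effect_for(behavior_type: str) -> str:
    return PRESET_CORRESPONDENCE.get(behavior_type, "none")

print(effect_for("blink"))  # -> sparkle_burst
```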
It should be noted that the head display device provided in the above embodiments and the image processing method embodiments belong to the same concept; for the detailed implementation process and effects, refer to the method embodiments, which are not repeated here.
An embodiment of the present application further provides a computer storage medium. The computer storage medium may store a plurality of instructions adapted to be loaded by a processor to execute the image processing method of the foregoing embodiments; for the specific execution process, refer to the description of the foregoing embodiments, which is not repeated here.
The present application also provides a computer program product storing at least one instruction, where the at least one instruction is loaded and executed by a processor; for the specific execution process, refer to the description of the foregoing embodiments, which is not repeated here.
Referring to fig. 16, a schematic structural diagram of a head display device is provided in an embodiment of the present application. As shown in fig. 16, the head display device 1600 may include: at least one processor 1601, at least one communication module 1604, an input output interface 1603, a memory 1605, at least one communication bus 1602.
Wherein a communication bus 1602 is used to enable connected communication between these components.
The input/output interface 1603 may include a display screen (Display) and a camera (Camera). Optionally, the input/output interface 1603 may further include an external display module, such as an external display screen.
The communication module 1604 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface).
The processor 1601 may include one or more processing cores. The processor 1601 connects various parts of the head display device 1600 through various interfaces and lines, and performs various functions of the head display device 1600 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1605 and invoking data stored in the memory 1605. Optionally, the processor 1601 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 1601 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU renders and draws the content to be displayed on the display screen; the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 1601 and may instead be implemented by a separate chip.
The memory 1605 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 1605 includes a non-transitory computer-readable storage medium. The memory 1605 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1605 may include a program storage area and a data storage area: the program storage area may store instructions for implementing an operating system, instructions for at least one function (e.g., a touch function, a sound playing function, an image playing function), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 1605 may also be at least one storage device located remotely from the processor 1601. As shown in fig. 16, the memory 1605, as a computer storage medium, may include an operating system, a communication module, an input/output interface module, and an image processing application program.
In the head display device 1600 shown in fig. 16, the input/output interface 1603 is mainly used for providing an input interface for a user, and acquiring data input by the user; and the processor 1601 may be configured to invoke an image processing program stored in the memory 1605, such that the processor 1601 performs steps in an image processing method according to various exemplary embodiments of the present disclosure. For example, the processor 1601 may be configured to call an image processing application stored in the memory 1605 and specifically perform the following operations:
Acquiring an eye image of a user wearing the head display device at a predetermined frequency through the eye movement tracking module;
extracting an eye feature of the user from the eye image;
performing stylized rendering based on the eye features and the predetermined style image to generate a stylized eye image;
displaying the stylized eye image on the external display module.
In some embodiments, based on the above-described scheme, the processor 1601 performs the stylized rendering of the eye image based on the eye feature and the predetermined style image by specifically performing the following operations:
and adjusting and rendering the predetermined style image based on the eye features.
In some embodiments, based on the above-described scheme, the processor 1601 adjusts and renders the predetermined style image based on the eye feature by performing the following operations:
determining an eye movement behavior type of the user based on the eye feature;
and adjusting and rendering the predetermined style image based on the eye movement behavior type and the eye features.
In some embodiments, based on the above-described approach, the ocular feature includes a pupil feature, and the processor 1601, when executing the determining the type of eye movement behavior of the user based on the ocular feature, specifically performs the following operations:
Determining whether the ocular image is a closed-eye image based on the pupil characteristics;
counting the eye-closing images in a plurality of eye images acquired in a preset time to obtain the number of the eye-closing images;
if the number of the eye-closing images is greater than or equal to a preset threshold value, determining that the eye movement behavior type of the user is a blink type;
and if the number of the closed-eye images is smaller than the preset threshold value, determining that the eye movement behavior type of the user is an open-eye type.
In some embodiments, based on the above-described scheme, the processor 1601 adjusts and renders the predetermined style image based on the eye movement behavior type and the eye feature by performing the following operations:
if the eye movement behavior type of the user is the blink type, determining a blink duration of the user based on the number of eye-closing images;
adjusting and rendering the predetermined style image based on the blink duration and the eye features to generate a blink animation,
the displaying the stylized eye image on the external display module includes:
displaying the blink animation on the external display module.
In some embodiments, based on the above scheme, the processor 1601 further performs the following:
if the eye movement behavior type of the user is an open eye type, determining the gaze point position of the user based on the eye feature;
based on the gaze point position of the user, an eye movement direction of the user is determined.
In some embodiments, based on the above-described scheme, the processor 1601 adjusts and renders the predetermined style image based on the eye movement behavior type and the eye feature by performing the following operations:
determining a pupil diameter of the eye image based on the pupil characteristics;
the predetermined style image is adjusted and rendered based on the eye movement direction and the pupil diameter.
In some embodiments, based on the above scheme, the processor 1601 further performs the following:
determining the eye movement special effect corresponding to the eye movement behavior type according to the eye movement behavior type and a preset correspondence, wherein the preset correspondence is the correspondence between eye movement behavior types and eye movement special effects,
the adjusting and rendering the predetermined style image based on the eye movement behavior type and the eye feature includes:
and adjusting and rendering the predetermined style image based on the eye features, the eye movement behavior type, and the corresponding eye movement special effect.
In some embodiments, based on the above-mentioned scheme, the eye tracking module includes an infrared transmitting end and an infrared receiving end, and the processor 1601, when executing the acquiring, by the eye tracking module, an eye image of a user wearing the head display device, specifically performs the following operations:
transmitting infrared light at a predetermined frequency through the infrared transmitting end to irradiate eyes of a user wearing the head display device;
and receiving the reflected light of the eyes of the user through an infrared receiving end to generate an eye image of the user.
In some embodiments, based on the above-described aspects, the head-display device further includes an image acquisition module, such as a camera, the eye feature includes a gaze point location of the user, and the processor 1601 further performs the following operations:
collecting an external image of the head display device through an image collecting module;
if the external image comprises an external person, identifying the face position of the external person;
the gaze point position of the user is adjusted based on the face position of the external person.
The above is a schematic solution of the head display device according to an embodiment of the present specification. It should be noted that the technical solution of the head display device and the technical solution of the image processing method belong to the same conception; for details of the head display device that are not described in detail, refer to the description of the technical solution of the image processing method.
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present application, it should be noted that, unless expressly specified and limited otherwise, "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application will be understood by those of ordinary skill in the art on a case-by-case basis. Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
Those skilled in the art will appreciate that all or part of the flows of the above method embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.

Claims (22)

1. An image processing method, applied to a head-display device including an external display module and an eye-tracking module, comprising:
acquiring an eye image of a user wearing the head display device at a predetermined frequency through the eye movement tracking module;
extracting an eye feature of the user from the eye image;
performing stylized rendering based on the eye features and the predetermined style image to generate a stylized eye image;
Displaying the stylized eye image on the external display module.
2. The method of claim 1, wherein the head display device further comprises a physiological data acquisition sensor, the method further comprising:
collecting physiological data of the user through the physiological data collecting sensor;
extracting physiological characteristics of the user from the collected physiological data of the user,
the stylized rendering based on the ocular feature and the predetermined style image comprises:
the predetermined style image is adjusted and rendered based on the ocular feature and the physiological feature.
3. The method of claim 1, wherein the head display device further comprises a motion sensor, the method further comprising:
acquiring motion data of the head display equipment through the motion sensor;
determining a positional offset of the head display device based on the motion data of the head display device,
the stylized rendering based on the ocular feature and the predetermined style image comprises:
and adjusting and rendering the predetermined style image based on the eye feature and the position offset.
4. The method of claim 1, wherein the stylized rendering based on the ocular feature and the predetermined style image comprises:
adjusting and rendering the predetermined style image based on the ocular feature; or,
based on the ocular feature, a style of the predetermined style image is migrated to the ocular image.
5. The method of claim 4, wherein the adjusting and rendering the predetermined style image based on the ocular feature comprises:
determining an eye movement behavior type of the user based on the eye feature;
and adjusting and rendering the predetermined style image based on the eye movement behavior type and the eye features.
6. The method of claim 5, wherein the ocular feature comprises a pupil feature, and wherein the determining the type of eye movement behavior of the user based on the ocular feature comprises:
determining whether the ocular image is a closed-eye image based on the pupil characteristics;
counting the eye-closing images in a plurality of eye images acquired in a preset time to obtain the number of the eye-closing images;
If the number of the eye-closing images is greater than or equal to a preset threshold value, determining that the eye movement behavior type of the user is an eye-closing type;
and if the number of the closed-eye images is smaller than the preset threshold value, determining that the eye movement behavior type of the user is an open-eye type.
7. The method of claim 6, wherein the method further comprises:
if the eye movement behavior type of the user is a closed eye type, acquiring a next eye image;
and if the next eye image is an open eye image, determining that the eye movement behavior type of the user is a blink type.
8. The method of claim 7, wherein said adjusting and rendering the predetermined style image based on the eye movement behavior type and the eye feature comprises:
if the eye movement behavior type of the user is the blink type, determining a blink duration of the user based on the number of eye-closing images;
adjusting and rendering the predetermined style image based on the blink duration and the eye features to generate a blink animation,
the displaying the stylized eye image on the external display module includes:
displaying the blink animation on the external display module.
9. The method of claim 6, wherein the method further comprises:
if the eye movement behavior type of the user is an open eye type, determining the gaze point position of the user based on the eye feature;
based on the gaze point position of the user, an eye movement direction of the user is determined.
10. The method of claim 9, wherein said adjusting and rendering the predetermined style image based on the eye movement behavior type and the eye feature comprises:
determining a pupil diameter of the eye image based on the pupil characteristics;
the predetermined style image is adjusted and rendered based on the eye movement direction and the pupil diameter.
11. The method of any one of claims 5 to 10, further comprising:
determining the eye movement special effect corresponding to the eye movement behavior type according to the eye movement behavior type and the preset corresponding relation, wherein the preset corresponding relation is the corresponding relation between the eye movement behavior type and the eye movement special effect,
the adjusting and rendering the predetermined style image based on the eye movement behavior type and the eye feature includes:
and adjusting and rendering the predetermined style image based on the eye features, the eye movement behavior type, and the corresponding eye movement special effect.
12. The method according to any one of claims 1 to 10, wherein the eye-tracking module includes a transmitting end and a receiving end, the acquiring, by the eye-tracking module, an eye image of a user wearing the head-display device at a predetermined frequency, comprising:
emitting incident light at a predetermined frequency through the emitting end to illuminate eyes of a user wearing the head display device;
and receiving the reflected light of the eyes of the user through the receiving end to generate an eye image of the user.
13. The method of any one of claims 1 to 5, wherein the head display device further comprises an image acquisition module, the eye feature comprises an eye angle of the user, and the method further comprises:
collecting an external image of the head display device through the image collecting module;
if the external image comprises an external person, identifying the face position of the external person;
the eye angle of the user is dynamically adjusted based on the facial position of the external person.
14. An image processing apparatus, characterized by being applied to a head-display device including an external display module and an eye-tracking module, comprising:
an image acquisition module for acquiring an eye image of a user wearing the head display device at a predetermined frequency through the eye movement tracking module;
the feature extraction module is used for extracting the eye features of the user from the eye images;
the stylized rendering module is used for performing stylized rendering based on the eye features and the images of the preset styles, and generating stylized eye images;
and the display module is used for displaying the stylized eye image on the external display module.
15. A head-display device, characterized in that the head-display device comprises:
a processor and a memory;
an eye tracking module, which is in communication connection with the processor, and is used for acquiring an eye image of a user wearing the head display device at a preset frequency;
the processor is configured to: extracting eye features of the user from the eye image, performing stylized rendering based on the eye features and a predetermined style image, and generating a stylized eye image;
and the external display module is in communication connection with the processor and is used for displaying the stylized eye image.
16. The head display device according to claim 15, wherein the head display device further comprises:
the physiological data acquisition sensor is in communication connection with the processor and is used for acquiring physiological data of the user,
the processor is further configured to: and extracting physiological characteristics of the user from the acquired physiological data of the user, and adjusting and rendering the image of the preset style based on the eye characteristics and the physiological characteristics.
17. The head display device of claim 16, wherein the physiological data acquisition sensor is located at a portion of the outer frame of the head display device that is in contact with the skin of the user.
18. The head display device according to claim 15, wherein the head display device further comprises:
the motion sensor is in communication connection with the processor and is used for acquiring motion data of the head display device;
the processor is further configured to: determine the positional offset of the head display device based on the motion data of the head display device, and adjust and render the predetermined style image based on the eye features and the positional offset.
19. The head display device of any one of claims 15 to 18, wherein the eye tracking module comprises:
A transmitting end for emitting incident light at a predetermined frequency to illuminate eyes of a user wearing the head display device;
and the receiving end is used for receiving the reflected light of the eyes of the user and generating an eye image of the user.
20. The head display apparatus according to any one of claims 15 to 18, further comprising:
the external image acquisition module is in communication connection with the processor and is used for acquiring external images of the head display device;
the processor is further configured to: if the external image comprises an external person, identifying the face position of the external person; the eye angle of the user is dynamically adjusted based on the facial position of the external person.
21. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the steps of the method according to any one of claims 1 to 13.
22. A head display apparatus comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of the method according to any one of claims 1-13.
CN202210639122.XA 2022-06-08 2022-06-08 Image processing method, device, storage medium and head display equipment Pending CN117234325A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210639122.XA CN117234325A (en) 2022-06-08 2022-06-08 Image processing method, device, storage medium and head display equipment
PCT/CN2023/098980 WO2023237023A1 (en) 2022-06-08 2023-06-07 Image processing method and apparatus, storage medium, and head-mounted display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210639122.XA CN117234325A (en) 2022-06-08 2022-06-08 Image processing method, device, storage medium and head display equipment

Publications (1)

Publication Number Publication Date
CN117234325A true CN117234325A (en) 2023-12-15

Family

ID=89081348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210639122.XA Pending CN117234325A (en) 2022-06-08 2022-06-08 Image processing method, device, storage medium and head display equipment

Country Status (2)

Country Link
CN (1) CN117234325A (en)
WO (1) WO2023237023A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10726584B2 (en) * 2018-11-28 2020-07-28 International Business Machines Corporation Displaying a virtual eye on a wearable device
CN110531516A (en) * 2019-07-12 2019-12-03 上海大学 A kind of intelligent apparatus of wear-type eye-tracking operation auxiliary
CN112989904B (en) * 2020-09-30 2022-03-25 北京字节跳动网络技术有限公司 Method for generating style image, method, device, equipment and medium for training model

Also Published As

Publication number Publication date
WO2023237023A1 (en) 2023-12-14

Similar Documents

Publication Publication Date Title
US11563700B2 (en) Directional augmented reality system
CN112034977B (en) Method for MR intelligent glasses content interaction, information input and recommendation technology application
US10055642B2 (en) Staredown to produce changes in information density and type
CN107209386B (en) Augmented reality view object follower
TWI549505B (en) Comprehension and intent-based content for augmented reality displays
CN107111370B (en) Virtual representation of real world objects
KR102304827B1 (en) Gaze swipe selection
AU2022202543A1 (en) Eye image collection, selection, and combination
US9135508B2 (en) Enhanced user eye gaze estimation
US9035970B2 (en) Constraint based information inference
US20160343168A1 (en) Virtual personification for augmented reality system
US9865093B1 (en) Contextual augmented reality devices collaboration
US20130241805A1 (en) Using Convergence Angle to Select Among Different UI Elements
JP2022132550A (en) Detailed Eye Shape Model for Robust Biometric Applications
KR20160019964A (en) Hybrid world/body locked hud on an hmd
CN112181152A (en) Advertisement push management method, equipment and application based on MR glasses
EP2965265A2 (en) Inconspicuous tag for generating augmented reality experiences
US11645823B2 (en) Neutral avatars
US11620792B2 (en) Fast hand meshing for dynamic occlusion
US20180190019A1 (en) Augmented reality user interface visibility
US11328187B2 (en) Information processing apparatus and information processing method
CN117234325A (en) Image processing method, device, storage medium and head display equipment
JP2020087003A (en) Character display method
JP2023520448A (en) A system for providing guidance
JP2018205647A (en) Head-mounted type display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination