CN109799899B - Interaction control method and device, storage medium and computer equipment


Info

Publication number
CN109799899B
Authority
CN
China
Prior art keywords
field
area
virtual reality
origin
view
Legal status
Active
Application number
CN201711142437.9A
Other languages
Chinese (zh)
Other versions
CN109799899A (en
Inventor
Zhou Yang (周扬)
Wang Jingui (王金桂)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201711142437.9A
Publication of CN109799899A
Application granted
Publication of CN109799899B

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to an interaction control method, an interaction control device, a storage medium and computer equipment. The method comprises the following steps: acquiring and outputting a picture; determining an origin field of view region in the picture; detecting a sight line drop point position in the picture; and, when the sight line drop point position is located at the edge of the origin field of view region, controlling an interactive object located in a region outside the origin field of view region in the picture to move toward the origin field of view region. The scheme provided by the application improves interaction control efficiency.

Description

Interaction control method and device, storage medium and computer equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an interaction control method, an interaction control apparatus, a storage medium, and a computer device.
Background
With the development of computer technology and the internet, computer technology has brought great convenience to people's lives and greatly improved their quality of life. More and more users are accustomed to watching videos or live broadcasts through computer devices and interacting during the viewing process.
In the conventional technology, when interactive control is performed on a picture, a user needs to perform aiming control with mouse-like operations using auxiliary equipment such as a handheld controller, and then needs to perform further multi-step operations such as clicking a button, so interaction control efficiency is low.
Disclosure of Invention
In view of the above, it is necessary to provide an interaction control method, an interaction control apparatus, a storage medium, and a computer device to address the current problem of low interaction control efficiency.
An interaction control method comprising:
acquiring and outputting a picture;
determining an origin field of view region in the picture;
detecting a sight line drop point position in the picture;
when the sight line drop point position is located at the edge of the origin field of view region, controlling an interactive object located in a region outside the origin field of view region in the picture to move toward the origin field of view region.
An interactive control apparatus comprising:
the acquisition module is used for acquiring and outputting a picture;
the determining module is used for determining an origin field of view region in the picture;
the detection module is used for detecting the sight line drop point position in the picture;
and the control module is used for controlling the interactive objects positioned in the area outside the origin view field area in the picture to move towards the origin view field area when the sight line falling point position is positioned at the edge of the origin view field area.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring and outputting a picture;
determining an origin field of view region in the picture;
detecting a sight line drop point position in the picture;
when the sight line drop point position is located at the edge of the origin field of view region, controlling an interactive object located in a region outside the origin field of view region in the picture to move toward the origin field of view region.
A computer device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring and outputting a picture;
determining an origin field of view region in the picture;
detecting a sight line drop point position in the picture;
when the sight line drop point position is located at the edge of the origin field of view region, controlling an interactive object located in a region outside the origin field of view region in the picture to move toward the origin field of view region.
According to the interaction control method, apparatus, storage medium and computer device above, after a picture is locally acquired and output, the origin field of view region can be automatically determined in the picture, and the sight line drop point position in the picture is then detected. When the sight line drop point position is detected to move to the edge of the origin field of view region, that is, when the user intends to interact through an interactive object, the interactive object is controlled to move toward the origin field of view region. The interactive object is thus controlled according to the user's line of sight, which avoids the need for auxiliary equipment to achieve interaction control and improves interaction control efficiency. Moreover, because the interactive object is by default located in a region outside the origin field of view region in the picture, the interactive object is prevented from blocking the picture content in the origin field of view region and affecting the user's viewing of the locally output picture.
Drawings
FIG. 1 is a diagram of an application environment of an interactive control method in one embodiment;
FIG. 2 is a diagram showing an application environment of an interactive control method in another embodiment;
FIG. 3 is a flow diagram illustrating an exemplary interaction control method;
FIG. 4 is a diagram illustrating a field of view region of origin in a picture in one embodiment;
FIG. 5 is a diagram illustrating the relationship of a first field of view region, a second field of view region, and an origin field of view region in one embodiment;
FIG. 6 is a diagram illustrating an exemplary screen;
FIG. 7 is a diagram illustrating a screen of another embodiment;
FIG. 8 is a diagram illustrating a screen of another embodiment;
FIG. 9 is a flowchart illustrating an interactive control method according to another embodiment;
FIG. 10 is a block diagram of an interactive control device in one embodiment;
FIG. 11 is a block diagram of an interactive control device in accordance with another embodiment;
FIG. 12 is a diagram showing an internal structure of a computer device in one embodiment;
FIG. 13 is a diagram showing an internal structure of a computer device in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 is a diagram of an application environment of an interaction control method in one embodiment. As shown in FIG. 1, the application environment includes a computer device 110 and a user 120. The computer device 110 may be an electronic device capable of playing video, such as a personal computer, a television, or a tablet computer. The computer device 110 may interact with the user 120 by performing an interaction control method. Those skilled in the art will understand that the application environment shown in fig. 1 is only a part of the scenario related to the present application, and does not constitute a limitation to the application environment of the present application.
FIG. 2 is a diagram of an application environment of the interaction control method in a virtual reality environment in another embodiment. As shown in fig. 2, the application environment includes a computer device 210, a user 220, and a virtual reality device 230. The computer device 210 may be an electronic device capable of playing video, such as a personal computer, a television, or a tablet computer. The virtual reality device 230 may be a virtual reality head-mounted display device, such as VR glasses or a VR headset. The virtual reality device 230 may interact with the user 220 by performing the interaction control method.
Fig. 3 is a flowchart illustrating an interactive control method according to an embodiment. The embodiment is mainly illustrated by applying the method to the computer device 110 in fig. 1. Referring to fig. 3, the interactive control method specifically includes the following steps:
s302, acquiring and outputting a picture.
Wherein a picture is data presented on a screen of a computer device. The picture may specifically be a picture formed by displaying video frames constituting a video stream on a screen, a picture formed by displaying image frames acquired from the real world on a screen, or a picture formed on a screen according to page data. The picture may specifically be a live picture.
Specifically, the computer device may collect image frames from the real world through a built-in camera or a connected external camera, and play the collected image frames frame by frame according to the collected time sequence to acquire and output a picture. The computer equipment can also collect sound from the real world through the sound collection equipment, obtain live broadcast data stream according to the collected sound and the collected image frame, and play the live broadcast data stream according to the collected time sequence so as to obtain and output live broadcast pictures. The live data stream may also comprise only an image data stream. The computer device can also read the video stream resource from the local storage space, and play the video frames included in the video stream resource frame by frame to acquire and output the picture.
In one embodiment, the computer device may also receive an image frame sequence or a video stream sent by another computer device, play the received image frame sequence frame by frame, or play video frames included in the received video stream frame by frame to acquire and output a picture.
In one embodiment, the picture may specifically be a virtual reality picture. A virtual reality picture is a picture displayed in a virtual reality scene. The virtual reality scene is a simulated virtual world generated in three-dimensional space that provides simulation of senses such as vision, hearing and touch, so that the user can observe objects in the three-dimensional space as if personally present in the scene.
In the present embodiment, the interaction control method is executed by a virtual reality device. The image formats of virtual reality pictures can be classified, by picture position, into a top-bottom picture format and a left-right (side-by-side) picture format, and, by field of view, into a panoramic format, a semi-panoramic format and a fixed-view format. The virtual reality picture may specifically be a virtual reality live picture.
Specifically, the virtual reality device may obtain a video identifier of a video stream to be played. The video identifier is used to identify the content of the video stream, and the same video identifier may correspond to a plurality of homologous video streams. Homologous video streams are video streams that have the same content but different encoding or image processing schemes; multiple homologous video streams differ in data amount or image presentation effect. For example, for the same video content there may exist both a video stream resource in a virtual reality video format and a video stream resource in an ordinary video format. The video stream to be played may be a video stream that has already been recorded, or a video stream formed in real time from a sequence of picture frames acquired in the real world.
The virtual reality device can locate the video stream resource to be played locally according to the video identifier and a default virtual reality video format. The virtual reality device can also send the video identifier to the computer device and specify the virtual reality video format when doing so, so that the computer device locates the video stream resource according to the video identifier and the specified virtual reality video format and sends the video stream resource to the virtual reality device for the virtual reality device to receive. It is understood that the video stream resource located by the computer device may be a video stream resource that has already been recorded, such as a virtual reality movie resource, or a video stream resource generated in real time, such as a virtual reality live resource.
In particular, a video stream in a virtual reality video format may be a sequence of panoramic image frames, which the virtual reality device may play frame by frame. A video stream in a virtual reality video format may also include a left-eye video stream and a right-eye video stream, and the virtual reality device may play the left-eye and right-eye video streams frame by frame synchronously. Through the built-in or external lenses of the virtual reality device, the user can watch a visual effect similar to viewing scenery in a real scene.
S304, the origin field area in the picture is determined.
Wherein, the origin visual field area is a picture area in the visual field range of the static observation point. Static observation points represent that the spatial attitude of the observation points remains unchanged. The spatial pose may include a direction of observation of the observation point. In the present embodiment, the observation point may specifically be an eye of a natural person. The size of the observation angle of the observation point may be specifically the size of an angle that a natural human eyeball can rotate.
Specifically, the computer device can acquire the size of the observation visual angle of the observation point and the distance from the observation point to the picture, and calculate the origin visual field area in the picture according to the size of the visual angle of the observation point and the distance from the observation point to the picture.
In one embodiment, in a virtual reality scene, determining an origin field of view region in a picture includes: and determining an origin field area in the virtual reality picture according to the current field angle.
The field angle (Field of View, FOV) is the field angle of the optical lens of the virtual reality device. The field angle is the angle, with the optical lens as its vertex, formed by the boundaries of the maximum range of the picture that can pass through the optical lens. The field angle may include a horizontal field angle and a vertical field angle, and different optical lenses have different field angles. The size of the field angle determines the field of view of the optical lens: the larger the field angle, the larger the magnification. It is understood that the magnification corresponding to a field angle is the fixed magnification at which the enlarged picture just fills the field of view of that field angle. In the present embodiment, the origin field of view region is the virtual reality picture region within the field of view of the static optical lens. A static optical lens means that the spatial attitude of the optical lens remains unchanged.
In one embodiment, the determining the origin field of view region in the virtual reality picture according to the current field of view angle comprises: determining an angle value of a current field angle; determining an origin point view field reference area in the virtual reality picture according to the angle value of the reference view field angle; and determining the origin visual field area in the virtual reality picture according to the acquired angle value and the origin visual field reference area.
The reference field angle is reference data used for determining the origin field of view region in the virtual reality picture. In the virtual reality picture, the field of view range at the reference field angle, that is, the origin field of view reference region, is an objectively measurable region. Specifically, the reference field angle is the field angle of an optical lens at a magnification of 1; the picture within the field of view of an optical lens at magnification 1 is not enlarged.
Specifically, the virtual reality device may obtain the angle value of the current field angle and derive the magnification of its optical lens from that angle value, and also obtain the angle value of the reference field angle to determine the origin field of view reference region in the virtual reality picture and the magnification of the reference optical lens corresponding to the reference field angle. The virtual reality device then multiplies the size of the origin field of view reference region by the ratio of the magnification of its optical lens to the magnification of the reference optical lens, obtaining the size of the origin field of view region in the virtual reality picture under the optical lens of the virtual reality device.
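The scaling in this step can be summarized with a short calculation. The following Python sketch is illustrative only and not part of the patent; the function and parameter names are assumptions, and the reference magnification of 1 follows the definition of the reference field angle above.

```python
# Illustrative sketch (not from the patent): scale the origin field-of-view
# reference region by the ratio of the current lens magnification to the
# reference lens magnification, as described above. Names are assumed.

def origin_region_size(reference_region_size, current_magnification,
                       reference_magnification=1.0):
    """Return the origin field-of-view region size under the current lens."""
    ratio = current_magnification / reference_magnification
    width, height = reference_region_size
    return width * ratio, height * ratio

# Example (hypothetical values): a 1200x800 reference region under a lens with
# magnification 1.5 gives an origin field-of-view region of 1800x1200.
```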
FIG. 4 is a diagram illustrating a field of view region of origin in a picture in one embodiment. Referring to the left diagram of fig. 4, the left diagram is a schematic diagram of the origin field of view area under the observation point, and includes an observation point 411, an observation angle 412, an origin field of view area 413, and a screen 414. Referring to the right diagram of fig. 4, the right diagram is a schematic diagram of an origin field of view region in a virtual reality scene, and includes an optical lens 421, a field angle 422, an origin field of view region 423, and a virtual reality picture 424.
In the embodiment, the origin field area under the current field angle is obtained by calculating the origin field reference area under the reference field angle, so that the accuracy of the origin field area under the current field angle is ensured.
In the above embodiment, the interaction control method is applied to a virtual reality scene. The origin field of view region is determined according to the field angle of the current device; the interactive object is then placed in a region outside the origin field of view region and is controlled to move toward the origin field of view region only when it is needed. In this way, the interactive object does not disturb the user when the virtual reality picture is watched normally, and can be called up for interaction when interaction is needed, which improves interaction control efficiency.
S306, the sight line falling point position in the picture is detected.
The sight line drop point position is the specific position at which the observation point observes the picture. In this embodiment, it may specifically be the position of the user's gaze point in the picture. Specifically, the computer device may collect an observation point image, determine, in a ray tracing manner, the intersection point of an observation ray from the observation point with the picture according to the collected observation point image, and take the intersection point as the sight line drop point position in the picture.
In one embodiment, in a virtual reality environment, the virtual reality device may also detect the sight line drop point position in the picture in a ray tracing manner, or determine the amount of sight line deflection according to sensor data and calculate the current sight line drop point position in the virtual reality picture according to the sight line deflection amount and the initial sight line drop point position.
And S308, when the sight line landing position is positioned at the edge of the original point view field area, controlling the interactive object positioned in the area outside the original point view field area in the picture to move towards the original point view field area.
Wherein the edge of the origin field of view region is a boundary portion of the origin field of view region. The edge of the origin field region may be a linear region, that is, a boundary line of the origin field region. The edge of the origin field region may be a block region, that is, a region formed within a certain distance from the boundary line of the origin field region.
An interactive object is a component through which a user interacts with the computer device. The interactive object may specifically be an interactive interface, such as a function menu or a selection menu. The interactive object may include interactive controls such as a select button, a confirm button, or a cancel button. It will be appreciated that the interactive object here may be any medium through which a user interacts with the computer device.
When the observation point does not adjust the spatial posture, the region outside the origin field of view region in the picture is not in the field of view of the observation point, that is, when the observation point is the eyes of the natural person, the natural person cannot view the region outside the origin field of view region in the picture without adjusting the head posture. Therefore, when the user does not need to interact, the interaction object is placed outside the view of the user, and the user is guaranteed not to be disturbed when watching the picture.
The computer device may preset an interactive object control strategy. The preset strategy maps the event of the sight line drop point position reaching the edge of the origin field of view region to the behavior of a user intending to perform an interactive operation, so that when the sight line drop point position is at the edge of the origin field of view region, the interactive object in the region outside the origin field of view region in the picture is controlled to move toward the origin field of view region, moving the interactive object into the user's field of view to realize interaction.
Specifically, when the computer device detects that the sight line landing point position moves to the edge of the origin view field area, the computer device determines that the user intends to perform interactive operation, and controls the interactive object located in the area outside the origin view field area in the picture to move towards the origin view field area.
In one embodiment, in the virtual reality scene, the area outside the original point field of view area in the virtual reality picture is the area inside the field of view of the optical lens after the adjustment of the spatial posture. For example, the user wears the head-mounted virtual reality device and rotates the head, and the spatial posture of the optical lens can be adjusted. Therefore, when the user does not need to interact, the interactive object is placed outside the optical lens field of view, and the user is guaranteed not to be disturbed when watching the virtual reality picture.
In one embodiment, the virtual reality screen further comprises a first field of view region and a second field of view region; the second field of view region encompasses the first field of view region; the first field of view region encompasses the origin field of view region. S308 comprises the following steps: when the sight line landing position is positioned at the edge of the origin field of view region, the interactive object positioned in the first field of view region in the virtual reality picture is controlled to move towards the origin field of view region.
Specifically, according to the field of view regions obtained when the virtual reality device deflects by different angle values, the region outside the origin field of view region in the virtual reality picture is divided into a first field of view region and a second field of view region. The first field of view region is the field of view region when the virtual reality device horizontally deflects by an angle value within a first preset angle interval, or vertically deflects by an angle value within a second preset angle interval. The second field of view region is the field of view region when the virtual reality device horizontally deflects by an angle value within a third preset angle interval, or vertically deflects by an angle value within a fourth preset angle interval. The maximum angle value of the first preset angle interval is the same as the minimum angle value of the third preset angle interval, and the maximum angle value of the second preset angle interval is the same as the minimum angle value of the fourth preset angle interval.
For example, assuming the virtual reality device is a head-mounted device, the first field of view region and the second field of view region may be divided according to the field of view region after the user's head is deflected. The deflection angle of the user's head can be divided into different angle intervals according to whether the deflection is comfortable. The comfortable angle interval for horizontal rotation is the first preset angle interval, which may be -30 to 30 degrees (i.e. 0-30 degrees to the left and 0-30 degrees to the right horizontally), and the comfortable angle interval for vertical rotation is the second preset angle interval, which may be -12 to 20 degrees (i.e. 0-20 degrees upward and 0-12 degrees downward vertically). The uncomfortable angle interval for horizontal rotation is the third preset angle interval, which may be -55 to -30 degrees and 30 to 55 degrees (i.e. 30-55 degrees to the left and 30-55 degrees to the right horizontally), and the uncomfortable angle interval for vertical rotation is the fourth preset angle interval, which may be -40 to -12 degrees and 20 to 60 degrees (i.e. 20-60 degrees upward and 12-40 degrees downward vertically). The specific values vary from person to person.
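As a rough illustration of this region division, the Python sketch below classifies a device deflection into the origin, first, or second field of view region. It is a minimal sketch under assumed names; the intervals are the illustrative values from the description and vary from person to person.

```python
# Minimal sketch (assumed names): classify a head/device deflection into the
# origin, first, or second field of view region using the example intervals.

def classify_region(horizontal_deg: float, vertical_deg: float) -> str:
    if horizontal_deg == 0 and vertical_deg == 0:
        return "origin_field_of_view"        # no deflection of the device
    if -30 <= horizontal_deg <= 30 and -12 <= vertical_deg <= 20:
        return "first_field_of_view"         # comfortable deflection range
    if -55 <= horizontal_deg <= 55 and -40 <= vertical_deg <= 60:
        return "second_field_of_view"        # uncomfortable deflection range
    return "outside_divided_regions"
```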
In this embodiment, the interactive object is placed in the first view field area, and when the sight line landing point position is located at the edge of the origin view field area and then continuously moves, the interaction can be performed in the first view field area with a proper deflection angle, so that the interaction situation in the second view field area with an improper deflection angle is avoided, the interaction comfort level is ensured, and the interaction control efficiency is improved.
FIG. 5 is a diagram illustrating a relationship of a first field of view region, a second field of view region, and an origin field of view region in one embodiment. As can be seen from fig. 5, the virtual reality screen is divided into a first field of view region 510, a second field of view region 520, and an origin field of view region 530. Wherein the second field of view region 520 surrounds the first field of view region 510, and the first field of view region 510 surrounds the origin field of view region 530.
According to the interaction control method, after a picture is locally acquired and output, the origin field of view region can be automatically determined in the picture, and the sight line drop point position in the picture is then detected. When the sight line drop point position is detected to move to the edge of the origin field of view region, that is, when the user intends to interact through an interactive object, the interactive object is controlled to move toward the origin field of view region. The interactive object is thus controlled according to the user's line of sight, which avoids the need for auxiliary equipment to achieve interaction control and improves interaction control efficiency. Moreover, because the interactive object is by default located in a region outside the origin field of view region in the picture, the interactive object is prevented from blocking the picture content in the origin field of view region and affecting the user's viewing of the locally output picture.
In one embodiment, S306 includes: acquiring an eye image; determining, according to the pupil imaging point in the eye image, the gaze point position on the screen; and converting the gaze point position into the sight line drop point position in the picture.
The eye image comprises a pupil imaging point and a light source imaging point formed by reflecting a light source through a cornea. The pupil, a small circular hole in the center of the iris inside the eye, is the passage for light rays to enter the eye. The cornea is a transparent film located on the anterior wall of the eyeball and corresponds to a concave-convex mirror. The anterior surface of the cornea is convex and spherically curved.
The pupil imaging point is the imaging of the pupil refraction point after the pupil center is refracted by the cornea in the shot eye image. The center of the pupil, which is the center point of the pupil region. The light source imaging point is the imaging of the pupil reflection point after the light source center is reflected by the cornea in the shot eye image. The light source is incident light directed towards the eye. And the light source center is the center point of the light source area. In one embodiment, the light source may be an infrared light source.
Wherein the screen is a display screen of the computer device. In a virtual reality environment, the screen may also be a display screen of a virtual reality device. The gaze point is a point corresponding to a line of sight of an eye entity to which the eye image corresponds. The gaze point position on the screen is a position at which the eye entity corresponding to the eye image looks at a point on the screen.
Specifically, the computer device can identify the pupil imaging point and the light source imaging point in the eye image, determine the optical axis direction according to the pupil imaging point and the light source imaging point, and determine the sight line direction according to the optical axis direction and the optical axis-visual axis angle difference matched with the eye image. The computer device may then determine the intersection of the sight line direction with the screen and take the intersection as the gaze point on the screen. The optical axis-visual axis angle difference matched with the eye image is the angle difference between the optical axis and the visual axis of the eye entity corresponding to the eye image. It is understood that this angle difference is fixed for a normal eye; the influence of eye deformation or abnormality on the angle difference is not considered here.
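The last step above, taking the intersection of the sight line direction and the screen as the gaze point, can be expressed as a small ray-plane intersection. The following Python sketch assumes the screen lies in the plane z = 0 and that the gaze direction has already been corrected by the optical axis-visual axis angle difference; the names and coordinate convention are illustrative assumptions.

```python
# Sketch (assumed coordinate convention): intersect the corrected gaze ray
# with the screen plane z = 0 to obtain the gaze point on the screen.
import numpy as np

def gaze_point_on_screen(eye_position: np.ndarray, gaze_direction: np.ndarray) -> np.ndarray:
    """Return the (x, y) intersection of the gaze ray with the plane z = 0."""
    direction = gaze_direction / np.linalg.norm(gaze_direction)
    if abs(direction[2]) < 1e-9:
        raise ValueError("gaze ray is parallel to the screen plane")
    t = -eye_position[2] / direction[2]      # ray parameter where z becomes 0
    point = eye_position + t * direction
    return point[:2]
```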
Further, it can be understood that in a two-dimensional picture, the position whose coordinates correspond to the coordinates of the gaze point position on the screen is the sight line drop point position. In a virtual reality scene, the eyes see a three-dimensional scene through a two-dimensional screen; to realize the three-dimensional nature of the virtual reality scene, the three-dimensional virtual reality scene needs to be generated according to the principle that binocular parallax produces a stereoscopic scene. The computer device may perform parallax conversion according to the gaze point position to obtain the target point position corresponding to the gaze point position in the virtual reality scene, and this target point position is the sight line drop point position. It should be noted that the target point position is a position in the virtual reality scene, not a coordinate position on the screen.
In one embodiment, the eye images are binocular eye images, and the gaze point positions are binocular gaze point positions. Converting the gaze point position into the sight line drop point position in the picture includes: performing parallax conversion on the binocular gaze point positions to obtain the sight line drop point position corresponding to the binocular gaze point positions in the picture.
It is to be understood that the binocular eye images may be a single image containing both eyes, or separate images each containing one eye (with the computer device obtaining an eye image for each eye); this is not limited here. It should be noted that, in the embodiments of the present application, the process of determining the gaze point position on the screen from an eye image is described for one eye; if the respective gaze point positions of both eyes need to be determined, the corresponding process may be executed for each eye to obtain the gaze point positions of the two eyes.
Specifically, for a two-dimensional screen scene, the middle position between the binocular gaze point positions may be used as the sight line drop point position corresponding to the binocular gaze point positions in the picture. In a virtual reality scene, parallax conversion is performed according to the binocular gaze point positions to obtain the single target point position corresponding to them in the virtual reality picture, that is, the binocular sight line drop point position. The virtual reality device can obtain the parameters of the virtual camera in the rendering engine and determine, according to these parameters, the target point position corresponding to the binocular gaze point positions in the virtual reality scene, thereby obtaining the binocular sight line drop point position.
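One hedged way to realize the two cases above is sketched below in Python: the two-dimensional screen case takes the middle position of the two gaze points, and the virtual reality case triangulates the two gaze rays to a single 3D target point (here, the midpoint of the shortest segment between the rays). The function names, and the use of ray midpoints rather than the rendering engine's virtual camera parameters, are assumptions for illustration.

```python
# Sketch (assumed approach): derive one drop point from two per-eye gaze points.
import numpy as np

def screen_drop_point(left_gaze_xy: np.ndarray, right_gaze_xy: np.ndarray) -> np.ndarray:
    """2D screen case: middle position of the two gaze point positions."""
    return (left_gaze_xy + right_gaze_xy) / 2.0

def triangulate_target(p_left, d_left, p_right, d_right):
    """VR case: midpoint of the shortest segment between the two gaze rays
    p + t * d, used as the 3D target point of the binocular gaze."""
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                    # near-parallel gaze rays
        t_l, t_r = 0.0, e / c
    else:
        t_l = (b * e - c * d) / denom
        t_r = (a * e - b * d) / denom
    return (p_left + t_l * d_left + p_right + t_r * d_right) / 2.0
```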
In the embodiment, the user gaze point is tracked according to the binocular eye image, so that interactive control response is realized according to the user gaze, and the accuracy of the determined gaze point position is ensured.
In the above-described embodiment, the accuracy of the determined gaze point position is ensured based on gaze tracking, and the sight line drop point position in the picture is then determined more accurately according to the binocular gaze point positions. Therefore, the interactive control operation executed according to the sight line drop point position is more accurate.
In one embodiment, in the virtual reality scenario, S306 includes: determining an initial position of a sight line drop point in a virtual reality picture according to initial sensor data; acquiring current sensor data; determining an offset angle according to a difference value between current sensor data and initial sensor data; and determining the current sight line drop point position in the virtual reality picture according to the offset angle and the sight line drop point initial position.
Wherein the sensor data comprises data reflecting a spatial pose of the virtual reality device. The initial sensor data is sensor data initially received by the virtual reality device after the virtual reality device enters the rendering of the virtual reality scene, is used for determining the current state of the virtual reality device, and is defined as an initial state. The initial state includes a pose state of the virtual reality device. Wherein the attitude state includes the direction of the virtual reality device deflection, the angle of the deflection and the like. The sensor data may be from at least one of a direction sensor, a gravity sensor, an acceleration sensor, and a three-axis gyroscope sensor.
The virtual reality device can determine its initial state in three-dimensional space according to its fixed three-dimensional reference coordinate system. For example, the fixed reference coordinate system is a three-dimensional reference coordinate system comprising three mutually perpendicular axes, two of which may be parallel to the display screen of the virtual reality device while the remaining one is perpendicular to the display screen. The initial state of the virtual reality device determined with this fixed reference coordinate system can accurately reflect the initial state of the virtual reality device in the three-dimensional space represented by the coordinate system. Specifically, the virtual reality device may determine the initial position of the sight line drop point in the virtual reality picture according to the initial state in three-dimensional space. For example, assuming the virtual reality device is a head-mounted device, a three-dimensional reference coordinate system is established with the center point of the head-mounted device as the coordinate origin; at this time the virtual reality device is in an initial, non-deflected state, and the center point of the origin field of view region is taken as the initial position of the sight line drop point.
Further, after the initial state is determined, the virtual reality device continues to acquire subsequent sensor data, so that the subsequent state of the virtual reality device is determined according to the subsequent sensor data, and the subsequent state comprises the attitude state of the virtual reality device. And the virtual reality equipment takes the initial state as a reference, compares the subsequent state with the initial state, and determines the current sight line landing point position in the virtual reality picture according to the changed deflection angle of the subsequent state relative to the initial state. For example, the subsequent state of the virtual reality device is tilted 15 ° to the lower left corner relative to the initial state, and then the current position of the gaze drop point in the virtual reality screen is the position tilted 15 ° to the lower left corner of the initial position of the gaze drop point.
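A minimal sketch of this sensor-based update is given below in Python. It assumes orientation is summarized as yaw and pitch angles in degrees and that the drop point shifts linearly with the offset angle; the conversion factors and names are assumptions, not values from the patent.

```python
# Sketch (assumed linear mapping): shift the initial drop point by the offset
# angle between the current and initial sensor-derived orientation.

def current_drop_point(initial_point_xy, initial_yaw_pitch, current_yaw_pitch,
                       pixels_per_degree=(10.0, 10.0)):
    x0, y0 = initial_point_xy
    yaw0, pitch0 = initial_yaw_pitch
    yaw1, pitch1 = current_yaw_pitch
    d_yaw, d_pitch = yaw1 - yaw0, pitch1 - pitch0   # offset angles
    x = x0 + d_yaw * pixels_per_degree[0]
    y = y0 - d_pitch * pixels_per_degree[1]         # picture y assumed to grow downward
    return x, y
```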
In this embodiment, the sight line drop point position in the picture is determined through sensor data, which provides a way to detect the sight line drop point position that does not rely on eyeball gaze tracking, enriches the means of detecting the sight line drop point position, and allows the position to be detected during interaction control.
In one embodiment, S308 comprises: when the sight line drop point position is located at the edge of the origin sight field area, determining a trigger type corresponding to the edge; when the trigger type is a display control type, displaying the content to be read at the edge; when the trigger type is a movement control type, controlling the interactive object located in the area outside the origin field of view area in the picture to move towards the origin field of view area.
The trigger type is the type of the triggered control operation. The display control type is a type that controls the display of specific content; the movement control type is a type that controls the movement of specific content. In this embodiment, the specific content whose display is controlled is the content to be read. The content to be read is content that the user only needs to view and that requires no interactive operation, such as bullet-screen comments (danmaku), a chat message, or a system message.
The computer device may perform trigger type division on the edge of the origin view field region in advance. Specifically, the computer device may select a partial edge (first edge) from an edge of the origin-view region, and set a trigger type corresponding to the selected partial edge as the display control type. A partial edge is selected, such as the left portion of the lower edge of the origin field of view region. The computer device may further select a partial edge (second edge) from the edge of the origin-view region, and set a trigger type corresponding to the selected partial edge as the movement control type. A partial edge is selected, such as the right portion of the lower edge of the origin field of view region. In one embodiment, in order to avoid triggering two control operations at the same time, the edge corresponding to the display control type and the edge corresponding to the movement control type may be set to be disjoint.
The computer device may preset an interactive object control strategy that maps the event of the sight line drop point position reaching the first edge of the origin field of view region to the behavior of a user intending to read, so that when the sight line drop point position is at the first edge of the origin field of view region, the content to be read is displayed at the edge of the origin field of view region, bringing it into the user's field of view. The computer device may preset the display position of the content to be read, such as the lower edge area of the origin field of view region. When the content to be read is displayed, if its data amount is large, it can be displayed in a scrolling manner in the edge area of the origin field of view region.
The computer device may likewise map the event of the sight line drop point position reaching the second edge of the origin field of view region to the behavior of a user intending to perform an interactive operation, so that when the sight line drop point position is at the second edge of the origin field of view region, the interactive object located in the region outside the origin field of view region in the picture is controlled to move toward the origin field of view region, moving the interactive object into the user's field of view to realize interaction. In this way, the interactive object appears by moving in, in contrast to the direct display of the content to be read, which avoids false triggering of interactive operations caused by the interactive object appearing instantaneously and reduces mistaken triggering of the interactive object.
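The edge-to-trigger-type mapping described above might look like the following Python sketch. The edge names, the policy table and the callbacks are purely illustrative assumptions.

```python
# Sketch (assumed names): map edges of the origin field of view region to
# trigger types and dispatch the corresponding control operation.

EDGE_TRIGGER_POLICY = {
    "lower_edge_left": "display_control",    # first edge: show content to be read
    "lower_edge_right": "movement_control",  # second edge: summon the interactive object
}

def on_gaze_at_edge(edge_name, show_content_to_read, move_object_toward_origin):
    trigger_type = EDGE_TRIGGER_POLICY.get(edge_name)
    if trigger_type == "display_control":
        show_content_to_read()
    elif trigger_type == "movement_control":
        move_object_toward_origin()
```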
For example, fig. 6 shows a schematic diagram of a screen in one embodiment. Referring to fig. 6, the screen is divided into a first field of view region to which an interactive object 610 is added, a second field of view region, and an origin field of view region. When a user normally watches a picture, the sight line falling point position is located in the central area of the original point view field area. When the user's gaze drop point position moves to the first edge 620 of the origin field of view region, the computer device controls the display of the content to be read 710 at the edge as shown in fig. 7. When the user's sight-line landing position is moved to the second edge 630 of the origin field of view region, the computer apparatus controls the interactive object 610 located in the region outside the origin field of view region in the screen to move toward the origin field of view region as shown in fig. 8.
In this embodiment, content that the user only needs to browse can be viewed simply by the user rotating the eyes, and when an interactive operation needs to be triggered, the interactive object outside the user's field of view is controlled to automatically approach the sight line drop point, which reduces the user's head turning and improves the practicability of interaction.
In one embodiment, the interaction control method further comprises: when the sight line drop point position continues to move and leaves the edge of the origin field of view region, hiding the content to be read if the trigger type corresponding to the edge area is the display control type; and, when the sight line drop point position continues to move and leaves the edge, controlling the interactive object to move in the direction opposite to its current moving direction if the trigger type corresponding to the edge area is the movement control type.
Specifically, when the computer device detects, in the manner of detecting the sight line drop point position set forth above, that the sight line drop point position moves from outside the edge of the origin field of view region to the edge, continues to move, and then leaves the edge of the origin field of view region, it determines that the user's sight movement was a false trigger. At this time, if the content to be read is being displayed, the computer device may hide it; or, if the interactive object is being controlled to move toward the origin field of view region, the interactive object is controlled to move in the direction opposite to its current moving direction and return to its initial state.
In this embodiment, a way of coping with the user falsely triggering a control operation is provided, which improves the fault tolerance of the interaction control.
In one embodiment, S308 comprises: when the sight line landing position is located at the edge of the original point view field region, continuously moving the interactive object located in the region outside the original point view field region in the picture to a preset position in the original point view field region; or when the sight line landing point position is located at the edge of the original point view field area, controlling the interactive object located in the area outside the original point view field area in the picture to move towards the original point view field area until the sight line landing point position is located in the interactive object.
Wherein the preset position is a position where the movement of the interactive object is stopped, which is preset by the computer device. Specifically, the computer device may control the interactive object in the region outside the origin field of view on the screen to continuously move to a preset position where the movement of the interactive object is stopped, so as to move the interactive object into the user field of view, when it is detected that the gaze point position is located at the edge of the origin field of view.
Because the sight line drop point position may continue to move in its previous direction after reaching the edge of the origin field of view region, the computer device can control the interactive object located in the region outside the origin field of view region in the picture to move toward the origin field of view region until the sight line drop point position falls within the interactive object, thereby moving the interactive object into the user's field of view. In this way, the user can view the interactive object without moving the line of sight back.
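The following Python sketch illustrates the two stopping conditions for the object's movement described above: stopping at a preset position inside the origin field of view region, or stopping once the sight line drop point falls within the object. The frame-stepped interpolation, bounds callback and thresholds are assumptions for illustration.

```python
# Sketch (assumed names): step the interactive object toward a target each
# frame; stop when the preset position is reached or the gaze is inside it.

def step_object(obj_pos, target_pos, gaze_xy, obj_bounds, speed=0.05):
    """Return (new_position, stop) for one frame of movement."""
    (x, y), (tx, ty) = obj_pos, target_pos
    new_pos = (x + (tx - x) * speed, y + (ty - y) * speed)

    gx, gy = gaze_xy
    left, top, right, bottom = obj_bounds(new_pos)   # object bounds at new_pos
    gaze_inside = left <= gx <= right and top <= gy <= bottom
    reached = abs(new_pos[0] - tx) < 1.0 and abs(new_pos[1] - ty) < 1.0
    return new_pos, (gaze_inside or reached)
```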
In this embodiment, movement control of the interactive object is realized through the user's line of sight, which avoids the problem that interaction control can only be realized with auxiliary equipment, improves the practicability of interaction control, and enriches the ways in which the interactive object can be moved.
In one embodiment, the interactive control method further comprises: determining an interaction area where a sight line drop point position stays in an interaction object; and when the time length of the sight line landing point position staying in the interactive area exceeds a first preset time length, executing interactive operation corresponding to the interactive area.
Specifically, the computer device may divide a display area of the interactive object into interactive areas corresponding to the interactive operations in advance, and establish a corresponding relationship between the interactive areas and the interactive operations. For example, the interactive region corresponding to the operation is confirmed or the interactive region corresponding to the operation is cancelled.
After detecting that the sight line drop point position is located within the interactive object, the computer device can determine the interaction area where the sight line drop point position stays and the duration of the stay, and query the interactive operation corresponding to that interaction area. When the computer device judges that the duration for which the sight line drop point position stays in the interaction area exceeds the first preset duration, it executes the queried interactive operation. The first preset duration is a gaze confirmation duration preset by the computer device: when the duration for which the sight line drop point position stays in an interaction area reaches the first preset duration, it is determined that the user has confirmed the selection and instructs the computer device to execute the interactive operation corresponding to that interaction area. The first preset duration is, for example, 3 seconds.
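A minimal dwell-confirmation sketch in Python is shown below. The class name, the per-frame update interface and the use of time.monotonic() are assumptions; the 3-second value is the example first preset duration from the description.

```python
# Sketch (assumed interface): execute the interaction mapped to an interaction
# area once the gaze has stayed in that area longer than the first preset duration.
import time

class DwellSelector:
    def __init__(self, dwell_seconds=3.0):
        self.dwell_seconds = dwell_seconds   # first preset duration
        self._area = None
        self._enter_time = None

    def update(self, area_under_gaze, actions):
        """Call every frame; `actions` maps an interaction area name to a callable."""
        now = time.monotonic()
        if area_under_gaze != self._area:
            self._area, self._enter_time = area_under_gaze, now
            return
        if self._area in actions and now - self._enter_time >= self.dwell_seconds:
            actions[self._area]()            # execute the queried interactive operation
            self._enter_time = now           # avoid immediate re-triggering
```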
In the embodiment, the control process of the interactive behavior is realized through the sight of the user, and the problem that the interactive control can be realized only by using auxiliary equipment is solved.
In one embodiment, the interactive control method further comprises: starting timing when the sight line drop point position moves out of the interactive object; and when the timing exceeds a second preset time length, controlling the interactive object to return to the area outside the original point view field area in the picture.
Specifically, the computer device may start timing after detecting that the sight line drop point position has left the interactive object; when the timed duration exceeds the second preset duration, it determines that the user no longer needs to interact and controls the interactive object to return to the region outside the origin field of view region in the picture. The second preset duration is a leave-confirmation duration preset by the computer device: when the duration for which the sight line drop point position has been away from the interactive object reaches the second preset duration, it is determined that the user has confirmed that interaction is no longer needed. The second preset duration is, for example, 10 seconds.
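The leave timeout can be sketched in the same style; again the names, the per-frame update interface and the restore callback are assumptions, and the 10-second value is the example second preset duration from the description.

```python
# Sketch (assumed interface): once the gaze has been outside the interactive
# object longer than the second preset duration, restore the object to the
# region outside the origin field of view region.
import time

class LeaveTimer:
    def __init__(self, timeout_seconds=10.0):
        self.timeout_seconds = timeout_seconds   # second preset duration
        self._left_at = None

    def update(self, gaze_inside_object, restore_object):
        now = time.monotonic()
        if gaze_inside_object:
            self._left_at = None                  # reset while gaze is on the object
        elif self._left_at is None:
            self._left_at = now                   # start timing when gaze leaves
        elif now - self._left_at >= self.timeout_seconds:
            restore_object()                      # return object to its default area
            self._left_at = None
```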
In this embodiment, when it is determined that the user no longer needs to perform interaction, the interaction object is restored to an area outside the origin field of view area, so that the influence of the interaction object staying in the origin field of view area on the user viewing screen is avoided.
As shown in fig. 9, in a virtual reality scene, in a specific embodiment, the interaction control method specifically includes the following steps:
and S902, acquiring and outputting a virtual reality picture.
S904, determining the angle value of the current field angle; determining an origin point view field reference area in the virtual reality picture according to the angle value of the reference view field angle; and determining the origin visual field area in the virtual reality picture according to the acquired angle value and the origin visual field reference area.
S906, acquiring binocular eye images; determining the binocular gaze point positions on the screen according to the pupil imaging points in the binocular eye images; and performing parallax conversion on the binocular gaze point positions to obtain the sight line drop point position corresponding to the binocular gaze point positions in the picture.
S908, judging whether the sight line drop point position is located at the edge of the origin sight field area; if yes, go to step S910; if not, go to step S906.
S910, determining the trigger type corresponding to the edge where the sight line drop point is located; if the trigger type is the display control type, jumping to step S912; if the trigger type is the movement control type, jumping to step S916.
And S912, displaying the content to be read at the edge of the origin visual field area.
And S914, hiding the content to be read when the sight line landing position is away from the edge of the origin sight field area.
S916, continuously moving the interactive object located in the first view field area to a preset position in the origin view field area; or controlling the interactive object positioned in the first view field area to move towards the origin view field area until the sight line falling point position is positioned in the interactive object; the virtual reality picture also comprises a first view field region and a second view field region; the second field of view region encompasses the first field of view region; the first field of view region encompasses the origin field of view region.
S918, determining an interaction area where the sight line drop point position stays in the interaction object; and when the time length of the sight line landing point position staying in the interactive area exceeds a first preset time length, executing interactive operation corresponding to the interactive area.
S920, timing is started when the sight line drop point position moves out of the interactive object; and when the timing exceeds a second preset time length, controlling the interactive object to return to the area outside the original point view field area in the picture.
Wherein, the step of obtaining the sight line landing position may further be: determining an initial position of a sight line drop point in a virtual reality picture according to initial sensor data; acquiring current sensor data; determining an offset angle according to a difference value between current sensor data and initial sensor data; and determining the current sight line drop point position in the virtual reality picture according to the offset angle and the sight line drop point initial position.
In this embodiment, after the virtual reality picture is locally acquired and output, the origin field of view region may be automatically determined in the virtual reality picture, and the sight line drop point position in the virtual reality picture is then detected. When the sight line drop point position is detected to move to the edge of the origin field of view region and the user intends to interact through the interactive object, the interactive object is controlled to move into the user's field of view, so that the user interacts through gaze operation; when the user intends to browse the content to be read, the content to be read is displayed at the edge of the origin field of view region. The interactive object is thus controlled according to the user's line of sight, which avoids the need for auxiliary equipment to achieve interaction control and improves interaction control efficiency. Because the interactive object is by default located in the region outside the origin field of view region in the picture, the interactive object is prevented from blocking the picture content in the origin field of view region and affecting the user's viewing of the locally output picture.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the sequence indicated by the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated otherwise, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It is also to be understood that, in the above embodiments, the interaction control method is executed by a computer device; in a virtual reality scenario, however, it may be executed by an interaction control device.
As shown in FIG. 10, in one embodiment, an interactive control device 1000 is provided. Referring to fig. 10, the interactive control apparatus 1000 includes: an acquisition module 1001, a determination module 1002, a detection module 1003 and a control module 1004.
An acquiring module 1001 is configured to acquire and output a screen.
A determining module 1002 is configured to determine an origin field of view region in the frame.
The detecting module 1003 is configured to detect a gaze point position in the screen.
And the control module 1004 is configured to control the interactive object located in the area outside the original point field of view in the screen to move towards the original point field of view when the sight line landing position is located at the edge of the original point field of view.
After the picture is locally acquired and output, the interaction control apparatus 1000 may automatically determine the origin field of view area in the picture and then detect the sight line drop point position in the picture. When the sight line drop point position is detected to move to the edge of the origin field of view area, that is, when the user intends to interact through the interactive object, the interactive object is controlled to move towards the origin field of view area, so that the interactive object is controlled according to the user's line of sight. This avoids the need for auxiliary equipment and improves interaction control efficiency. Because the interactive object is located in the area outside the origin field of view area by default, it does not block the picture content in the origin field of view area or interfere with the user's viewing of the locally output picture.
In one embodiment, the detection module 1003 is further configured to acquire an eye image; determining the staring point position of a pupil imaging point on a screen in an eye image; the gaze location is converted to a gaze drop location in the screen.
In one embodiment, the eye images are binocular eye images; the gaze point location is a binocular gaze point location. The detection module 1003 is further configured to perform parallax conversion on the binocular gaze point positions to obtain gaze point positions corresponding to the binocular gaze point positions in the picture.
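By way of non-limiting illustration, the following Python sketch shows one simple form such a parallax conversion could take, assuming the two monocular gaze points are fused by averaging with an optional baseline correction; the function and parameter names are illustrative assumptions rather than the disclosed algorithm.

```python
def fuse_binocular_gaze(left_pt, right_pt, baseline_px=0.0):
    """Fuse the left- and right-eye gaze point positions into a single
    sight line drop point in the picture.

    A simple disparity-style fusion: the two monocular points are
    averaged, with an optional horizontal correction for the on-screen
    baseline between the two eye viewports (baseline_px).
    """
    lx, ly = left_pt
    rx, ry = right_pt
    x = (lx + rx - baseline_px) / 2.0
    y = (ly + ry) / 2.0
    return (x, y)

# Example with two nearly coincident monocular gaze points.
print(fuse_binocular_gaze((944, 538), (976, 542)))
```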
In one embodiment, the control module 1004 is configured to determine a trigger type corresponding to an edge when the gaze drop point position is located at the edge of the origin view field region; when the trigger type is a display control type, displaying the content to be read at the edge; when the trigger type is a movement control type, controlling the interactive object located in the area outside the origin field of view area in the picture to move towards the origin field of view area.
In one embodiment, the control module 1004 is further configured to hide the content to be read when the gaze point position continuously moves and leaves the edge and the trigger type corresponding to the edge area is the display control type; and when the sight line drop point position continuously moves and leaves the edge, controlling the interactive object to move in the opposite direction of the current moving direction when the trigger type corresponding to the edge area is the movement control type.
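By way of non-limiting illustration, the following Python sketch shows one possible dispatch of the two trigger types on entering and leaving an edge; the edge-to-trigger mapping and the ui interface are hypothetical stand-ins for whatever rendering layer is actually used.

```python
from enum import Enum, auto

class TriggerType(Enum):
    DISPLAY_CONTROL = auto()   # e.g. an edge that reveals content to be read
    MOVEMENT_CONTROL = auto()  # e.g. an edge that pulls in the interactive object

# Assumed mapping from edges of the origin field of view area to trigger types.
EDGE_TRIGGERS = {
    "top": TriggerType.DISPLAY_CONTROL,
    "left": TriggerType.MOVEMENT_CONTROL,
    "right": TriggerType.MOVEMENT_CONTROL,
}

def on_gaze_at_edge(edge, ui):
    """Called when the sight line drop point reaches an edge."""
    trigger = EDGE_TRIGGERS.get(edge)
    if trigger is TriggerType.DISPLAY_CONTROL:
        ui.show_content_to_read(edge)
    elif trigger is TriggerType.MOVEMENT_CONTROL:
        ui.move_interactive_object_toward_origin_region()

def on_gaze_leaves_edge(edge, ui):
    """Called when the sight line drop point moves on and leaves the edge."""
    trigger = EDGE_TRIGGERS.get(edge)
    if trigger is TriggerType.DISPLAY_CONTROL:
        ui.hide_content_to_read(edge)
    elif trigger is TriggerType.MOVEMENT_CONTROL:
        ui.reverse_interactive_object_movement()
```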
In one embodiment, the control module 1004 is further configured to continuously move the interactive object located in the area outside the origin field area in the screen to a preset position in the origin field area when the sight line drop point position is located at the edge of the origin field area; or when the sight line landing point position is located at the edge of the original point view field area, controlling the interactive object located in the area outside the original point view field area in the picture to move towards the original point view field area until the sight line landing point position is located in the interactive object.
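By way of non-limiting illustration, the following Python sketch shows a per-frame update covering both movement modes described above; the speed value and the bounding-box test are illustrative assumptions.

```python
def step_toward(pos, target, speed):
    """Move pos one frame toward target at the given speed (pixels per frame)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx ** 2 + dy ** 2) ** 0.5
    if dist <= speed:
        return target
    return (pos[0] + dx / dist * speed, pos[1] + dy / dist * speed)

def update_interactive_object(obj_pos, obj_size, gaze_pos, preset_pos,
                              stop_on_gaze, speed=8.0):
    """Per-frame update implementing the two movement modes.

    stop_on_gaze=False: keep moving until the preset position inside the
    origin field of view area is reached (first mode).
    stop_on_gaze=True: stop as soon as the sight line drop point lies
    inside the object's bounding box (second mode).
    """
    w, h = obj_size
    gazed = (obj_pos[0] <= gaze_pos[0] <= obj_pos[0] + w and
             obj_pos[1] <= gaze_pos[1] <= obj_pos[1] + h)
    if stop_on_gaze and gazed:
        return obj_pos
    return step_toward(obj_pos, preset_pos, speed)
```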
In one embodiment, the interactive control device 1000 further comprises: an interaction module 1005.
An interaction module 1005, configured to determine an interaction area where a gaze drop point position stays in an interaction object; and when the time length of the sight line landing point position staying in the interactive area exceeds a first preset time length, executing interactive operation corresponding to the interactive area.
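By way of non-limiting illustration, the following Python sketch shows a simple dwell-time selector of the kind described above; the dwell duration and the area identifiers are illustrative assumptions for the first preset time length and the interaction areas.

```python
import time

class DwellSelector:
    """Fire an interaction once the gaze has stayed inside the same
    interaction area for longer than a preset dwell time."""

    def __init__(self, dwell_seconds=1.5):
        self.dwell_seconds = dwell_seconds
        self.current_area = None
        self.entered_at = None

    def update(self, area_id, now=None):
        """Call every frame with the interaction area currently under the
        gaze (or None). Returns the area id exactly once when it fires."""
        now = time.monotonic() if now is None else now
        if area_id != self.current_area:
            # The gaze moved to a different area (or off all areas): restart.
            self.current_area = area_id
            self.entered_at = now if area_id is not None else None
            return None
        if area_id is not None and now - self.entered_at >= self.dwell_seconds:
            self.entered_at = float("inf")   # fire only once per entry
            return area_id
        return None
```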
As shown in fig. 11, in one embodiment, the interactive control device 1000 further includes: an interaction module 1005 and a return module 1006.
A return module 1006 for starting timing when the gaze drop position moves out of the interactive object; and when the timing exceeds a second preset time length, controlling the interactive object to return to the area outside the original point view field area in the picture.
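By way of non-limiting illustration, the following Python sketch shows one way such a return timer could be kept; the timeout value is an illustrative assumption for the second preset time length.

```python
import time

class ReturnTimer:
    """Start timing when the gaze leaves the interactive object; report that
    the object should return to the area outside the origin field of view
    area once the timeout elapses without the gaze coming back."""

    def __init__(self, timeout_seconds=3.0):
        self.timeout_seconds = timeout_seconds
        self.left_at = None

    def update(self, gaze_inside_object, now=None):
        now = time.monotonic() if now is None else now
        if gaze_inside_object:
            self.left_at = None          # gaze came back, cancel the timer
            return False
        if self.left_at is None:
            self.left_at = now           # gaze just moved out, start timing
            return False
        return now - self.left_at >= self.timeout_seconds  # True -> return object
```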
In one embodiment, the screen is a virtual reality screen. The determining module 1002 is further configured to determine an origin field of view region in the virtual reality screen according to the current field of view angle.
In one embodiment, the determining module 1002 is further configured to determine an angle value of a current field angle; determining an origin point view field reference area in the virtual reality picture according to the angle value of the reference view field angle; and determining the origin visual field area in the virtual reality picture according to the acquired angle value and the origin visual field reference area.
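By way of non-limiting illustration, the following Python sketch shows one way the origin field of view area could be derived from the current and reference field of view angles, assuming the reference area is a centered rectangle that is scaled by the angle ratio; the reference fraction and the default angle values are illustrative assumptions.

```python
def origin_field_region(screen_size, current_fov, reference_fov=(90.0, 90.0),
                        reference_fraction=0.5):
    """Return the origin field of view area as a centered rectangle
    (x, y, w, h) inside the virtual reality picture.

    The reference area covers reference_fraction of the picture at the
    reference field of view angle; it is then scaled by the ratio of the
    current field of view angle to the reference angle and clamped to the
    picture bounds.
    """
    sw, sh = screen_size
    ref_w, ref_h = sw * reference_fraction, sh * reference_fraction
    w = min(sw, ref_w * current_fov[0] / reference_fov[0])
    h = min(sh, ref_h * current_fov[1] / reference_fov[1])
    return ((sw - w) / 2, (sh - h) / 2, w, h)

# Example: a headset with a 110 x 90 degree current field of view.
print(origin_field_region((1920, 1080), current_fov=(110.0, 90.0)))
```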
In one embodiment, the detection module 1003 is further configured to determine an initial position of the gaze drop point in the virtual reality frame according to the initial sensor data; acquiring current sensor data; determining an offset angle according to a difference value between current sensor data and initial sensor data; and determining the current sight line drop point position in the virtual reality picture according to the offset angle and the sight line drop point initial position.
In one embodiment, the virtual reality screen further comprises a first field of view region and a second field of view region; the second field of view region encompasses the first field of view region; the first field of view region encompasses the origin field of view region. The control module 1004 is further configured to control the interactive object located in the first field of view region in the virtual reality screen to move towards the origin field of view region when the gaze drop point location is located at an edge of the origin field of view region.
FIG. 12 is a diagram illustrating the internal structure of a computer device in one embodiment. The computer device may specifically be the computer device 110 in fig. 1. As shown in fig. 12, the computer device includes a processor, a memory, a network interface, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the interaction control method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the interaction control method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen. Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
FIG. 13 is a diagram illustrating the internal structure of a computer device in one embodiment. The computer device may specifically be the virtual reality device 110 in fig. 2. As shown in fig. 13, the computer device includes a processor, a memory, a network interface, an optical lens, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the interaction control method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the interaction control method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen. Those skilled in the art will appreciate that the architecture shown in fig. 13 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the interactive control apparatus provided in the present application may be implemented in a form of a computer program, where the computer program may be run on a computer device as shown in fig. 12 or fig. 13, and a nonvolatile storage medium of the computer device may store various program modules constituting the interactive control apparatus, such as the obtaining module 1001, the determining module 1002, the detecting module 1003, the control module 1004, and the like shown in fig. 10. The computer program composed of the respective program modules causes the processor to execute the steps in the interaction control method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 12 may acquire and output a picture through the acquisition module 1001 in the interaction control apparatus 1000 shown in fig. 10, determine the origin field of view region in the picture through the determination module 1002, detect the sight line drop point position in the picture through the detection module 1003, and, when the sight line drop point position is located at the edge of the origin field of view region, control the interactive object located in the region outside the origin field of view region to move towards the origin field of view region through the control module 1004. In a virtual reality environment, the computer device shown in fig. 13 may perform the corresponding operations through the same modules of the interaction control apparatus 1000 shown in fig. 10.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which, when executed by a processor, causes the processor to perform the steps of: acquiring and outputting a picture; determining an origin field of view region in a picture; detecting a sight line drop point position in a picture; when the sight line landing position is positioned at the edge of the original point view field area, controlling the interactive object positioned in the area outside the original point view field area in the picture to move towards the original point view field area.
In one embodiment, detecting a gaze point location in a frame comprises: acquiring an eye image; determining the staring point position of a pupil imaging point on a screen in an eye image; the gaze location is converted to a gaze drop location in the screen.
In one embodiment, the eye images are binocular eye images; the gaze point location is a binocular gaze point location. Converting gaze location to gaze drop location in a screen, comprising: and performing parallax conversion on the positions of the binocular gaze points to obtain sight line falling point positions corresponding to the positions of the binocular gaze points in the picture.
In one embodiment, when the gaze point position is located at the edge of the origin field of view region, controlling the interactive objects located in the region outside the origin field of view region in the screen to move toward the origin field of view region includes: when the sight line drop point position is located at the edge of the origin sight field area, determining a trigger type corresponding to the edge; when the trigger type is a display control type, displaying the content to be read at the edge; when the trigger type is a movement control type, controlling the interactive object located in the area outside the origin field of view area in the picture to move towards the origin field of view area.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the steps of: when the sight line drop point position continuously moves and leaves the edge, hiding the content to be read when the trigger type corresponding to the edge area is the display control type; and when the sight line drop point position continuously moves and leaves the edge, controlling the interactive object to move in the opposite direction of the current moving direction when the trigger type corresponding to the edge area is the movement control type.
In one embodiment, when the gaze point position is located at the edge of the origin field of view region, controlling the interactive objects located in the region outside the origin field of view region in the screen to move toward the origin field of view region includes: when the sight line landing position is located at the edge of the original point view field region, continuously moving the interactive object located in the region outside the original point view field region in the picture to a preset position in the original point view field region; or when the sight line landing point position is located at the edge of the original point view field area, controlling the interactive object located in the area outside the original point view field area in the picture to move towards the original point view field area until the sight line landing point position is located in the interactive object.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the steps of: determining an interaction area where a sight line drop point position stays in an interaction object; and when the time length of the sight line landing point position staying in the interactive area exceeds a first preset time length, executing interactive operation corresponding to the interactive area.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the steps of: starting timing when the sight line drop point position moves out of the interactive object; and when the timing exceeds a second preset time length, controlling the interactive object to return to the area outside the original point view field area in the picture.
In one embodiment, the screen is a virtual reality screen. Determining an origin field of view region in a picture, comprising: and determining an origin field area in the virtual reality picture according to the current field angle.
In one embodiment, the determining the origin field of view region in the virtual reality picture according to the current field of view angle comprises: determining an angle value of a current field angle; determining an origin point view field reference area in the virtual reality picture according to the angle value of the reference view field angle; and determining the origin visual field area in the virtual reality picture according to the acquired angle value and the origin visual field reference area.
In one embodiment, detecting a gaze point location in a virtual reality screen comprises: determining an initial position of a sight line drop point in a virtual reality picture according to initial sensor data; acquiring current sensor data; determining an offset angle according to a difference value between current sensor data and initial sensor data; and determining the current sight line drop point position in the virtual reality picture according to the offset angle and the sight line drop point initial position.
In one embodiment, the virtual reality screen further comprises a first field of view region and a second field of view region; the second field of view region encompasses the first field of view region; the first field of view region encompasses the origin field of view region. When the sight line landing position is located at the edge of the original point view field area, controlling the interactive object located in the area outside the original point view field area in the picture to move towards the original point view field area, wherein the method comprises the following steps: when the sight line landing position is positioned at the edge of the origin field of view region, the interactive object positioned in the first field of view region in the virtual reality picture is controlled to move towards the origin field of view region.
With the above storage medium, after the picture is locally acquired and output, the origin field of view area can be automatically determined in the picture, and the sight line drop point position in the picture is then detected. When the sight line drop point position is detected to move to the edge of the origin field of view area, that is, when the user intends to interact through the interactive object, the interactive object is controlled to move towards the origin field of view area, so that the interactive object is controlled according to the user's line of sight. This avoids the need for auxiliary equipment and improves interaction control efficiency. Because the interactive object is located in the area outside the origin field of view area by default, it does not block the picture content in the origin field of view area or interfere with the user's viewing of the locally output picture.
In one embodiment, there is provided a computer device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of: acquiring and outputting a picture; determining an origin field of view region in a picture; detecting a sight line drop point position in a picture; when the sight line landing position is positioned at the edge of the original point view field area, controlling the interactive object positioned in the area outside the original point view field area in the picture to move towards the original point view field area.
In one embodiment, detecting a gaze point location in a frame comprises: acquiring an eye image; determining the staring point position of a pupil imaging point on a screen in an eye image; the gaze location is converted to a gaze drop location in the screen.
In one embodiment, the eye images are binocular eye images; the gaze point location is a binocular gaze point location. Converting gaze location to gaze drop location in a screen, comprising: and performing parallax conversion on the positions of the binocular gaze points to obtain sight line falling point positions corresponding to the positions of the binocular gaze points in the picture.
In one embodiment, when the gaze point position is located at the edge of the origin field of view region, controlling the interactive objects located in the region outside the origin field of view region in the screen to move toward the origin field of view region includes: when the sight line drop point position is located at the edge of the origin sight field area, determining a trigger type corresponding to the edge; when the trigger type is a display control type, displaying the content to be read at the edge; when the trigger type is a movement control type, controlling the interactive object located in the area outside the origin field of view area in the picture to move towards the origin field of view area.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the steps of: when the sight line drop point position continuously moves and leaves the edge, hiding the content to be read when the trigger type corresponding to the edge area is the display control type; and when the sight line drop point position continuously moves and leaves the edge, controlling the interactive object to move in the opposite direction of the current moving direction when the trigger type corresponding to the edge area is the movement control type.
In one embodiment, when the gaze point position is located at the edge of the origin field of view region, controlling the interactive objects located in the region outside the origin field of view region in the screen to move toward the origin field of view region includes: when the sight line landing position is located at the edge of the original point view field region, continuously moving the interactive object located in the region outside the original point view field region in the picture to a preset position in the original point view field region; or when the sight line landing point position is located at the edge of the original point view field area, controlling the interactive object located in the area outside the original point view field area in the picture to move towards the original point view field area until the sight line landing point position is located in the interactive object.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the steps of: determining an interaction area where a sight line drop point position stays in an interaction object; and when the time length of the sight line landing point position staying in the interactive area exceeds a first preset time length, executing interactive operation corresponding to the interactive area.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the steps of: starting timing when the sight line drop point position moves out of the interactive object; and when the timing exceeds a second preset time length, controlling the interactive object to return to the area outside the original point view field area in the picture.
In one embodiment, the screen is a virtual reality screen. Determining an origin field of view region in a picture, comprising: and determining an origin field area in the virtual reality picture according to the current field angle.
In one embodiment, the determining the origin field of view region in the virtual reality picture according to the current field of view angle comprises: determining an angle value of a current field angle; determining an origin point view field reference area in the virtual reality picture according to the angle value of the reference view field angle; and determining the origin visual field area in the virtual reality picture according to the acquired angle value and the origin visual field reference area.
In one embodiment, detecting a gaze point location in a virtual reality screen comprises: determining an initial position of a sight line drop point in a virtual reality picture according to initial sensor data; acquiring current sensor data; determining an offset angle according to a difference value between current sensor data and initial sensor data; and determining the current sight line drop point position in the virtual reality picture according to the offset angle and the sight line drop point initial position.
In one embodiment, the virtual reality screen further comprises a first field of view region and a second field of view region; the second field of view region encompasses the first field of view region; the first field of view region encompasses the origin field of view region. When the sight line landing position is located at the edge of the original point view field area, controlling the interactive object located in the area outside the original point view field area in the picture to move towards the original point view field area, wherein the method comprises the following steps: when the sight line landing position is positioned at the edge of the origin field of view region, the interactive object positioned in the first field of view region in the virtual reality picture is controlled to move towards the origin field of view region.
With the above computer device, after the picture is locally acquired and output, the origin field of view area can be automatically determined in the picture, and the sight line drop point position in the picture is then detected. When the sight line drop point position is detected to move to the edge of the origin field of view area, that is, when the user intends to interact through the interactive object, the interactive object is controlled to move towards the origin field of view area, so that the interactive object is controlled according to the user's line of sight. This avoids the need for auxiliary equipment and improves interaction control efficiency. Because the interactive object is located in the area outside the origin field of view area by default, it does not block the picture content in the origin field of view area or interfere with the user's viewing of the locally output picture.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features involves no contradiction, it should be considered to fall within the scope of this specification.
The above examples only express several embodiments of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

1. An interaction control method, characterized in that the method comprises:
acquiring and outputting a virtual reality picture; the virtual reality picture comprises a first view field region and a second view field region which are divided based on corresponding view field region ranges when the virtual reality equipment deflects different angle intervals; the second field of view region encompasses the first field of view region;
acquiring an angle value of a current field angle of a static observation point with a fixed space posture;
determining an origin field reference area in the virtual reality picture according to the angle value of the reference field angle;
determining an origin field area in the virtual reality picture according to the obtained angle value of the current field angle and the origin field reference area; the original point view field region is a picture region in the static observation point view field range; the first field of view region encompasses the origin field of view region;
detecting a sight line drop point position in the virtual reality picture;
when the sight line drop point position is located at the edge of the origin sight field area, determining a trigger type corresponding to the edge;
when the trigger type is a display control type, displaying the content to be read at the edge;
when the trigger type is a movement control type, controlling the interactive object located in the first field of view region in the virtual reality picture to tend to move towards the origin field of view region;
when the sight line drop point position continuously moves and leaves the edge, hiding the content to be read when the trigger type corresponding to the edge area is a display control type;
and when the sight line drop point position continuously moves and leaves the edge, controlling the interactive object to move along the opposite direction of the current moving direction when the trigger type corresponding to the edge area is the movement control type.
2. The method according to claim 1, wherein the detecting the gaze point location in the virtual reality screen comprises:
acquiring an eye image;
determining the staring point position of a pupil imaging point on a screen in the eye image;
and converting the gaze point position into a sight line landing point position in the virtual reality picture.
3. The method of claim 2, wherein the eye image is a binocular eye image; the gaze point position is a binocular gaze point position;
the converting the gaze point location to a gaze point location in the virtual reality screen includes:
and carrying out parallax conversion on the positions of the binocular gaze points to obtain sight line falling point positions which correspond to the positions of the binocular gaze points in the virtual reality picture.
4. The method of claim 1, further comprising:
when the sight line landing point position is located at the edge of the origin view field area, continuously moving an interactive object located in an area outside the origin view field area in the virtual reality picture to a preset position in the origin view field area; or,
when the sight line landing point position is located at the edge of the origin view field area, controlling an interactive object located in an area outside the origin view field area in the virtual reality picture to move towards the origin view field area until the sight line landing point position is located in the interactive object.
5. The method of claim 4, further comprising:
determining an interaction area where the sight line drop point position stays in the interaction object;
when the time length for which the sight line drop point position stays in the interaction area exceeds a first preset time length, executing the interactive operation corresponding to the interaction area.
6. The method of claim 4, further comprising:
starting timing when the sight line drop position moves out of the interactive object;
and when the timing exceeds a second preset time length, controlling the interactive object to return to the area outside the origin point view field area in the virtual reality picture.
7. The method of any of claims 1-6, wherein the field angles comprise horizontal field viewing angles and vertical field viewing angles.
8. The method according to claim 1, wherein the detecting the gaze point location in the virtual reality screen comprises:
determining an initial position of a sight line drop point in the virtual reality picture according to initial sensor data;
acquiring current sensor data;
determining an offset angle according to a difference value between current sensor data and initial sensor data;
and determining the current sight line drop point position in the virtual reality picture according to the offset angle and the sight line drop point initial position.
9. The method of claim 1, wherein the spatial pose comprises a direction of observation of the static observation point.
10. An interactive control apparatus, the apparatus comprising:
the acquisition module is used for acquiring and outputting a virtual reality picture; the virtual reality picture comprises a first view field region and a second view field region which are divided based on corresponding view field region ranges when the virtual reality equipment deflects different angle intervals; the second field of view region encompasses the first field of view region;
the determining module is used for acquiring the angle value of the current field angle of the static observation point with fixed space posture; determining an origin field reference area in the virtual reality picture according to the angle value of the reference field angle; determining an origin field area in the virtual reality picture according to the obtained angle value of the current field angle and the origin field reference area; the original point view field region is a picture region in the static observation point view field range; the first field of view region encompasses the origin field of view region;
the detection module is used for detecting the sight line drop point position in the virtual reality picture;
the control module is used for determining a trigger type corresponding to the edge when the sight line drop point position is located at the edge of the origin sight field area; when the trigger type is a display control type, displaying the content to be read at the edge; when the trigger type is a movement control type, controlling the interactive object located in the first field of view region in the virtual reality picture to tend to move towards the origin field of view region;
the control module is further configured to hide the content to be read when the gaze point position continuously moves and leaves the edge and the trigger type corresponding to the edge area is a display control type; and when the sight line drop point position continuously moves and leaves the edge, controlling the interactive object to move along the opposite direction of the current moving direction when the trigger type corresponding to the edge area is the movement control type.
11. The apparatus of claim 10, wherein the detection module is further configured to acquire an eye image; determining the staring point position of a pupil imaging point on a screen in the eye image; and converting the gaze point position into a sight line landing point position in the virtual reality picture.
12. The apparatus of claim 11, wherein the eye image is a binocular eye image; the gaze point position is a binocular gaze point position;
the detection module is further used for performing parallax conversion on the positions of the binocular gaze points to obtain sight line falling point positions corresponding to the positions of the binocular gaze points in the virtual reality picture.
13. The apparatus according to claim 10, wherein the control module is further configured to continuously move the interactive object located in the area outside the origin field of view area in the virtual reality screen to a preset position in the origin field of view area when the gaze point location is located at the edge of the origin field of view area; or when the sight line drop point position is located at the edge of the origin view field area, controlling the interactive object located in the area outside the origin view field area in the virtual reality picture to move towards the origin view field area until the sight line drop point position is located in the interactive object.
14. The apparatus of claim 13, further comprising:
the interaction module is used for determining an interaction area where the sight line drop point position stays in the interaction object; and when the time length of the sight line landing point position staying in the interaction area exceeds a first preset time length, executing the interaction operation corresponding to the interaction area.
15. The apparatus of claim 13, further comprising:
a returning module for starting timing when the sight line drop point position moves out of the interactive object; and when the timing exceeds a second preset time length, controlling the interactive object to return to the area outside the origin point view field area in the virtual reality picture.
16. The apparatus of any of claims 10 to 15, wherein the field angles comprise horizontal field viewing angles and vertical field viewing angles.
17. The apparatus of claim 16, wherein the detection module is further configured to determine an initial position of a gaze point in the virtual reality screen based on initial sensor data; acquiring current sensor data; determining an offset angle according to a difference value between current sensor data and initial sensor data; and determining the current sight line drop point position in the virtual reality picture according to the offset angle and the sight line drop point initial position.
18. The apparatus of claim 10, wherein the spatial pose comprises a direction of observation of the static observation point.
19. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 9.
20. A computer device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 9.
CN201711142437.9A 2017-11-17 2017-11-17 Interaction control method and device, storage medium and computer equipment Active CN109799899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711142437.9A CN109799899B (en) 2017-11-17 2017-11-17 Interaction control method and device, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711142437.9A CN109799899B (en) 2017-11-17 2017-11-17 Interaction control method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN109799899A CN109799899A (en) 2019-05-24
CN109799899B true CN109799899B (en) 2021-10-22

Family

ID=66554616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711142437.9A Active CN109799899B (en) 2017-11-17 2017-11-17 Interaction control method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN109799899B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110597387B (en) * 2019-09-05 2021-11-30 腾讯科技(深圳)有限公司 Artificial intelligence based picture display method and device, computing equipment and storage medium
CN110478903B (en) * 2019-09-09 2023-05-26 珠海金山数字网络科技有限公司 Control method and device for virtual camera
CN114205669B (en) * 2021-12-27 2023-10-17 咪咕视讯科技有限公司 Free view video playing method and device and electronic equipment
CN114615430B (en) * 2022-03-07 2022-12-23 清华大学 Interaction method and device between mobile terminal and external object and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866215A (en) * 2010-04-20 2010-10-20 复旦大学 Human-computer interaction device and method adopting eye tracking in video monitoring
CN104428732A (en) * 2012-07-27 2015-03-18 诺基亚公司 Multimodal interaction with near-to-eye display
CN104866105A (en) * 2015-06-03 2015-08-26 深圳市智帽科技开发有限公司 Eye movement and head movement interactive method for head display equipment
CN105425971A (en) * 2016-01-15 2016-03-23 中意工业设计(湖南)有限责任公司 Interaction method and interaction device for eye movement interface and near-eye display
CN106125930A (en) * 2016-06-27 2016-11-16 上海乐相科技有限公司 A kind of virtual reality device and the method for main perspective picture calibration
CN106227412A (en) * 2016-07-27 2016-12-14 深圳羚羊极速科技有限公司 A kind of utilization obtains the exchange method that focus triggering mobile phone VR applies
CN106462231A (en) * 2014-03-17 2017-02-22 Itu 商业发展公司 Computer-implemented gaze interaction method and apparatus
CN106537290A (en) * 2014-05-09 2017-03-22 谷歌公司 Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
CN106648055A (en) * 2016-09-30 2017-05-10 珠海市魅族科技有限公司 Method of managing menu in virtual reality environment and virtual reality equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9996221B2 (en) * 2013-12-01 2018-06-12 Upskill, Inc. Systems and methods for look-initiated communication
CN106919248A (en) * 2015-12-26 2017-07-04 华为技术有限公司 It is applied to the content transmission method and equipment of virtual reality
CN106527722B (en) * 2016-11-08 2019-05-10 网易(杭州)网络有限公司 Exchange method, system and terminal device in virtual reality

Also Published As

Publication number Publication date
CN109799899A (en) 2019-05-24

Similar Documents

Publication Publication Date Title
JP7094266B2 (en) Single-depth tracking-accommodation-binocular accommodation solution
CN108292489B (en) Information processing apparatus and image generating method
US11314088B2 (en) Camera-based mixed reality glass apparatus and mixed reality display method
KR101741335B1 (en) Holographic displaying method and device based on human eyes tracking
JP6023801B2 (en) Simulation device
CN109799899B (en) Interaction control method and device, storage medium and computer equipment
TWI669635B (en) Method and device for displaying barrage and non-volatile computer readable storage medium
JP6454851B2 (en) 3D gaze point location algorithm
KR20190112712A (en) Improved method and system for video conferencing with head mounted display (HMD)
JP2007052304A (en) Video display system
JPWO2016113951A1 (en) Head-mounted display device and video display system
JP2010541513A (en) One-source multi-use (OSMU) type stereo camera and method for producing stereo image content thereof
JP2017204674A (en) Imaging device, head-mounted display, information processing system, and information processing method
JPH11155152A (en) Method and system for three-dimensional shape information input, and image input device thereof
US10885651B2 (en) Information processing method, wearable electronic device, and processing apparatus and system
CN205195880U (en) Watch equipment and watch system
CN107209949B (en) Method and system for generating magnified 3D images
CN111880654A (en) Image display method and device, wearable device and storage medium
JP2011010126A (en) Image processing apparatus, and image processing method
JP2007501950A (en) 3D image display device
JP4492597B2 (en) Stereoscopic display processing apparatus and stereoscopic display processing method
US20190281280A1 (en) Parallax Display using Head-Tracking and Light-Field Display
US11212502B2 (en) Method of modifying an image on a computational device
CN113870213A (en) Image display method, image display device, storage medium, and electronic apparatus
JP2012244453A (en) Image display device, image display system, and three-dimensional spectacles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant