CN115665398A - Image adjusting method, device, equipment and medium based on virtual reality technology - Google Patents

Image adjusting method, device, equipment and medium based on virtual reality technology Download PDF

Info

Publication number
CN115665398A
CN115665398A
Authority
CN
China
Prior art keywords
image
adjustment
distance
adjusted
user
Prior art date
Legal status
Granted
Application number
CN202211420240.8A
Other languages
Chinese (zh)
Other versions
CN115665398B (en)
Inventor
刘寿君
程黎辉
关亚东
Current Assignee
Longcheer Electronics Huizhou Co Ltd
Original Assignee
Longcheer Electronics Huizhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Longcheer Electronics Huizhou Co Ltd filed Critical Longcheer Electronics Huizhou Co Ltd
Priority to CN202211420240.8A priority Critical patent/CN115665398B/en
Publication of CN115665398A publication Critical patent/CN115665398A/en
Application granted granted Critical
Publication of CN115665398B publication Critical patent/CN115665398B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an image adjustment method, apparatus, device, and medium based on virtual reality technology. The method is applied to a virtual reality device that includes a video perspective camera, and comprises the following steps: acquiring an image to be adjusted; determining the target adjustment range corresponding to the user's pupil distance information according to that information and a stored association between pupil distance information and adjustment ranges; within the target adjustment range, adjusting the image to be adjusted according to an image perceived-distance adjustment operation issued by the user, to obtain the adjusted target image; and determining a preset adjustment identifier corresponding to the image perceived-distance adjustment operation and storing it, the adjustment identifier indicating how the operation changed the perceived distance of the image to be adjusted. By adjusting the perceived distance of the image so that the user feels it matches the actual distance, the user experience is improved.

Description

Image adjusting method, device, equipment and medium based on virtual reality technology
Technical Field
The present application relates to virtual reality technologies, and in particular, to an image adjustment method, apparatus, device, and medium based on a virtual reality technology.
Background
In recent years, VR (Virtual Reality) technology has developed rapidly. To address the problem that the surrounding environment cannot be seen while a VR device is worn, video see-through technology has gradually come into use: two color cameras are added to the VR device, and the images they capture are transmitted to the device's screen so that the user can observe the surroundings.
However, the interpupillary distances of different users differ, while the mounting distance between the two color cameras is fixed. As a result, the perceived distance of the surroundings seen through the VR device may deviate considerably from the actual distance, and the user cannot accurately reach for and use nearby objects, degrading the experience of using the VR device.
Disclosure of Invention
The application provides an image adjustment method, apparatus, device, and medium based on virtual reality technology, which improve the flexibility of image adjustment when a user wears a VR device and improve the user's experience of viewing the surrounding environment through the VR device.
In a first aspect, the present application provides an image adjustment method based on a virtual reality technology, where the method is applied to a virtual reality device, where the virtual reality device includes a video perspective camera, and the method includes:
acquiring an image to be adjusted based on a video perspective camera on the virtual reality equipment;
determining an adjustment range corresponding to the pupil distance information of the user as a target adjustment range according to prestored pupil distance information of the user and an association relation between preset pupil distance information and the adjustment range;
based on the target adjustment range, adjusting the image to be adjusted according to an image perceived-distance adjustment operation issued by the user, and taking the adjusted image as the target image; the perceived distance represents the distance, as perceived by the user's eyes gazing at the image to be adjusted, to the objects shown in the viewed image;
determining a preset adjustment identifier corresponding to the image perception distance adjustment operation, and storing the adjustment identifier; the adjustment identifier is used for indicating the adjustment of the perception distance of the image to be adjusted in the image perception distance adjustment operation.
In a second aspect, the present application provides an image adjusting apparatus based on virtual reality technology, applied to a virtual reality device that includes a video perspective camera, the apparatus comprising:
the image acquisition module is used for acquiring an image to be adjusted based on the video perspective camera on the virtual reality equipment;
the range determining module is used for determining an adjusting range corresponding to the pupil distance information of the user according to the pre-stored pupil distance information of the user and the association relation between the preset pupil distance information and the adjusting range, and the adjusting range is a target adjusting range;
the distance adjusting module is used for adjusting, based on the target adjustment range, the image to be adjusted according to an image perceived-distance adjustment operation issued by the user, and taking the adjusted image as the target image; the perceived distance represents the distance, as perceived by the user's eyes gazing at the image to be adjusted, to the objects shown in the viewed image;
the identification storage module is used for determining a preset adjustment identification corresponding to the image perception distance adjustment operation and storing the adjustment identification; the adjustment identifier is used for indicating the adjustment of the perception distance of the image to be adjusted in the image perception distance adjustment operation.
In a third aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer execution instructions;
the processor executes the computer-executable instructions stored in the memory to implement the virtual reality technology-based image adjustment method according to the first aspect of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, having stored therein computer-executable instructions, which when executed by a processor, are used to implement the virtual reality technology-based image adjustment method according to the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements a virtual reality technology-based image adjustment method as described in the first aspect of the present application.
According to the image adjusting method, device, equipment and medium based on the virtual reality technology, an image of the scene around the user is obtained through the camera on the VR device as the image to be adjusted. The user can perform an image perceived-distance adjustment operation to adjust the perceived distance of the image within the target adjustment range, for example pushing it closer or pulling it farther. The target adjustment ranges for different users differ, which narrows each user's adjustment span and improves adjustment efficiency. When the adjustment is finished, the target image is obtained, and the perceived distance of the target image matches the actual distance from the user to the objects. Different adjustment operations correspond to different adjustment identifiers; the identifier of the image perceived-distance adjustment operation that produced the target image is determined and stored for subsequent use. This solves the prior-art problem that the image of the surrounding environment cannot be adjusted because the mounting distance between the cameras is fixed, and improves the flexibility of image adjustment. By adjusting the perceived distance, the environment the user sees through the VR device matches the environment actually seen, so the user can accurately pick up and use nearby objects while wearing the device, improving the user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of an image adjustment method based on a virtual reality technology according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a user wearing a VR device according to an embodiment of the present application;
fig. 3 is a schematic diagram of video perspective cameras on two sides of a VR device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an actual scene of an environment in which a user is located according to an embodiment of the present application;
fig. 5 is a schematic diagram of an image to be adjusted displayed by a VR device according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of an image to be adjusted with the perceived distance pushed closer according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of an image to be adjusted with the perceived distance pulled farther according to an embodiment of the present application;
fig. 8 is a schematic flowchart of an image adjustment method based on a virtual reality technology according to an embodiment of the present disclosure;
fig. 9 is a block diagram of an image adjusting apparatus based on virtual reality technology according to an embodiment of the present disclosure;
fig. 10 is a block diagram illustrating an image adjusting apparatus based on virtual reality technology according to an embodiment of the present disclosure;
fig. 11 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 12 is a block diagram of an electronic device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not necessarily used to describe a particular order or sequence, nor are they to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as the case may be. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It should be noted that, for the sake of brevity, this description does not exhaust all alternative embodiments, and it should be understood by those skilled in the art after reading this description that any combination of features may constitute an alternative embodiment as long as the features are not mutually inconsistent. The following examples are described in detail.
VR technology has been developed in recent years, wherein video see-through technology is gradually starting to be applied in order to solve the problem that the surrounding environment cannot be seen when wearing VR devices. Specifically, the video perspective technology is to add a color camera to a VR device, so that when a video perspective function is turned on, an image captured by the color camera is transmitted to a screen of the VR device, so that a user can observe the surrounding environment.
When the two color cameras are mounted, the difference between the images captured by the left and right cameras forms parallax, which lets the user judge the distance to surrounding objects. However, interpupillary distance varies from user to user while the mounting distance between the two color cameras is fixed, so different users wearing the VR device may misjudge the distance to surrounding objects considerably, which degrades the user experience.
Currently, video see-through technology is only beginning to be applied on VR products. Generally, to obtain a user experience close to reality, two video see-through color cameras are assembled on the VR product, and their mounting positions need to be directly opposite the centers of the two VR lenses. The distance between the two cameras is essentially determined by the interpupillary distance of human eyes, for example 58 mm to 72 mm. However, this constrains the camera mounting position, and because different users' pupil distances also differ, when different users use the VR product for video see-through, the distance to an observed object may still be judged with large deviation.
The application provides an image adjusting method, device, equipment and medium based on a virtual reality technology, and aims to solve the technical problems in the prior art.
The following describes the technical solutions of the present application and how to solve the above technical problems in detail with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image adjustment method based on a virtual reality technology, which is provided in an embodiment of the present application, and is applied to a virtual reality device, where the virtual reality device includes a video perspective camera, and the method can be executed by an image adjustment apparatus based on a virtual reality technology. As shown in fig. 1, the method comprises the steps of:
s101, acquiring an image to be adjusted based on a video perspective camera on the virtual reality equipment.
Illustratively, at least two video perspective cameras, which may be color cameras, can be mounted on the VR device. In the embodiment of the application, because the image of the surroundings displayed by the VR device can be adjusted, the mounting position of the video perspective cameras on the VR device is not restricted: they can be mounted directly opposite the centers of the VR lenses, or on the two sides of the VR device. Fig. 2 is a schematic diagram of a user wearing a VR device. In fig. 2, two video perspective cameras on the front of the VR device face the user's eyes; since the eyes are close to the centers of the VR lenses, the cameras are effectively opposite the lens centers. Fig. 3 is a schematic diagram of video perspective cameras on two sides of a VR device. In fig. 3, the two video perspective cameras sit one on each side of the VR lenses, not aligned with the lens centers.
When the user wears the VR device, the video perspective cameras on the device collect images of the environment around the user and transmit them to the VR device's screen. The collected image is taken as the image to be adjusted, which the user sees through the VR device.
When the two video perspective cameras on the VR device capture images, the images differ, which forms parallax. Fig. 4 is a schematic diagram of the actual scene of the environment in which the user is located: the scene contains a desk lamp, a display, and four books a, b, c, and d. Fig. 5 is a schematic diagram of the image to be adjusted displayed by the VR device. The VR device includes a left display screen and a right display screen; the image to be adjusted collected by the left video perspective camera is sent to the left display screen, and the image collected by the right camera is sent to the right display screen. Because the cameras are mounted at different positions, the camera on the left side of the VR device captures more of the scene on the left and less of the scene on the right, while the camera on the right side captures more of the scene on the right and less of the scene on the left.
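The distance judgment from the left/right image difference follows standard stereo geometry, which the patent does not spell out. A minimal sketch under that standard model (all parameter values hypothetical):

```python
def stereo_depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Pinhole stereo relation Z = f * B / d.

    A camera baseline B that differs from the viewer's interpupillary
    distance changes the disparity delivered to the eyes, which is why
    the perceived distance through the headset can deviate from the
    actual distance when the camera spacing is fixed.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px


# With a 1000 px focal length, a 64 mm baseline and 32 px of disparity
# place the object at 2 m; a narrower 58 mm baseline with the same
# disparity implies a nearer object, i.e. a different distance percept.
print(stereo_depth_mm(1000, 64, 32))
print(stereo_depth_mm(1000, 58, 32))
```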
S102, determining an adjusting range corresponding to the pupil distance information of the user according to the pre-stored pupil distance information of the user and the association relation between the preset pupil distance information and the adjusting range, wherein the adjusting range is a target adjusting range.
Illustratively, before using the VR device, the user may enter his or her own pupil distance information in advance, i.e. the distance between the centers of the pupils of the two eyes. Pupil distance values whose difference is within a preset threshold may correspond to the same adjustment range; that is, slightly different pupil distances do not produce materially different adjustment ranges, so the user need not enter an exact value. For example, associations between numerical ranges of pupil distance and adjustment ranges may be stored in advance: a user whose actual pupil distance is 61 mm may enter 60 mm, and the adjustment range for 60 mm may be the same as that for 61 mm.
The adjustment range is the range of distances (or of zoom scales) over which the user can push the image to be adjusted closer or pull it farther when adjusting the perceived distance. From the pupil distance information entered by the user, the corresponding target adjustment range is obtained. That is, when adjusting the perceived distance the user only needs to operate within this limited target range, which effectively narrows the adjustment span and improves adjustment efficiency and accuracy.
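The bucketed lookup described in S102 can be sketched as follows. The bucket boundaries and scale ranges are illustrative inventions, not values from the patent; the point is that nearby pupil distances map to one shared target adjustment range:

```python
# Hypothetical association table: (ipd_min_mm, ipd_max_mm, (min_scale, max_scale)).
IPD_RANGE_TABLE = [
    (54, 58, (0.90, 1.02)),
    (58, 62, (0.95, 1.05)),
    (62, 66, (0.98, 1.10)),
    (66, 72, (1.00, 1.15)),
]


def target_adjustment_range(ipd_mm: float) -> tuple:
    """Return the zoom-scale range associated with the stored pupil distance.

    Pupil distances falling in the same bucket map to the same range,
    so the user need not enter an exact value (e.g. 60 mm and 61 mm
    behave identically).
    """
    for lo, hi, scale_range in IPD_RANGE_TABLE:
        if lo <= ipd_mm < hi:
            return scale_range
    raise ValueError(f"pupil distance {ipd_mm} mm outside supported range")
```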
S103, based on the target adjustment range, adjusting the image to be adjusted according to an image perceived-distance adjustment operation issued by the user, and taking the adjusted image as the target image; the perceived distance represents the distance, as perceived by the user's eyes, to the objects shown in the viewed image.
Illustratively, after seeing the image to be adjusted, the user judges the distance to the objects it shows from the parallax formed by the difference between the images captured by the left and right cameras; this judged distance is the perceived distance. The user can then adjust the perceived distance of the image to be adjusted according to the actual distance to each object in the surrounding environment.
The user can take off the VR device at any time to compare the actual distance with the perceived distance. To make this comparison convenient during calibration, the VR device does not enter a sleep state when removed, but remains in distance calibration mode. Distance calibration mode is a mode for calibrating the perceived distance; the user can enable it when wearing the VR device, acquire the image to be adjusted, and adjust its perceived distance within the target adjustment range.
An operation control for adjusting the image perceived distance may be provided on the VR device, for example an operation knob: rotating the knob to the left decreases the perceived distance, so the image to be adjusted appears closer to the user, and rotating it to the right increases the perceived distance, so the image appears farther away. While performing the image perceived-distance adjustment operation, the user can remove the VR device at any time and compare the image to be adjusted with the real scene, so that the perceived distance matches the actual distance of the real scene. When the user determines that the two match, the adjustment operation can be stopped; the adjustment is then complete, and the adjusted image is taken as the target image. At this point, the distance to an object seen without wearing the VR device matches the distance to the same object seen through the device.
When the image to be adjusted is adjusted, the images on the left and right display screens are adjusted simultaneously; for example, if the user pushes the perceived distance closer, the images on both screens are pushed closer by the same amount, so that the sizes of the objects the user sees through the two screens remain identical.
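The knob-driven adjustment above can be sketched as a handler that applies one common scale to both screens and clamps it to the user's target adjustment range. The 1% step size echoes the identifier scheme in S104; all names and values are illustrative, not from the patent:

```python
STEP = 0.01  # one knob detent changes the zoom scale by 1%


def on_knob_rotate(direction: str, scale: float, scale_range: tuple) -> float:
    """Rotate left to push the image closer (zoom in, larger scale),
    right to pull it farther (zoom out, smaller scale).

    The result is clamped to the target adjustment range, and the single
    returned scale is meant to be applied to the left and right display
    screens simultaneously so object sizes stay identical in both eyes.
    """
    delta = STEP if direction == "left" else -STEP
    lo, hi = scale_range
    return round(max(lo, min(hi, scale + delta)), 4)
```

One rotation left from the neutral scale 1.0 yields 1.01; a rotation that would leave the target range is simply absorbed by the clamp.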
Fig. 6 is a schematic diagram of the image to be adjusted with the perceived distance pushed closer in the embodiment of the present application. Comparing fig. 6 with fig. 5, the image in fig. 6 is perceived as closer, and the perceived distance is pushed closer on both display screens. During calibration of the perceived distance, if the user determines that the actual distance is shorter than the displayed perceived distance, the VR device can be controlled to push the picture closer. The comparison shows the picture pushed forward: the image becomes larger and the desktop area shown on the screen becomes smaller. The user feels the surrounding scene draw nearer, realizing the distance calibration function.
Fig. 7 is a schematic diagram of the image to be adjusted with the perceived distance pulled farther in the embodiment of the present application. Comparing fig. 7 with fig. 5, the image in fig. 7 is perceived as farther away, and the perceived distance is pulled farther on both display screens. During calibration of the perceived distance, if the user determines that the actual distance is longer than the displayed perceived distance, the VR device can be controlled to pull the picture back. The comparison shows the picture pulled back: the image becomes smaller and the desktop area shown on the screen becomes larger. The user feels the surrounding scene recede, realizing the distance calibration function.
S104, determining a preset adjustment identifier corresponding to the image perception distance adjustment operation, and storing the adjustment identifier; the adjustment identifier is used for indicating the adjustment of the perception distance of the image to be adjusted in the image perception distance adjustment operation.
Illustratively, adjustment identifiers are preset for different image perceived-distance adjustment operations: for example, the identifier for pushing the perceived distance of the image to be adjusted 1% closer may be set to 01, and the identifier for pulling it 1% farther may be set to -01. The push and pull percentages refer to the scaling of the image to be adjusted: pushing closer enlarges the image, pulling farther shrinks it. For example, pushing 1% closer means enlarging the image by 1%, so that objects in it appear visually closer to the user. As another example, if the adjustment operation is rotating a knob, one detent to the left may correspond to identifier 01 and one detent to the right to -01. The image perceived-distance adjustment operation that produced the target image is determined, and the corresponding adjustment identifier is derived from it. The identifier thus indicates the adjustment amount applied to the perceived distance of the image to be adjusted, for example pushed 1% closer or pulled 1% farther.
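The 01 / -01 scheme amounts to a signed step count serialized as a zero-padded string. A minimal sketch of that encoding (the exact on-device format is not specified in the patent, so this is one plausible reading):

```python
def encode_adjustment(steps: int) -> str:
    """steps > 0: perceived distance pushed closer by that many percent;
    steps < 0: pulled farther. Mirrors the 01 / -01 examples above."""
    sign = "-" if steps < 0 else ""
    return f"{sign}{abs(steps):02d}"


def decode_adjustment(identifier: str) -> int:
    """Recover the signed step count from a stored identifier."""
    return int(identifier)
```

For example, one net detent closer encodes as "01" and one net detent farther as "-01", and decoding either string returns the signed step for later automatic replay.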
If the user had to adjust the perceived distance of the image to be adjusted, the mounting position of the video perspective cameras on the VR device does not suit the user's interpupillary distance; that is, this user needs the adjustment whenever using the VR device. After the target image is obtained, the adjustment identifier of the corresponding image perceived-distance adjustment operation is recorded and stored, so that when the user subsequently uses the VR device, the perceived distance of the captured image is adjusted automatically according to the identifier, improving the user experience.
In this embodiment, after storing the adjustment identifier, the method further includes: responding to a use instruction of the virtual reality equipment, and acquiring a stored adjustment identifier; acquiring a current image acquired by a video perspective camera on virtual reality equipment; and adjusting the perception distance of the current image according to the adjustment identifier, and displaying the adjusted image to a user.
Specifically, after the adjustment identifier corresponding to the target image is stored in the VR device, the user can use the VR device at any time. When the user uses it again, the VR device obtains the stored adjustment identifier in response to the use instruction issued by the user, for example identifier 01.
The VR device captures an image of the user's current environment in real time through the video perspective cameras as the current image, and determines from the obtained adjustment identifier what adjustment to apply to it. Each adjustment identifier corresponds to a unique adjustment operation; for example, identifier 01 indicates pushing the captured image 1% closer. The perceived distance of the current image is adjusted according to the identifier and the adjusted image is shown to the user, so that the perceived distance of the image the user sees matches the actual distance seen without wearing the VR device.
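The restore-on-use flow can be sketched as reading the stored identifier and converting it back into the zoom scale to auto-apply to every captured frame. `MockStorage` and the key name `"adjustment_id"` are illustrative stand-ins for the device's storage unit, not identifiers from the patent:

```python
class MockStorage:
    """Stand-in for the VR device's persistent storage unit."""

    def __init__(self, data):
        self._data = dict(data)

    def get(self, key, default=None):
        return self._data.get(key, default)


def startup_scale(storage) -> float:
    """On a 'use' instruction, read the stored adjustment identifier and
    convert it to the zoom scale applied automatically to each captured
    frame (identifier 01 -> 1% closer -> scale 1.01; no stored identifier
    means no adjustment, scale 1.0)."""
    identifier = storage.get("adjustment_id", "00")
    return round(1.0 + int(identifier) * 0.01, 4)
```

A device that stored "01" during calibration restarts with scale 1.01, while a freshly calibrated-to-zero device passes frames through unscaled.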
The benefit of this arrangement is that the adjustment identifier is written into the storage unit of the VR device, so that the next time the user enables the video perspective function, the identifier is retrieved and the currently captured image is adjusted automatically. The user thus obtains, in real time, images whose perceived distance is close to the actual distance, improving the user experience.
With a fixed camera mounting position, even if the cameras are installed in the area of the VR lenses, different users have different interpupillary distances and therefore perceive distance differently; without calibration, the sense of distance remains inaccurate. Moreover, to make the image captured by the color cameras as close as possible to what human eyes would observe directly, the color cameras are generally mounted in the area corresponding to the VR lenses, which constrains the industrial design of the product and increases development difficulty. In the embodiments of the present application, the mounting position of the color cameras used for video perspective is highly flexible. After calibrating with the embodiments of the present application, the user can judge more accurately the distance of external objects observed through the video perspective technique, without having to readjust every time the device is worn, reducing user operations.
According to the image adjustment method based on virtual reality technology provided by this embodiment, a scene image of the user's surroundings is captured by the camera on the VR device as the image to be adjusted. The user may perform an image perceived distance adjustment operation to adjust the perceived distance of the image within the target adjustment range, for example zooming it in (closer) or out (farther). Different users correspond to different target adjustment ranges, which narrows the range each user must search and improves adjustment efficiency. When the adjustment is completed, the target image is obtained, and the user's perceived distance to the target image is consistent with the user's actual distance to the objects. Different adjustment operations may correspond to different adjustment identifiers; the identifier of the image perceived distance adjustment operation corresponding to the target image is determined and stored for subsequent use. This solves the prior-art problem that, because the mounting spacing of the cameras is fixed, the image of the surrounding environment cannot be adjusted, and improves the flexibility of image adjustment. By adjusting the perceived distance, the distance of the scene the user sees through the VR device is made consistent with the distance actually seen, so that the user can accurately pick up and use nearby objects while wearing the VR device, improving the user experience of the VR device.
Fig. 8 is a schematic flowchart of an image adjustment method based on a virtual reality technology according to an embodiment of the present application, which is an alternative embodiment based on the foregoing embodiment.
In this embodiment, adjusting the image to be adjusted according to the image perceived distance adjustment operation issued by the user to obtain the adjusted image as the target image may be refined as: in response to a picture zoom-in instruction or a picture zoom-out instruction issued by the user through a handle of the virtual reality device, zooming the perceived distance of the image to be adjusted in or out, where the picture zoom-out instruction indicates that the perceived distance of the image to be adjusted is to be increased; and determining the zoomed-in or zoomed-out image picture as the target image.
As shown in fig. 8, the method comprises the steps of:
s801, acquiring an image to be adjusted based on a video perspective camera on the virtual reality device.
For example, this step may refer to step S101 described above, and is not described again.
S802, determining an adjusting range corresponding to the pupil distance information of the user as a target adjusting range according to the pre-stored pupil distance information of the user and the association relation between the preset pupil distance information and the adjusting range.
For example, this step may refer to step S102 described above, and is not described again.
S803, based on the target adjustment range, responding to a picture zooming-in instruction or a picture zooming-out instruction made by a user through a handle of the virtual reality device, and zooming in or zooming out the perception distance of the image to be adjusted; the image zooming-out instruction is used for indicating that the perception distance of the image to be adjusted is increased.
For example, the VR device may be equipped with a handle: the user wears the VR device on the head and holds the handle to perform the adjustment. The handle may be provided with operation keys, for instance a plus key and a minus key, where the plus key increases the perceived distance and the minus key decreases it. The adjustment step of the perceived distance per key press may be preset, for example each press of the plus key increases the perceived distance by 1%, and each press of the minus key decreases it by 1%. Alternatively, the handle may be provided with a joystick: pushing it forward, toward the eyes, decreases the perceived distance, and pulling it back, toward the body, increases it, with each push changing the perceived distance by 1% and the joystick self-resetting after every push. Alternatively, the handle may be provided with a knob, where rotating the knob counterclockwise decreases the perceived distance and rotating it clockwise increases it. The joystick may be a physical joystick on the handle, or a virtual joystick displayed by the VR device.
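As a hypothetical sketch, the key/joystick/knob behaviour described above reduces to folding handle events into a signed cumulative adjustment value (positive = zoom in / closer, negative = zoom out / farther, matching the sign convention used later for the adjustment identifiers). The event names and the 1% step are assumptions for illustration.

```python
STEP = 0.01  # assumed 1% change in perceived distance per handle event

def handle_event(event, adjustment):
    """Fold one handle event into the cumulative signed adjustment value.

    Plus key / joystick back / knob clockwise: increase perceived distance
    (picture zoom-out, image shrinks). Minus key / joystick forward / knob
    counterclockwise: decrease perceived distance (picture zoom-in, image
    enlarges).
    """
    if event in ("plus", "joystick_back", "knob_clockwise"):
        return adjustment - STEP   # farther away
    if event in ("minus", "joystick_forward", "knob_counterclockwise"):
        return adjustment + STEP   # closer
    return adjustment              # unrecognized events leave the value unchanged
```

Because every input style maps to the same signed step, the rest of the pipeline (range check, scaling, identifier lookup) is independent of which control the handle actually carries.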
The handle may be wireless or wired, or may be arranged on the body of the VR device itself.
The user issues a picture zoom-in or zoom-out instruction by performing the image perceived distance adjustment operation on the handle. For example, the user may press the plus key to issue a zoom-out instruction, or the minus key to issue a zoom-in instruction. That is, the picture zoom-in instruction indicates that the perceived distance of the image to be adjusted is to be decreased, and the picture zoom-out instruction indicates that it is to be increased. Performing the image perceived distance adjustment operation through the handle is convenient for the user and improves image adjustment efficiency.
In this embodiment, adjusting an image to be adjusted according to an image sensing distance adjustment operation sent by a user to obtain an adjusted image, which is a target image, includes: receiving a voice control instruction sent by a user to the image to be adjusted; the voice control instruction comprises a picture zooming-in instruction and a picture zooming-out instruction; and according to the voice control instruction, carrying out zooming-in or zooming-out on the perception distance of the image to be adjusted, and determining the image picture after zooming-in or zooming-out as a target image.
Specifically, the user may control the VR device by voice to adjust the perceived distance: the user issues a voice control instruction, which may be a picture zoom-in instruction or a picture zoom-out instruction. For example, if the user speaks the zoom-in instruction "zoom in 1%", the VR device, upon receiving the voice, brings the perceived distance of the image to be adjusted 1% closer.
The VR device adjusts the perceived distance of the image to be adjusted according to the user's voice control instruction. The user can view the adjusted image in real time, and once the user confirms that the adjustment is complete, the adjusted image is determined as the target image; that is, the zoomed-in or zoomed-out image picture is determined as the target image.
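A hypothetical sketch of turning the recognized voice text into a signed adjustment value: the phrase format "zoom in 1%" follows the example above, while the function name and pattern are illustrative assumptions about how a speech front-end might hand text to this step.

```python
import re

def parse_voice_instruction(text):
    """Parse phrases like 'zoom in 1%' / 'zoom out 2%' into a signed value.

    Returns the adjustment value (positive = zoom in, i.e. perceived distance
    decreases), or None if the recognized text is not a picture zoom instruction.
    """
    match = re.fullmatch(r"zoom\s+(in|out)\s+(\d+)\s*%", text.strip().lower())
    if match is None:
        return None
    sign = 1 if match.group(1) == "in" else -1
    return sign * int(match.group(2)) / 100.0
```

Unrecognized utterances return None so the device can simply ignore them instead of mis-adjusting the picture.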
The benefit of this arrangement is that the user needs no manual operation and only needs to speak, which reduces user operations, improves image adjustment efficiency, and enhances the user experience.
In this embodiment, the process of zooming in or out the perception distance of the image to be adjusted includes: determining a distance value for approximating the perception distance of the image to be adjusted, and carrying out equal-scale amplification on the image to be adjusted according to the distance value for approximating the perception distance of the image to be adjusted; or determining a distance value for zooming out the perception distance of the image to be adjusted, and carrying out equal-scale reduction on the image to be adjusted according to the distance value for zooming out the perception distance of the image to be adjusted.
Specifically, the picture zoom-in or zoom-out instruction issued by the user may contain the distance value to zoom in or out, and this value is taken as the adjustment value. For example, to zoom in by 1%, the adjustment value is 1%. Zooming in and out can be distinguished by the sign of the value: a 1% zoom-in has an adjustment value of +1%, corresponding to adjustment identifier 01, and a 1% zoom-out has an adjustment value of -1%, corresponding to adjustment identifier -01.
The image to be adjusted is then scaled as a whole according to the adjustment value of its perceived distance: the image is enlarged in equal proportion according to a distance value that brings the perceived distance closer, or reduced in equal proportion according to a distance value that pushes it farther. The distance value to be adjusted, i.e., the adjustment value, is the scaling ratio of the image to be adjusted, and the scaling ratios of the images on the left and right display screens are kept consistent.
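A minimal illustration of this step, assuming images are plain 2-D lists of pixel values: equal-proportion (here nearest-neighbour) scaling driven by the signed adjustment value, applied with an identical factor to the left and right display images. The resampling method and function names are assumptions; the patent only requires that the scaling be uniform and identical on both screens.

```python
def zoom_equal_scale(image, adjustment):
    """Scale a 2-D pixel grid by the same factor in both axes.

    adjustment > 0: enlarge (perceived distance brought closer);
    adjustment < 0: shrink  (perceived distance pushed farther).
    """
    factor = 1.0 + adjustment
    height, width = len(image), len(image[0])
    new_h = max(1, round(height * factor))
    new_w = max(1, round(width * factor))
    # Nearest-neighbour sampling keeps the whole image uniformly scaled,
    # so no region is scaled locally relative to the rest of the scene.
    return [[image[min(height - 1, int(r / factor))][min(width - 1, int(c / factor))]
             for c in range(new_w)] for r in range(new_h)]

def zoom_stereo(left, right, adjustment):
    """The left and right screens must use an identical scaling ratio."""
    return zoom_equal_scale(left, adjustment), zoom_equal_scale(right, adjustment)
```

Because both eyes receive the same factor, the stereo disparity of the scene changes together, which is what shifts the perceived distance rather than distorting the image.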
The benefit of this arrangement is that the entire image to be adjusted is enlarged or reduced by equal-proportion scaling, so that the target image remains consistent with the actual scene, local scaling within the image is avoided, and the realism of the displayed image is improved.
In this embodiment, adjusting the image to be adjusted according to the image perceived distance adjustment operation issued by the user, based on the target adjustment range, to obtain the adjusted image as the target image includes: determining an adjustment value of the perceived distance of the image to be adjusted according to the image perceived distance adjustment operation issued by the user, where the adjustment value includes a distance value for zooming the perceived distance of the image in and a distance value for zooming it out; and if the adjustment value exceeds the target adjustment range, sending prompt information to the user, the prompt information being used to remind the user that the adjustment value exceeds the target adjustment range.
Specifically, each time the user adjusts the image to be adjusted, the VR device may determine the current adjustment value from the image perceived distance adjustment operation issued by the user and judge whether it exceeds the target adjustment range. The target adjustment range may be expressed as an adjustment distance threshold, i.e., the maximum or minimum scaling ratio the user is allowed to apply, and different adjustment ranges may correspond to different thresholds. For example, with a preset threshold of 20%, the zoom-in distance value cannot exceed 20% and the zoom-out distance value cannot exceed 20%. The adjustment value of each adjustment is determined and compared with the threshold. If the adjustment value exceeds the preset threshold, the image cannot be scaled by that value, and a prompt may be sent to remind the user that the adjustment value has reached the upper limit of the threshold and must be adjusted again. The prompt may be a voice message or a vibration, so that the user can perceive that the limit has been reached.
The adjustment distance threshold may be set according to actual requirements, and the zoom-in threshold may be the same as or different from the zoom-out threshold. For example, when an adjustment is performed, it is first determined whether the operation is a zoom-in or a zoom-out, then the threshold corresponding to that operation is selected, and finally it is judged whether the adjustment value exceeds that threshold.
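The threshold check just described could be sketched as follows; the 20% limits and the prompt wording are assumptions taken from the example above, and separate zoom-in / zoom-out thresholds are supported, as the text allows.

```python
# Assumed per-direction limits on the cumulative adjustment value (20% each).
THRESHOLDS = {"zoom_in": 0.20, "zoom_out": 0.20}

def check_adjustment(adjustment):
    """Return (allowed, prompt). The prompt is empty when the value is in range."""
    direction = "zoom_in" if adjustment >= 0 else "zoom_out"
    limit = THRESHOLDS[direction]
    if abs(adjustment) > limit:
        # On a real device this prompt would be spoken or signalled by vibration.
        return False, "adjustment value has reached the upper limit; please readjust"
    return True, ""
```

Keeping the two limits in a table makes it trivial to give zoom-in and zoom-out different ceilings if the hardware warrants it.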
The benefit of this arrangement is that, by setting the adjustment distance threshold, the user is prevented from adjusting the perceived distance without limit and is guided to reach the target image as quickly as possible, improving the precision and efficiency of image adjustment.
And S804, determining the image picture after zooming in or out as a target image.
Illustratively, after the user is determined to finish the adjustment, an image picture after zooming in or zooming out is obtained, and an image of the picture is determined as a target image.
In this embodiment, obtaining the adjusted image as the target image includes: and if the adjustment value does not exceed the target adjustment range, responding to an adjustment completion instruction sent by a user, and determining the image to be adjusted corresponding to the adjustment value as a target image.
Specifically, each time the user performs an adjustment, the VR device judges whether the current adjustment value exceeds the target adjustment range, i.e., the adjustment distance threshold. If not, the user can see the adjusted image and determine whether its perceived distance is consistent with the actual distance; if so, the user issues an adjustment completion instruction, for example through the handle or by voice. In response to the adjustment completion instruction, the VR device determines the currently displayed image, i.e., the image corresponding to the last adjustment value, as the target image. The adjustment value corresponding to the target image may combine the multiple adjustments made before completion; for example, if the user first zooms in by 2% and then zooms out by 1% to obtain the target image, the adjustment value corresponding to the target image is a 1% zoom-in.
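Combining several adjustments into the single value stored with the target image is simple signed accumulation; a minimal sketch (the rounding guards against floating-point residue and is an implementation choice, not part of the patent):

```python
def net_adjustment(operations):
    """Net signed adjustment for a completed session.

    Example from the text: a 2% zoom-in followed by a 1% zoom-out
    yields a net 1% zoom-in.
    """
    return round(sum(operations), 6)
```

Only this net value, not the individual steps, needs to be converted to an adjustment identifier and stored.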
The benefit of this arrangement is that, as long as the adjustment value has not reached the preset upper limit, the user can continue adjusting while seeing the adjusted picture, so that the target image is determined, improving the efficiency and accuracy of determining the target image.
S805, determining a preset adjustment identifier corresponding to the image sensing distance adjustment operation, and storing the adjustment identifier; the adjustment identifier is used for indicating the adjustment of the perception distance of the image to be adjusted in the image perception distance adjustment operation.
For example, the adjustment identifier may be used to indicate the adjustment value of the perceived distance of the image to be adjusted in the image perceived distance adjustment operation. For each adjustment, the identifier corresponding to that adjustment value may be determined and displayed in real time on the screen of the VR device, for example above the left and right display screens respectively.
In this embodiment, determining a preset adjustment identifier corresponding to the image sensing distance adjustment operation includes: determining an adjustment value for adjusting the perception distance of an image to be adjusted in the image perception distance adjustment operation; and determining an adjustment identifier associated with the adjustment value for adjusting the perception distance of the image to be adjusted according to the association relationship between the preset adjustment value and the adjustment identifier, wherein the adjustment identifier is an adjustment identifier corresponding to the image perception distance adjustment operation.
Specifically, each time the user performs an image perceived distance adjustment operation, the VR device may determine the adjustment value applied to the perceived distance of the image this time. An association between each adjustment value and an adjustment identifier is preset; for example, a 1% zoom-in is associated with identifier 01, and a 1% zoom-out with identifier -01. The adjustment value of each operation is determined, and the corresponding identifier is looked up from the preset association, thereby obtaining the adjustment identifier corresponding to each image perceived distance adjustment operation.
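The value-to-identifier association in the example (1% zoom-in ↔ 01, 1% zoom-out ↔ -01) suggests encoding the signed percentage directly; the zero-padded string format below is an assumption consistent with those two examples, not a format fixed by the patent.

```python
def identifier_for(adjustment):
    """Encode a signed adjustment value as an identifier string: +1% -> '01'."""
    percent = round(adjustment * 100)
    return f"-{abs(percent):02d}" if percent < 0 else f"{percent:02d}"

def adjustment_for(identifier):
    """Decode an identifier back to the signed adjustment value: '-01' -> -0.01."""
    return int(identifier) / 100.0
```

Because encode and decode are inverses, the identifier stored after calibration fully determines the automatic adjustment replayed on later sessions.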
The benefit of this arrangement is that the VR device determines the adjustment identifier according to the user's instructions, i.e., an identifier matched to the user's interpupillary distance, so that subsequent image adjustment can be performed directly according to the identifier, improving the user experience.
According to the image adjustment method based on virtual reality technology provided by this embodiment, a scene image of the user's surroundings is captured by the camera on the VR device as the image to be adjusted. The user may perform an image perceived distance adjustment operation to adjust the perceived distance of the image within the target adjustment range, for example zooming it in (closer) or out (farther). Different users correspond to different target adjustment ranges, which narrows the range each user must search and improves adjustment efficiency. When the adjustment is completed, the target image is obtained, and the user's perceived distance to the target image is consistent with the user's actual distance to the objects. Different adjustment operations may correspond to different adjustment identifiers; the identifier of the image perceived distance adjustment operation corresponding to the target image is determined and stored for subsequent use. This solves the prior-art problem that, because the mounting spacing of the cameras is fixed, the image of the surrounding environment cannot be adjusted, and improves the flexibility of image adjustment. By adjusting the perceived distance, the image distance the user sees through the VR device is made consistent with the scene distance actually seen, so that the user can accurately pick up and use nearby objects while wearing the VR device, improving the user experience of the VR device.
Fig. 9 is a block diagram of a structure of an image adjusting apparatus based on a virtual reality technology according to an embodiment of the present disclosure, where the apparatus is applied to a virtual reality device, and the virtual reality device includes a video perspective camera. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. Referring to fig. 9, the apparatus includes: an image acquisition module 901, a range determination module 902, a distance adjustment module 903, and an identification storage module 904.
An image obtaining module 901, configured to obtain an image to be adjusted based on a video perspective camera on the virtual reality device;
a range determining module 902, configured to determine, according to pre-stored pupil distance information of a user and an association relationship between preset pupil distance information and an adjustment range, an adjustment range corresponding to the pupil distance information of the user, where the adjustment range is a target adjustment range;
a distance adjusting module 903, configured to adjust the image to be adjusted according to an image sensing distance adjusting operation sent by a user based on the target adjustment range, so as to obtain an adjusted image, which is a target image; the perception distance is used for representing the distance between objects corresponding to the viewed images, which are perceived by a user through the fixation of human eyes on the images to be adjusted;
an identifier storage module 904, configured to determine a preset adjustment identifier corresponding to the image sensing distance adjustment operation, and store the adjustment identifier; the adjustment identifier is used for indicating the adjustment of the perception distance of the image to be adjusted in the image perception distance adjustment operation.
Fig. 10 is a block diagram of an image adjusting apparatus based on a virtual reality technology according to an embodiment of the present application, and based on the embodiment shown in fig. 9, as shown in fig. 10, the distance adjusting module 903 includes a sensing distance adjusting unit 9031 and a target image determining unit 9032.
The perception distance adjusting unit 9031 is configured to respond to a picture zooming-in instruction or a picture zooming-out instruction made by a user through a handle of the virtual reality device, and zoom in or zoom out a perception distance of the image to be adjusted; the picture zooming-out instruction is used for increasing the perception distance of the image to be adjusted;
and a target image determination unit 9032, configured to determine the image screen after zooming in or zooming out as a target image.
In an example, the distance adjusting module 903 further includes:
the voice control unit receives a voice control instruction sent by a user to the image to be adjusted; the voice control instruction comprises a picture zooming-in instruction and a picture zooming-out instruction;
and the image adjusting unit is used for zooming in or zooming out the perception distance of the image to be adjusted according to the voice control instruction, and determining the image picture after zooming in or zooming out as a target image.
In an example, the distance adjusting module 903 further includes:
the equal-scale scaling unit is used for determining a distance value for approximating the perception distance of the image to be adjusted and carrying out equal-scale amplification on the image to be adjusted according to the distance value for approximating the perception distance of the image to be adjusted; or,
determining a distance value for zooming out the perception distance of the image to be adjusted, and carrying out equal-scale reduction on the image to be adjusted according to the distance value for zooming out the perception distance of the image to be adjusted.
In one example, the distance adjustment module 903 includes:
the adjusting value determining module is used for determining the adjusting value of the perception distance of the image to be adjusted according to the image perception distance adjusting operation sent by a user; the adjusting value comprises a distance value for pushing the perception distance of the image to be adjusted and a distance value for pulling the perception distance of the image to be adjusted;
the prompting module is used for sending prompting information to a user if the adjusting value exceeds the target adjusting range; and the prompt message is used for reminding a user that the adjustment value exceeds the target adjustment range.
In an example, the distance adjusting module 903 further includes:
and the adjustment finishing unit is used for responding to an adjustment finishing instruction sent by a user and determining the image to be adjusted corresponding to the adjustment value as the target image if the adjustment value does not exceed a preset adjustment distance threshold.
In one example, the identifier storage module 904 is specifically configured to:
determining an adjustment value for adjusting the perception distance of the image to be adjusted in the image perception distance adjustment operation;
and determining an adjustment identifier associated with an adjustment value for adjusting the perception distance of the image to be adjusted according to the association relationship between a preset adjustment value and the adjustment identifier, wherein the adjustment identifier is an adjustment identifier corresponding to the image perception distance adjustment operation.
In one example, the apparatus further comprises:
the identification acquisition module is used for responding to a use instruction of the virtual reality equipment after the adjustment identification is stored, and acquiring the stored adjustment identification;
the current image acquisition module is used for acquiring a current image acquired by a video perspective camera on the virtual reality equipment;
and the current image adjusting module is used for adjusting the perception distance of the current image according to the adjusting identification and displaying the adjusted image to a user.
Fig. 11 is a block diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 11, the electronic device includes: a memory 111, a processor 112; a memory 111 for storing executable instructions for the processor 112.
Wherein the processor 112 is configured to perform the methods provided in the above embodiments.
The electronic device further comprises a receiver 113 and a transmitter 114. The receiver 113 is used for receiving instructions and data transmitted by other devices, and the transmitter 114 is used for transmitting instructions and data to external devices.
FIG. 12 is a block diagram illustrating an electronic device, which may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like, in accordance with an exemplary embodiment.
Device 1200 may include one or more of the following components: processing component 1202, memory 1204, power component 1206, multimedia component 1208, audio component 1210, input/output (I/O) interface 1212, sensor component 1214, and communications component 1216.
The processing component 1202 generally controls overall operation of the device 1200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1202 may include one or more processors 1220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1202 can include one or more modules that facilitate interaction between the processing component 1202 and other components. For example, the processing component 1202 can include a multimedia module to facilitate interaction between the multimedia component 1208 and the processing component 1202.
The memory 1204 is configured to store various types of data to support operation at the device 1200. Examples of such data include instructions for any application or method operating on device 1200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1204 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A power supply component 1206 provides power to the various components of the device 1200. Power components 1206 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 1200.
The multimedia component 1208 comprises a screen providing an output interface between the device 1200 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1208 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 1200 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 1210 is configured to output and/or input audio signals. For example, audio component 1210 includes a Microphone (MIC) configured to receive external audio signals when device 1200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1204 or transmitted via the communication component 1216. In some embodiments, audio assembly 1210 further includes a speaker for outputting audio signals.
The input/output interface 1212 provides an interface between the processing component 1202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1214 includes one or more sensors for providing various aspects of state assessment for the device 1200. For example, the sensor assembly 1214 may detect an open/closed state of the device 1200, the relative positioning of the components, such as a display and keypad of the device 1200, the sensor assembly 1214 may also detect a change in the position of the device 1200 or a component of the device 1200, the presence or absence of user contact with the device 1200, orientation or acceleration/deceleration of the device 1200, and a change in the temperature of the device 1200. The sensor assembly 1214 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 1214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1216 is configured to facilitate wired or wireless communication between the device 1200 and other devices. The device 1200 may access a wireless network based on a communication standard, such as WiFi, 2G/3G/4G/5G, or a combination thereof. In an exemplary embodiment, the communication component 1216 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1216 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1200 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1204 including instructions executable by the processor 1220 of the device 1200 to perform the above-described methods. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
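As a non-authoritative illustration of the kind of instructions such a medium might store, the flow described in the claims (look up a target adjustment range from the user's pupil distance, validate the requested perceived-distance adjustment against it, and store an adjustment identifier for reuse) can be sketched as follows. All names, the range table values, and the identifier format are hypothetical assumptions for illustration only, not taken from the patent.

```python
# Hypothetical sketch of the claimed adjustment flow; the range table,
# function names, and identifier scheme are illustrative assumptions.

# Preset association between pupil distance (mm) and the allowed
# perceived-distance adjustment range (metres).
RANGE_TABLE = {
    (54, 60): (-0.5, 0.5),
    (60, 66): (-0.8, 0.8),
    (66, 72): (-1.0, 1.0),
}

def target_range(pupil_distance_mm):
    """Return the adjustment range associated with the user's pupil distance."""
    for (lo, hi), rng in RANGE_TABLE.items():
        if lo <= pupil_distance_mm < hi:
            return rng
    raise ValueError("pupil distance outside supported ranges")

def apply_adjustment(pupil_distance_mm, requested_delta_m):
    """Validate a requested perceived-distance change against the target range;
    return an adjustment identifier to persist, or a prompt message if the
    value exceeds the target adjustment range."""
    lo, hi = target_range(pupil_distance_mm)
    if not (lo <= requested_delta_m <= hi):
        return None, "adjustment value exceeds the target adjustment range"
    # The identifier records the applied adjustment for reuse at next start-up.
    adjustment_id = f"adj:{requested_delta_m:+.2f}m"
    return adjustment_id, None

ident, prompt = apply_adjustment(63, 0.4)
print(ident)    # adj:+0.40m
ident2, prompt2 = apply_adjustment(63, 2.0)
print(prompt2)  # adjustment value exceeds the target adjustment range
```

On a later start-up, the stored identifier would be parsed back into a distance delta and applied to the current camera image without asking the user to adjust again.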
A non-transitory computer-readable storage medium is also provided; when instructions in the storage medium are executed by a processor of a terminal device, the terminal device is enabled to perform the virtual reality technology-based image adjustment method described above.
The application also discloses a computer program product comprising a computer program which, when executed by a processor, implements the method as described in the embodiments.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or electronic device.
In the context of this application, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and the server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system that remedies the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain. It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; the present application is not limited herein.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. An image adjustment method based on virtual reality technology, applied to a virtual reality device, wherein the virtual reality device comprises a video perspective camera, the method comprising the following steps:
acquiring an image to be adjusted based on a video perspective camera on the virtual reality equipment;
determining, according to pre-stored pupil distance information of a user and a preset association relationship between pupil distance information and adjustment ranges, the adjustment range corresponding to the user's pupil distance information as a target adjustment range;
based on the target adjustment range, adjusting the image to be adjusted according to an image perceived distance adjustment operation issued by the user to obtain an adjusted image as a target image; wherein the perceived distance represents the distance, as perceived by the user when gazing at the image to be adjusted, to the objects corresponding to the viewed image;
determining a preset adjustment identifier corresponding to the image perception distance adjustment operation, and storing the adjustment identifier; the adjustment identifier is used for indicating the adjustment of the perception distance of the image to be adjusted in the image perception distance adjustment operation.
2. The method according to claim 1, wherein adjusting the image to be adjusted according to an image-perceived distance adjustment operation issued by a user to obtain an adjusted image, which is a target image, comprises:
zooming in or zooming out the perceived distance of the image to be adjusted in response to a picture zooming-in instruction or a picture zooming-out instruction issued by the user through a handle of the virtual reality device; wherein the picture zooming-out instruction is used for increasing the perceived distance of the image to be adjusted;
and determining the image picture after zooming in or zooming out as a target image.
3. The method according to claim 1, wherein adjusting the image to be adjusted according to an image perceived distance adjustment operation issued by a user to obtain an adjusted image, which is a target image, comprises:
receiving a voice control instruction sent by a user to the image to be adjusted; the voice control instruction comprises a picture zooming-in instruction and a picture zooming-out instruction;
and according to the voice control instruction, carrying out zooming-in or zooming-out on the perception distance of the image to be adjusted, and determining the image picture after zooming-in or zooming-out as a target image.
4. The method according to claim 2 or 3, wherein the step of zooming in or zooming out the perceived distance of the image to be adjusted comprises:
determining a distance value by which the perceived distance of the image to be adjusted is to be zoomed in, and enlarging the image to be adjusted in equal proportion according to that distance value; or,
determining a distance value by which the perceived distance of the image to be adjusted is to be zoomed out, and reducing the image to be adjusted in equal proportion according to that distance value.
5. The method according to claim 1, wherein based on the target adjustment range, adjusting the image to be adjusted according to an image perceived distance adjustment operation issued by a user to obtain an adjusted image, which is a target image, includes:
determining an adjustment value of the perceived distance of the image to be adjusted according to the image perceived distance adjustment operation issued by the user; wherein the adjustment value comprises a distance value for zooming in the perceived distance of the image to be adjusted and a distance value for zooming out the perceived distance of the image to be adjusted;
if the adjustment value exceeds the target adjustment range, sending prompt information to a user; and the prompt message is used for reminding a user that the adjustment value exceeds the target adjustment range.
6. The method of claim 5, wherein obtaining the adjusted image as the target image comprises:
and if the adjustment value does not exceed the target adjustment range, responding to an adjustment completion instruction sent by a user, and determining the image to be adjusted corresponding to the adjustment value as the target image.
7. The method according to claim 5, wherein determining a preset adjustment flag corresponding to the image-perceived distance adjustment operation comprises:
determining an adjustment value for adjusting the perception distance of the image to be adjusted in the image perception distance adjustment operation;
and determining an adjustment identifier associated with an adjustment value for adjusting the perception distance of the image to be adjusted according to the association relationship between a preset adjustment value and the adjustment identifier, wherein the adjustment identifier is an adjustment identifier corresponding to the image perception distance adjustment operation.
8. The method of claim 1, after storing the adjustment identifier, further comprising:
responding to the use instruction of the virtual reality equipment, and acquiring the stored adjustment identification;
acquiring a current image acquired by a video perspective camera on the virtual reality equipment;
and adjusting the perception distance of the current image according to the adjustment identifier, and displaying the adjusted image to a user.
9. An image adjustment apparatus based on virtual reality technology, wherein the apparatus is applied to a virtual reality device, the virtual reality device comprises a video perspective camera, and the apparatus comprises:
the image acquisition module is used for acquiring an image to be adjusted based on the video perspective camera on the virtual reality equipment;
the range determining module is configured to determine, according to pre-stored pupil distance information of a user and a preset association relationship between pupil distance information and adjustment ranges, the adjustment range corresponding to the user's pupil distance information as a target adjustment range;
the distance adjusting module is configured to adjust, based on the target adjustment range, the image to be adjusted according to an image perceived distance adjustment operation issued by the user to obtain an adjusted image as a target image; wherein the perceived distance represents the distance, as perceived by the user when gazing at the image to be adjusted, to the objects corresponding to the viewed image;
the identification storage module is used for determining a preset adjustment identification corresponding to the image perception distance adjustment operation and storing the adjustment identification; the adjustment identifier is used for indicating the adjustment of the perception distance of the image to be adjusted in the image perception distance adjustment operation.
10. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored by the memory to implement the virtual reality technology-based image adjustment method of any one of claims 1-8.
11. A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and when executed by a processor, the computer-executable instructions are used for implementing the virtual reality technology-based image adjustment method according to any one of claims 1 to 8.
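The equal-proportion enlargement or reduction of claim 4, driven by a distance value applied to the perceived distance, could be sketched as follows under a simple pinhole-camera assumption (apparent size inversely proportional to distance). The scaling law, function name, and example dimensions are illustrative assumptions, not the patent's actual implementation.

```python
# Illustrative only: under a pinhole-camera assumption, apparent picture
# size scales inversely with perceived distance, so moving the picture
# from d_old to d_new rescales it by d_old / d_new in equal proportion.

def rescale_for_distance(width, height, d_old, d_new):
    """Return the new (width, height) after changing the perceived
    distance from d_old to d_new metres."""
    if d_old <= 0 or d_new <= 0:
        raise ValueError("distances must be positive")
    s = d_old / d_new  # > 1 zooms in (closer), < 1 zooms out (farther)
    return round(width * s), round(height * s)

# Bringing a picture from 2.0 m to 1.0 m doubles both dimensions.
print(rescale_for_distance(1920, 1080, 2.0, 1.0))  # (3840, 2160)
# Pushing it from 2.0 m to 4.0 m halves them.
print(rescale_for_distance(1920, 1080, 2.0, 4.0))  # (960, 540)
```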
CN202211420240.8A 2022-11-15 2022-11-15 Image adjusting method, device, equipment and medium based on virtual reality technology Active CN115665398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211420240.8A CN115665398B (en) 2022-11-15 2022-11-15 Image adjusting method, device, equipment and medium based on virtual reality technology

Publications (2)

Publication Number Publication Date
CN115665398A true CN115665398A (en) 2023-01-31
CN115665398B CN115665398B (en) 2023-03-21

Family

ID=85020931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211420240.8A Active CN115665398B (en) 2022-11-15 2022-11-15 Image adjusting method, device, equipment and medium based on virtual reality technology

Country Status (1)

Country Link
CN (1) CN115665398B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1742224A (en) * 2003-01-28 2006-03-01 卡尔·蔡司视觉有限公司 Device and method for adjusting a position of an eyeglass lens relative to the position of a pupil
CN102289841A (en) * 2011-08-11 2011-12-21 四川虹微技术有限公司 Method for regulating audience perception depth of three-dimensional image
CN105575362A (en) * 2015-07-31 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Terminal brightness adjusting method and terminal
CN105954875A (en) * 2016-05-19 2016-09-21 华为技术有限公司 VR (Virtual Reality) glasses and adjustment method thereof
CN106019590A (en) * 2016-07-05 2016-10-12 上海乐相科技有限公司 Virtual reality device spacing adjustment method
US20170054962A1 (en) * 2015-08-18 2017-02-23 Yanhui Zhou Three-dimensional depth perception method and apparatus with an adjustable working range
CN107682690A (en) * 2017-10-19 2018-02-09 京东方科技集团股份有限公司 Self-adapting parallax adjusting method and Virtual Reality display system
CN108259883A (en) * 2018-04-04 2018-07-06 联想(北京)有限公司 Image processing method, head-mounted display and readable storage medium storing program for executing
CN109613706A (en) * 2019-01-28 2019-04-12 京东方科技集团股份有限公司 For the method for adjustment, equipment and storage medium of intelligent helmet
US10373391B1 (en) * 2018-04-23 2019-08-06 AbdurRahman Bin Shahzad Bhatti Augmented reality system for fitness
US20210016704A1 (en) * 2019-07-18 2021-01-21 GM Global Technology Operations LLC Steerable scanning and perception system with active illumination

Also Published As

Publication number Publication date
CN115665398B (en) 2023-03-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant