CN112819970B - Control method and device and electronic equipment

Control method and device and electronic equipment

Info

Publication number
CN112819970B
Authority
CN
China
Prior art keywords
mode
target
feature points
state
image
Prior art date
Legal status
Active
Application number
CN202110188853.2A
Other languages
Chinese (zh)
Other versions
CN112819970A
Inventor
王晓陆 (Wang Xiaolu)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202110188853.2A priority Critical patent/CN112819970B/en
Publication of CN112819970A publication Critical patent/CN112819970A/en
Application granted granted Critical
Publication of CN112819970B publication Critical patent/CN112819970B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Image registration using feature-based methods
    • G06T7/337 - Image registration using feature-based methods involving reference images or patches
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/19 - Sensors therefor

Abstract

The application discloses a control method, a control device, and electronic equipment. In response to a target device displaying a virtual image in a first mode, target feature points are obtained, where the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames. A first number of target feature points is determined in a first state, and a second number of target feature points is determined in a second state, where the image acquisition environments of the target device differ between the first state and the second state. Based on the ratio of the first number to the second number, it is determined whether to switch the first mode to a second mode, the first mode and the second mode using different pose data when generating the virtual image. Display modes are thus switched based on the number of target feature points under different image acquisition environments, so that the display mode of the target device matches the environment and meets practical application requirements.

Description

Control method and device and electronic equipment
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a control method, an apparatus, and an electronic device.
Background
Augmented Reality (AR) is a technology that fuses virtual information into a real environment: based on computing and related technologies, it simulates physical information that would otherwise be difficult to experience within the space of the real world and superimposes that virtual content onto the real world, where it can be applied effectively.
In applications of an AR device, the device needs to calculate the coordinates at which the virtual image is displayed in space by recognizing the environment. However, not all environments are favorable for the AR device to recognize and compute, so the actual requirements of the AR device cannot always be met, which degrades the user's experience.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
a control method, comprising:
in response to a target device displaying a virtual image in a first mode, acquiring target feature points, wherein the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames;
determining a first number of the target feature points in a first state;
determining a second number of the target feature points in a second state, wherein the image acquisition environments of the target device in the first state and the second state are different;
and determining, based on a ratio of the first number to the second number, whether to switch the first mode to a second mode, the first mode and the second mode differing in the pose data used when generating the virtual image.
Optionally, the acquiring the target feature point includes:
in response to the target device acquiring a left-eye image and a right-eye image, determining feature points whose display coordinate positions are consistent between the left-eye image and the right-eye image as target feature points;
or,
in response to the target device acquiring a left-eye two-dimensional image and a right-eye two-dimensional image, spatially projecting the left-eye two-dimensional image to obtain first spatial points;
spatially projecting the right-eye two-dimensional image to obtain second spatial points;
and determining spatial points whose spatial coordinates are consistent between the first spatial points and the second spatial points as target feature points.
Optionally, the acquiring the target feature point includes:
acquiring feature points at a first time and feature points at a second time, wherein the first time and the second time are temporally related;
and determining feature points whose display positions are consistent between the feature points at the first time and the feature points at the second time as target feature points.
Optionally, the image acquisition environment includes one of an image acquisition time environment, an image acquisition space environment, and an image acquisition scene environment.
Optionally, the determining whether to switch the first mode to the second mode based on a ratio of the first number to the second number includes:
and if the ratio of the first number to the second number is smaller than a target threshold, switching the first mode to the second mode so that the target device displays the virtual image in the second mode, wherein the amount of pose data obtained for generating the virtual image differs between the first mode and the second mode.
Optionally, the method further comprises:
determining a third number of the target feature points in a third state;
and if the ratio of the third number to the second number is not smaller than the target threshold, switching the second mode back to the first mode.
Optionally, the method further comprises:
if the target device displays the virtual image in the second mode, obtaining first pose data, based on the current target feature points, using the calculation method corresponding to the second mode;
obtaining second pose data, based on the current target feature points, using the calculation method corresponding to the first mode;
and if the difference between the first pose data and the second pose data is larger than a target difference, switching the second mode to a third mode, wherein the pose data corresponding to the third mode differ from the first pose data corresponding to the second mode.
A control apparatus comprising:
an acquisition unit, configured to acquire target feature points in response to a target device displaying a virtual image in a first mode, wherein the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames;
a first determining unit configured to determine a first number of the target feature points in a first state;
a second determining unit, configured to determine a second number of the target feature points in a second state, where the image acquisition environments of the target device in the first state and the second state are different;
and a third determining unit configured to determine whether to switch the first mode to a second mode based on a ratio of the first number to the second number, the first mode and the second mode being different in pose data used when generating the virtual image.
A storage medium storing computer-executable instructions which, when executed by a processor, perform the control method described in any of the above.
An electronic device, comprising:
a memory for storing an application program and data generated by the operation of the application program;
a processor for executing the application program to realize:
in response to a target device displaying a virtual image in a first mode, acquiring target feature points, wherein the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames;
determining a first number of the target feature points in a first state;
determining a second number of the target feature points in a second state, wherein the image acquisition environments of the target device in the first state and the second state are different;
and determining, based on a ratio of the first number to the second number, whether to switch the first mode to a second mode, the first mode and the second mode differing in the pose data used when generating the virtual image.
According to the above technical solutions, the control method, control device, and electronic equipment disclosed by the application acquire target feature points in response to a target device displaying a virtual image in a first mode, where the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames; determine a first number of target feature points in a first state; determine a second number of target feature points in a second state, where the image acquisition environments of the target device differ between the first state and the second state; and determine, based on the ratio of the first number to the second number, whether to switch the first mode to the second mode, the two modes using different pose data when generating the virtual image. Display modes are thus switched based on the number of target feature points under different image acquisition environments, so that the display mode of the target device matches the environment and meets practical application requirements.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a control method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a control device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The embodiment of the application provides a control method applied to AR (Augmented Reality) devices, so that an AR device can switch to the appropriate display mode as conditions change, ensuring the accuracy of the display and anchoring of virtual images in space and improving the user's experience.
AR (Augmented Reality) technology fuses virtual information with the real world: through an AR device the user views a virtual effect superimposed on a real scene, that is, the user sees a virtual image. For example, the user may see a virtual article superimposed on a real indoor environment and thereby experience that virtual article.
Referring to fig. 1, a flow chart of a control method provided in an embodiment of the present application is shown, where the method may include the following steps:
s101, responding to the target equipment to display the virtual image in the first mode, and acquiring the target feature points.
In this embodiment of the present application, the target device is an AR device, for example AR smart glasses. The virtual image displayed by the target device may be the final scene image presented to the user, that is, the virtual effect image obtained by superimposing virtual information on a real scene, or it may be the image corresponding to the virtual information alone; for example, the virtual effect image obtained by superimposing a virtual cup on a real indoor scene, or the virtual image corresponding to the virtual cup itself.
The first mode is one of the target device's display modes, where a display mode is a way of displaying and anchoring the virtual image, that is, which pose data are used to generate the virtual image. The target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames. Here the feature points are points in the point cloud data that the target device (the AR device) acquires of the space through a depth camera; a set of such feature points forms a feature point cloud, or point cloud for short. Feature points are visually distinctive points in the captured camera images, such as points with large differences in brightness, color, or gray level. Multiple rounds of repeated screening based on these feature points yield a certain number of target feature points, which can be used to calculate pose data, such as the camera pose data of the AR device. In some application scenarios the target feature points can be regarded as stable feature points, that is, feature points whose display coordinate positions are consistent across two consecutively acquired frames.
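Purely as an illustration and not as part of the claimed method, the screening described above could look like the following Python sketch; the use of OpenCV ORB features, the pixel tolerance, and the helper names are assumptions rather than details disclosed by the patent:

```python
import cv2
import numpy as np

def stable_feature_points(prev_frame, curr_frame, tol_px=2.0):
    """Keep feature points whose display coordinates stay consistent
    between two consecutively acquired frames."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    stable = []
    for m in matcher.match(des1, des2):
        p1 = np.array(kp1[m.queryIdx].pt)
        p2 = np.array(kp2[m.trainIdx].pt)
        if np.linalg.norm(p1 - p2) <= tol_px:  # consistent display position
            stable.append(tuple(p2))
    return stable
```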
S102, determining the first number of the target feature points in the first state.
S103, determining a second number of target feature points in a second state.
S104, determining whether to switch the first mode to the second mode based on the ratio of the first number to the second number.
It should be noted that the image acquisition environments of the target device in the first state and the second state are different, where the difference may be one of a different image acquisition time environment, a different image acquisition space environment, or a different image acquisition scene environment. The number of target feature points acquired by the target device therefore also differs between states. Since the target device is worn by a user, the state the device is in relates to that user, for example the environment in which the device is worn, a shift of the user's gaze point, or a movement of the user. The target device presents the virtual image in real time, so the target feature points are also acquired in real time, and to make the display mode in which the device displays the virtual image conform to the characteristics of the environment, the target feature points must be determined in different states. For example, the first state may be the state of the target device while the user gazes at a first area, and the second state the state of the target device while the user gazes at a second area. The target feature points may be stable feature points; that is, during the continuous screening of feature points, the display mode is decided from the latest number of stable feature points and the number of stable feature points determined in the initial screening, where the latest number corresponds to the first number and the initially screened number corresponds to the second number.
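As a minimal bookkeeping sketch of the first number and second number described above (the class and attribute names are assumptions, not terminology from the patent):

```python
class StablePointTracker:
    """Track the initially screened stable feature point count
    (the second number, N) and the latest count (the first number, M)."""

    def __init__(self):
        self.initial_count = None  # N: fixed after the first screening pass
        self.latest_count = 0      # M: refreshed on every screening pass

    def update(self, stable_points):
        self.latest_count = len(stable_points)
        if self.initial_count is None:
            self.initial_count = self.latest_count

    def ratio(self):
        """Ratio of the first number to the second number (M / N)."""
        if not self.initial_count:
            return 0.0
        return self.latest_count / self.initial_count
```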
Whether the display mode of the target device needs to be switched, that is, whether the first mode is switched to the second mode, is then determined based on the ratio of the first number to the second number. The first mode and the second mode differ in the pose data used when generating the virtual image. A pose is the position and orientation of an object: the position data may be latitude, longitude, and altitude, while the orientation consists of a heading angle, a pitch angle, and a roll angle. The position of an object can be expressed as (x, y, z) and the orientation as (α, β, γ), the angles of rotation about the three coordinate axes. Specifically, the pose data, namely the pose data of the camera of the target device, can be obtained by determining the target feature points on the displayed virtual image and then computing from the coordinate parameters of those target feature points together with the motion data acquired by the sensors of the target device.
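For illustration only, such pose data could be represented as follows; the field names are assumptions, not terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Position along the three coordinate axes.
    x: float
    y: float
    z: float
    # Orientation: rotation angles about the three coordinate axes.
    alpha: float  # heading
    beta: float   # pitch
    gamma: float  # roll
```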
For example, the first mode may be a 6DoF (six degrees of freedom) mode and the second mode a 3DoF mode. 3DoF refers to the 3 rotational degrees of freedom, for example the head of the user wearing the target device rotating in different directions, but it cannot detect forward-backward or side-to-side spatial displacement of the head. 6DoF adds, on top of 3DoF, the up-down, forward-backward, and left-right movements caused by the body motion of the user wearing the target device, so the user can be tracked and positioned more accurately.
The embodiment of the application thus provides a control method that acquires target feature points in response to a target device displaying a virtual image in a first mode, where the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames; determines a first number of target feature points in a first state; determines a second number of target feature points in a second state, where the image acquisition environments of the target device differ between the two states; and determines, based on the ratio of the first number to the second number, whether to switch the first mode to the second mode, the two modes using different pose data when generating the virtual image. Display modes are thus switched based on the number of target feature points under different image acquisition environments, so that the display mode of the target device matches the environment and meets practical application requirements.
In the embodiment of the present application, the state of the point cloud data can be detected in real time by means of SLAM (Simultaneous Localization and Mapping), and the target feature points are then determined from the state of the feature points in the point cloud data. Specifically, in the embodiment of the present application, the target feature points may be obtained in the following ways:
in one possible implementation, in response to the target device acquiring the left-eye image and the right-eye image, feature points in which display coordinate positions in the left-eye image and the right-eye image are consistent are determined as target feature points.
When the target device is an AR device, it generally includes a left-eye display apparatus and a right-eye display apparatus. When these display the left-eye image and the right-eye image corresponding to a virtual object, feature points whose display coordinate positions in the two images are consistent, or differ only by a small error, can serve as target feature points, that is, stable feature points. Feature points here are visually distinctive points in the captured camera images, such as points with large differences in brightness, color, or gray level, in particular edge points of the virtual object.
In another possible implementation, in response to the target device acquiring a left-eye two-dimensional image and a right-eye two-dimensional image, the left-eye two-dimensional image is spatially projected to obtain first spatial points; the right-eye two-dimensional image is spatially projected to obtain second spatial points; and spatial points whose spatial coordinates are consistent between the first spatial points and the second spatial points are determined as target feature points.
Specifically, a plurality of left-eye two-dimensional images can be acquired, and from the three-dimensional spatial position information, in the point cloud data, of the objects contained in those images, the first spatial points corresponding to that three-dimensional pose information are obtained; second spatial points are obtained analogously by projecting the acquired right-eye two-dimensional images. Spatial points whose spatial coordinates are consistent between the first spatial points and the second spatial points are determined as stable spatial points, that is, target feature points.
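A hypothetical sketch of this back-projection and consistency check, assuming a pinhole camera model with intrinsic matrix K, per-point depths from the depth camera, and both point sets already expressed in a common coordinate frame (none of which the patent specifies):

```python
import numpy as np

def backproject(points_2d, depths, K):
    """Project 2D pixel coordinates into 3D camera space using per-point
    depths and the 3x3 camera intrinsic matrix K (pinhole model)."""
    pts = np.hstack([points_2d, np.ones((len(points_2d), 1))])  # homogeneous
    rays = (np.linalg.inv(K) @ pts.T).T  # rays at unit depth
    return rays * depths[:, None]        # scale each ray by its depth

def consistent_space_points(left_pts, right_pts, tol=0.01):
    """Keep spatial points that also appear, within tol, among the points
    projected from the other eye's images."""
    keep = [p for p in left_pts
            if np.min(np.linalg.norm(right_pts - p, axis=1)) <= tol]
    return np.array(keep)
```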
In another possible embodiment, feature points at a first time and feature points at a second time may be acquired, and feature points whose display positions are consistent between the feature points at the first time and the feature points at the second time are determined as target feature points.
The first time and the second time are temporally related; for example, the first time is the current time and the second time is one second later. Several feature points are selected at the first time and a corresponding number at the second time, the feature points of the two times are matched, and the matched feature points, specifically those whose display positions are consistent, are determined as target feature points. Correspondingly, a certain number of feature points can be randomly selected and matched in batches using ICP (Iterative Closest Point). To merge point cloud data in different coordinates into the same coordinate system, a usable transformation is found first; the registration operation is in fact the search for a rigid transformation from coordinate system 1 to coordinate system 2. The ICP algorithm is essentially an optimal registration method based on least squares: it repeatedly selects pairs of corresponding points and computes the optimal rigid-body transformation until the convergence accuracy required for correct registration is met. The goal of ICP is to find the rotation parameter R and the translation parameter T between the point cloud to be registered and the reference point cloud such that the two satisfy an optimal match under some metric, thereby obtaining stable feature points, that is, target feature points.
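The following is a compact sketch of the ICP scheme just described, assuming small, roughly pre-aligned point clouds and using a brute-force nearest-neighbour search for clarity; it is illustrative, not the patent's implementation:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation T mapping points P onto Q
    (Kabsch algorithm via SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=20, tol=1e-6):
    """Repeatedly pair each source point with its nearest target point and
    re-estimate the rigid transform until the mean error converges."""
    R_total, T_total = np.eye(3), np.zeros(3)
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        nn = target[d.argmin(axis=1)]          # nearest-neighbour pairs
        R, T = best_rigid_transform(src, nn)
        src = src @ R.T + T
        R_total, T_total = R @ R_total, R @ T_total + T
        err = np.linalg.norm(src - nn, axis=1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, T_total
```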
It should be noted that in this embodiment the number of target feature points is determined under different image acquisition environments. The numbers may be determined at different image acquisition times: although target feature points are screened in real time, to reduce the load on processing resources and better match practical application scenarios, the numbers can be determined and compared at fixed intervals, for example every 10 seconds, and the comparison then decides whether to switch the display mode of the target device. The numbers may also be obtained on a change of the image acquisition space environment; for example, the number of target feature points is obtained once while the acquired image contains some reference object, and again when the user gazes at a blank wall, and whether to switch the display mode is then determined from these numbers. Likewise, the numbers may be obtained for different image acquisition scene environments; for example, the number of target feature points is determined in a normal environment such as indoors, determined again when the target device is worn in a moving automobile, and whether to switch the display mode of the target device is then determined.
In the embodiment of the present application, switching control of the display mode of the target device is a real-time judgment process; that is, the display mode is switched in real time according to the changing image acquisition environment of the target device. If a third number of the target feature points is determined in a third state, whether to switch the current second mode back to the first mode is determined according to the ratio of the third number to the second number. The third state is a state different from the second state.
Specifically, determining whether to switch the first mode to the second mode based on the ratio of the first number to the second number in the embodiment of the present application includes:
and if the ratio of the first number to the second number is smaller than a target threshold, switching the first mode to the second mode so that the target device displays the virtual image in the second mode, where the amount of pose data obtained for generating the virtual image differs between the first mode and the second mode.
The target threshold may be a threshold for display mode switching determined according to the actual application scenario. Taking an AR device as the target device for illustration: the AR device acquires spatial point cloud data through a depth camera and repeatedly screens it based on the feature points, obtaining an initial set of stable feature points that serve as the target feature points, with number N. During the continuous screening of feature points, whether the display mode needs to be switched is determined from the ratio of the latest number of stable feature points M to the initially screened number N: if the value of M/N falls below a threshold A, the first mode is switched to the second mode, and if M/N is later higher than or equal to the threshold A, the display is switched back to the first mode. The amount of pose data obtained for generating the virtual image differs between the first mode and the second mode; that is, the number of degrees of freedom used in generating the pose data differs between the two modes.
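A hedged sketch of this switching rule (the enum values, threshold handling, and function name are assumptions):

```python
from enum import Enum

class DisplayMode(Enum):
    SIX_DOF = "6DoF"
    THREE_DOF = "3DoF"
    HEAD_LOCKED = "Head Locked"

def choose_mode(latest_count, initial_count, threshold_a, current):
    """Switch between 6DoF and 3DoF based on the ratio M / N of the latest
    stable feature point count to the initially screened count."""
    if not initial_count:
        return DisplayMode.THREE_DOF  # no reliable features were ever found
    ratio = latest_count / initial_count
    if ratio < threshold_a and current is DisplayMode.SIX_DOF:
        return DisplayMode.THREE_DOF
    if ratio >= threshold_a and current is DisplayMode.THREE_DOF:
        return DisplayMode.SIX_DOF
    return current
```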
In this embodiment of the present application, besides the first mode and the second mode, the modes in which the target device displays the virtual image may include a third mode. The pose data of the three modes differ, the pose data including rotational degrees of freedom; correspondingly, the modes may include the first mode, with the largest number of degrees of freedom, and the third mode, with the smallest. For example, the first mode is the 6DoF mode, that is, a mode with 6 degrees of freedom; the second mode is the 3DoF mode, that is, a mode with 3 rotational degrees of freedom; and the third mode is the Head Locked mode, that is, a display mode without any spatial anchoring that only establishes an association between the display and the user's head and eyes.
Specifically, whether to switch to the third mode may be determined as follows:
if the target device displays the virtual image in the second mode, obtaining first pose data, based on the current target feature points, using the calculation method corresponding to the second mode;
obtaining second pose data, based on the current target feature points, using the calculation method corresponding to the first mode;
and if the difference between the first pose data and the second pose data is larger than a target difference, switching the second mode to a third mode.
The pose data corresponding to the third mode differ from the first pose data corresponding to the second mode. When the target device displays the virtual image in the second mode, whether to switch the second mode to the first mode or to the third mode can be determined based on the change in, or a calculation over, the target feature points.
Different virtual image display modes use different ways of calculating pose data, and the data sources used in the calculation also differ, that is, the acquisition units supplying those sources differ. For example, the first mode is the 6DoF mode, the second mode the 3DoF mode, and the third mode the Head Locked mode. In the 6DoF mode, the main data sources of the SLAM calculation are two parts: a depth camera and an IMU (Inertial Measurement Unit). In the 3DoF display mode, the target device calculates pose data mainly from the IMU data; if at this point the calculation method of the 6DoF mode were still used, the resulting pose data, lacking the data acquired by the depth camera, would differ considerably in coordinate values from the pose data obtained from the IMU data alone. The credibility of the data calculated in the 6DoF manner is judged from these two values, and when the credibility falls below a standard B, the display mode of the target device is switched to the Head Locked mode. The IMU provides acceleration data on three axes and gyroscope data on three axes, that is, angular velocity data; from the measured acceleration and angular velocity, the position at the next moment can be predicted, and iterating the prediction against the measurement at the next moment yields the corresponding pose data.
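As an illustrative sketch of this credibility check (treating the pose as a single 6-vector and using a plain Euclidean distance is a simplification assumed here, not something specified by the patent):

```python
import numpy as np

def pose_difference(fused_pose, imu_pose):
    """Distance between two pose vectors (x, y, z, alpha, beta, gamma)."""
    return float(np.linalg.norm(np.asarray(fused_pose) - np.asarray(imu_pose)))

def mode_from_credibility(fused_pose, imu_pose, standard_b):
    """If the camera-plus-IMU (6DoF-style) pose disagrees too strongly with
    the IMU-only pose, the 6DoF data is not credible: fall back to the
    Head Locked mode; otherwise remain in 3DoF."""
    if pose_difference(fused_pose, imu_pose) > standard_b:
        return "Head Locked"
    return "3DoF"
```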
Therefore, when the display mode of the target device is the 6DoF mode, whether to switch to the 3DoF display mode is judged from the ratio of the stable feature point counts; and while in the 3DoF display mode, 6DoF data is calculated from the current fused depth camera and IMU data, its credibility is judged against the data calculated from the IMU alone, and that credibility decides whether to switch to the Head Locked mode. Through this switching, the target device can adapt to different usage environments.
Fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application. In fig. 2, a user wears AR glasses, which calculate the coordinates at which a virtual image is displayed in space by recognizing the environment; the virtual image is the tree in fig. 2, and the scene is the one containing the house, so through the AR glasses the user sees a virtual tree superimposed on the scene with the house. However, the environment in which the user wears the AR glasses is not fixed, and some environments are unfavorable for the AR glasses to recognize and compute, such as a blank wall, continuous movement and rotation, or changing surroundings such as a moving automobile or an airplane; because the feature points of such spatial environments are too few or change continuously, the anchoring of the virtual image in space becomes inaccurate, where anchoring refers to the way the position of the virtual object is recorded in the image corresponding to the real scene. The number of target feature points, or the pose data calculated from them, can be used to judge which display mode to switch to, such as the first mode, the second mode, or the third mode; for the specific switching, refer to the description of the embodiments above, which is not repeated here. In fig. 2, the virtual image seen by the user corresponds to one of the display modes, and when the display mode switches, the image the user sees matches the new display mode. In this way, by switching its display mode, the target device can adapt to different usage environments.
In an embodiment of the present application, there is further provided a control device, referring to fig. 3, including:
an obtaining unit 10, configured to obtain target feature points in response to a target device displaying a virtual image in a first mode, where the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames;
a first determining unit 20 for determining a first number of the target feature points in a first state;
a second determining unit 30, configured to determine a second number of the target feature points in a second state, where the image capturing environments of the target device in the first state and the second state are different;
a third determining unit 40 for determining whether to switch the first mode to a second mode based on a ratio of the first number to the second number, the first mode and the second mode being different in pose data utilized in generating the virtual image.
The embodiment of the application thus provides a control device that acquires target feature points in response to a target device displaying a virtual image in a first mode, where the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames; determines a first number of target feature points in a first state; determines a second number of target feature points in a second state, where the image acquisition environments of the target device differ between the two states; and determines, based on the ratio of the first number to the second number, whether to switch the first mode to the second mode, the two modes using different pose data when generating the virtual image. Display modes are thus switched based on the number of target feature points under different image acquisition environments, so that the display mode of the target device matches the environment and meets practical application requirements.
Optionally, the acquisition unit 10 includes:
a first determining subunit, configured to determine, in response to the target device acquiring a left-eye image and a right-eye image, feature points whose display coordinate positions are consistent between the left-eye image and the right-eye image as target feature points;
or,
a second determining subunit, configured to spatially project, in response to the target device acquiring a left-eye two-dimensional image and a right-eye two-dimensional image, the left-eye two-dimensional image to obtain first spatial points; spatially project the right-eye two-dimensional image to obtain second spatial points; and determine spatial points whose spatial coordinates are consistent between the first spatial points and the second spatial points as target feature points.
Optionally, the acquisition unit 10 further includes:
the third determining subunit is used for obtaining the characteristic points of the first moment and the characteristic points of the second moment, and the first moment and the second moment have a time-sequential association relationship; and determining the feature points with consistent display positions in the feature points at the first time and the feature points at the second time as target feature points.
Optionally, the image acquisition environment includes one of an image acquisition time environment, an image acquisition space environment, and an image acquisition scene environment.
Optionally, the third determining unit includes:
a first switching subunit, configured to switch the first mode to the second mode if the ratio of the first number to the second number is smaller than a target threshold, so that the target device displays the virtual image in the second mode, where the amount of pose data obtained for generating the virtual image differs between the first mode and the second mode.
Optionally, the apparatus further comprises:
a second switching subunit, configured to determine a third number of the target feature points in a third state, and to switch the second mode back to the first mode if the ratio of the third number to the second number is not smaller than the target threshold.
Optionally, the apparatus further comprises:
the third switching subunit is used for obtaining first pose data in a calculation mode corresponding to the second mode based on the current target feature point if the target equipment displays the virtual image in the second mode; obtaining second pose data in a calculation mode corresponding to the first mode based on the current target feature points; and if the difference value between the first pose data and the second pose data is larger than the target difference value, switching the second mode to a third mode, wherein the pose data corresponding to the third mode is different from the first pose data corresponding to the second mode.
It should be noted that, the specific implementation of each unit in this embodiment may refer to the corresponding content in the foregoing, which is not described in detail herein.
Referring to fig. 4, a schematic structural diagram of an electronic device according to an embodiment of the present application is provided. The technical solution in this embodiment mainly serves to match the display mode of the target device to the environment and to meet practical application requirements.
Specifically, the electronic device in this embodiment may include the following structure:
a memory 401 for storing an application program and data generated by the operation of the application program;
a processor 402, configured to execute the application program to implement:
in response to a target device displaying a virtual image in a first mode, acquiring target feature points, where the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames;
determining a first number of the target feature points in a first state;
determining a second number of the target feature points in a second state, where the image acquisition environments of the target device in the first state and the second state are different;
and determining, based on a ratio of the first number to the second number, whether to switch the first mode to a second mode, the first mode and the second mode differing in the pose data used when generating the virtual image.
According to the above technical solution, the electronic device provided by this embodiment of the application obtains target feature points in response to the target device displaying a virtual image in the first mode, where the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames; determines a first number of target feature points in a first state; determines a second number of target feature points in a second state, where the image acquisition environments of the target device differ between the first state and the second state; and determines, based on the ratio of the first number to the second number, whether to switch the first mode to the second mode, the two modes using different pose data when generating the virtual image. The display mode is thus switched based on the number of target feature points under different image acquisition environments, so that the display mode of the target device matches the environment and meets practical application requirements.
It should be noted that, the specific implementation of the processor in this embodiment may refer to the corresponding content in the foregoing, which is not described in detail herein.
An embodiment of the present application also provides a storage medium storing computer-executable instructions which, when executed by a processor, perform the control method described in any of the above.
In this specification, the embodiments are described progressively, each focusing on its differences from the others; for identical or similar parts, reference may be made between the embodiments. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and relevant details can be found in the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A control method, comprising:
in response to a target device displaying a virtual image in a first mode, acquiring target feature points, wherein the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames;
determining a first number of the target feature points in a first state;
determining a second number of the target feature points in a second state, wherein the image acquisition environments of the target device in the first state and the second state are different;
and determining, based on a ratio of the first number to the second number, whether to switch the first mode to a second mode, the first mode and the second mode differing in the pose data used when generating the virtual image.
2. The method of claim 1, the obtaining target feature points comprising:
in response to the target device acquiring a left-eye image and a right-eye image, determining feature points whose display coordinate positions are consistent between the left-eye image and the right-eye image as target feature points;
or,
in response to the target device acquiring a left-eye two-dimensional image and a right-eye two-dimensional image, spatially projecting the left-eye two-dimensional image to obtain first spatial points;
spatially projecting the right-eye two-dimensional image to obtain second spatial points;
and determining spatial points whose spatial coordinates are consistent between the first spatial points and the second spatial points as target feature points.
3. The method of claim 1, the obtaining target feature points comprising:
acquiring feature points at a first time and feature points at a second time, wherein the first time and the second time are temporally related;
and determining feature points whose display positions are consistent between the feature points at the first time and the feature points at the second time as target feature points.
4. The method of claim 1, the image acquisition environment comprising one of an image acquisition temporal environment, an image acquisition spatial environment, and an image acquisition scene environment.
5. The method of claim 1, the determining whether to switch the first mode to a second mode based on a ratio of the first number to the second number, comprising:
and if the ratio of the first number to the second number is smaller than a target threshold, switching the first mode to the second mode so that the target device displays the virtual image in the second mode, wherein the amount of pose data obtained for generating the virtual image differs between the first mode and the second mode.
6. The method of claim 5, the method further comprising:
determining a third number of the target feature points in a third state;
and if the ratio of the third number to the second number is not smaller than the target threshold, switching the second mode back to the first mode.
7. The method of claim 1, the method further comprising:
if the target device displays the virtual image in the second mode, obtaining first pose data, based on the current target feature points, using the calculation method corresponding to the second mode;
obtaining second pose data, based on the current target feature points, using the calculation method corresponding to the first mode;
and if the difference between the first pose data and the second pose data is larger than a target difference, switching the second mode to a third mode, wherein the pose data corresponding to the third mode differ from the first pose data corresponding to the second mode.
8. A control apparatus comprising:
an acquisition unit, configured to acquire target feature points in response to a target device displaying a virtual image in a first mode, wherein the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames;
a first determining unit configured to determine a first number of the target feature points in a first state;
a second determining unit, configured to determine a second number of the target feature points in a second state, where the image acquisition environments of the target device in the first state and the second state are different;
and a third determining unit configured to determine whether to switch the first mode to a second mode based on a ratio of the first number to the second number, the first mode and the second mode being different in pose data used when generating the virtual image.
9. A storage medium storing computer-executable instructions which, when executed by a processor, perform the control method of any one of claims 1 to 7.
10. An electronic device, comprising:
a memory for storing an application program and data generated by the operation of the application program;
a processor for executing the application program to realize:
in response to a target device displaying a virtual image in a first mode, acquiring target feature points, wherein the target feature points are feature points whose display coordinate positions are consistent across two consecutively acquired image frames;
determining a first number of the target feature points in a first state;
determining a second number of the target feature points in a second state, wherein the image acquisition environments of the target device in the first state and the second state are different;
and determining, based on a ratio of the first number to the second number, whether to switch the first mode to a second mode, the first mode and the second mode differing in the pose data used when generating the virtual image.
CN202110188853.2A (priority and filing date 2021-02-19) - Control method and device and electronic equipment - Active - granted as CN112819970B

Priority Applications (1)

Application Number: CN202110188853.2A - Priority/Filing Date: 2021-02-19 - Title: Control method and device and electronic equipment (CN112819970B)


Publications (2)

Publication Number - Publication Date
CN112819970A - 2021-05-18
CN112819970B - 2023-12-26

Family

ID=75865488

Family Applications (1)

Application Number: CN202110188853.2A (Active) - Priority/Filing Date: 2021-02-19 - Title: Control method and device and electronic equipment (CN112819970B)

Country Status (1)

CN - CN112819970B

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number - Priority date / Publication date - Assignee - Title
CN113932805B * - 2021-10-12 / 2024-02-23 - 天翼数字生活科技有限公司 (Tianyi Digital Life Technology Co., Ltd.) - Method for improving positioning accuracy and speed of AR virtual object

Citations (3)

* Cited by examiner, † Cited by third party
Publication number - Priority date / Publication date - Assignee - Title
CN107705333A * - 2017-09-21 / 2018-02-16 - 歌尔股份有限公司 (Goertek Inc.) - Space-location method and device based on binocular camera
WO2018235923A1 * - 2017-06-21 / 2018-12-27 - 国立大学法人 東京大学 (The University of Tokyo) - Position estimating device, position estimating method, and program
CN112258658A * - 2020-10-21 / 2021-01-22 - 河北工业大学 (Hebei University of Technology) - Augmented reality visualization method based on depth camera and application

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number - Priority date / Publication date - Assignee - Title
JP2013225245A - 2012-04-23 / 2013-10-31 - ソニー株式会社 (Sony Corporation) - Image processing device, image processing method, and program
WO2016017254A1 - 2014-08-01 / 2016-02-04 - ソニー株式会社 (Sony Corporation) - Information processing device, information processing method, and program
US11189098B2 * - 2019-06-28 / 2021-11-30 - Snap Inc. - 3D object camera customization system


Also Published As

Publication number - Publication date
CN112819970A - 2021-05-18

Similar Documents

Publication Publication Date Title
US10507381B2 (en) Information processing device, position and/or attitude estimiating method, and computer program
JP6359644B2 (en) Method for facilitating computer vision application initialization
EP3665506B1 (en) Apparatus and method for generating a representation of a scene
CN107113376B (en) A kind of image processing method, device and video camera
US20160307374A1 (en) Method and system for providing information associated with a view of a real environment superimposed with a virtual object
US20190033988A1 (en) Controller tracking for multiple degrees of freedom
US10999412B2 (en) Sharing mediated reality content
US20170078570A1 (en) Image processing device, image processing method, and image processing program
US20180300040A1 (en) Mediated Reality
KR20160094190A (en) Apparatus and method for tracking an eye-gaze
JP6723743B2 (en) Information processing apparatus, information processing method, and program
US20220148207A1 (en) Processing of depth maps for images
CN112819970B (en) Control method and device and electronic equipment
CN110969706B (en) Augmented reality device, image processing method, system and storage medium thereof
US11275434B2 (en) Information processing apparatus, information processing method, and storage medium
WO2017163648A1 (en) Head-mounted device
KR101947372B1 (en) Method of providing position corrected images to a head mount display and method of displaying position corrected images to a head mount display, and a head mount display for displaying the position corrected images
JP6168597B2 (en) Information terminal equipment
CN115202475A (en) Display method, display device, electronic equipment and computer-readable storage medium
US11822851B2 (en) Information display system, information display method, and processing device
US20230122185A1 (en) Determining relative position and orientation of cameras using hardware
US10885319B2 (en) Posture control system
CN117710445A (en) Target positioning method and device applied to AR equipment and electronic equipment

Legal Events

Code - Title
PB01 - Publication
SE01 - Entry into force of request for substantive examination
GR01 - Patent grant