CN112764658B - Content display method and device and storage medium - Google Patents

Content display method and device and storage medium

Info

Publication number
CN112764658B
CN112764658B (application CN202110105983.5A)
Authority
CN
China
Prior art keywords
acceleration
pose data
preset
content
target content
Prior art date
Legal status
Active
Application number
CN202110105983.5A
Other languages
Chinese (zh)
Other versions
CN112764658A (en)
Inventor
彭文佳
张兴泷
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110105983.5A
Publication of CN112764658A
Application granted
Publication of CN112764658B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1407General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a content display method, a content display apparatus, and a storage medium. The content display method is applied to a smart device and includes the following steps: in the process of displaying target content through the smart device, in response to receiving a preset operation, acquiring the duration of the preset operation; when the duration is less than a preset time threshold, determining current pose data of the smart device; and determining a display mode corresponding to the pose data, and displaying the target content three-dimensionally according to that display mode. With the disclosure, three-dimensional browsing prediction can be performed on the target content of the smart device without upgrading the device's hardware or increasing its cost, improving the browsing effect on the target content.

Description

Content display method and device and storage medium
Technical Field
The present disclosure relates to the field of information display technologies, and in particular, to a content display method and apparatus, and a storage medium.
Background
With the continuous development of three-dimensional technology, three-dimensional browsing of target content, and prediction of such browsing, can be realized through smart devices.
However, current approaches to three-dimensional browsing and browsing prediction on smart devices have a relatively high entry threshold. On the one hand, they place high demands on the device's hardware; on the other hand, spatial internet data must be laid out carefully and densely at the software level, which is complicated, so these approaches cannot be widely applied to browsing target content.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a content presentation method, apparatus, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a content display method applied to a smart device, the method including:
in the process of displaying target content through the smart device, in response to receiving a preset operation, acquiring the duration of the preset operation;
when the duration is less than a preset time threshold, determining current pose data of the smart device;
and determining a display mode corresponding to the pose data, and displaying the target content three-dimensionally according to the display mode.
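As a hedged sketch of the three steps above (the callbacks, function names, and 3-second threshold are illustrative assumptions, not specified by the disclosure):

```python
PRESET_TIME_THRESHOLD = 3.0  # seconds; illustrative value, not from the disclosure

def on_preset_operation(start_ts, end_ts, get_pose_data, show_three_dimensional):
    """Sketch of the claimed flow: measure the preset operation's duration,
    gate on the preset time threshold, then display the target content
    three-dimensionally in the mode resolved from the current pose data.
    get_pose_data and show_three_dimensional are hypothetical stand-ins
    for device APIs."""
    duration = end_ts - start_ts
    if duration >= PRESET_TIME_THRESHOLD:
        return None  # too long: not treated as a 3D-display confirmation
    pose_data = get_pose_data()
    if pose_data is None:
        return None  # no qualifying pose data was measured
    return show_three_dimensional(pose_data)
```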
In some embodiments, the preset operation includes one or more of a preset touch operation, a preset gesture operation, or a preset physical key operation.
In some embodiments, determining the current pose data of the smart device includes:
acquiring, based on an inertial measurement unit (IMU) of the smart device, acceleration data representing offset information of the smart device, the acceleration data including an acceleration magnitude and an acceleration direction;
comparing the acceleration magnitude with a predetermined acceleration threshold;
and when the acceleration magnitude is greater than the acceleration threshold, determining the acceleration data as the current pose data of the smart device.
In some embodiments, the target content is augmented reality content;
Determining the display mode corresponding to the pose data includes:
determining a three-dimensional display mode matching the acceleration direction according to a preset correspondence between acceleration directions and display modes.
Displaying the target content three-dimensionally according to the display mode includes:
determining a display degree corresponding to the acceleration magnitude according to a preset proportional relation between acceleration magnitude and display degree;
and displaying the target content three-dimensionally according to the three-dimensional display mode matching the acceleration direction and the display degree corresponding to the acceleration magnitude.
In some embodiments, the acceleration threshold is predetermined by:
acquiring a plurality of historical pose data of the smart device within a preset historical time period;
and inputting the plurality of historical pose data into a threshold determination model, which determines and outputs an acceleration threshold matching the plurality of historical pose data.
In some embodiments, the threshold determination model is trained by:
collecting initial pose data of each of a plurality of smart devices;
acquiring an acceleration threshold corresponding to the initial pose data;
and using the initial pose data and the corresponding acceleration threshold as pose training data, training the threshold determination model on that data.
In some embodiments, the smart device comprises a smart terminal or smart glasses.
According to a second aspect of the embodiments of the present disclosure, there is provided a content display apparatus applied to a smart device, the apparatus including:
an acquisition module, configured to, in the process of displaying target content through the smart device, in response to receiving a preset operation, acquire the duration of the preset operation;
a determining module, configured to determine current pose data of the smart device when the duration is less than a preset time threshold;
and a processing module, configured to determine a display mode corresponding to the pose data and display the target content three-dimensionally according to the display mode.
In some embodiments, the preset operation includes one or more of a preset touch operation, a preset gesture operation, or a preset physical key operation.
In some embodiments, the determining module determines the current pose data of the smart device by:
acquiring, based on an inertial measurement unit (IMU) of the smart device, acceleration data representing the offset of the smart device and including an acceleration magnitude and an acceleration direction;
comparing the acceleration magnitude with a predetermined acceleration threshold;
and when the acceleration magnitude is greater than the acceleration threshold, determining the acceleration data as the current pose data of the smart device.
In some embodiments, the target content is augmented reality content;
The processing module determines the display mode corresponding to the pose data by:
determining a three-dimensional display mode matching the acceleration direction according to a preset correspondence between acceleration directions and display modes.
Displaying the target content three-dimensionally according to the display mode includes:
determining a display degree corresponding to the acceleration magnitude according to a preset proportional relation between acceleration magnitude and display degree;
and displaying the target content three-dimensionally according to the three-dimensional display mode matching the acceleration direction and the display degree corresponding to the acceleration magnitude.
In some embodiments, the determining module predetermines the acceleration threshold by:
acquiring a plurality of historical pose data of the smart device within a preset historical time period;
and inputting the plurality of historical pose data into a threshold determination model, which determines and outputs an acceleration threshold matching the plurality of historical pose data.
In some embodiments, the determining module is further configured to train the threshold determination model by:
collecting initial pose data of each of a plurality of smart devices;
acquiring an acceleration threshold corresponding to the initial pose data;
and using the initial pose data and the corresponding acceleration threshold as pose training data, training the threshold determination model on that data.
In some embodiments, the smart device comprises a smart terminal or smart glasses.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the content presentation method provided by the first aspect of the present disclosure.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: while the smart device is displaying target content, when a preset operation on the smart device is received and its duration is less than a preset time threshold, the display mode corresponding to the device's current pose data can be determined from that pose data. Three-dimensional browsing prediction can therefore be performed on the target content without upgrading the smart device's hardware or increasing its cost, improving the user's browsing experience of the target content.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of content presentation according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a method of content presentation according to an example embodiment.
FIG. 3 is a block diagram illustrating a content presentation device according to an example embodiment.
FIG. 4 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
With the continuous development of three-dimensional technology, three-dimensional browsing of target content, and prediction of such browsing, can be realized through smart devices.
However, current approaches to three-dimensional browsing and browsing prediction on smart devices have a relatively high entry threshold. On the one hand, they place high demands on the device's hardware; on the other hand, spatial internet data must be laid out carefully and densely at the software level, which is complicated, so these approaches cannot be widely applied to browsing target content.
For example, when a smart device is used for browsing prediction of AR content, a depth sensor usually has to be installed in the device, and spatial perception of the AR content is realized through structured-light technology and visual simultaneous localization and mapping (vSLAM), so as to realize browsing prediction of the AR content.
Alternatively, a visual sensor is installed in the smart device, and spatial perception of the AR content is realized through binocular-camera vSLAM, or through monocular-camera SLAM combined with an inertial measurement unit (IMU), so as to realize browsing prediction of the AR content.
In the above schemes, the hardware configured in the smart device is relatively complex and the algorithms place high demands on that hardware, so most users cannot browse and predict AR content without high-end hardware support. This not only affects the users' browsing experience of AR content but also hinders the popularization of AR content on smart devices.
In view of this, the present disclosure provides a content display method that enables a user to perform three-dimensional browsing prediction on the target content of a smart device without upgrading the device's hardware or increasing its cost, thereby improving the user's browsing experience of the target content.
Fig. 1 is a flowchart illustrating a content presentation method according to an exemplary embodiment, where the content presentation method is applied to an intelligent device, as shown in fig. 1, and the content presentation method includes the following steps.
In step S11, in the process of displaying the target content through the smart device, in response to receiving the preset operation, the duration of the preset operation is obtained.
The smart device in this disclosure may include a mobile device or a head-mounted display device. The smart device may be a smart terminal, such as a mobile phone, a tablet computer, or a smart game console; it may also be an AR device, a VR device, an MR device, or the like. In the present disclosure, the target content may be augmented reality (AR) content, or two-dimensional content that supports three-dimensional display; the disclosure does not limit this.
The preset operation may be an auxiliary confirmation operation for displaying the target content currently shown by the smart device in three dimensions. The preset operation may include, for example, one or more of a preset touch operation on the smart device, a preset gesture operation, or a preset operation on a physical key.
For example, when the smart device is a smart terminal, the preset operation may be touching the screen of the smart terminal for more than 3 seconds. For another example, when the smart device is a pair of AR glasses, the preset operation may be touching a temple of the AR glasses.
In step S12, when the duration is less than a preset time threshold, current pose data of the smart device is determined.
When performing the preset operation, a user may touch the smart device accidentally without actually intending to confirm three-dimensional display of the target content, so such misoperations need to be filtered out. In one possible implementation, while the smart device is displaying the target content, in response to receiving a preset operation on the smart device, the duration of the preset operation is acquired. When that duration is less than the preset time threshold, and the pose data measured by the device's inertial measurement unit has changed because the user has flicked or moved the smart device, the current preset operation is determined to be an auxiliary confirmation that the currently displayed target content should be displayed three-dimensionally.
In one possible implementation, the present disclosure may determine the current pose data of the smart device, for example, as follows:
acceleration data representing the offset information of the smart device, including an acceleration magnitude and an acceleration direction, is acquired based on the inertial measurement unit (IMU) of the smart device. The acceleration magnitude is compared with a predetermined acceleration threshold, and when it is greater than the threshold, the acquired acceleration data is determined to be the current pose data of the smart device.
In step S13, a display mode corresponding to the pose data is determined according to the pose data, and the target content is displayed in three dimensions according to the display mode corresponding to the pose data.
In one possible implementation, when the target content displayed on the smart device is two-dimensional content that supports three-dimensional display, in response to receiving a preset operation instructing three-dimensional display, the duration of the preset operation is acquired; when the duration is less than the preset time threshold, the current pose data of the smart device is determined. A display mode corresponding to the pose data is then determined, and the target content is displayed three-dimensionally in that mode.
The display mode of the target content can be determined according to a preset correspondence between pose data and display modes. For example, when the pose data is a forward acceleration, the target content is displayed in a zoom-in, move-forward manner; for a backward acceleration, in a zoom-out, move-backward manner; for a leftward acceleration, by roaming left; for a rightward acceleration, by roaming right; for an upward acceleration, by roaming up; and for a downward acceleration, by roaming down.
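The correspondence listed above can be encoded as a simple lookup table; the string labels here are hypothetical, chosen only to mirror the six cases:

```python
# Hypothetical encoding of the preset correspondence between the dominant
# acceleration direction and the three-dimensional display mode.
DISPLAY_MODE_BY_DIRECTION = {
    "forward":  "zoom in and move forward",
    "backward": "zoom out and move backward",
    "left":     "roam left",
    "right":    "roam right",
    "up":       "roam up",
    "down":     "roam down",
}

def display_mode_for(direction):
    """Look up the display mode matching an acceleration direction."""
    return DISPLAY_MODE_BY_DIRECTION.get(direction)
```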
In the exemplary embodiments of the disclosure, while the smart device is displaying target content, when a preset operation on the device is received and its duration is less than the preset time threshold, a three-dimensional display mode corresponding to the device's current pose data can be determined from that data. Three-dimensional browsing prediction can thus be performed on the target content without upgrading the smart device's hardware or increasing its cost, improving the user's browsing experience of the target content.
The content display method of the present disclosure is described in detail below, taking augmented reality (AR) content as the target content.
Fig. 2 is a flowchart illustrating a content presentation method according to an exemplary embodiment, where the content presentation method is used in an intelligent device, as shown in fig. 2, and the content presentation method includes the following steps.
In step S21, in the process of displaying the augmented reality content through the smart device, in response to receiving a preset operation on the smart device, a duration of the preset operation is obtained.
In step S22, when the duration is less than the preset time threshold, acceleration data representing the offset information of the smart device is acquired based on the inertial measurement unit (IMU) of the smart device, the acceleration data including an acceleration magnitude and an acceleration direction.
In step S23, the acceleration magnitude is compared with a predetermined acceleration threshold, and when it is greater than the acceleration threshold, the acceleration data is determined to be the current pose data of the smart device.
To ensure that the pose data obtained from IMU measurements is pose data deliberately triggered when the user wants the AR content displayed three-dimensionally, in one possible embodiment the acceleration data representing the offset information of the smart device, including an acceleration magnitude and an acceleration direction, is acquired based on the IMU.
The acceleration magnitude in the acceleration data is compared with a predetermined acceleration threshold, and when the IMU-measured magnitude is greater than the threshold, the acquired acceleration data is determined to be the current pose data of the smart device.
In addition, because different users have different habits when operating smart devices, the pose data measured by the IMUs of different devices differs. In some possible embodiments, the type of user is determined in advance, and the current pose data of the smart device is determined according to a threshold corresponding to that user type. For example, user types may include children, adults, and the elderly, whose IMU-measured pose data may differ when operating the same smart device. The acceleration threshold therefore needs to be determined individually according to each user's operations, so that when the user wants the target content displayed three-dimensionally, the device's pose data can be triggered and obtained from IMU measurements, achieving three-dimensional display of the target content.
In one possible embodiment, the present disclosure may predetermine the acceleration threshold, for example, as follows:
a plurality of historical six-degree-of-freedom pose data of the smart device, measured by the IMU within a preset historical time period, are input into a threshold determination model; the model determines and outputs an acceleration threshold matching the historical pose data.
The threshold determination model can be trained as follows:
initial pose data of each of a plurality of smart devices is collected based on IMU measurements, an acceleration threshold corresponding to the initial pose data is acquired, and the initial pose data together with its corresponding acceleration threshold is used as pose training data to train the threshold determination model.
In step S24, a display mode corresponding to the pose data is determined according to the pose data, and the augmented reality content is displayed in a roaming manner according to the display mode corresponding to the pose data.
In one possible implementation, the present disclosure determines the display mode corresponding to the pose data as follows:
acceleration data representing the offset information of the smart device, including an acceleration magnitude and an acceleration direction, is acquired based on the inertial measurement unit (IMU) of the smart device. A three-dimensional display mode matching the acceleration direction is determined according to a preset correspondence between acceleration directions and display modes. A display degree corresponding to the acceleration magnitude is determined according to a preset proportional relation between acceleration magnitude and display degree, and the target content is displayed three-dimensionally according to the matched display mode and the corresponding display degree.
For example, when the preset pose data is a forward acceleration, the target content is displayed in an enlarging, advancing manner, advancing at a speed corresponding to the magnitude of the forward acceleration.
When the preset pose data is a backward acceleration, the target content is displayed in a shrinking, retreating manner, retreating at a speed corresponding to the magnitude of the backward acceleration.
When the preset pose data is a leftward acceleration, the target content is displayed in a leftward roaming manner, roaming at a speed corresponding to the magnitude of the leftward acceleration.
When the preset pose data is a rightward acceleration, the target content is displayed in a rightward roaming manner, roaming at a speed corresponding to the magnitude of the rightward acceleration.
When the preset pose data is an upward acceleration, the target content is displayed in an upward roaming manner, roaming at a speed corresponding to the magnitude of the upward acceleration.
When the preset pose data is a downward acceleration, the target content is displayed in a downward roaming manner, roaming at a speed corresponding to the magnitude of the downward acceleration.
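The correspondence between acceleration direction and display mode, and the proportional relation between acceleration magnitude and display speed, can be sketched as below. The mode names, the proportionality constant, and the direction labels are illustrative assumptions; the patent does not specify concrete values.

```python
# Hypothetical correspondence between acceleration direction and 3D display
# mode, mirroring the six cases described in the specification.
DIRECTION_TO_MODE = {
    "forward": "zoom_in_forward",
    "backward": "zoom_out_backward",
    "left": "roam_left",
    "right": "roam_right",
    "up": "roam_up",
    "down": "roam_down",
}

# Assumed proportional relation between acceleration magnitude and display
# speed; the constant is purely illustrative.
SPEED_FACTOR = 0.5  # display-units per (m/s^2)

def presentation_for(acceleration_direction: str, acceleration_magnitude: float):
    """Return the 3D display mode and display speed for an acceleration sample."""
    mode = DIRECTION_TO_MODE[acceleration_direction]
    speed = SPEED_FACTOR * acceleration_magnitude
    return mode, speed

# A forward acceleration of 2.0 m/s^2 maps to forward zoom at speed 1.0.
mode, speed = presentation_for("forward", 2.0)
```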
In the exemplary embodiments of the present disclosure, by personalizing the pose data threshold, it is ensured that, when the user needs a roaming display of AR content, the pose data of the smart device can be acquired in accordance with the user's usage habits, achieving three-dimensional roaming display of the AR content on the smart device. Moreover, when the AR content of the smart device is displayed three-dimensionally, the display degree corresponding to the acceleration can be determined from the acceleration in the pose data that represents the offset of the smart device, simulating real manipulation of the AR content and producing a vivid display effect.
FIG. 3 is a block diagram of a content presentation apparatus 300 according to an exemplary embodiment. Referring to FIG. 3, the smart device is provided with an inertial measurement unit (IMU), and the content presentation apparatus includes an acquisition module 301, a determination module 302, and a processing module 303.
The acquisition module 301 is configured to, in the process of displaying the target content through the smart device, acquire the duration of a preset operation in response to receiving the preset operation;
the determination module 302 is configured to determine the current pose data of the smart device when the duration is less than a preset time threshold;
and the processing module 303 is configured to determine, according to the pose data, the display mode corresponding to the pose data, and to display the target content three-dimensionally according to the display mode.
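The cooperation of the three modules can be sketched as a short control flow: the preset operation's duration gates whether pose data is read and the display updated. The time threshold and the callback names are assumptions for illustration, not taken from the patent.

```python
# Illustrative time threshold; the patent leaves the value unspecified.
PRESET_TIME_THRESHOLD = 0.5  # seconds

def handle_preset_operation(press_duration, read_pose, present):
    """Sketch of the acquisition (301) / determination (302) / processing (303)
    flow: if the preset operation lasted less than the threshold, read the
    current pose data and present the target content accordingly.

    read_pose: callable returning pose data, or None if the device offset
               did not exceed the personalized acceleration threshold.
    present:   callable that displays the target content for the pose data.
    """
    if press_duration < PRESET_TIME_THRESHOLD:
        pose = read_pose()       # determination module (302)
        if pose is not None:
            present(pose)        # processing module (303)
            return True
    return False
```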
In some embodiments, the preset operation includes one or more of a preset touch operation, a preset gesture operation, or a preset physical key operation.
In some embodiments, the determination module 302 determines the current pose data of the smart device by:
acquiring acceleration data representing the offset information of the intelligent equipment based on an Inertial Measurement Unit (IMU) of the intelligent equipment, wherein the acceleration data comprises an acceleration magnitude and an acceleration direction;
comparing the acceleration magnitude with a predetermined acceleration threshold;
and when the acceleration magnitude is greater than the acceleration threshold, determining the acceleration data as the current pose data of the smart device.
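The threshold test above can be sketched as follows. The data layout of an IMU sample is an assumption for illustration; only the comparison logic comes from the specification.

```python
from dataclasses import dataclass

@dataclass
class AccelerationSample:
    """Hypothetical IMU reading: magnitude plus a coarse direction label."""
    magnitude: float  # m/s^2
    direction: str    # e.g. "forward", "backward", "left", ...

def current_pose_data(sample: AccelerationSample, threshold: float):
    """Return the sample as the current pose data only when its magnitude
    exceeds the (personalized) acceleration threshold; otherwise None, so
    incidental hand jitter below the threshold is ignored."""
    if sample.magnitude > threshold:
        return sample
    return None
```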
In some embodiments, the target content is augmented reality content;
the processing module 303 determines a display mode corresponding to the pose data according to the pose data in the following manner:
determining a three-dimensional display mode matched with the acceleration direction according to the acceleration direction and a preset corresponding relation between the acceleration direction and the display mode;
the processing module 303 performs three-dimensional display on the target content according to the display mode as follows:
determining the display degree corresponding to the acceleration according to the acceleration and a preset proportional relation between the acceleration and the display degree;
and performing three-dimensional display on the target content according to a three-dimensional display mode matched with the acceleration direction and a display degree corresponding to the acceleration.
In some embodiments, the determination module 302 predetermines the acceleration threshold by:
acquiring a plurality of historical pose data of the smart device within a preset historical time period;
and inputting the plurality of historical pose data into a threshold determination model, determining, through the threshold determination model, the acceleration threshold matching the plurality of historical pose data, and outputting the acceleration threshold.
In some embodiments, the determination module 302 is further configured to train the threshold determination model by:
collecting initial pose data of each intelligent device in a plurality of intelligent devices;
acquiring an acceleration threshold corresponding to the initial pose data;
and taking the initial pose data and the acceleration threshold corresponding to the initial pose data as pose training data, and training the threshold determination model with the pose training data.
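The patent leaves the threshold determination model unspecified. As a stand-in, the sketch below derives a personalized threshold from a user's historical acceleration magnitudes as the mean plus one standard deviation, so that only movements noticeably stronger than the user's habitual motion trigger the roaming display. This statistic is purely an illustrative assumption, not the patented model.

```python
import statistics

def fit_threshold(historical_magnitudes):
    """Map historical pose data (acceleration magnitudes, m/s^2) to a
    personalized acceleration threshold: mean + one population std dev.
    A steadier-handed user therefore gets a lower trigger threshold."""
    mean = statistics.fmean(historical_magnitudes)
    std = statistics.pstdev(historical_magnitudes)
    return mean + std
```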
In some embodiments, the smart device comprises a smart terminal or smart glasses.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
The present disclosure also provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the content presentation method provided by the present disclosure.
FIG. 4 is a block diagram illustrating an apparatus 400 for content presentation according to an example embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, the apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an interface for input/output (I/O) 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the content presentation methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the apparatus 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 406 provide power to the various components of device 400. Power components 406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for apparatus 400.
The multimedia component 408 includes a screen that provides an output interface between the device 400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 400 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, audio component 410 includes a Microphone (MIC) configured to receive external audio signals when apparatus 400 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the apparatus 400. For example, the sensor component 414 may detect an open/closed state of the apparatus 400 and the relative positioning of components, such as the display and keypad of the apparatus 400. The sensor component 414 may also detect a change in position of the apparatus 400 or of a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in temperature of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described content presentation methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the apparatus 400 to perform the content presentation method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the content presentation method described above when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A content display method is applied to intelligent equipment and comprises the following steps:
responding to a received preset operation in the process of displaying the target content by the intelligent equipment, and acquiring the duration of the preset operation, wherein the intelligent equipment comprises an intelligent terminal, the target content is two-dimensional content supporting three-dimensional display, and the preset operation is an auxiliary confirmation operation for carrying out three-dimensional display on the target content currently displayed by the intelligent equipment;
when the duration is smaller than a preset time threshold, determining current pose data of the intelligent device;
determining a display mode corresponding to the pose data according to the pose data, and performing three-dimensional display on the target content according to the display mode;
the preset operation comprises one or more of preset touch operation, preset gesture operation or preset physical key operation.
2. The content presentation method according to claim 1, wherein the determining the current pose data of the smart device comprises:
acquiring acceleration data representing the offset information of the intelligent equipment based on an Inertial Measurement Unit (IMU) of the intelligent equipment, wherein the acceleration data comprises an acceleration magnitude and an acceleration direction;
comparing the acceleration with a predetermined acceleration threshold;
and when the acceleration is larger than the acceleration threshold, determining the acceleration data as the current pose data of the intelligent device.
3. The content presentation method according to claim 2, wherein the target content is augmented reality content;
the determining the display mode corresponding to the pose data according to the pose data comprises the following steps:
determining a three-dimensional display mode matched with the acceleration direction according to the acceleration direction and a preset corresponding relation between the acceleration direction and the display mode;
the three-dimensional display of the target content according to the display mode comprises the following steps:
determining the display degree corresponding to the acceleration according to the acceleration and a preset proportional relation between the acceleration and the display degree;
and performing three-dimensional display on the target content according to a three-dimensional display mode matched with the acceleration direction and a display degree corresponding to the acceleration.
4. The content presentation method according to claim 2, wherein the acceleration threshold is predetermined by:
acquiring a plurality of historical pose data of the intelligent equipment in a preset historical time period;
inputting the plurality of historical pose data into a threshold determination model, determining an acceleration threshold matched with the plurality of historical pose data through the threshold determination model, and outputting the acceleration threshold.
5. The content presentation method of claim 4, wherein the threshold determination model is trained by:
collecting initial pose data of each intelligent device in a plurality of intelligent devices;
acquiring an acceleration threshold corresponding to the initial pose data;
and taking the initial pose data and the acceleration threshold corresponding to the initial pose data as pose training data, and training the threshold determination model with the pose training data.
6. A content display device is applied to intelligent equipment and comprises:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for responding to the receiving of a preset operation in the process of displaying target content by the intelligent equipment and acquiring the duration time of the preset operation, the intelligent equipment comprises an intelligent terminal, the target content is two-dimensional content supporting three-dimensional display, and the preset operation is auxiliary confirmation operation of three-dimensional display on the target content currently displayed by the intelligent equipment;
the determining module is used for determining the current pose data of the intelligent equipment when the duration is less than a preset time threshold;
the processing module is used for determining a display mode corresponding to the pose data according to the pose data and performing three-dimensional display on the target content according to the display mode;
the preset operation comprises one or more of preset touch operation, preset gesture operation or preset physical key operation.
7. A content presentation device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 5.
8. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 5.
CN202110105983.5A 2021-01-26 2021-01-26 Content display method and device and storage medium Active CN112764658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110105983.5A CN112764658B (en) 2021-01-26 2021-01-26 Content display method and device and storage medium


Publications (2)

Publication Number Publication Date
CN112764658A CN112764658A (en) 2021-05-07
CN112764658B 2022-10-21

Family

ID=75705856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110105983.5A Active CN112764658B (en) 2021-01-26 2021-01-26 Content display method and device and storage medium

Country Status (1)

Country Link
CN (1) CN112764658B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113608616A (en) * 2021-08-10 2021-11-05 深圳市慧鲤科技有限公司 Virtual content display method and device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN111638793A (en) * 2020-06-04 2020-09-08 浙江商汤科技开发有限公司 Aircraft display method and device, electronic equipment and storage medium
CN111651052A (en) * 2020-06-10 2020-09-11 浙江商汤科技开发有限公司 Virtual sand table display method and device, electronic equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
KR20180108739A (en) * 2016-08-30 2018-10-04 베이징 시아오미 모바일 소프트웨어 컴퍼니 리미티드 VR control method, apparatus, electronic apparatus, program and storage medium
CN108021241B (en) * 2017-12-01 2020-08-25 西安维度视界科技有限公司 Method for realizing virtual-real fusion of AR glasses
CN108008817B (en) * 2017-12-01 2020-08-04 西安维度视界科技有限公司 Method for realizing virtual-actual fusion
CN108319363A (en) * 2018-01-09 2018-07-24 北京小米移动软件有限公司 Product introduction method, apparatus based on VR and electronic equipment
CN109067978A (en) * 2018-07-03 2018-12-21 Oppo广东移动通信有限公司 Button operation processing method, device, storage medium and electronic equipment

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN111638793A (en) * 2020-06-04 2020-09-08 浙江商汤科技开发有限公司 Aircraft display method and device, electronic equipment and storage medium
CN111651052A (en) * 2020-06-10 2020-09-11 浙江商汤科技开发有限公司 Virtual sand table display method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112764658A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
EP3182716A1 (en) Method and device for video display
EP3561692A1 (en) Method and device for displaying web page content
EP3312702B1 (en) Method and device for identifying gesture
EP2988205A1 (en) Method and device for transmitting image
US11372516B2 (en) Method, device, and storage medium for controlling display of floating window
CN107798309B (en) Fingerprint input method and device and computer readable storage medium
CN106774849B (en) Virtual reality equipment control method and device
US20200402321A1 (en) Method, electronic device and storage medium for image generation
CN105094626A (en) Method and device for selecting text contents
EP3770763B1 (en) Method and device for presenting information on a terminal
CN106775210B (en) Wallpaper changing method and device
CN112764658B (en) Content display method and device and storage medium
CN113079493A (en) Information matching display method and device and electronic equipment
CN108829473B (en) Event response method, device and storage medium
CN106951171B (en) Control method and device of virtual reality helmet
US9832342B2 (en) Method and device for transmitting image
CN109407942B (en) Model processing method and device, control client and storage medium
CN114296587A (en) Cursor control method and device, electronic equipment and storage medium
CN109754452B (en) Image rendering processing method and device, electronic equipment and storage medium
CN108227927B (en) VR-based product display method and device and electronic equipment
CN107783704B (en) Picture effect adjusting method and device and terminal
CN112148183A (en) Processing method, device and medium of associated object
CN106293398B (en) Method, device and terminal for recommending virtual reality resources
CN108596719B (en) Image display method and device
CN112948704B (en) Model training method and device for information recommendation, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant