CN109068126B - Video playing method and device, storage medium and wearable device - Google Patents


Info

Publication number
CN109068126B
CN109068126B
Authority
CN
China
Prior art keywords
playing
video resource
video
wearable device
instruction
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number
CN201811000831.3A
Other languages
Chinese (zh)
Other versions
CN109068126A (en)
Inventor
魏苏龙
林肇堃
麦绮兰
Current Assignee (the listed assignees may be inaccurate)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811000831.3A
Publication of CN109068126A
Application granted
Publication of CN109068126B
Status: Active
Anticipated expiration

Abstract

The embodiments of the present application disclose a video playing method and apparatus, a storage medium and a wearable device. The method comprises the following steps: receiving a first operation directed at the wearable device; when a first instruction corresponding to the first operation is determined to be a 3D playing instruction, acquiring a 3D video resource; and controlling a first lens of the wearable device to play a first video resource corresponding to the 3D video resource while controlling a second lens of the wearable device to play a second video resource corresponding to the 3D video resource, so as to play the 3D video resource with a 3D effect. By adopting this technical scheme, the embodiments of the present application can easily realize 3D video playback with a wearable device, enriching the playback functions of the wearable device.

Description

Video playing method and device, storage medium and wearable device
Technical Field
Embodiments of the present application relate to the technical field of wearable devices, and in particular to a video playing method and apparatus, a storage medium and a wearable device.
Background
Smart wearable devices have now entered the daily lives of a large number of users, bringing convenience to many aspects of their life and work.
With the development of wearable technology, today's smart wearable devices can realize a rich variety of functions. However, their functionality is still incomplete, and improvements are needed.
Disclosure of Invention
The embodiments of the present application provide a video playing method and apparatus, a storage medium and a wearable device, which can optimize video playing schemes based on a wearable device.
In a first aspect, an embodiment of the present application provides a video playing method, including:
receiving a first operation for the wearable device;
when the first instruction corresponding to the first operation is determined to be a 3D playing instruction, acquiring a 3D video resource;
and controlling a first lens of the wearable device to play a first video resource corresponding to the 3D video resource, and controlling a second lens of the wearable device to play a second video resource corresponding to the 3D video resource, so as to realize the 3D effect playing of the 3D video resource.
In a second aspect, an embodiment of the present application provides a video playing apparatus, including:
an operation receiving module for receiving a first operation for the wearable device;
the video resource acquisition module is used for acquiring a 3D video resource when a first instruction corresponding to the first operation is determined to be a 3D playing instruction;
and the video playing control module is used for controlling the first lens of the wearable device to play the first video resource corresponding to the 3D video resource and controlling the second lens of the wearable device to play the second video resource corresponding to the 3D video resource so as to realize the 3D effect playing of the 3D video resource.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a video playing method according to an embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a wearable device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the video playing method according to the embodiment of the present application.
According to the video playing scheme provided by the embodiments of the present application, when it is determined that a 3D playing instruction for the wearable device has been received, a 3D video resource is acquired, and the first lens and the second lens of the wearable device are controlled to play, respectively, the first video resource and the second video resource corresponding to the 3D video resource, realizing 3D-effect playback of the 3D video resource. By adopting this technical scheme, 3D video playback can easily be realized with a wearable device, enriching the playback functions of the wearable device.
Drawings
Fig. 1 is a schematic flowchart of a video playing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another video playing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a 3D playing effect of the smart glasses according to the embodiment of the present application;
fig. 4 is a block diagram of a video playing apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a wearable device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of another wearable device provided in the embodiment of the present application;
fig. 7 is a schematic entity diagram of a wearable device provided in an embodiment of the present application.
Detailed Description
The technical scheme of the present application is further explained below through specific embodiments in combination with the accompanying drawings. It is to be understood that the specific embodiments described here merely illustrate the application and do not limit it. It should further be noted that, for convenience of description, the drawings show only the structures related to the present application rather than all structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 is a schematic flowchart of a video playing method provided in an embodiment of the present application. The method may be executed by a video playing apparatus, which may be implemented in software and/or hardware and is generally integrated in a wearable device. As shown in fig. 1, the method includes:
step 101, receiving a first operation for a wearable device.
The embodiments of the present application do not limit the specific structure, shape, size or other attributes of the wearable device. The wearable device may be one worn on the user's head, such as smart glasses or a smart helmet. Taking smart glasses as an example, the smart glasses comprise a frame body and lenses, where the frame body comprises temples and a lens frame. Optionally, a breathing lamp, which may be an LED lamp, can be arranged on the inner side of each temple and can blink at the heartbeat frequency of the wearer. Each temple is further provided with a touch area (such as a touch panel) and a bone conduction area. The touch area is arranged on the outer side of the temple and contains a touch detection module for detecting the user's touch operations. For example, a touch sensor module detects the user's touches, outputting a low level in the initial state and a high level while a touch operation is present. In a scene where a user wears the smart glasses, the side of a temple close to the face is defined as the inner side, and the opposite side away from the face as the outer side. The bone conduction area is arranged on the part of the temple near the ear and houses a bone conduction component, such as a bone conduction earphone or a bone conduction sensor. A heart rate detection module (such as a heart rate sensor) is arranged where the temple is close to the temple region of the face, and acquires the heart rate information of the user wearing the smart glasses. A smart microphone arranged on the lens frame can recognize the current level of ambient noise and automatically adjust the microphone's behavior accordingly. The lens frame is also provided with a distance sensor, a gyroscope and the like. In addition, electrooculogram (EOG) sensors arranged on the lens frame and the nose pads acquire the eye state of the user. A microprocessor area is further provided on a temple; the microprocessor arranged in it is electrically connected to the touch detection module, bone conduction earphone, heart rate sensor, smart microphone, distance sensor, gyroscope, electrooculogram sensor and other devices, receives the data to be processed, performs data computation and processing, and outputs control instructions to the corresponding devices. It should be noted that the smart glasses may download multimedia resources from the cloud over the Internet for playback, and may also acquire multimedia resources from a terminal device by establishing a communication connection with it; this is not limited in the present application.
As a smart device, the wearable device can help the user realize a variety of functions, and the user can control the wearable device through human-computer interaction. The embodiments of the present application do not limit the specific way in which the wearable device receives operations; the first operation may be any operation, in any form, used to control the wearable device. For example, physical keys or virtual keys (such as touch keys) may be provided on the wearable device, and the user may press or touch a key in a designated trigger manner (such as a click, a long press, or several consecutive clicks) to express an operation intention. As another example, the wearable device may be provided with a speech recognition module: a microphone collects the words spoken by the user, the speech recognition module performs semantic analysis on the user's natural language to obtain the corresponding speech content, and the wearable device is controlled to respond to the user's voice command according to that content. As a further example, a sensor (such as an ultrasonic sensor) sensing user motions, such as gesture actions, may be arranged on the wearable device; the sensor recognizes the motion with which the user expresses an operation intention, and the wearable device is controlled to respond correspondingly according to the type of motion.
The embodiments of the present application do not limit the state of the wearable device when it receives the first operation: for example, the wearable device may be in a standby state, in the middle of playing an ordinary (2D) video, or in another state.
And 102, when the first instruction corresponding to the first operation is determined to be a 3D playing instruction, acquiring a 3D video resource.
For example, the instruction contents corresponding to different operations may be preset according to the types of operations the wearable device can receive. Taking user actions as an example: user action A (such as holding up the right thumb) corresponds to turning on the camera; user action B (such as moving the upright right thumb toward the user's face) corresponds to controlling the camera to take a picture; user action C (such as moving the upright right thumb away from the user's face) corresponds to controlling the camera to start recording; user action D (such as making an OK gesture with the right hand) corresponds to answering an incoming call; user action E (such as holding up the right index and middle fingers) corresponds to playing a 2D video; and user action F (such as holding up the right index, middle and ring fingers) corresponds to playing a 3D video.
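Illustratively, such a preset mapping can be sketched in code as follows. This is a minimal sketch only: the gesture labels, instruction names and the resolver function are hypothetical and are not prescribed by the present application.

from typing import Optional

# Hypothetical preset action-to-instruction table; the gesture labels and
# instruction names below are illustrative, not taken from the application.
GESTURE_INSTRUCTIONS = {
    "right_thumb_up":             "open_camera",      # user action A
    "right_thumb_toward_face":    "take_picture",     # user action B
    "right_thumb_away_from_face": "start_recording",  # user action C
    "right_hand_ok":              "answer_call",      # user action D
    "right_two_fingers_up":       "play_2d_video",    # user action E
    "right_three_fingers_up":     "play_3d_video",    # user action F
}

def resolve_instruction(detected_gesture: str) -> Optional[str]:
    """Map a recognized gesture label to its preset instruction content."""
    return GESTURE_INSTRUCTIONS.get(detected_gesture)

# Example: the recognizer reports three raised fingers on the right hand.
assert resolve_instruction("right_three_fingers_up") == "play_3d_video"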
In the embodiments of the present application, after a first operation directed at the wearable device is received, the first operation can be recognized, the corresponding instruction determined, and it can then be judged whether the user intends to start 3D playback.
Optionally, in some embodiments, receiving a first operation for the wearable device includes: detecting a first user action on the wearable device. Determining that the first instruction corresponding to the first operation is a 3D playing instruction includes: judging whether the first user action satisfies a preset 3D playing condition, and if so, determining that the first instruction corresponding to the first user action is a 3D playing instruction. The benefit of this arrangement is as follows. When wearing the wearable device, the user can hardly see the keys on it, so mis-touches occur easily, and the voice mode easily disturbs people nearby. By detecting user actions, the user's true operation intention can be recognized quickly and accurately; because user actions can vary greatly, the set of control instructions can be enriched and richer control operations realized, while the actions themselves remain easy and convenient for the user, improving interaction efficiency. The preset 3D playing condition is the preset user-action recognition condition corresponding to the 3D playing instruction, and may include a preset gesture type, a preset action amplitude, a preset action change speed, a preset action change trend, and the like.
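A hedged sketch of such a condition check follows. The action fields and the concrete threshold values are assumptions for illustration, since the application only names gesture type, action amplitude, change speed and change trend as possible elements of the preset condition.

from dataclasses import dataclass

@dataclass
class UserAction:
    gesture_type: str   # e.g. "right_three_fingers_up"
    amplitude: float    # normalized motion amplitude in [0, 1] (assumed scale)
    trend: str          # e.g. "shake_left_right"

# Assumed preset 3D playing condition: three raised right-hand fingers
# shaken left and right with a non-trivial amplitude.
PRESET_3D_CONDITION = {
    "gesture_type": "right_three_fingers_up",
    "min_amplitude": 0.2,
    "trend": "shake_left_right",
}

def is_3d_play_instruction(action: UserAction) -> bool:
    """Judge whether the first user action satisfies the preset 3D condition."""
    return (action.gesture_type == PRESET_3D_CONDITION["gesture_type"]
            and action.amplitude >= PRESET_3D_CONDITION["min_amplitude"]
            and action.trend == PRESET_3D_CONDITION["trend"])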
In the embodiments of the present application, when it is determined that the first instruction corresponding to the received first operation is a 3D playing instruction, the user wishes to watch a 3D video, and a 3D video resource can be acquired for playback. For example, the resource may be obtained from a memory inside the wearable device, or from an external terminal previously associated with the wearable device, such as a server or a mobile terminal.
Step 103, controlling the first lens of the wearable device to play the first video resource corresponding to the 3D video resource, and controlling the second lens of the wearable device to play the second video resource corresponding to the 3D video resource, so as to realize 3D effect playing of the 3D video resource.
The wearable device has an image display function, and the embodiments of the present application do not limit its imaging principle. For example, imaging may be performed by a micro projector using the principle of optical reflection projection: the micro projector projects light onto a reflective screen, and a convex lens then refracts the light into the human eyeball to achieve first-order magnification, forming in front of the eye a virtual screen large enough to display text information, images and the like. Alternatively, projection may use a low-power laser that displays an image of a certain number of pixels on the glasses lens, from which it is reflected onto the user's retina to realize the image display. Of course, other imaging modes are possible, and the embodiments of the present application do not describe them one by one.
Human eyes come as a left-right pair, the distance between the two pupils generally being about 8 centimeters. When a person looks at an object, the two eyes view it at slightly different angles, and this binocular disparity is what produces 3D imaging. To make a person perceive a 3D image, the left eye and the right eye must be shown different images with a certain disparity between them, simulating what the two eyes see in reality; this is the origin of the 3D stereoscopic sensation.
In the wearable device of the embodiments of the present application, the first lens and the second lens can display two different pictures, either simultaneously or in a time-shared manner, with each eye facing one lens. For example, the left eye faces the first lens and the right eye faces the second lens (the description below takes this case as the example), or the left eye faces the second lens and the right eye faces the first lens. When the user wears the wearable device, the left and right eyes therefore see different images, simulating actual human viewing and achieving a 3D playing effect. Specifically, the 3D video resource acquired by the wearable device may include a first video resource corresponding to the left eye and a second video resource corresponding to the right eye; the wearable device extracts the first and second video resources from the 3D video resource and assigns them to the first lens and the second lens, respectively, for playback.
Illustratively, when the first lens and the second lens play their respective video resources, the present application does not specifically limit how their playing times are matched; this can be designed according to the imaging principle of the wearable device. For example, when the image displayed on the first lens cannot, or can hardly, be seen by the right eye and the image displayed on the second lens cannot, or can hardly, be seen by the left eye, the two lenses can be controlled to simultaneously display the two different images corresponding to the same 3D picture, so that the user views a picture with a 3D effect. As another example, when the image displayed on the first lens can be seen by the right eye and the image displayed on the second lens can be seen by the left eye, the two lenses can instead be controlled to display the two different images in a time-shared manner to avoid crosstalk between the left and right eyes. Because of the persistence of human vision, flicker above roughly 24 Hz is generally imperceptible, so the overall refresh frequency of the images displayed across the two lenses may be, for example, 120 Hz, i.e. 120 alternations per second: the first lens displays the first picture a1 of the first video resource during 0 to 1/120 s, the second lens displays the first picture b1 of the second video resource during 1/120 to 2/120 s, and a1 and b1 together form the first 3D picture; the first lens displays the second picture a2 of the first video resource during 2/120 to 3/120 s, the second lens displays the second picture b2 of the second video resource during 3/120 to 4/120 s, and a2 and b2 together form the second 3D picture; and so on, achieving video playback with a 3D effect.
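The time-shared scheme described above can be sketched as follows, assuming the 120 Hz overall refresh from the example and a hypothetical show() call on each lens; real hardware would pace frames with a vertical-sync signal rather than sleep-based timing.

import time

FRAME_SLOT = 1 / 120  # overall refresh of 120 Hz: one lens lights per slot

def play_time_shared(first_lens, second_lens, first_frames, second_frames):
    """Alternate frames so each pair (a_i, b_i) forms one 3D picture.

    first_lens/second_lens are assumed to expose a show(frame) method;
    first_frames/second_frames are the decoded first and second video
    resources extracted from the 3D video resource.
    """
    for a_i, b_i in zip(first_frames, second_frames):
        first_lens.show(a_i)    # e.g. a1 during 0 to 1/120 s
        time.sleep(FRAME_SLOT)
        second_lens.show(b_i)   # e.g. b1 during 1/120 to 2/120 s
        time.sleep(FRAME_SLOT)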
According to the video playing method provided by the embodiments of the present application, when it is determined that a 3D playing instruction for the wearable device has been received, a 3D video resource is acquired, and the first lens and the second lens of the wearable device are controlled to play, respectively, the first video resource and the second video resource corresponding to the 3D video resource, realizing 3D-effect playback of the 3D video resource. By adopting this technical scheme, 3D video playback can easily be realized with a wearable device, enriching the playback functions of the wearable device.
In some embodiments, acquiring a 3D video resource when it is determined that the first instruction corresponding to the first operation is a 3D playing instruction includes: when that determination is made, sending a 3D video request to a preset terminal device, the request instructing the preset terminal device to send a 3D video resource, comprising a first video resource and a second video resource, to the wearable device; and receiving the 3D video resource sent by the preset terminal device. The advantage of this arrangement is that it reduces the storage space the 3D video resource occupies inside the wearable device; the storage space of a terminal device is generally larger and easier to expand, and can satisfy the larger data volumes that users' video resources require. For example, the 3D video request may instruct the preset terminal device to switch from currently sending a 2D video resource to sending the 3D video resource to the wearable device, or it may instruct the preset terminal device to send the 3D video resource directly. After receiving the 3D video request, the preset terminal device may prompt the user to select, on that device, the 3D video resource to be sent, such as the 3D video resource of a particular movie.
In some embodiments, the preset terminal device comprises a preset server or a preset mobile terminal. When it comprises a preset server, the method further includes, before determining that the first instruction corresponding to the first operation is a 3D playing instruction: logging in to an account corresponding to the preset server; and sending the 3D video request to the preset terminal device then means sending the 3D video request to the preset server over the Internet based on that account. The advantage of this arrangement is that it enables online playback of 3D video resources, giving the user access to a richer catalog; by logging in to an account, the user can manage the video resources on the preset server in a personalized way, quickly find the resources he or she wants to watch, and keep personal data secure. Optionally, the wearable device may access the preset server and receive the 3D video resource through a wireless local area network or a mobile data network (such as a 4G or 5G mobile data network). When the preset terminal device comprises a preset mobile terminal, the method further includes, before the determination: establishing a wireless communication connection with the preset mobile terminal; and sending the 3D video request to the preset terminal device then means sending it to the preset mobile terminal over that wireless communication connection. The advantage of this arrangement is flexible pairing between the wearable device and the preset mobile terminal, which makes fast transmission of 3D video resources convenient. The wireless communication connection can be realized with a short-range wireless technology such as Bluetooth, which spares the wearable device from accessing the Internet: data security is ensured, excessive data-traffic consumption is avoided where no wireless Internet access is available, and costs are saved.
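The request flow can be sketched as below. The message format and the transport object are hypothetical: transport stands for either path described above, an Internet session authenticated with the user's account (preset server) or a Bluetooth connection to the preset mobile terminal, and its send/receive interface is an assumption.

import json

def request_3d_video(transport, title=None):
    """Send a 3D video request and receive the 3D video resource (sketch)."""
    request = {"type": "3d_video_request"}
    if title is not None:
        request["title"] = title  # e.g. the 3D resource of a particular movie
    transport.send(json.dumps(request).encode("utf-8"))

    payload = json.loads(transport.receive().decode("utf-8"))
    # Per the method, the 3D video resource carries one stream per eye.
    return payload["first_video_resource"], payload["second_video_resource"]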
In some embodiments, while controlling the first lens of the wearable device to play the first video resource corresponding to the 3D video resource and controlling the second lens of the wearable device to play the second video resource corresponding to the 3D video resource, the method further includes: using a bone conduction component of the wearable device to realize surround-sound playback corresponding to the 3D video resource. The benefit of this arrangement is that while watching the 3D video image the user also experiences surround stereo sound, strengthening the sense of immersion and letting the wearable device deliver a more lifelike listening experience. Bone conduction is a mode of sound conduction in which sound is converted into mechanical vibrations of different frequencies and the sound waves are transmitted through the skull, the bony labyrinth, the lymph of the inner ear, the spiral organ, the auditory nerve and the auditory center. Compared with the classical conduction mode that generates sound waves with a vibrating diaphragm, bone conduction omits many sound-wave transmission steps, can restore sound clearly in a noisy environment, and does not disturb others, since no sound waves spread through the air. Because the bone conduction component is integrated in the wearable device, no in-ear earphone component is needed, freeing the user's ears and avoiding the discomfort of long-term wear.
In some embodiments, while controlling the first lens of the wearable device to play the first video resource corresponding to the 3D video resource and controlling the second lens of the wearable device to play the second video resource corresponding to the 3D video resource, the method further includes: detecting the motion state and/or facial expression of the user; and adjusting the playing manner and/or playing content of the first video resource and the second video resource according to the motion state and/or facial expression. The advantage of this arrangement is that when a wearable device is used to play 3D video, the display is not fixed like a television or cinema screen but moves with the user's head, so a change in the user's motion state may affect viewing; the playing manner and/or playing content can therefore be adjusted according to the detected motion state. For example, when the user changes from a sitting state to a walking state, the pictures played by the wearable device may shake, and the playing manner and/or content can be adjusted to reduce the vertigo the shaking causes. The user's motion state may be detected by sensors integrated in the wearable device, such as an acceleration sensor and a gyroscope, or detected by another device in communication connection with the wearable device (such as a smart band or a smartphone) and fed back to it; the embodiments of the present application do not limit this. In addition, while watching 3D video the user may be influenced by the video content and change facial expression (such as fear or excitement), or make some body movements (such as covering the mouth or dodging), so the playing manner and/or content can also be adjusted according to the facial expression or motion state to soften an excessive impact of the 3D video on the user. The playing manner includes, but is not limited to, the playback volume, brightness, picture refresh frequency and so on; adjusting the playing content includes, but is not limited to, skipping the current picture or pictures, applying special processing (such as adding a mosaic) to content in the playing picture, and so on. A physiological state of the user, such as the heart rate, may also be detected, and the playing manner and/or content of the first and second video resources adjusted according to at least one of the motion state, physiological state and facial expression.
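One possible shape of the adjustment policy is sketched below; every hook on the player object, as well as the thresholds and the chosen reactions, are assumptions for illustration, since the application only requires that the playing manner and/or content respond to the motion state, physiological state or facial expression.

def adjust_playback(player, motion_state, heart_rate, expression):
    """Adapt the playing manner/content to the viewer's state (illustrative)."""
    if motion_state == "walking":
        # Head movement while walking shakes the picture; steady it to
        # reduce vertigo (stabilization and refresh hooks are assumed).
        player.enable_stabilization()
        player.set_refresh_hz(60)
    if expression == "fear" or heart_rate > 100:
        # Soften overly intense content, e.g. mosaic the current picture
        # and lower the volume (both hooks assumed).
        player.apply_mosaic_to_current_picture()
        player.set_volume(player.get_volume() * 0.5)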
In some embodiments, while controlling the first lens of the wearable device to play the first video resource corresponding to the 3D video resource and controlling the second lens of the wearable device to play the second video resource corresponding to the 3D video resource, the method further includes: receiving a second operation for the wearable device; when a second instruction corresponding to the second operation is determined to be a 3D-playback termination instruction, stopping acquiring the 3D video resource and acquiring a 2D video resource; and controlling the first lens and the second lens to play the 2D video resource cooperatively, switching from 3D-effect playback to ordinary-effect playback. The advantage of this arrangement is that when the user no longer wants the 3D effect and prefers ordinary viewing, the switch from the 3D playing effect to the 2D playing effect can be made simply and quickly.
Fig. 2 is a schematic flowchart of another video playing method according to an embodiment of the present application, where the method includes the following steps:
step 201, when the wearable device is in a 2D video playing state, receiving a first user action acting on the wearable device.
Step 202, judging whether the first user action meets a preset 3D playing condition, if so, executing step 203; otherwise, the flow ends.
For example, the instruction contents corresponding to user actions may be preset. Suppose that holding up three fingers on the right hand (the index, middle and ring fingers) and shaking them left and right represents the 3D playing instruction, i.e. corresponds to the preset 3D playing condition; when the first user action is recognized as satisfying that condition, the user can be considered to want to switch from the 2D playing mode to the 3D playing mode.
And step 203, sending a 3D video request to a preset mobile terminal based on the wireless communication connection.
And step 204, receiving a 3D video resource sent by a preset mobile terminal.
Wherein the 3D video asset comprises a first video asset and a second video asset.
Step 205, controlling the first lens to play the first video resource and the second lens to play the second video resource, while using the bone conduction earphone to realize the surround-sound playback corresponding to the 3D video resource, achieving a stereoscopic 3D video playing effect.
For example, the first lens and the second lens may simultaneously play the left-eye image and the right-eye image corresponding to the same 3D picture to achieve the 3D playing effect. Fig. 3 is a schematic diagram of the 3D playing effect of smart glasses provided in an embodiment of the present application. As shown in fig. 3, the smart glasses include a first lens 301 and a second lens 302; the first lens 301 plays the left-eye image 303 corresponding to 3D picture A, and the second lens 302 plays the right-eye image 304 corresponding to 3D picture A, so that when the user's left eye sees the left-eye image 303 and the right eye sees the right-eye image 304, the 3D playing effect of 3D picture A forms in the user's mind.
And step 206, detecting the motion state and the facial expression of the user, and adjusting the playing mode and/or the playing content of the first video resource and the second video resource according to the motion state and the facial expression.
Step 207, detecting a second user action acting on the wearable device.
And step 208, when the second user action is determined to meet the preset 2D playing condition, stopping acquiring the 3D video resource and acquiring the 2D video resource.
For example, suppose that holding up two fingers on the right hand (the index and middle fingers) and shaking them left and right represents the 2D playing instruction, i.e. corresponds to the preset 2D playing condition; when the second user action is recognized as satisfying that condition, the user can be considered to want to switch back from the 3D playing mode to the 2D playing mode.
Step 209, controlling the first lens and the second lens to play the 2D video resource cooperatively, switching from 3D-effect playback to ordinary-effect playback.
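Putting steps 201 to 209 together, the mode switch can be sketched end to end as follows; the device object and all of its members are hypothetical, and the gesture labels reuse the examples above.

def run_playback_loop(device):
    """Sketch of steps 201 to 209: 2D playback, switch to 3D, switch back."""
    mode = "2d"
    while device.is_worn():                                    # assumed wear check
        action = device.detect_user_action()                   # steps 201 / 207
        if mode == "2d" and action == "three_fingers_shake":   # step 202
            left, right = device.request_3d_video()            # steps 203 / 204
            device.first_lens.play(left)                       # step 205
            device.second_lens.play(right)
            device.bone_conduction.play_surround()
            device.adapt_to_viewer()                           # step 206
            mode = "3d"
        elif mode == "3d" and action == "two_fingers_shake":   # step 208
            video_2d = device.stop_3d_and_fetch_2d()
            device.play_cooperatively(video_2d)                # step 209
            mode = "2d"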
With the video playing method provided by this embodiment of the application, a user watching video on the wearable device can easily switch between the 2D and 3D playing effects through gestures and similar actions, and during 3D viewing the wearable device can intelligently adjust the playing manner or playing content of the video resource according to the user's motion state and facial expression, improving the viewing experience.
Fig. 4 is a block diagram of a video playing apparatus provided in an embodiment of the present application, where the apparatus may be implemented by software and/or hardware, and is generally integrated in a wearable device, and may perform 3D video playing by executing a video playing method. As shown in fig. 4, the apparatus includes:
an operation receiving module 401, configured to receive a first operation for the wearable device;
a video resource obtaining module 402, configured to obtain a 3D video resource when it is determined that the first instruction corresponding to the first operation is a 3D play instruction;
the video playing control module 403 is configured to control the first lens of the wearable device to play the first video resource corresponding to the 3D video resource, and control the second lens of the wearable device to play the second video resource corresponding to the 3D video resource, so as to implement playing of a 3D effect of the 3D video resource.
The video playing apparatus provided by the embodiment of the present application acquires a 3D video resource upon determining that a 3D playing instruction for the wearable device has been received, and controls the first lens and the second lens of the wearable device to play, respectively, the first video resource and the second video resource corresponding to the 3D video resource, realizing 3D-effect playback of the 3D video resource. By adopting this technical scheme, 3D video playback can easily be realized with a wearable device, enriching the playback functions of the wearable device.
Optionally, when it is determined that the first instruction corresponding to the first operation is a 3D play instruction, acquiring a 3D video resource includes:
when determining that a first instruction corresponding to the first operation is a 3D playing instruction, sending a 3D video request to a preset terminal device, wherein the 3D video request is used for indicating the preset terminal device to send a 3D video resource to the wearable device; wherein the 3D video asset comprises a first video asset and a second video asset;
and receiving the 3D video resource sent by the preset terminal equipment.
Optionally, the preset terminal device includes a preset server or a preset mobile terminal;
when the preset terminal equipment comprises a preset server:
the device further comprises an account login module, a server and a server, wherein the account login module is used for logging in an account corresponding to the preset server before the first instruction corresponding to the first operation is determined to be a 3D playing instruction;
the sending of the 3D video request to the preset terminal device includes: sending a 3D video request to the preset server through the Internet based on the account;
when the preset terminal equipment comprises a preset mobile terminal:
the device also comprises a communication connection establishing module, which is used for establishing wireless communication connection with the preset mobile terminal before the first instruction corresponding to the first operation is determined to be a 3D playing instruction;
the sending of the 3D video request to the preset terminal device includes: and sending a 3D video request to the preset mobile terminal based on the wireless communication connection.
Optionally, the receiving a first operation for the wearable device includes:
receiving a first user action acting on the wearable device;
the determining that the first instruction corresponding to the first operation is a 3D play instruction includes:
and judging whether the first user action meets a preset 3D playing condition, and if so, determining that a first instruction corresponding to the first user action is a 3D playing instruction.
Optionally, the apparatus further comprises a sound playing control module, configured to use the bone conduction component of the wearable device to realize the surround-sound playback corresponding to the 3D video resource while the first lens of the wearable device plays the first video resource corresponding to the 3D video resource and the second lens plays the second video resource corresponding to the 3D video resource.
Optionally, the apparatus further comprises:
the detection module is used for detecting the motion state and/or the facial expression of a user while controlling a first lens of the wearable device to play a first video resource corresponding to the 3D video resource and controlling a second lens of the wearable device to play a second video resource corresponding to the 3D video resource;
and the playing adjustment module is used for adjusting the playing modes and/or the playing contents of the first video resource and the second video resource according to the motion state and/or the facial expression.
Optionally, the operation receiving module is further configured to: receiving a second operation aiming at the wearable device during the period of controlling a first lens of the wearable device to play a first video resource corresponding to the 3D video resource and controlling a second lens of the wearable device to play a second video resource corresponding to the 3D video resource;
the video resource acquisition module is further configured to: when the second instruction corresponding to the second operation is determined to be a 3D playing termination instruction, stopping acquiring the 3D video resource and acquiring the 2D video resource;
the video playing control module is further configured to: and controlling the first lens and the second lens to cooperatively play the 2D video resource so as to realize switching from 3D effect playing to common effect playing.
Embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a video playback method, the method including:
receiving a first operation for the wearable device;
when the first instruction corresponding to the first operation is determined to be a 3D playing instruction, acquiring a 3D video resource;
and controlling a first lens of the wearable device to play a first video resource corresponding to the 3D video resource, and controlling a second lens of the wearable device to play a second video resource corresponding to the 3D video resource, so as to realize the 3D effect playing of the 3D video resource.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include installation media such as CD-ROMs, floppy disks or tape devices; computer-system memory or random-access memory such as DRAM, DDR RAM, SRAM, EDO RAM or Rambus RAM; non-volatile memory such as flash memory or magnetic media (e.g. a hard disk or optical storage); and registers or other similar types of memory element. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different, second computer system connected to the first over a network (such as the Internet); the second computer system may then provide program instructions to the first computer for execution. The term "storage medium" may also cover two or more storage media residing in different locations, such as in different computer systems connected by a network. The storage medium may store program instructions (for example embodied as a computer program) executable by one or more processors.
Of course, the storage medium provided in the embodiments of the present application and containing computer-executable instructions is not limited to the video playing operation described above, and may also perform related operations in the video playing method provided in any embodiment of the present application.
The embodiment of the application provides wearable equipment, and the video playing device provided by the embodiment of the application can be integrated in the wearable equipment. Fig. 5 is a schematic structural diagram of a wearable device according to an embodiment of the present application. The wearable device 500 may include: the video playing system comprises a memory 501, a processor 502 and a computer program stored on the memory 501 and capable of being executed by the processor 502, wherein the processor 502 executes the computer program to realize the video playing method according to the embodiment of the present application.
The wearable device provided by the embodiment of the present application acquires a 3D video resource upon receiving a 3D playing instruction directed at it, and controls its first lens and second lens to play, respectively, the first video resource and the second video resource corresponding to the 3D video resource, realizing 3D-effect playback of the 3D video resource. By adopting this technical scheme, 3D video playback can easily be realized with a wearable device, enriching the playback functions of the wearable device.
The memory and processor listed in the above example are only some of the wearable device's components, and the wearable device may further include others. Fig. 6 is a block diagram of a wearable device according to an embodiment of the present disclosure, and fig. 7 is a schematic entity diagram of a wearable device according to an embodiment of the present disclosure. As shown in figs. 6 and 7, the wearable device may include: a memory 601, a processor (CPU) 602 (hereinafter referred to as CPU), a display component 603, a touch panel 604, a heart rate detection module 605, a distance sensor 606, a camera 607, a bone conduction speaker 608, a microphone 609 and a breathing lamp 610. These components communicate over one or more communication buses or signal lines 611 (hereinafter also referred to as internal transmission lines).
It should be understood that the illustrated wearable device is merely one example, and that the wearable device may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The wearable device for video playing provided in this embodiment is described in detail below, taking smart glasses as the example.
The memory 601 can be accessed by the CPU 602 and may include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices or other non-volatile solid-state storage devices.
The display component 603 can display image data and the control interface of the operating system. It is embedded in the lens frame of the smart glasses, inside which an internal transmission line 611 is arranged and connected to the display component 603.
The touch panel 604 is arranged on the outer side of at least one temple of the smart glasses to acquire touch data, and is connected to the CPU 602 through an internal transmission line 611. The touch panel 604 can detect the user's finger swipes and taps and transmits the detected data to the processor 602 for processing, which generates corresponding control commands, for example move-left, move-right, move-up and move-down instructions.
The display component 603 can display virtual image data transmitted by the processor 602, and that data can change in response to the user operations detected by the touch panel 604. Specifically: for picture switching, a move-left or move-right instruction switches to the previous or next virtual picture; while the display component 603 shows video-playback information, the move-left instruction rewinds the playing content and the move-right instruction fast-forwards it; while the display component 603 shows editable text content, the four instructions displace the cursor, i.e. the cursor position follows the user's touch operations on the touch pad; while the display component 603 shows a game picture, the four instructions control an object in the game, for instance the flight direction of the aircraft in a flying game; while the display component 603 shows video pictures of different channels, the four instructions switch channels, the move-up and move-down instructions possibly switching to preset channels (such as channels the user watches often); and while the display component 603 shows a still picture, the instructions switch pictures, move-left to the previous picture, move-right to the next picture, move-up to the previous picture set and move-down to the next picture set.
The touch panel 604 can also control the display switch of the display component 603: a long press on the touch area powers the display component 603 on to show the image interface, another long press powers it off, and while the display is on, sliding up or down on the touch panel 604 adjusts the brightness or resolution of the displayed image.
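The context-dependent command mapping described above lends itself to a small dispatch table; the following sketch is illustrative only, with assumed context names and command strings.

from typing import Optional

# Illustrative dispatch: the same swipe direction maps to a different
# command depending on what the display component 603 currently shows.
TOUCH_DISPATCH = {
    "video":    {"left": "rewind",         "right": "fast_forward"},
    "text":     {"left": "cursor_left",    "right": "cursor_right",
                 "up": "cursor_up",        "down": "cursor_down"},
    "channels": {"left": "prev_channel",   "right": "next_channel",
                 "up": "preset_channel_a", "down": "preset_channel_b"},
    "pictures": {"left": "prev_picture",   "right": "next_picture",
                 "up": "prev_picture_set", "down": "next_picture_set"},
}

def on_swipe(display_context: str, direction: str) -> Optional[str]:
    """Resolve a swipe on the touch panel 604 to a context-aware command."""
    return TOUCH_DISPATCH.get(display_context, {}).get(direction)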
The heart rate detection module 605 measures the user's heart rate data, the heart rate being the number of heartbeats per minute, and is arranged on the inner side of a temple. Specifically, the heart rate detection module 605 may use dry electrodes to acquire human electrocardiographic data by electric-pulse measurement and determine the heart rate from the amplitude peaks in those data; alternatively, it may measure the heart rate photoelectrically with a light-emitting and light-receiving element, in which case it is arranged at the bottom of the temple, at the earlobe of the auricle. After collecting heart rate data, the heart rate detection module 605 sends them to the processor 602, which processes them to obtain the wearer's current heart rate value. In one embodiment, after determining the user's heart rate value, the processor 602 can display it in real time on the display component 603; optionally, the processor 602 can trigger an alarm when the heart rate value is low (for example below 50) or high (for example above 100), and simultaneously send the heart rate value and/or the generated alarm information to the server through the communication module.
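A minimal sketch of the threshold logic just described follows; the 50 and 100 bounds come from the text, while the display and reporting hooks are assumed.

def handle_heart_rate(hr_bpm: int, display, communicator):
    """Show the wearer's heart rate and alarm outside the 50-100 bpm band."""
    display.show_heart_rate(hr_bpm)          # real-time display, per the text
    if hr_bpm < 50 or hr_bpm > 100:
        display.trigger_alarm()
        # Assumed reporting hook: forward the value/alarm to the server.
        communicator.send_to_server({"heart_rate": hr_bpm, "alarm": True})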
The distance sensor 606 may be arranged on the lens frame and senses the distance between the human face and the frame; it may be implemented on the infrared sensing principle. Specifically, the distance sensor 606 transmits the acquired distance data to the processor 602, and the processor 602 controls the brightness of the display component 603 according to those data. Illustratively, the processor 602 controls the display component 603 to be on when the distance detected by the distance sensor 606 is less than 5 centimeters, and controls it to be off when the detected distance indicates that the glasses are away from the face.
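The proximity rule might look as follows; the 5 cm threshold is from the text, while the display interface and the turn-off branch are assumptions.

def on_distance_reading(distance_cm: float, display):
    """Switch the display with face proximity (sketch of the stated rule)."""
    if distance_cm < 5.0:
        display.turn_on()
    else:
        display.turn_off()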
The breathing lamp 610 may be arranged at the edge of the lens frame; when the display component 603 turns its screen off, the breathing lamp 610 can, under the control of the processor 602, light up with a gradual brightening-and-dimming effect.
The camera 607 may be a front camera module arranged at the upper edge of the lens frame to collect image data in front of the user, a rear camera module to collect the user's eyeball information, or a combination of the two. Specifically, when the camera 607 collects a front image, it sends the collected image to the processor 602 for recognition and processing, and a corresponding trigger event is raised according to the recognition result. Illustratively, when the user wears the wearable device at home and a furniture item is recognized in the collected front image, the device queries whether a corresponding control event exists; if so, the control interface corresponding to that event is displayed in the display component 603, and the user controls the furniture item through the touch panel 604, the furniture item and the smart glasses being networked via Bluetooth or a wireless ad hoc network. When the user wears the wearable device outdoors, a target recognition mode can be activated. This mode may recognize specific people: the camera 607 sends collected images to the processor 602 for face recognition, and if a preset face is recognized, an audio announcement can be made through the speaker integrated in the smart glasses. The mode may also recognize different plants: for example, following a touch operation on the touch panel 604, the processor 602 records the current image collected by the camera 607 and sends it through the communication module to the server for recognition; the server identifies the plant in the collected image, feeds the related plant name back to the smart glasses, and the feedback data are displayed in the display component 603.
The camera 607 may also be configured to capture images of the user's eye, such as the eyeball, and generate different control instructions by recognizing eyeball rotation: rotating upward generates a move-up control instruction, downward a move-down instruction, leftward a move-left instruction, and rightward a move-right instruction. As with the touch panel, the display component 603 can display virtual image data transmitted by the processor 602, and that data changes with the control instructions generated from the eyeball movements detected by the camera 607. Specifically: for picture switching, a move-left or move-right control instruction switches to the previous or next virtual picture; during video playback, the move-left instruction rewinds the playing content and the move-right instruction fast-forwards it; in editable text content, the four control instructions displace the cursor; in a game picture, they control an object in the game, for example the flight direction of the aircraft in a flying game; across video pictures of different channels, they switch channels, with the move-up and move-down instructions possibly switching to preset channels (such as channels the user watches often); and on still pictures, they switch pictures and picture sets in the same way as the touch instructions described above.
A bone conduction speaker 608 is provided on the inner wall side of at least one temple and converts audio signals received from the processor 602 into vibration signals. The bone conduction speaker 608 transmits sound to the user's inner ear through the skull: the electrical audio signal is converted into a vibration signal that travels through the skull into the cochlea and is then perceived by the auditory nerve. Using the bone conduction speaker 608 as the sound-generating device reduces the thickness and weight of the hardware structure; it neither emits electromagnetic radiation nor is affected by it, and it offers the advantages of noise immunity, water resistance, and leaving the ears unobstructed.
A microphone 609, which may be located on the lower rim of the frame, collects external sounds (from the user or the environment) and transmits them to the processor 602 for processing. Illustratively, the microphone 609 collects the sound made by the user and the processor 602 performs voiceprint recognition on it; if the voiceprint is recognized as that of an authenticated user, subsequent voice control is accepted. Specifically, the user may speak, and the microphone 609 sends the collected voice to the processor 602 for recognition so that a corresponding control instruction is generated according to the recognition result, such as "power on", "power off", "display brightness increase" or "display brightness decrease", after which the processor 602 executes the corresponding control process according to the generated instruction.
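A minimal sketch of this voiceprint-gated voice control follows; the score threshold, matching helper and command table are illustrative assumptions, not specifics from this application.

# Hypothetical sketch: accept voice commands only from an authenticated voiceprint.

VOICE_COMMANDS = {
    "power on": "POWER_ON",
    "power off": "POWER_OFF",
    "display brightness increase": "BRIGHTNESS_UP",
    "display brightness decrease": "BRIGHTNESS_DOWN",
}

def voiceprint_matches(audio, enrolled_print, threshold=0.8):
    """Stand-in for voiceprint comparison; returns True for the device owner."""
    return audio.get("voiceprint_score", 0.0) >= threshold

def handle_audio(audio, enrolled_print):
    # Only an authenticated user's speech is accepted for control.
    if not voiceprint_matches(audio, enrolled_print):
        return None
    return VOICE_COMMANDS.get(audio.get("transcript", ""))

print(handle_audio({"voiceprint_score": 0.9, "transcript": "power on"}, None))
# POWER_ON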
The video playing device, the storage medium and the wearable device provided in the above embodiments can execute the video playing method provided in any embodiment of the present application, and have the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in the above embodiments, reference may be made to the video playing method provided in any embodiment of the present application.
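Purely as an illustrative aid to the description above, the following is a hedged sketch of the core flow that the claims below recite: a user action is matched against a preset 3D playing condition; on a match, the left- and right-eye resources of the 3D video are played on the first and second lenses, while a termination instruction switches both lenses to a common 2D resource. All names and the dict-based device stand-in are assumptions for illustration only.

def is_3d_play_action(action, condition):
    # A preset condition covers gesture type, amplitude, change speed and trend.
    return all(action.get(k) == v for k, v in condition.items())

def play(wearable, action, condition, fetch_3d, fetch_2d):
    if is_3d_play_action(action, condition):
        left, right = fetch_3d()          # e.g. requested from a server or phone
        wearable["first_lens"] = left     # first lens plays the left-eye stream
        wearable["second_lens"] = right   # second lens plays the right-eye stream
    else:
        video = fetch_2d()
        wearable["first_lens"] = wearable["second_lens"] = video  # ordinary 2D

condition = {"gesture": "double_tap", "amplitude": "large"}
glasses = {}
play(glasses, {"gesture": "double_tap", "amplitude": "large"}, condition,
     lambda: ("left_stream", "right_stream"), lambda: "2d_stream")
print(glasses)  # {'first_lens': 'left_stream', 'second_lens': 'right_stream'}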
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (8)

1. A video playing method, applied to a wearable device, comprising the following steps:
receiving a first operation for the wearable device;
when a first instruction corresponding to the first operation is determined to be a 3D playing instruction, acquiring a 3D video resource;
controlling a first lens of the wearable device to play a first video resource corresponding to the 3D video resource, and controlling a second lens of the wearable device to play a second video resource corresponding to the 3D video resource, so as to realize 3D effect playing of the 3D video resource;
receiving a second operation for the wearable device;
when a second instruction corresponding to the second operation is determined to be a 3D playing termination instruction, stopping acquiring the 3D video resource and acquiring a 2D video resource;
controlling the first lens and the second lens to cooperatively play the 2D video resource, so as to switch from 3D effect playing to ordinary effect playing;
wherein the receiving a first operation for the wearable device comprises:
receiving a first user action acting on the wearable device;
the determining that the first instruction corresponding to the first operation is a 3D playing instruction comprises:
judging whether the first user action meets a preset 3D playing condition, and if so, determining that a first instruction corresponding to the first user action is a 3D playing instruction; the preset 3D playing condition is a preset user action recognition condition corresponding to the 3D playing instruction, and comprises a preset gesture action type, a preset action amplitude, a preset action change speed and a preset action change trend;
when controlling the first lens of the wearable device to play the first video resource corresponding to the 3D video resource and controlling the second lens of the wearable device to play the second video resource corresponding to the 3D video resource, the method further comprises:
detecting a motion state, a physiological state, and/or a facial expression of a user;
adjusting the playing modes and/or the playing contents of the first video resource and the second video resource according to the motion state, the physiological state and/or the facial expression, so as to eliminate the influence on viewing of the video caused by changes in the motion state of the user and to eliminate the influence of the video contents on the user; the playing mode at least comprises one or more of playing volume, playing brightness and refreshing frequency of a playing picture; the playing content at least comprises skipping one or more current pictures and performing special processing on content in a played picture.
2. The method according to claim 1, wherein when it is determined that the first instruction corresponding to the first operation is a 3D playing instruction, the acquiring a 3D video resource comprises:
when it is determined that a first instruction corresponding to the first operation is a 3D playing instruction, sending a 3D video request to a preset terminal device, where the 3D video request is used for instructing the preset terminal device to send a 3D video resource to the wearable device, where the 3D video resource includes a first video resource and a second video resource;
and receiving the 3D video resource sent by the preset terminal device.
3. The method according to claim 2, wherein the preset terminal device comprises a preset server or a preset mobile terminal;
when the preset terminal device comprises a preset server:
before determining that the first instruction corresponding to the first operation is a 3D playing instruction, the method further comprises: logging in to an account corresponding to the preset server;
the sending of the 3D video request to the preset terminal device includes: sending a 3D video request to the preset server through the Internet based on the account;
when the preset terminal device comprises a preset mobile terminal:
before determining that the first instruction corresponding to the first operation is a 3D playing instruction, the method further comprises: establishing a wireless communication connection with the preset mobile terminal;
the sending of the 3D video request to the preset terminal device comprises: sending a 3D video request to the preset mobile terminal based on the wireless communication connection.
4. The method of claim 1, wherein, when controlling the first lens of the wearable device to play the first video resource corresponding to the 3D video resource and controlling the second lens of the wearable device to play the second video resource corresponding to the 3D video resource, the method further comprises:
and realizing the surround sound playing corresponding to the 3D video resource by utilizing the bone conduction component of the wearable device.
5. The method of claim 1, wherein the wearable device comprises smart glasses.
6. A video playing device, applied to a wearable device, comprising:
an operation receiving module for receiving a first operation for the wearable device;
the video resource acquisition module is used for acquiring a 3D video resource when a first instruction corresponding to the first operation is determined to be a 3D playing instruction;
the video playing control module is used for controlling a first lens of the wearable device to play a first video resource corresponding to the 3D video resource and controlling a second lens of the wearable device to play a second video resource corresponding to the 3D video resource so as to realize 3D effect playing of the 3D video resource;
the operation receiving module is further configured to: receive a second operation for the wearable device while controlling the first lens of the wearable device to play the first video resource corresponding to the 3D video resource and controlling the second lens of the wearable device to play the second video resource corresponding to the 3D video resource;
the video resource acquisition module is further configured to: when a second instruction corresponding to the second operation is determined to be a 3D playing termination instruction, stop acquiring the 3D video resource and acquire a 2D video resource;
the video playing control module is further configured to: control the first lens and the second lens to cooperatively play the 2D video resource, so as to switch from 3D effect playing to ordinary effect playing;
wherein the receiving a first operation for the wearable device comprises:
receiving a first user action acting on the wearable device;
the determining that the first instruction corresponding to the first operation is a 3D playing instruction comprises:
judging whether the first user action meets a preset 3D playing condition, and if so, determining that a first instruction corresponding to the first user action is a 3D playing instruction; the preset 3D playing condition is a preset user action recognition condition corresponding to the 3D playing instruction, and comprises a preset gesture action type, a preset action amplitude, a preset action change speed and a preset action change trend;
the detection module is used for detecting the motion state, the physiological state and/or the facial expression of a user while controlling a first lens of the wearable device to play a first video resource corresponding to the 3D video resource and controlling a second lens of the wearable device to play a second video resource corresponding to the 3D video resource;
the playing adjustment module is used for adjusting the playing modes and/or the playing contents of the first video resource and the second video resource according to the motion state, the physiological state and/or the facial expression so as to eliminate the influence on the watching of the video caused by the change of the motion state of the user and eliminate the influence of the video contents on the user; the playing mode at least comprises one or more of playing volume, playing brightness and refreshing frequency of a playing picture; playing the content at least includes skipping a current one or more pictures, performing special processing on the content in the played picture.
7. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the video playing method according to any one of claims 1 to 5.
8. A wearable device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the video playing method according to any one of claims 1 to 5 when executing the computer program.
CN201811000831.3A 2018-08-30 2018-08-30 Video playing method and device, storage medium and wearable device Active CN109068126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811000831.3A CN109068126B (en) 2018-08-30 2018-08-30 Video playing method and device, storage medium and wearable device

Publications (2)

Publication Number Publication Date
CN109068126A (en) 2018-12-21
CN109068126B (en) 2021-03-02

Family

ID=64757819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811000831.3A Active CN109068126B (en) 2018-08-30 2018-08-30 Video playing method and device, storage medium and wearable device

Country Status (1)

Country Link
CN (1) CN109068126B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110035173A (en) * 2019-02-28 2019-07-19 努比亚技术有限公司 A kind of terminal control method, terminal and computer readable storage medium
CN109839745A (en) * 2019-03-04 2019-06-04 丹阳市精通眼镜技术创新服务中心有限公司 A kind of floristics identification glasses and preparation method thereof
CN111225219B (en) * 2019-11-26 2021-12-14 深圳英伦科技股份有限公司 Light field three-dimensional immersion experience information transmission method and system based on 5G network
CN113923524A (en) * 2021-08-24 2022-01-11 深圳市科伦特电子有限公司 Play mode switching method and device, playing equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120023218A (en) * 2010-09-01 2012-03-13 엠텍비젼 주식회사 Display device for 2d and 3d, 3d auxiliary device and optional display method
CN104335574A (en) * 2013-02-22 2015-02-04 索尼公司 Head-mounted display
CN104717477A (en) * 2014-12-18 2015-06-17 赵雪波 Online eye screen
CN106686367A (en) * 2015-11-05 2017-05-17 丰唐物联技术(深圳)有限公司 Display mode switching method and display control system of virtual reality (VR) display

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107241647A (en) * 2016-03-28 2017-10-10 中兴通讯股份有限公司 A kind of video playing control method, device and set top box

Similar Documents

Publication Publication Date Title
CN109068126B (en) Video playing method and device, storage medium and wearable device
CN109120790B (en) Call control method and device, storage medium and wearable device
US10175753B2 (en) Second screen devices utilizing data from ear worn device system and method
US20170105622A1 (en) Monitoring pulse transmissions using radar
US20180123813A1 (en) Augmented Reality Conferencing System and Method
CN109032384B (en) Music playing control method and device, storage medium and wearable device
CN109259724B (en) Eye monitoring method and device, storage medium and wearable device
US20180124497A1 (en) Augmented Reality Sharing for Wearable Devices
CN109254659A (en) Control method, device, storage medium and the wearable device of wearable device
CN109088815A (en) Message prompt method, device, storage medium, mobile terminal and wearable device
KR20190004088A (en) Virtual Reality Education System and Method based on Bio Sensors
CN109224432B (en) Entertainment application control method and device, storage medium and wearable device
CN109040462A (en) Stroke reminding method, apparatus, storage medium and wearable device
US9276541B1 (en) Event-based presentation and processing of content
CN109241900B (en) Wearable device control method and device, storage medium and wearable device
JPWO2018155026A1 (en) Information processing apparatus, information processing method, and program
CN109061903B (en) Data display method and device, intelligent glasses and storage medium
CN109358744A (en) Information sharing method, device, storage medium and wearable device
CN109119057A (en) Musical composition method, apparatus and storage medium and wearable device
US20240095948A1 (en) Self-tracked controller
CN109117819B (en) Target object identification method and device, storage medium and wearable device
CN109240498B (en) Interaction method and device, wearable device and storage medium
CN109144263A (en) Social householder method, device, storage medium and wearable device
CN212694166U (en) Head-mounted display equipment
CN109361727B (en) Information sharing method and device, storage medium and wearable device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant