WO2015078387A1 - Head-mounted device control method, apparatus, and head-mounted device - Google Patents

Head-mounted device control method, apparatus, and head-mounted device

Info

Publication number
WO2015078387A1
WO2015078387A1 (application no. PCT/CN2014/092381)
Authority
WO
WIPO (PCT)
Prior art keywords
wearer
seat
data
mounted device
head mounted
Prior art date
Application number
PCT/CN2014/092381
Other languages
English (en)
French (fr)
Inventor
李国庆
常新苗
Original Assignee
华为终端有限公司 (Huawei Device Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为终端有限公司 (Huawei Device Co., Ltd.)
Priority to EP14865292.8A (granted as EP3035651B1)
Priority to US15/023,526 (granted as US9940893B2)
Publication of WO2015078387A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00: Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01: Head-up displays
    • G02B 27/017: Head mounted
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012: Head tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/593: Recognising seat occupancy
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00: Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01: Head-up displays
    • G02B 27/0101: Head-up displays characterised by optical features
    • G02B 2027/014: Head-up displays characterised by optical features comprising information/image processing systems
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00: Aspects of display data processing
    • G09G 2340/14: Solving problems related to the presentation of information to be displayed

Definitions

  • the present invention relates to the field of data processing technologies, and more particularly to a head mounted device control method, apparatus, and head mounted device.
  • A head-mounted display (HMD) refers to an electronic device that is worn on the user's head and has its own CPU, storage unit, communication module, and near-eye display.
  • the head mounted device needs to be worn on the user's head during use.
  • Head-mounted devices typically have network access capabilities to send and receive data, providing the wearer with a variety of services, such as electronic newspapers, maps, incoming call alerts, email alerts, short message alerts, and social network message updates. These services are displayed on the near-eye display of the device; that is, the head-mounted device provides a screen display service to the wearer using the near-eye display.
  • an object of embodiments of the present invention is to provide a head mounted device control method, apparatus, and head mounted device, to solve the above problems.
  • the embodiment of the present invention provides the following technical solutions:
  • a headset device control method includes:
  • the context data including at least one of motion speed data, in-vehicle wireless network signal strength data, user schedule data, and ambient noise intensity data;
  • the preset service is disabled, and the preset service includes a screen display service of the near-eye display.
  • the method further includes: enabling or maintaining the preset service if it is determined that the seat is not the driver's seat, or if it is determined that the wearer is not located in the traveling vehicle.
  • the method further includes:
  • Determining whether the wearer is located in the traveling vehicle by using the context data includes: determining whether the situation evaluation value is greater than or equal to a threshold value;
  • the context evaluation function is denoted as F(x, y, z, v), and includes at least one of a signal strength evaluation function f1(x), a calendar evaluation function f2(y), an environmental noise evaluation function f3(z), and a motion speed evaluation function f4(v);
  • the x represents signal strength data
  • the y represents user calendar data
  • the z represents environmental noise intensity data
  • said v represents motion speed data;
  • the substituting the context data into a context evaluation function to obtain the context evaluation value includes: substituting x into f1(x) to obtain a signal strength evaluation value, substituting y into f2(y) to obtain a calendar evaluation value, substituting z into f3(z) to obtain an environmental noise intensity evaluation value, and substituting v into f4(v) to obtain a motion speed evaluation value (at least one of the foregoing).
  • f1(x) = α1·x, where α1 represents a signal strength weight, α1 > 0;
  • the α2 represents a schedule weight, and α2 > 0;
  • the y includes a set of calendar events at the time of collecting data, and the Ω represents a preset specific event set;
  • the v 0 represents a speed threshold
  • the β1 represents a first motion speed weight
  • the β2 represents a second motion speed weight
  • the t1 represents a first speed influence minimum value
  • the t2 represents a second speed influence minimum value, where β2 ≥ β1 > 0, and t2 ≥ t1 > 0.
  • the acquiring an environment image includes: setting a photographing parameter and performing photographing to obtain the environment image; the photographing parameter is determined according to the motion speed data, or The photographing parameters are preset standard photographing parameters.
  • Using the collected environment image to determine whether the seat of the wearer is a driver's seat includes:
  • the photographing parameters include an exposure time, the reciprocal F of the ratio of the aperture diameter to the focal length of the lens, a sensitivity, and a selected focus point.
  • the method further includes: after detecting the connection to the in-vehicle wireless network, sending device information of the wearable device connected to the head mounted device to the in-vehicle system to which the in-vehicle wireless network belongs, so that the in-vehicle system searches according to the acquired device information and establishes a connection with the searched wearable device; and, when detecting disconnection from the in-vehicle wireless network, searching for a wearable device that has established a connection with the head mounted device and re-establishing a connection with the searched wearable device.
  • the preset service further includes: at least one of a manual input service and a projection display service.
  • the method further includes: if it is determined that the seat is a driver's seat, converting the received information from the preset emergency contact into voice information playing.
  • the method further includes: if the seat is determined to be a driver's seat, pushing the screen display service onto a display screen other than that of the head mounted device.
  • a head mounted device control apparatus includes:
  • a context data collecting unit configured to collect context data, where the context data includes at least one of motion speed data, in-vehicle wireless network signal strength data, user schedule data, and ambient noise intensity data;
  • a first determining unit configured to determine, by using the context data, whether a wearer of the head mounted device is located in a traveling vehicle
  • An image acquisition control unit configured to control an image collection device in the head mounted device to collect an environment image when the wearer is located in a traveling vehicle;
  • a second determining unit configured to use the collected environment image to determine whether the seat where the wearer is located is a driver's seat
  • the service management unit is configured to disable the preset service when the seat of the wearer is the driver's seat, and the preset service includes a screen display service of the near-eye display.
  • the method further includes:
  • a first connecting unit configured to send, after detecting the connection to the in-vehicle wireless network, device information of the wearable device connected to the head mounted device to the in-vehicle system to which the in-vehicle wireless network belongs, so that The in-vehicle system performs a search according to the obtained device information, and establishes a connection with the searched wearable device;
  • a second connecting unit configured to search for a wearable device that has established a connection with the head mounted device when detecting disconnection from the in-vehicle wireless network, and re-establish a connection with the searched wearable device.
  • the method further includes: a converting unit, configured to: when determining that the seat is a driver's seat, convert the received information from the preset emergency contact into Voice information playback.
  • the method further includes: a pushing unit, configured to: when the seat is the driver's seat, push the screen display service onto a display screen outside the head mounted device.
  • a head mounted device includes an image capture device, a near-eye display, and the head mounted device control apparatus according to any one of the second aspect or its possible implementations, where the head mounted device control apparatus is connected to the image capture device and to the near-eye display, respectively.
  • It is determined, according to the collected situation data, whether the wearer is located in a traveling vehicle. If so, an environment image is collected to further determine whether the wearer's seat is the driver's seat. If the seat is determined to be the driver's seat, the wearer is considered to be driving, and the screen display service of the head-mounted device is disabled to reduce distraction of the wearer's attention and improve driving safety.
  • FIG. 1 is a flowchart of a method for controlling a head mounted device according to an embodiment of the present invention
  • FIG. 2 is another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention
  • FIG. 3 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present disclosure
  • FIG. 4 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 5 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of selecting a focus point according to an embodiment of the present invention.
  • FIG. 7 is another schematic diagram of selecting a focus point according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a head-mounted device control apparatus according to an embodiment of the present invention.
  • FIG. 9 is another schematic structural diagram of a head-mounted device control apparatus according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention.
  • FIG. 11 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 12 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 13 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 14 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 15 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 16 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 17 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 18 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 19 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 20 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 21 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 22 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 23 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present disclosure.
  • FIG. 24 is still another flowchart of a method for controlling a head mounted device according to an embodiment of the present invention.
  • FIG. 25 is still another schematic structural diagram of a head-mounted device control apparatus according to an embodiment of the present invention.
  • FIG. 26 is still another schematic structural diagram of a head-mounted device control apparatus according to an embodiment of the present invention.
  • FIG. 27 is a schematic structural diagram of a head mounted device according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a method for controlling a head mounted device according to the present invention, which may include at least the following steps:
  • the above context data may include at least one of the motion speed data v, the in-vehicle wireless network signal strength data x, the user schedule data y, and the environmental noise intensity data z.
  • the image capture device in the head mounted device is controlled to collect an environment image
  • the above image acquisition device is generally a camera. That is, when it is determined that the wearer is located in the traveling vehicle, the camera will be used to take a picture.
  • How the environment image is collected is described in more detail below.
  • the preset service may include a screen display service of the near-eye display.
  • A driver sits in the driver's seat while driving. Therefore, if it is determined that the wearer is located in a traveling vehicle and is also located in the driver's seat, it can be concluded that the wearer is driving. In this case, the preset service should be disabled.
  • It is determined, according to the collected situation data, whether the wearer is located in a traveling vehicle. If so, an environment image is collected to further determine whether the wearer's seat is the driver's seat. If the seat is determined to be the driver's seat, the wearer is considered to be driving, and the screen display service of the head-mounted device is disabled to reduce distraction of the wearer's attention and improve driving safety. It should be noted that by disabling the screen display service of the near-eye display, all services requiring the use of the HMD's near-eye display can be prohibited.
  • the HMD has a touchpad and buttons, and the wearer can interact with it by touching the touchpad or by pressing and toggling the buttons. Touching, pressing, and toggling may also distract the wearer.
  • the preset service in all the above embodiments may further include a manual input service.
  • the manual input service can include touchpad input and button input.
  • control method in all the above embodiments may further include:
  • the received information from the preset emergency contact is converted into voice information for playback.
  • An emergency contact list can be maintained in the HMD; the contacts in the list are the preset emergency contacts.
  • the above information may include at least one of a mail header and a short message.
  • the email title and short message can be converted into a voice broadcast to the wearer.
  • the control method in all of the above embodiments may further include: pushing the screen display service onto a display screen other than that of the head mounted device when determining that the wearer is in a driving state, for example, onto the display of the in-vehicle system or onto a terminal of a fellow passenger.
  • the preset service is enabled or maintained.
  • the wearer may not be located in a traveling vehicle in two situations: the wearer is outside the vehicle, or the wearer is in a stationary vehicle (for example, the vehicle has not started, or the vehicle has stopped at a red light). In both cases, the wearer is not driving, and there is naturally no need to disable the preset service.
  • the situation may also be as follows: the wearer is in the vehicle, but the wearer is a passenger rather than the driver. In this case, the wearer is not driving, and the preset service likewise does not need to be disabled.
  • the method in all the foregoing embodiments may further include the following steps:
  • Whether the wearer is located in the traveling vehicle can be judged based on the situation evaluation value, for example, according to whether the evaluation value is greater than or equal to the threshold value.
  • step S2 may further include: S21: determining whether the situation evaluation value is greater than or equal to a threshold.
  • the situation evaluation value is greater than or equal to the threshold value, it is determined that the wearer is located in the traveling vehicle; and if the situation evaluation value is less than the threshold value, it is determined that the wearer is not located in the traveling vehicle.
  • the threshold value can be set to an initial value of 2 or 3. Feedback can then be collected from users for different thresholds to verify the correctness of the decisions made with each threshold, and an optimal threshold finally selected.
  • control method in all the foregoing embodiments may further include the following steps:
  • when a restart condition is met, the image acquisition device in the head mounted device is triggered again to collect the environment image and perform the subsequent determination operation.
  • the restart condition may include a change in the situation in which the wearer is located, for example, at least one of: the situation changing from being in a traveling vehicle to not being in a traveling vehicle (e.g., the wearer leaves the vehicle, or the vehicle changes from traveling to stopped), and the situation changing from not being in a traveling vehicle to being in a traveling vehicle (e.g., the wearer enters the vehicle, or the vehicle changes from stopped to traveling).
  • the situation evaluation value calculated currently (F1) is compared with the situation evaluation value calculated last time (F0) to determine whether the restart condition is met: when F0 is less than the threshold value and F1 is greater than or equal to the threshold value, or when F0 is greater than or equal to the threshold value and F1 is less than the threshold value, it is determined that the wearer's situation has changed.
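As an illustrative sketch (not part of the patent text), the restart condition above reduces to a threshold-crossing check between the previous evaluation value F0 and the current one F1; the function name and types below are assumptions:

```python
def situation_changed(f0: float, f1: float, threshold: float) -> bool:
    """Restart condition: the wearer's situation is considered changed when
    the situation evaluation value crosses the threshold between the
    previous computation (f0) and the current one (f1)."""
    was_in_vehicle = f0 >= threshold
    is_in_vehicle = f1 >= threshold
    return was_in_vehicle != is_in_vehicle
```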
  • the flow of the above control method can also be as shown in FIGS. 4 and 5.
  • step S7 may further include: substituting the context data into the context evaluation function to obtain a context evaluation value.
  • the context evaluation function may include at least one of a signal strength evaluation function f1(x), a calendar evaluation function f2(y), an environmental noise evaluation function f3(z), and a motion speed evaluation function f4(v).
  • x represents signal strength data
  • y represents user schedule data
  • z represents environmental noise intensity data
  • v represents motion speed data.
  • the situation evaluation function can be written as F(x, y, z, v).
  • the above “substituting the context data into the context evaluation function to obtain the context evaluation value” may include:
  • the situation evaluation value is equal to the sum of the signal strength evaluation value, the calendar evaluation value, the environmental noise intensity evaluation value, and the motion speed evaluation value.
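A minimal sketch of this combination step and of the threshold decision described earlier (illustrative only; the function names are assumptions):

```python
def situation_evaluation(signal_val: float, calendar_val: float,
                         noise_val: float, speed_val: float) -> float:
    """F(x, y, z, v): the sum of the four component evaluation values."""
    return signal_val + calendar_val + noise_val + speed_val

def in_traveling_vehicle(f_value: float, threshold: float) -> bool:
    """The wearer is judged to be in a traveling vehicle when F >= threshold."""
    return f_value >= threshold
```

With the initial threshold of 2 or 3 suggested in the text, a value such as F = 2.6 would be judged as "in a traveling vehicle" for threshold 2 but not for threshold 3.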
  • x can be acquired by an in-vehicle wireless network connection module (such as a WiFi module) or an external device (such as a mobile phone) in the HMD.
  • x = 90 + P
  • P represents the actual signal strength
  • the value range of P is [-90 dBm, 0 dBm]
  • α1 = 1/90.
  • the weakest signal strength that can be detected when moving away from the vehicle is -90 dBm
  • the signal strength of the user in the vehicle is 0 dBm
  • the actual signal strength range that can be detected is (-90 dBm, 0 dBm)
  • x = 90 + P
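Putting the pieces above together as an illustrative sketch, with α1 = 1/90 and x = 90 + P (the function name is an assumption):

```python
ALPHA1 = 1.0 / 90.0  # signal strength weight alpha1 from the text

def f1_signal_strength(p_dbm: float) -> float:
    """f1(x) = alpha1 * x with x = 90 + P, P in [-90 dBm, 0 dBm].
    Yields 0.0 when no in-vehicle signal is detectable and 1.0 at the
    strongest signal."""
    x = 90.0 + p_dbm
    return ALPHA1 * x
```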
  • f2(y) can have a variety of expression formulas, for example,
  • α2 represents the schedule weight, α2 > 0; y includes the set of calendar events at the time of collecting data, and Ω represents a preset specific event set.
  • y can be provided by the calendar module in the HMD.
  • the set of calendar events at the time of collecting data is specifically the set of calendar events falling within a sampling window; the starting moment of the sampling window is the data collection time, and the duration of the sampling window may be, for example, 2 hours, 1 hour, 40 minutes, or 30 minutes.
  • the preset specific events may include at least one of a meeting, office work, and a discussion. The events contained in the preset specific event set can be customized by the user.
  • α2 = 1 can be taken.
  • α2 can also be set by the user, which is not described here.
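The expression for f2(y) is not reproduced in the text above; a plausible form consistent with the stated parameters (the schedule weight α2 and the preset event set Ω) is an indicator that yields α2 when any calendar event in the sampling window belongs to Ω. The sketch below, including the example event set and window length, is an assumption for illustration:

```python
from datetime import datetime, timedelta

ALPHA2 = 1.0  # schedule weight; the text suggests alpha2 = 1 as a default
OMEGA = {"meeting", "office work", "discussion"}  # hypothetical preset set

def events_in_window(calendar, t0, window=timedelta(hours=1)):
    """The set of calendar events whose start time falls within the
    sampling window [t0, t0 + window), t0 being the data collection time."""
    return {name for name, start in calendar if t0 <= start < t0 + window}

def f2_calendar(y: set) -> float:
    """Assumed indicator form of f2(y): alpha2 if the sampled event set
    intersects Omega, else 0. The patent names only the parameters."""
    return ALPHA2 if y & OMEGA else 0.0
```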
  • α3 represents the ambient noise intensity weight
  • z represents the ambient noise intensity
  • Ambient noise can be collected by a microphone in the HMD or an external dedicated sound collection device.
  • the manner in which the ambient noise intensity data is collected may include:
  • Step 1: performing spectrum analysis on the collected sound to obtain spectral components
  • Step 2: searching the spectral components obtained by the spectrum analysis for ambient noise spectral components
  • Step 3: acquiring the sound intensity of the searched ambient noise spectral components.
  • the above ambient noise includes at least one of brake noise, engine noise, and road noise.
  • the ambient noise spectral component may comprise at least one of a brake noise spectral component, an engine noise spectral component, and a road noise spectral component.
  • typical brake noise, engine noise, and road noise can be pre-acquired using a professional sound intensity or sound pressure acquisition device, and spectral analysis is performed to record their spectral characteristics.
  • In step 1, after the sound is collected, spectrum analysis is performed to obtain the spectral characteristics of each spectral component. By comparing spectral characteristics, it can be determined whether a brake noise spectral component, an engine noise spectral component, or a road noise spectral component is included.
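The spectrum analysis in step 1 can be illustrated with a discrete Fourier transform. The sketch below (NumPy assumed, names illustrative) finds the dominant frequency of a sound frame, a building block that could then be compared against pre-recorded brake, engine, and road noise signatures:

```python
import numpy as np

def dominant_frequency(frame, sample_rate):
    """Return the strongest non-DC frequency component (in Hz) of a mono
    sound frame, for matching against recorded spectral characteristics."""
    magnitudes = np.abs(np.fft.rfft(frame))
    magnitudes[0] = 0.0  # ignore the DC component
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float(freqs[int(np.argmax(magnitudes))])
```

For example, for a 1-second 50 Hz tone sampled at 1000 Hz, the function returns 50.0.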
  • the above “acquiring the sound intensity of the searched environmental noise spectrum component” may include:
  • the maximum among the sound intensity of the brake noise spectral component, the sound intensity of the engine noise spectral component, and the sound intensity of the road noise spectral component is taken as the sound intensity of the ambient noise spectrum.
  • the acquired sound is spectrally analyzed to obtain five spectral components.
  • the sound intensity of the engine noise spectral component and that of the road noise spectral component are calculated according to the spectrum analysis result; how to calculate them is prior art and is not described here.
  • the maximum value is selected from the sound intensity of the engine noise spectral component and the sound intensity of the road noise spectral component as the sound intensity of the environmental noise spectrum.
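An illustrative sketch of this maximum-selection variant (the function name and dictionary representation are assumptions):

```python
def ambient_noise_intensity(component_intensities: dict) -> float:
    """Take the maximum among the sound intensities of the searched
    components (e.g. brake, engine, and road noise) as the ambient noise
    intensity. Returns 0.0 when no component was found."""
    if not component_intensities:
        return 0.0
    return max(component_intensities.values())
```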
  • Alternatively, the above “acquiring the sound intensity of the searched ambient noise spectral components” may include a weighted combination, where:
  • A is the sound intensity weight of the brake noise spectral component
  • B is the sound intensity weight of the engine noise spectral component
  • C is the sound intensity weight of the road noise spectral component.
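The weighted formula itself is elided above; a natural reading, given the weights A, B, and C, is a linear combination of the three component intensities. The linear form and the default weight values below are assumptions for illustration:

```python
def ambient_noise_intensity_weighted(brake: float, engine: float,
                                     road: float, a: float = 0.2,
                                     b: float = 0.4, c: float = 0.4) -> float:
    """Assumed weighted combination z = A*brake + B*engine + C*road; the
    patent names the weights but not their values or the exact formula."""
    return a * brake + b * engine + c * road
```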
  • f4(v) can have a variety of expression formulas, for example,
  • v0 denotes a speed threshold
  • β1 denotes a first motion speed weight
  • β2 denotes a second motion speed weight
  • t1 denotes a first speed influence minimum value
  • t2 denotes a second speed influence minimum value
  • v can be calculated by a GPS module or an acceleration sensor in the HMD, or provided directly by the in-vehicle system.
  • v 0 is a speed threshold above which the faster the speed, the more likely the user is to be in a moving vehicle.
  • v0 = 30 km/h
  • β1 = 1/90
  • t1 = 0.01
  • β2 = 1/60
  • t2 = 0.1.
  • v has a value range of [0, 120] km/h
  • f4(v) has a value range of [0.01, 2.1].
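The expression for f4(v) is elided above, but a piecewise-linear form fits every stated constraint: f4(v) = β1·v + t1 for v < v0 and f4(v) = β2·v + t2 for v ≥ v0. With β1 = 1/90, t1 = 0.01, β2 = 1/60, t2 = 0.1, and v in [0, 120], this reproduces the stated range [0.01, 2.1]. The sketch below is this assumed form, not the patent's verbatim formula:

```python
V0 = 30.0                      # speed threshold, km/h
BETA1, T1 = 1.0 / 90.0, 0.01   # below-threshold weight and minimum
BETA2, T2 = 1.0 / 60.0, 0.1    # above-threshold weight and minimum

def f4_motion_speed(v: float) -> float:
    """Assumed piecewise-linear motion speed evaluation: speeds above v0
    contribute more strongly to the situation evaluation value."""
    if v < V0:
        return BETA1 * v + T1
    return BETA2 * v + T2
```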
  • The collection of the environment image is described below.
  • the collecting environment image may include:
  • the camera parameter may be set according to the above v, or the camera parameter is a preset standard camera parameter (that is, the camera parameter is set as a preset standard camera parameter).
  • the photographing parameters may include an exposure time, the reciprocal F of the ratio of the aperture diameter to the focal length of the lens, a sensitivity, and a selected focus point.
  • For the selected focus point, see FIG. 6 or FIG. 7: in a Cartesian coordinate system whose origin is the center of the screen observed by the HMD wearer, with the horizontal direction as the x-axis and the vertical direction as the y-axis, the focus point is located in the third quadrant or on the negative half of the x-axis.
  • the user actively triggers the camera in the HMD to take a picture; the photograph taken is saved as a standard environment image, and the photographing parameters used when photographing the standard environment image are saved as the standard photographing parameters.
  • the camera parameters are automatically adjusted to the standard camera parameters for taking photos.
  • step S4 in all the above embodiments may include:
  • Detecting whether the preset marker is included in the collected environment image includes:
  • Step 1 extracting image texture features of the collected environment image
  • How the image texture features are extracted is prior art and is not described here.
  • Step 2 matching the image texture feature of the preset marker with the extracted image texture feature; when the matching is successful, detecting that the preset marker is included; otherwise, detecting that the preset marker is not included.
  • the above markers may include a steering wheel.
  • the marker may further comprise at least one of a dashboard and an automotive A-pillar.
  • Suppose x image texture features can be extracted from the steering wheel image. The image texture features of the collected environment image are matched against those of the steering wheel image; if at least N features match (N is less than or equal to x), the match is determined to be successful; otherwise, the match is determined to have failed.
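A minimal sketch of this matching rule, modeling texture features as hashable descriptors (real descriptors would be compared with a distance metric; the names are assumptions):

```python
def marker_detected(env_features: set, marker_features: set,
                    n_required: int) -> bool:
    """The preset marker (e.g. a steering wheel) is considered detected
    when at least n_required of its texture features also appear in the
    collected environment image (n_required <= len(marker_features))."""
    matches = len(env_features & marker_features)
    return matches >= n_required
```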
  • step S4 in all the foregoing embodiments may include:
  • If the similarity is greater than (or greater than or equal to) the preset similarity threshold, it is determined that the seat of the wearer is the driver's seat; otherwise, it is determined that the seat of the wearer is not the driver's seat.
  • The grayscale similarity between the two images can be calculated, or the similarity of image texture features can be calculated. For example, suppose G image texture features can be extracted from the standard environment image; the image texture features of the collected environment image are matched against those of the standard environment image, and if at least m features match (m is less than or equal to G), the match is determined to be successful; otherwise, the match is determined to have failed.
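The feature-based variant of this similarity check can be sketched as follows (illustrative only; the 0.8 default threshold is an assumption):

```python
def feature_similarity(env_features: set, standard_features: set) -> float:
    """Fraction of the standard environment image's G texture features that
    also appear in the collected environment image (m / G)."""
    if not standard_features:
        return 0.0
    return len(env_features & standard_features) / len(standard_features)

def is_driver_seat(similarity: float, threshold: float = 0.8) -> bool:
    """The wearer's seat is judged to be the driver's seat when the
    similarity reaches the preset similarity threshold."""
    return similarity >= threshold
```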
  • control method in all the above embodiments may further include:
  • the device information of the wearable device connected to the HMD is transmitted to the in-vehicle system to which the in-vehicle wireless network belongs.
  • Wearable devices are not part of the headset, but are connected to the headset via wifi, Bluetooth, etc.
  • the screen display service of the head mounted device can display the data collected by the wearable device.
  • the wearable device can include at least one of a wristwatch, a heart rate armband, and a chest strap.
  • Device information can include at least one of device attributes (functions), pairing information (whether Bluetooth, WiFi, or infrared is supported), authentication information (such as a password), a device name, and the like.
  • the in-vehicle system may search for the above wearable devices (such as a heart rate belt) and establish a connection with the searched wearable device. After the connection is established, the in-vehicle system can also send the status information of the wearable device with which the connection is established to the HMD, so that the HMD refreshes the state of the connected devices that it manages.
  • the in-vehicle system can use the heart rate band data to assess driver fatigue and drowsiness.
  • Connecting the in-vehicle system to the wearable device not only prevents the HMD wearer from viewing the data collected by the wearable device through the HMD's near-eye display while driving, but also reduces the energy consumption of the HMD.
  • the wearable device that establishes the connection with the HMD can be connected to the in-vehicle system.
  • control method in all the above embodiments may further include:
  • the wearable device that has been connected to the HMD can be actively searched for, and a connection is established with it to receive its data.
  • the embodiment of the invention further provides a head mounted device control device.
  • the head mounted device control device may be a software logic module installed in the head mounted device, or may be a controller independent of the head mounted device, or may be a processor of the head mounted device, or may be a chip built in the head mounted device other than the processor. Referring to FIG. 8, the head mounted device control device 800 can include a context data collection unit 1, a first determination unit 2, an image acquisition control unit 3, a second determination unit 4, and a service management unit 5, wherein:
  • the context data collection unit 1 is configured to collect context data.
  • the above context data may include at least one of the motion speed data v, the in-vehicle wireless network signal strength data x, the user schedule data y, and the environmental noise intensity data z.
  • x (actual signal strength P) may be provided by an in-vehicle wireless network connection module (for example, a WiFi module) or an external device (for example, a mobile phone) in the HMD; y may be provided by a calendar module in the HMD; z may be provided by a microphone in the HMD or an external dedicated sound collection device; and v can be calculated by a GPS module or an acceleration sensor in the HMD, or directly provided by the vehicle system.
  • Correspondingly, the context data collecting unit 1 can acquire x or P from the in-vehicle wireless network connection module in the HMD or from the external device, acquire y from the calendar module in the HMD, acquire z from the microphone in the HMD or the external dedicated sound collecting device, and obtain v from the GPS module or acceleration sensor in the HMD, or from the vehicle system.
  • the first determining unit 2 is configured to use the context data to determine whether the wearer of the head mounted device is located in the traveling vehicle;
  • the image capturing control unit 3 is configured to, when it is determined that the wearer is located in the traveling vehicle, control the image acquisition device in the head mounted device to collect an environment image;
  • the second determining unit 4 is configured to use the collected environment image to determine whether the seat of the wearer is the driver's seat;
  • the service management unit 5 is configured to disable the preset service when determining that the seat of the wearer is the driver's seat.
  • the preset service includes at least a screen display service of the near-eye display.
  • the service management unit 5 can also be used to enable or maintain the preset service when it is determined that the seat of the wearer is not the driver's seat, or when it is determined that the wearer is not located in the traveling vehicle.
  • When the head mounted device control device is a controller independent of the head mounted device, or is built into a chip other than the processor in the head mounted device, it can send a control command to the processor in the head mounted device, so that the processor of the head mounted device stops providing the preset service, thereby achieving the purpose of disabling the preset service.
  • When the head mounted device control device is a software logic module in the head mounted device or the processor itself, the preset service can be disabled directly.
  • the head mounted device control apparatus 800 may further include:
  • the first connecting unit is configured to, after detecting the connection to the in-vehicle wireless network, send device information of the wearable device connected to the head mounted device to the in-vehicle system to which the in-vehicle wireless network belongs, so that the in-vehicle system according to the acquired Searching for device information and establishing a connection with the searched wearable device;
  • the second connection unit is configured to, when detecting disconnection from the in-vehicle wireless network, search for a wearable device that has established a connection with the headset, and re-establish a connection with the searched wearable device.
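The two connection units can be sketched as an event handler. The `hmd` and `vehicle_system` objects here are hypothetical stand-ins for the real device APIs; only the hand-over behavior (forward device info on connect, reclaim the wearables on disconnect) comes from the text:

```python
class ConnectionManager:
    """Sketch of the first and second connection units."""

    def __init__(self, hmd, vehicle_system):
        self.hmd = hmd                        # hypothetical HMD interface
        self.vehicle_system = vehicle_system  # hypothetical in-vehicle system

    def on_vehicle_network_connected(self):
        # First connection unit: send device information of every wearable
        # connected to the HMD, so the in-vehicle system can search for the
        # devices and establish its own connections.
        for info in self.hmd.connected_wearable_info():
            self.vehicle_system.receive_device_info(info)

    def on_vehicle_network_disconnected(self):
        # Second connection unit: search for the wearables that previously
        # had a connection with the HMD and re-establish those connections.
        for device in self.hmd.search_known_wearables():
            self.hmd.connect(device)
```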
  • the head mounted device control apparatus 800 may further include:
  • the converting unit is configured to convert the received information from the preset emergency contact into voice information playing when the seat is the driver's seat.
  • the head mounted device control apparatus 800 may further include:
  • the pushing unit is configured to push the screen display service to the display screen other than the head mounted device when the seat of the wearer is the driver's seat.
  • FIG. 9 is a schematic structural diagram of hardware of a head mounted device control apparatus 800 (as a controller independent of a head mounted device) according to an embodiment of the present invention, which may include a processor 801, a memory 802, a bus 803, and a communication interface 804.
  • the processor 801, the memory 802, and the communication interface 804 are connected to each other through a bus 803; and the memory 802 is configured to store a program.
  • the program can include program code, the program code including computer operating instructions.
  • the memory 802 may include a high-speed random access memory (RAM), and may also include a non-volatile memory, for example, at least one disk memory.
  • the processor 801 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; or may be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor 801 executes the program stored in the memory 802, and is used to implement the headset device control method provided by the embodiment of the present invention, including:
  • the context data includes at least one of motion speed data, in-vehicle wireless network signal strength data, user schedule data, and ambient noise intensity data;
  • controlling the image acquisition device in the head mounted device to collect an environment image;
  • disabling the preset service, where the preset service includes the screen display service of the near-eye display.
  • the processor 801 can also be used to complete the other steps, and the refinements of each step, in the head-mounted device control method introduced in the method section of this document, which are not described herein again.
  • the CPU and memory can be integrated on the same chip or as two separate devices.
  • an embodiment of the present invention further provides a head mounted device, which may include an image capturing device, a near-eye display, and the above-described head mounted device control device, wherein the head mounted device control device 800 is connected to the image capturing device and the near-eye display respectively.
  • Figure 10 shows a specific structure of a head mounted device.
  • The prior-art solution described above must use a special mobile phone having a human body communication receiving device, and the driver's car needs to be modified to install a signal generating device.
  • the head mounted device control device may be a software logic module installed in the head mounted device, or may be a processor of the head mounted device, or may be built in the headset. A chip other than a processor in a device.
  • the context data collecting unit 1 of the head mounted device control device may acquire x or P from the in-vehicle wireless network connection module in the HMD, acquire y from the calendar module in the HMD, acquire z from the microphone in the HMD, and obtain v from the GPS module or acceleration sensor in the HMD.
  • For the head-mounted device, it can determine whether the user is driving by analyzing the data it collects itself, and if it determines that the user is driving, the preset service is automatically disabled. In this process, there is no need to communicate through the human body, and no device under the car seat needs to issue a control command, so the head-mounted device does not need a human body communication receiving device and the car does not need to be modified.
  • Referring to FIG. 11, it is a flowchart of a method for controlling a head mounted device according to the present invention, which may include at least the following steps:
  • S101. Determine the state of the wearer of the head mounted device by using the collected state data.
  • the foregoing status data includes context data and an environment image.
  • the context data may include at least one of the motion speed data v, the in-vehicle wireless network signal strength data x, the user schedule data y, and the environmental noise intensity data z.
  • the above states may include a driving state and a non-driving state.
  • the above preset service includes at least one of a screen display service of a near-eye display, a manual input service, and a projection display service.
  • the HMD has a touchpad and buttons, and the wearer can interact with the HMD by touching the touchpad or pressing/toggling the buttons. Touching, pressing, and toggling may also distract the wearer.
  • manual input services can include manual input of touch pads and buttons.
  • The preset service is disabled when the state data is used to determine that the wearer is in the driving state, so as to reduce distraction of the wearer's attention and improve driving safety.
  • control method in all the above embodiments may further include:
  • the received information from the preset emergency contact is converted into voice information for playback.
  • An emergency contact list can be maintained in the HMD, and the contacts in the list are preset emergency contacts.
  • the above information may include at least one of a mail header and a short message.
  • the email title and short message can be converted into a voice broadcast to the wearer.
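The filter-and-play behavior above can be sketched as follows; the `speak` callback is a hypothetical stand-in for whatever text-to-speech engine the device provides:

```python
def handle_incoming_message(sender, text, emergency_contacts, speak):
    """While the wearer is driving, only information from preset
    emergency contacts (an email title or a short message) is converted
    to voice and played; other messages are held silently."""
    if sender in emergency_contacts:
        speak(text)
        return True
    return False
```

The emergency contact list maintained in the HMD maps directly to the `emergency_contacts` set here.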
  • control method in all of the above embodiments may further include: pushing the screen display service to a display screen other than the head mounted device when determining that the wearer is in a driving state. For example, it can be pushed to the display of the onboard system or to the terminal of the same passenger.
  • control method may further include the following steps:
  • step S101 (determining the state of the wearer of the head mounted device by using the state data) may further include:
  • the driving state includes that the wearer is located in the traveling vehicle and the seat is the driver's seat.
  • step S102 can further include the following steps:
  • the step S101 (determining the state of the wearer of the head mounted device using the state data) may further include:
  • the non-driving state includes that the above-mentioned wearer is not located in the traveling vehicle or that the above-mentioned seat is not the driver's seat.
  • the environmental image and the context data (S0) may be acquired first.
  • Step S101 or step S2 is performed again (please refer to FIG. 14).
  • the environment image may be collected first, and then the scenario data is collected, and then step S101 or step S2 is performed.
  • the scenario data may be collected first, and then the environment image is collected, and then step S101 or step S2 is performed.
  • the scene data (S1) may be collected first, and the environment image is collected (S3) after determining that the wearer is located in the traveling vehicle.
  • step S101 may further include the following steps:
  • the situation data is used to determine whether the wearer of the head mounted device is located in the traveling vehicle.
  • When the wearer is located in the traveling vehicle and the seat is the driver's seat, it is determined that the wearer of the head mounted device is in the driving state.
  • step S102 can further include the following steps:
  • the step S101 (determining the state of the wearer of the head mounted device using the state data) may further include:
  • environmental images and context data may be acquired first.
  • Step S101 or step S2' is performed again (see Fig. 17).
  • the environment image may be collected first, and then the scene data is collected, and then step S101 or step S2' is performed.
  • the scenario data may be collected first, and then the environment image is acquired, and then step S101 or step S2' is performed.
  • Alternatively, the environment image (S1') may be acquired first, and after it is determined that the seat of the wearer is the driver's seat, the context data is acquired (S3').
  • the environment image can be periodically acquired.
  • step S103 may further include the following steps: S6. If it is determined that the seat is not the driver's seat, or if it is determined that the wearer is not located in the traveling vehicle, the preset service is enabled or maintained.
  • If it is determined when performing step S2 that the wearer is not located in the traveling vehicle, the environment image collection action corresponding to step S3 is not required; the user may be directly determined to be in a non-driving state, and step S6 is executed to enable or maintain the preset service.
  • Using the context data to determine whether the wearer of the head mounted device is located in the traveling vehicle, and using the collected environment image to determine whether the seat of the wearer is the driver's seat, may be carried out in either order. When the wearer is located in the traveling vehicle and the seat is the driver's seat, it is determined that the wearer of the head mounted device is in the driving state. When the wearer is not in the traveling vehicle, or the seat is not the driver's seat, it is determined that the wearer of the head mounted device is not in the driving state.
  • The order of collecting the environment image and collecting the context data is not fixed. The environment image and the context data may both be collected first and then judged. Alternatively, the first data (the environment image or the context data) may be collected first and used for the first judgment (determining whether the wearer of the head mounted device is located in the traveling vehicle, or determining whether the seat of the wearer is the driver's seat), after which the second data (the context data or the environment image) is collected and used for the second judgment (determining whether the seat of the wearer is the driver's seat, or determining whether the wearer of the head mounted device is located in the traveling vehicle). Or, after the first data is used for the first judgment, if the second judgment is not needed, the second data is not collected.
  • the method in all the above embodiments may further include the following steps:
  • the situation evaluation value is calculated based on the situation data (S7).
  • Whether the wearer is located in the traveling vehicle can be judged based on the situation evaluation value. For example, whether the wearer is located in the traveling vehicle can be determined according to whether the situation evaluation value is greater than or equal to the threshold value.
  • If the situation evaluation value is greater than or equal to the threshold value, it is determined that the wearer is located in the traveling vehicle; if the situation evaluation value is less than the threshold value, it is determined that the wearer is not located in the traveling vehicle.
  • The threshold value can be set to an initial value of 2 or 3. User feedback can be collected for different thresholds to verify the correctness of the decision made with each threshold, and an optimal threshold is finally selected.
  • control method in all the foregoing embodiments may further include the following steps:
  • When a restart condition is met, the image acquisition device in the head mounted device is restarted to collect the environment image and perform the subsequent determination operations.
  • The restart condition may include that the situation of the wearer changes, for example, at least one of: the situation changes from being in a traveling vehicle to not being in a traveling vehicle (for example, the wearer leaves the vehicle, or the vehicle changes from driving to stopping), or the situation changes from not being in a traveling vehicle to being in a traveling vehicle (for example, the wearer enters the vehicle, or the vehicle changes from stopping to driving).
  • The situation evaluation value (F1) calculated this time is compared with the situation evaluation value (F0) calculated last time to determine whether the restart condition is met: when F0 is less than the threshold value and F1 is greater than or equal to the threshold value, or when F0 is greater than or equal to the threshold value and F1 is less than the threshold value, it is determined that the wearer's situation has changed.
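The restart check above reduces to detecting whether the evaluation value crossed the threshold between two computations, which can be sketched as:

```python
def situation_changed(f0, f1, threshold):
    """Return True when the wearer's situation has changed, i.e. the
    previous evaluation value F0 and the current evaluation value F1
    fall on opposite sides of the threshold."""
    return (f0 >= threshold) != (f1 >= threshold)
```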
  • the process of the above control method can also be varied in many ways to restart the acquisition environment image operation based on whether the situation of the wearer changes. For example, it can be as shown in FIGS. 23 to 24.
  • step S7 may further include: substituting the context data into the context evaluation function to obtain a context evaluation value.
  • the context evaluation function may include at least one of a signal strength evaluation function f1(x), a calendar evaluation function f2(y), an environmental noise evaluation function f3(z), and a motion speed evaluation function f4(v).
  • x represents signal strength data, y represents user schedule data, z represents ambient noise intensity data, and v represents motion speed data.
  • the situation evaluation function can be written as F(x, y, z, v).
  • the above “substituting the context data into the context evaluation function to obtain the context evaluation value” may include:
  • the situation evaluation value is equal to the sum of the signal strength evaluation value, the schedule evaluation value, the ambient noise evaluation value, and the motion speed evaluation value.
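The additive structure stated above can be sketched directly. The four sub-functions are passed in as parameters because the text allows several expression formulas for each; the lambdas in the usage note below are illustrative assumptions, not the patent's exact forms:

```python
def context_evaluation(x, y, z, v, f1, f2, f3, f4):
    """F(x, y, z, v): the situation evaluation value is the sum of the
    signal strength, schedule, ambient noise, and motion speed
    evaluation values."""
    return f1(x) + f2(y) + f3(z) + f4(v)

def in_traveling_vehicle(f_value, threshold=2.0):
    """The wearer is judged to be in a traveling vehicle when the
    evaluation value reaches the threshold (initial value 2 or 3)."""
    return f_value >= threshold
```

For example, with the assumed sub-functions `f1 = lambda x: x / 90`, `f2 = lambda y: 1.0 if set(y) & {"commute"} else 0.0`, `f3 = lambda z: 0.5 * z`, and `f4 = lambda v: v / 60 + 0.1`, a wearer with P = 0 dBm (x = 90), a "commute" calendar event, z = 0.4, and v = 80 km/h scores about 3.63 and is judged to be in a traveling vehicle.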
  • x can be acquired by an in-vehicle wireless network connection module (such as a WiFi module) or an external device (such as a mobile phone) in the HMD.
  • x = 90 + P, where P represents the actual signal strength and has a value range of [-90 dBm, 0 dBm]; and α1 = 1/90.
  • The weakest signal strength that can be detected when moving away from the vehicle is -90 dBm, and the signal strength of the user in the vehicle is 0 dBm, so the actual signal strength range that can be detected is [-90 dBm, 0 dBm].
  • f2(y) can have a variety of expression formulas, for example:
  • ⁇ 2 represents the schedule weight, ⁇ 2 >0; y includes a set of calendar events at the time of collecting data, and ⁇ represents a preset specific event set.
  • y can be provided by the calendar module in the HMD.
  • ⁇ 3 represents the ambient noise intensity weight
  • z represents the ambient noise intensity
  • Ambient noise can be collected by a microphone in the HMD or an external dedicated sound collection device.
  • f4(v) can have a variety of expression formulas, for example:
  • v0 denotes a speed threshold, λ1 denotes a first motion speed weight, λ2 denotes a second motion speed weight, t1 denotes a first speed-influence minimum value, and t2 denotes a second speed-influence minimum value.
  • v can be calculated by GPS module or acceleration sensor in HMD, or directly by vehicle system.
  • v0 is a speed threshold; above it, the faster the speed, the more likely the user is to be in a moving vehicle.
  • For example, v0 = 30 km/h, λ1 = 1/90, t1 = 0.01, λ2 = 1/60, and t2 = 0.1.
  • v has a value range of [0, 120], and f4(v) has a value range of [0.01, 2.1].
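One piecewise-linear reading of f4(v) that is consistent with the example constants and the stated value range [0.01, 2.1]; the exact formula is an assumption reconstructed from those values, since the expression itself did not survive extraction:

```python
def motion_speed_evaluation(v, v0=30.0, lam1=1/90, t1=0.01, lam2=1/60, t2=0.1):
    """f4(v): below the speed threshold v0 the speed contributes with the
    gentler weight lam1, above it with the steeper weight lam2; t1 and t2
    are the minimum contributions of the two branches."""
    if v < v0:
        return lam1 * v + t1
    return lam2 * v + t2
```

With these constants, f4(0) = 0.01 and f4(120) = 2.1, which matches the stated range of f4(v) exactly.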
  • the environmental image may be collected as follows:
  • the camera parameters may be set according to the above v, or the camera parameters may be set as preset standard camera parameters.
  • the photographing parameters may include an exposure time, an aperture F-number (the reciprocal of the ratio of the aperture diameter to the lens focal length), a sensitivity, and a selected focus point.
  • "Using the collected environment image to determine whether the seat of the wearer is the driver's seat” in all of the above embodiments may include:
  • If the similarity is greater than (or greater than or equal to) the preset similarity threshold, it is determined that the seat of the wearer is the driver's seat; otherwise, it is determined that the seat of the wearer is not the driver's seat.
  • control method in all the above embodiments may further include:
  • the device information of the wearable device connected to the HMD is transmitted to the in-vehicle system to which the in-vehicle wireless network belongs.
  • the in-vehicle system may search for the above wearable devices and establish a connection with the searched wearable device.
  • the in-vehicle system can also send the status information of the wearable device with which the connection is established to the HMD, so that the HMD refreshes the state of the connected device that it manages.
  • the in-vehicle system can use the heart rate band data to assess driver fatigue and drowsiness.
  • Connecting the in-vehicle system to the wearable device not only prevents the HMD wearer from viewing the data collected by the wearable device through the HMD's near-eye display while driving, but also reduces the energy consumption of the HMD.
  • control method in all the above embodiments may further include:
  • the wearable device that has established a connection with the head mounted device can be searched for and reconnected with the searched wearable device.
  • the embodiment of the invention further provides a head mounted device control device.
  • the head mounted device control device may be a software logic module installed in the head mounted device, or may be a controller independent of the head mounted device, or may be a processor of the head mounted device. Alternatively, it may be a chip built in the head mounted device other than the processor.
  • the head mounted device control device 25 may include:
  • the state determining unit 251 is configured to determine, by using the collected state data, a state of the wearer of the head mounted device;
  • the above state may include a driving state and a non-driving state.
  • the above status data may include context data and an environmental image.
  • the above context data may include at least one of motion speed data, in-vehicle wireless network signal strength data, user schedule data, and ambient noise intensity data.
  • the service management unit 252 is configured to disable the preset service when the wearer is in a driving state, and the preset service includes a screen display service of the near-eye display.
  • the service management unit 252 is further configured to enable or maintain the preset service when the wearer is not in the driving state.
  • When the head mounted device control device is a controller independent of the head mounted device, or is built into a chip other than the processor in the head mounted device, it can send a control command to the processor in the head mounted device, so that the processor of the head mounted device stops providing the preset service, thereby achieving the purpose of disabling the preset service.
  • When the head mounted device control device is a software logic module in the head mounted device or the processor itself, the preset service can be disabled directly.
  • the head mounted device control device 25 may further include:
  • the first connecting unit is configured to, when detecting the connection to the in-vehicle wireless network, send device information of the wearable device connected to the head mounted device to the in-vehicle system to which the in-vehicle wireless network belongs, so that the in-vehicle system according to the acquired Searching for device information and establishing a connection with the searched wearable device;
  • the first connecting unit may send the device information of the wearable device connected to the head mounted device to the in-vehicle wireless network after detecting that the wearer is in the driving state and detecting the connection to the in-vehicle wireless network Onboard system.
  • the second connection unit is configured to, when detecting disconnection from the in-vehicle wireless network, search for a wearable device that has established a connection with the headset, and re-establish a connection with the searched wearable device.
  • the second connecting unit may be configured to: when determining that the wearer is in a non-driving state, search for a wearable device that has established a connection with the head mounted device, and re-establish a connection with the searched wearable device.
  • the second connecting unit may be configured to, after determining that the wearer is in a driving state, search for a wearable device that has established a connection with the headset when detecting disconnection from the in-vehicle wireless network, and search with the searched The wearable device re-establishes the connection.
  • the head mounted device control device 25 may further include:
  • the information conversion unit is configured to convert the received information from the preset emergency contact into the voice information during the driving state.
  • the head mounted device control device 25 may further include:
  • the screen display service pushing unit is configured to push the screen display service to the display screen other than the head mounted device when the wearer is in the driving state.
  • FIG. 26 is a schematic structural diagram of hardware of a head mounted device control apparatus 25 (as a controller independent of a head mounted device) according to an embodiment of the present invention, which may include a processor 251, a memory 252, a bus 253, and a communication interface 254.
  • the processor 251, the memory 252, and the communication interface 254 are connected to each other through a bus 253; the memory 252 is configured to store a program.
  • the program can include program code, the program code including computer operating instructions.
  • the memory 252 may include a high-speed random access memory (RAM), and may also include a non-volatile memory, for example, at least one disk memory.
  • the processor 251 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; or may be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor 251 executes the program stored in the memory 252, and is used to implement the headset device control method provided by the embodiment of the present invention, including:
  • the preset service is disabled when it is determined that the wearer is in the driving state.
  • the foregoing status data includes context data and an environment image.
  • the context data may include at least one of the motion speed data v, the in-vehicle wireless network signal strength data x, the user schedule data y, and the environmental noise intensity data z.
  • the above states may include a driving state and a non-driving state.
  • the preset service includes at least one of a screen display service of a near-eye display, a manual input service, and a projection display service.
  • the processor 251 can also be used to complete the other steps, and the refinements of each step, in the head-mounted device control method introduced in the method section of the present invention, which are not described herein again.
  • the memory 252 further stores executable instructions, and the processor 251 executes the executable instructions, and the following steps are performed:
  • the above predetermined service is enabled or maintained when it is determined that the wearer is not in the driving state.
  • the memory 252 further stores executable instructions, and the processor 251 executes the executable instructions, and the following steps are performed:
  • the received information from the preset emergency contact is converted into voice information for playback.
  • the memory 252 further stores executable instructions, and the processor 251 executes the executable instructions, and the following steps are performed:
  • the screen display service is pushed to a display screen other than the above-described head mounted device.
  • the memory 252 further stores executable instructions
  • the processor 251 executes the executable instructions, and the following steps may be completed (corresponding to using the state data to determine the state of the wearer of the head mounted device):
  • the environment image is used to determine whether the seat of the wearer is the driver's seat
  • When the wearer is located in the traveling vehicle and the seat is the driver's seat, it is determined that the wearer of the head mounted device is in the driving state.
  • The CPU and the memory may be integrated on the same chip or provided as two separate devices.
  • An embodiment of the present invention further provides a head-mounted device, which may include an image capturing apparatus, a near-eye display, and the above-described head-mounted device control apparatus 25, where the control apparatus is connected to the image capturing apparatus and to the near-eye display, respectively.
  • Fig. 27 shows a specific structure of the head-mounted device.
  • The steps of the method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two.
  • The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.


Abstract

A head-mounted device control method includes: collecting context data (S1); using the context data to determine whether the wearer of the head-mounted device is located in a traveling vehicle (S2); if it is determined that the wearer is located in a traveling vehicle, controlling an image capturing apparatus in the head-mounted device to capture an environment image (S3); using the captured environment image to determine whether the wearer's seat is the driver's seat (S4); and, if it is determined that the wearer's seat is the driver's seat, disabling a preset service (S5), where the preset service includes a screen display service of a near-eye display. The method reduces distraction of the wearer's attention and improves driving safety.

Description

头戴式设备控制方法、装置和头戴式设备
本申请要求于2013年11月28日提交中国专利局、申请号为201310627752.6、发明名称为“头戴式设备控制方法、装置和头戴式设备”和于2014年1月8日提交中国专利局、申请号为201410008649.8,发明名称为“头戴式设备控制方法、装置和头戴式设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及数据处理技术领域,更具体地说,涉及头戴式设备控制方法、装置和头戴式设备。
背景技术
头戴式设备(Head Mounted Display,HMD),是指佩戴在用户头部,具有独立的CPU、存储单元、通信模块和近眼显示器的电子设备。在使用过程中,头戴式设备需要佩戴在用户头上。
典型的头戴式设备一般具有网络接入能力,可以收发数据,为佩戴者提供各种服务,例如提供电子报纸、地图、来电提示、电子邮件提示、短消息提示和社交网络消息更新等服务。这些服务会显示在头戴式设备的近眼显示器上。也即,头戴式设备使用近眼显示器为佩戴者提供屏幕显示服务。
佩戴者如在驾车时使用HMD,特别是使用HMD提供的屏幕显示服务,将会分散其注意力,造成安全隐患。因此,如何在佩戴者驾车时限制其使用HMD,特别是屏幕显示服务,成为一个HMD的热点研究方向。
发明内容
有鉴于此,本发明实施例的目的在于提供头戴式设备控制方法、装置和头戴式设备,以解决上述问题。
为实现上述目的,本发明实施例提供如下技术方案:
根据本发明实施例的第一方面,提供一种头戴式设备控制方法,包括:
采集情境数据,所述情境数据包括运动速度数据、车载无线网络信号强度数据、用户日程表数据和环境噪音强度数据中的至少一种;
使用所述情境数据判断所述头戴式设备的佩戴者是否位于行驶的车辆内;
若判定所述佩戴者位于行驶的车辆内,则控制所述头戴式设备中的图像采集装置采集环境图像;
使用采集到的环境图像,判断所述佩戴者所处座位是否是驾驶座;
若判定所述佩戴者所处座位是驾驶座,则禁用预设服务,所述预设服务包括近眼显示器的屏幕显示服务。
结合第一方面,在第一种可能的实现方式中,还包括:若判定所述所处座位不是驾驶座,或者,若判定所述佩戴者未位于行驶的车辆内,则启用或保持所述预设服务。
结合第一方面第一种可能的实现方式,在第二种可能的实现方式中,还包括:
将所述情境数据代入情境评价函数,得到情境评价值;
所述使用所述情境数据判断所述佩戴者是否位于行驶的车辆内包括:判断所述情境评价值是否大于等于门限值;
若所述情境评价值大于等于所述门限值,则判定所述佩戴者位于行驶的车辆内;
若所述情境评价值小于所述门限值,则判定所述佩戴者未位于行驶的车辆内。
结合第一方面的第二种可能的实现方式,在第三种可能的实现方式中,所述情境评价函数记为F(x,y,z,v),包括信号强度评价函数f1(x),日程表评价函数f2(y),环境噪声评价函数f3(z)和运动速度评价函数f4(v)中的至少一种;所述x表示信号强度数据,所述y表示用户日程表数据,所述z表示环境噪声强度评价值,所述v表示运动速度数据;所述将所述情境数据代入情境评价函数,得到所述情境评价值包括:将所述x代入f1(x)得到信号强度评价值,将所述y代入f2(y)得到日程表评价值,将所述z代入f3(z)得到环境噪声强度评价值,以及,将所述v代入f4(v)得到运动速度评价值中的至少一种。
结合第一方面的第三种可能的实现方式,在第四种可能的实现方式中,
f1(x)=α1x,所述α1表示信号强度权重,α1>0;
$$f_2(y)=\begin{cases}\alpha_2, & y\cap\Omega\neq\varnothing\\ 0, & y\cap\Omega=\varnothing\end{cases}$$
所述α2表示日程表权重,并且α2>0;所述y包括采集数据时刻的日程表事件集合,所述Ω表示预设特定事件集合;
f3(z)=α3z,所述α3表示环境噪声强度权重,α3>0;
$$f_4(v)=\begin{cases}\beta_1 v+t_1, & 0\le v\le v_0\\ \beta_2 v+t_2, & v>v_0\end{cases}$$
所述v0表示速度门限,所述β1表示第一运动速度权重,所述β2表示第二运动速度权重,所述t1表示第一速度影响最小值,所述t2表示第二速度影响最小值,β2≥β1>0,t2≥t1>0。
结合第一方面,在第五种可能的实现方式中,所述采集环境图像包括:设置拍照参数并进行拍照,得到所述环境图像;所述拍照参数根据所述运动速度数据确定,或者,所述拍照参数为预设的标准拍照参数。
结合第一方面的第五种可能的实现方式,在第六种可能的实现方式中,所述使用采集到的环境图像,判断所述佩戴者所处座位是否是驾驶座包括:
检测所述环境图像中是否包含预设标记物;
若检测到包含所述预设标记物,则判定佩戴者所处座位是驾驶座;
若检测不到包含所述预设标记物,则判定佩戴者所处座位不是驾驶座;
或者,
计算所述采集到的环境图像与预设的标准环境图像的相似度;若计算得到的相似度大于等于预设的相似度阈值,则判定佩戴者所处座位是驾驶座;否则,判定佩戴者所处座位不是驾驶座。
结合第一方面的第五种或第六种可能的实现方式,在第七种可能的实现方式中,所述拍照参数包括曝光时间、光圈口径与镜头焦距的比值的倒数F、感光度和所选对焦点。
结合第一方面,在第八种可能的实现方式中,还包括:在检测到连接至车载无线网络后,将已连接至所述头戴式设备的可穿戴设备的设备信息,发送给所述车载无线网络所属的车载系统,以便所述车载系统根据获取到的所述设备信息进行搜索,并与搜索到的可穿戴设备建立连接;在检测到与所述车载无线网络断开连接时,搜索与所述头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
结合第一方面,在第九种可能的实现方式中,所述预设服务还包括:手工输入服务和投影显示服务中的至少一种。
结合第一方面,在第十种可能的实现方式中,还包括:若判定所述所处座位是驾驶座,则将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
结合第一方面,在第十一种可能的实现方式中,还包括:若判定所述所处座位是驾驶座,则将所述屏幕显示服务推送至所述头戴式设备之外的显示屏上。
根据本发明实施例的第二方面,提供一种头戴式设备控制装置,包括:
情境数据采集单元,用于采集情境数据,所述情境数据包括运动速度数据、车载无线网络信号强度数据、用户日程表数据和环境噪音强度数据中的至少一种;
第一判断单元,用于使用所述情境数据判断所述头戴式设备的佩戴者是否位于行驶的车辆内;
图像采集控制单元,用于判定所述佩戴者位于行驶的车辆内时,控制所述头戴式设备中的图像采集装置采集环境图像;
第二判断单元,用于使用采集到的环境图像,判断所述佩戴者所处座位是否是驾驶座;
服务管理单元,用于判定所述佩戴者所处座位是驾驶座时,禁用预设服务,所述预设服务包括近眼显示器的屏幕显示服务。
结合第二方面,在第一种可能的实现方式中,还包括:
第一连接单元,用于在检测到连接至车载无线网络后,将已连接至所述头戴式设备的可穿戴设备的设备信息,发送给所述车载无线网络所属的车载系统,以便所述车载系统根据获取到的所述设备信息进行搜索,并与搜索到的可穿戴设备建立连接;
第二连接单元,用于在检测到与所述车载无线网络断开连接时,搜索与所述头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
结合第二方面,在第二种可能的实现方式中,还包括:转换单元,用于判定所述所处座位是驾驶座时,将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
结合第二方面,在第三种可能的实现方式中,还包括:推送单元,用于判定所述所处座位是驾驶座时,将所述屏幕显示服务推送至所述头戴式设备之外的显示屏上。
根据本发明实施例的第三方面,提供一种头戴式设备,包括图像采集装置、近眼显示器和如第二方面至第二方面第三种可能的实现方式中任一项所述的头戴式设备控制装置,所述头戴式设备控制装置分别与所述图像采集装置和所述近眼显示器相连接。
可见,在本发明实施例中,会根据采集到的情境数据,来判断佩戴者是否位于行驶的车辆内,如判定佩戴者位于行驶的车辆内,则采集环境图像来进一步判断佩戴者所处座位是否为驾驶座,若判定佩戴者所处座位是驾驶座,则认为佩戴者正在驾驶,则将禁用头戴式设备的屏幕显示服务,以减少对佩戴者注意力的分散,提高驾驶安全性。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例提供的头戴式设备控制方法流程图;
图2为本发明实施例提供的头戴式设备控制方法另一流程图;
图3为本发明实施例提供的头戴式设备控制方法又一流程图;
图4为本发明实施例提供的头戴式设备控制方法又一流程图;
图5为本发明实施例提供的头戴式设备控制方法又一流程图;
图6为本发明实施例提供的选择对焦点示意图;
图7为本发明实施例提供的选择对焦点另一示意图;
图8为本发明实施例提供的头戴式设备控制装置结构示意图;
图9为本发明实施例提供的头戴式设备控制装置另一结构示意图;
图10为本发明实施例提供的头戴式设备结构示意图;
图11为本发明实施例提供的头戴式设备控制方法又一流程图;
图12为本发明实施例提供的头戴式设备控制方法又一流程图;
图13为本发明实施例提供的头戴式设备控制方法又一流程图;
图14为本发明实施例提供的头戴式设备控制方法又一流程图;
图15为本发明实施例提供的头戴式设备控制方法又一流程图;
图16为本发明实施例提供的头戴式设备控制方法又一流程图;
图17为本发明实施例提供的头戴式设备控制方法又一流程图;
图18为本发明实施例提供的头戴式设备控制方法又一流程图;
图19为本发明实施例提供的头戴式设备控制方法又一流程图;
图20为本发明实施例提供的头戴式设备控制方法又一流程图;
图21为本发明实施例提供的头戴式设备控制方法又一流程图;
图22为本发明实施例提供的头戴式设备控制方法又一流程图;
图23为本发明实施例提供的头戴式设备控制方法又一流程图;
图24为本发明实施例提供的头戴式设备控制方法又一流程图;
图25为本发明实施例提供的头戴式设备控制装置又一结构示意图;
图26为本发明实施例提供的头戴式设备控制装置又一结构示意图;
图27为本发明实施例提供的头戴式设备结构示意图。
具体实施方式
下面将结合说明书中的附图,对本发明实施例中的技术方案进行清楚、完整地描述。显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
参见图1,为本发明提供的头戴式设备控制方法的流程图,其至少可包括如下步骤:
S1、采集情境数据;
上述情境数据可包括运动速度数据v、车载无线网络信号强度数据x、用户日程表数据y和环境噪音强度数据z中的至少一种。
S2、使用上述情境数据判断头戴式设备的佩戴者是否位于行驶的车辆内;
S3、若判定佩戴者位于行驶的车辆内,则控制头戴式设备中的图像采集装置采集环境图像;
更具体的,上述图像采集装置一般为摄像头。也即,在判断出佩戴者位于行驶的车辆内时,将使用摄像头进行拍照。本文后续将对如何采集环境图像进行更具体的介绍。
S4、使用采集到的环境图像,判断佩戴者所处座位是否为驾驶座;
S5、若判定所处座位是驾驶座(也即确定出佩戴者处于驾驶状态),则禁用预设服务。预设服务可包括近眼显示器的屏幕显示服务。
司机在驾驶时位于驾驶座上,因此,在判定佩戴者位于行驶的车辆内的前提下,如果判定其还位于驾驶座上,则能断定佩戴者正在驾车。此时,则应禁用预设服务。
可见,在本发明实施例中,会根据采集到的情境数据,来判断佩戴者是否位于行驶的车辆内,如判定佩戴者位于行驶的车辆内,则采集环境图像来进一步判断佩戴者所处座位是否为驾驶座,若判定佩戴者所处座位是驾驶座,则认为佩戴者正在驾驶,则将禁用头戴式设备的屏幕显示服务,以减少对佩戴者注意力的分散,提高驾驶安全性。需要说明的是,通过禁用近眼显示器的屏幕显示服务,可禁止一切需要使用HMD近眼显示器的业务。例如,可禁止在近眼显示器的屏幕上显示邮件提示、导航提示、短信提示和社交网络消息提示,禁止视频电话接入(因为视频电话需要在屏幕上显示视频),禁止浏览各种图片网站、视频流网站等等。
一般的,HMD上具有触摸板和按钮,佩戴者可通过触摸触摸板,或者,按压/拨动按钮与HMD交互。而触摸、按压、拨动等也可能分散佩戴者注意力。
因此,在本发明其他实施例中,除近眼显示器的屏幕显示服务外,上述所有实施例中的预设服务还可包括:手工输入服务。
更具体的,手工输入服务可包括触摸板和按钮手工输入。
在本发明其他实施例中,在确定出佩戴者处于驾驶状态时,上述所有实施例中的控制方法还可包括:
将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
在HMD中可维护一个紧急联系人列表,该列表里的联系人都是预设紧急联系人。
更具体的,上述信息可包括邮件标题和短消息中的至少一种。当接收到来自紧急联系人的邮件、短消息时,可将邮件标题、短消息转换成语音播放给佩戴者。
在本发明其他实施例中,在确定出佩戴者处于驾驶状态时,上述所有实施例中的控制方法还可包括:将屏幕显示服务推送至头戴式设备之外的显示屏上。例如,可将其推送到车载系统的显示屏或者同车乘客的终端上进行显示。
在本发明其他实施例中,请参见图2,还可包括如下步骤:
S6、若判定所处座位不是驾驶座,或者,若判定佩戴者未位于行驶的车辆内,启用或保持预设服务。
需要说明的是,佩戴者未位于行驶的车辆内可对应两种情况,一种情况是佩戴者在车外,一种情况是,佩戴者在静止的车辆内(例如,车辆始终未处于运行状态,或者车辆遇到红灯停驶)。这两种情况,佩戴者都未驾车,自然也不用禁用预设服务。
佩戴者所处座位不是驾驶座则可对应如下情况:佩戴者在行驶的车辆内,但佩戴者是车内的乘客而非司机,此时,佩戴者也未驾车,也不用禁用预设服务。
下面,将具体介绍如何使用情境数据判断佩戴者是否位于行驶的车辆内。
在本发明其他实施例中,请参见图3,上述所有实施例中方法还可包括如下步骤:
S7、根据情境数据计算得到情境评价值。
可根据情境评价值判断佩戴者是否位于行驶的车辆内。例如,可根据情境评价值是否大于等于门限值来判断佩戴者是否位于行驶的车辆内。
因此,仍请参见图3,步骤S2可进一步包括:S21、判断情境评价值是否大于等于门限值。
若情境评价值大于或等于门限值,则判定佩戴者位于行驶的车辆内;而若情境评价值小于门限值,则判定佩戴者未位于行驶的车辆内。
门限值可设置初始值为“2或3”。并可针对不同的门限值,接收用户的反馈,检验使用该门限值进行判定的正确性,最后选取一个最优的门限值。
在本发明其他实施例中,上述所有实施例中的控制方法还可包括如下步骤:
在满足重启条件时,重新启动头戴式设备中的图像采集装置采集环境图像以及后续的判断操作。
重启条件可包括,佩戴者所处情境发生变化,例如所处情境由位于行驶的车辆内变化为未位于行驶的车辆内(例如,佩戴者离开车辆,或者,车辆由行驶变为停驶),以及,所处情境由未位于行驶的车辆内变化为位于行驶的车辆内(例如佩戴者进入车辆,或者,车辆由停驶变为行驶)中的至少一种。
更具体的,可根据本次计算得出的情境评价值(F1)与上一次计算得出的情境评价值(F0)相比较来判断是否符合重启条件:当F0小于门限值而F1大于等于门限值,或者,F0大于等于门限值而F1小于门限值时,判定佩戴者所处情境发生变化。
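The threshold-crossing restart condition described above can be sketched as follows (a minimal illustrative sketch; the function name and boolean return convention are assumptions, not part of the patent):

```python
def context_changed(f_prev, f_curr, threshold):
    """Restart condition from the text: the wearer's context is considered
    to have changed when the context evaluation value crosses the threshold
    between the previous sample (F0) and the current sample (F1)."""
    rose = f_prev < threshold <= f_curr   # e.g. wearer entered a moving vehicle
    fell = f_prev >= threshold > f_curr   # e.g. wearer left the moving vehicle
    return rose or fell
```

With a threshold of 2, a jump from 1.5 to 2.5 (or a drop back below 2) would trigger re-capturing the environment image and repeating the subsequent judgment.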
基于情境评价值,上述控制方法的流程还可如图4和图5所示。
在本发明其他实施例中,上述步骤S7可进一步包括:将情境数据代入情境评价函数,得到情境评价值。
更具体的,情境评价函数可包括信号强度评价函数f1(x),日程表评价函数 f2(y),环境噪声评价函数f3(z)和运动速度评价函数f4(v)中的至少一种。其中,x表示信号强度数据,y表示用户日程表数据,z表示环境噪声强度评价值,v表示运动速度数据。
情境评价函数可记为F(x,y,z,v)。
在本发明其他实施例中,上述“将情境数据代入情境评价函数,得到情境评价值”可包括:
将x代入f1(x)得到信号强度评价值,将y代入f2(y)得到日程表评价值,将z代入f3(z)得到环境噪声强度评价值,以及,将v代入f4(v),得到运动速度评价值中的至少一种。
更具体的,F(x,y,z,v)=f1(x)+f2(y)+f3(z)+f4(v)。此时,情境评价值等于信号强度评价值、日程表评价值、环境噪声强度评价值和运动速度评价值之和。
下面将对各函数进行详细介绍。
一,信号强度评价函数:
f1(x)可有多种表达公式,例如,f1(x)=α1x,其中,α1表示信号强度权重,α1>0,x表示检测到的车载无线网络(WiFi或其他无线通信网络)的信号强度。
x可由HMD中的车载无线网络连接模块(例如WiFi模块)或外部设备(例如手机)采集。x值越大,HMD越有可能在车辆内。
更具体的,x=90+P,P表示实际信号强度,P的取值范围为[-90 dbm,0dbm],α1=1/90。
x=90+P,以及α1的取值是依据下述假定而得到的:
假定远离车辆时能检测到的最弱的信号强度是-90dbm,而用户在车辆内的信号强度为0dbm,也即,可以检测到的实际信号强度范围是(-90dbm,0dbm)。然 后,做归一化处理令x=90+P,α1=1/90,则f1(x)的取值范围是[0,1]。
当然,x=90+P,α1=1/90只是本实施例所举的例子之一,本领域技术人员可根据实际情况进行灵活设置,例如,令α1=1/180,令x=180+P等。
二,日程表评价函数
f2(y)可有多种表达公式,例如,
$$f_2(y)=\begin{cases}\alpha_2, & y\cap\Omega\neq\varnothing\\ 0, & y\cap\Omega=\varnothing\end{cases}$$
其中,α2表示日程表权重,α2>0;y包括采集数据时刻的日程表事件集合,Ω表示预设特定事件集合。
y可由HMD中的日程表模块采集提供。
更具体的,上述采集数据时刻的日程表事件集合具体为,落入采样窗口中的日程表事件的集合,采样窗口的开始时刻为采集数据时刻,采样窗口的时长可为2小时、1小时、30分钟、40分钟等。
上述预设特定事件可包括开会、上班和交流中的至少一种。预设特定事件集合中所包含的事件可由用户自定义设置。
下面,将举个具体的例子。假定,用户小A在2013年9月17日日程安排如下:
08:30-9:30在B酒店吃早饭
10:50-12:00在C大厦开会
12:00-13:00和Y先生吃午饭
14:00-15:00到D大厦和H先生交流
15:00-18:00上班
18:30-19:00到E超市购买食品。
用户小A定义的预设事件集合Ω={开会,上班,交流},并设定了采样窗口为一小时。
假定y(9:30),是2013年9月17日上午9:30采集到的日程表事件集合,由于9:30到10:30之间日程表上没有要做的事情,因此y(9:30)=0,则f2(y(9:30))=0。
假定y(10:00),是2013年9月17日上午10:00采集到的日程表事件集合,其包括“在C大厦开会”,则其与Ω之间的交集为“开会”,f2(y(10:00))=α2。
更具体的,可取α2=1。α2亦可由用户设置,在此不作赘述。
三,环境噪声评价函数
f3(z)可有多种表达公式,例如,f3(z)=α3z。
其中,α3表示环境噪声强度权重,α3>0,z表示环境噪声强度。
环境噪声可由HMD中的话筒或外部专用声音采集装置采集。
更具体的,环境噪声强度数据的采集方式可包括:
步骤一,对采集到的声音进行频谱分析,得到频谱分量;
步骤二,在经频谱分析得到的频谱分量中,搜索环境噪音频谱分量;
步骤三,获取搜索到的环境噪音频谱分量的声强。
上述环境噪音包括刹车噪声、发动机噪声和路噪中的至少一种。
相应的,环境噪音频谱分量可包括刹车噪声频谱分量、发动机噪声频谱分量和路噪频谱分量中的至少一种。
在步骤一之前,可使用专业声强或声压采集装置预先采集典型刹车噪声、发动机噪声和路噪,并进行频谱分析,分别记录其频谱特性。
在步骤一中,采集到声音后,进行频谱分析,得到各频谱分量的频谱特性, 比对频谱特性,即可搜索出是否包含刹车噪声频谱分量、发动机噪声频谱分量、路噪频谱分量。
以环境噪音频谱分量包括刹车噪声频谱分量、发动机噪声频谱分量和路噪频谱分量这三种分量为例,上述“获取搜索到的环境噪音频谱分量的声强”可包括:
取刹车噪声频谱分量的声强、发动机噪声频谱分量的声强和路噪频谱分量的声强中的最大值,作为环境噪音频谱分量的声强。
假定,对采集到的声音进行频谱分析,得到五个频谱分量。这五个频谱分量中,包括发动机噪声频谱分量和路噪频谱分量,但没有刹车噪声频谱分量,则根据频谱分析结果,计算得到发动机噪声频谱分量的声强和路噪频谱分量的声强。如何计算是现有技术,在此不作赘述。从发动机噪声频谱分量的声强和路噪频谱分量的声强中选最大值,作为环境噪音频谱分量的声强。
或者,上述“获取搜索到的环境噪音频谱分量的声强”可包括:
计算刹车噪声频谱分量的声强z1、发动机噪声频谱分量的声强z2和路噪频谱分量的声强z3;
根据公式z=A*z1+B*z2+C*z3,计算得到环境噪音频谱分量的声强z。
A为刹车噪声频谱分量声强权重,B为发动机噪声频谱分量声强权重,C为路噪频谱分量声强权重。本领域技术人员可根据实际对其进行设置,例如,刹车声不是频繁出现的事件,则设置其权重取较小值。
同理,本领域技术人员可根据实际设置α3的取值。例如,如经过测量,发现z的分贝值取值范围为[0,60]。则可取α3=1/60,令f3(z)取值范围是[0,1]。当然,α3也可取其他值,例如1/70、1/90等等。
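The two ways of combining the per-component sound intensities into z described above (taking the maximum, or the weighted sum z = A·z1 + B·z2 + C·z3) can be sketched as follows. The default weight values are illustrative assumptions only; the text merely suggests giving infrequent brake noise a smaller weight:

```python
def ambient_noise_intensity(z_brake, z_engine, z_road,
                            A=0.1, B=0.45, C=0.45, mode="weighted"):
    """Combine brake / engine / road noise component intensities into z.

    mode="max"      -> take the strongest detected component;
    mode="weighted" -> z = A*z_brake + B*z_engine + C*z_road.
    The default weights are assumptions (brake noise is occasional,
    so it is given a small weight, as the text suggests).
    """
    if mode == "max":
        return max(z_brake, z_engine, z_road)
    return A * z_brake + B * z_engine + C * z_road
```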
四,运动速度评价
f4(v)可有多种表达公式,例如,
$$f_4(v)=\begin{cases}\beta_1 v+t_1, & 0\le v\le v_0\\ \beta_2 v+t_2, & v>v_0\end{cases}$$
v0表示速度门限,β1表示第一运动速度权重,β2表示第二运动速度权重,t1表示第一速度影响最小值,t2表示第二速度影响最小值,β2≥β1>0,t2≥t1>0。
v可由HMD中的GPS模块或加速度传感器采集计算,也可直接由车载系统提供。
v0是一个速度门限值,超过此门限值,速度越快用户越有可能在行驶的车辆内。
更具体的,v0=30km/h,β1=1/90,t1=0.01,β2=1/60,t2=0.1。假定v的取值范围为[0,120],则f4(v)取值范围为[0.01,2.1]。
当然,本领域技术人员可对v0、β1、β2、t1、t2的取值进行灵活设置,在此不作赘述。
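Putting the four component functions together, the context evaluation F(x,y,z,v) = f1(x)+f2(y)+f3(z)+f4(v) and the threshold decision can be sketched as follows. This is a minimal illustration using the example parameter values above (α1 = 1/90, α2 = 1, α3 = 1/60, v0 = 30 km/h, β1 = 1/90, t1 = 0.01, β2 = 1/60, t2 = 0.1); all of them are tunable, and the function/parameter names are assumptions:

```python
def f1(p_dbm, alpha1=1/90):
    """Signal-strength term: normalize the measured strength P (dBm) as x = 90 + P."""
    return alpha1 * (90 + p_dbm)

def f2(events, omega, alpha2=1.0):
    """Schedule term: alpha2 when the sampled schedule events intersect the preset set."""
    return alpha2 if set(events) & set(omega) else 0.0

def f3(z_db, alpha3=1/60):
    """Ambient-noise term: proportional to the noise intensity z (dB)."""
    return alpha3 * z_db

def f4(v, v0=30.0, beta1=1/90, t1=0.01, beta2=1/60, t2=0.1):
    """Speed term: piecewise linear, with a steeper line above the threshold v0 (km/h)."""
    return beta1 * v + t1 if v <= v0 else beta2 * v + t2

def in_moving_vehicle(p_dbm, events, omega, z_db, v, threshold=2.0):
    """Judge 'wearer is in a moving vehicle' when F(x, y, z, v) >= threshold."""
    score = f1(p_dbm) + f2(events, omega) + f3(z_db) + f4(v)
    return score >= threshold
```

For example, strong in-vehicle Wi-Fi (P = -10 dBm), 50 dB of road noise, and a speed of 80 km/h give F ≈ 3.16, above an initial threshold of 2, so the wearer would be judged to be in a moving vehicle.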
下面将介绍采集环境图像。
在本发明其他实施例中,上述采集环境图像可包括:
设置拍照参数并进行拍照,得到环境图像。其中,拍照参数可根据上述v设置,或者,上述拍照参数为预设的标准拍照参数(也即,设置拍照参数为预设的标准拍照参数)。
更具体的,拍照参数可包括曝光时间、光圈口径与镜头焦距的比值的倒数F、感光度和所选对焦点。
如何根据运动速度数据v设置拍照参数可有多种方式,例如:
当v1<v<v2时,确定曝光时间为1/64秒,F为4,感光度为ISO400。
当v2<v<v3时,确定曝光时间为1/32秒,F为2.8,感光度为ISO140。
而所选对焦点可包括,请参见图6或图7,在以(针对HMD佩戴者观察到的画面)画面中心的对焦点为原点、以水平方向为x轴方向、铅直方向为y轴方向的直角坐标系中,位于第三象限的对焦点,以及位于x轴负半轴上的对焦点。
当然,上述仅为本实施例所提供的具体例子,本领域技术人员可根据实际,设计如何设置拍照参数,在此不作赘述。
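The speed-dependent selection of shooting parameters in the example above can be sketched as follows. The bracket boundaries v1, v2, v3 and the fallback row for v ≤ v1 are assumptions filled in for illustration; the text only gives the two middle brackets:

```python
def photo_params(v, v1=30.0, v2=60.0, v3=120.0):
    """Map motion speed v (km/h) to shooting parameters, following the example
    values in the text. 'F' is the reciprocal of the ratio of aperture
    diameter to lens focal length; 'iso' is the sensitivity."""
    if v1 < v <= v2:
        return {"exposure_s": 1 / 64, "F": 4.0, "iso": 400}
    if v2 < v <= v3:
        return {"exposure_s": 1 / 32, "F": 2.8, "iso": 140}
    # Outside the two brackets given in the text: fall back to preset
    # standard parameters (these values are assumed for completeness).
    return {"exposure_s": 1 / 100, "F": 5.6, "iso": 200}
```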
至于预设的标准拍照参数,其获取的方式可如下:
用户主动触发HMD中的摄像头拍照,所拍照片作为标准环境图像,而拍摄该标准环境图像时所采用的拍照参数将被保存下来,作为标准拍照参数。
之后,再启动摄像头拍照时,自动将拍照参数调整为标准拍照参数进行拍照。
下面将介绍如何判断佩戴者的所处座位是否为驾驶座。
在本发明其他实施例中,上述所有实施例中的步骤S4可包括:
检测采集到的环境图像中是否包含预设标记物;若检测到包含预设标记物,则判定佩戴者所处座位是驾驶座;若检测不到包含预设标记物,则判定佩戴者的所处座位不是驾驶座。
更具体的,检测采集的环境图像中是否包含预设标记物包括:
步骤一、提取采集的环境图像的图像纹理特征;
提取图像纹理特征为现有技术,在此不作赘述。
步骤二,将预设标记物的图像纹理特征与提取的图像纹理特征进行匹配;在匹配成功时,检测出包含预设标记物,否则,检测出不包含预设标记物。
上述标记物可包括方向盘。
在本发明其他实施例中,上述标记物还可包括仪表盘和汽车A柱中的至少一种。
以方向盘为例,方向盘图像可以提取x个图像纹理特征,将采集到的环境图像的图像纹理特征与方向盘图像的图像纹理特征相匹配,如果与N个图像纹理特征匹配(N小于等于x),则判定匹配成功,否则判定匹配失败。
仪表盘和汽车A柱的匹配与之类似,在此不作赘述。
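The marker-matching rule above (declare the driver's seat when at least N of the marker's texture features are found in the environment image) can be sketched as follows. The feature representation is an assumption; in practice the features would come from a texture-extraction step:

```python
def contains_marker(image_features, marker_features, n_required):
    """Return True when at least n_required of the preset marker's texture
    features (e.g. those of a steering-wheel template) also appear among
    the features extracted from the captured environment image."""
    image_set = set(image_features)
    matched = sum(1 for f in marker_features if f in image_set)
    return matched >= n_required
```

If a steering-wheel template yields x features and N of them (N ≤ x) are found in the image, the wearer's seat is judged to be the driver's seat; the same check applies to the dashboard or A-pillar markers.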
或者在本发明其他实施例中,上述所有实施例中的步骤S4可包括:
计算采集到的环境图像与标准环境图像之间的相似度;
在相似度大于(或大于等于)预设的相似度阈值时,判断出佩戴者所处座位是驾驶座;否则,判断出佩戴者的所处座位不是驾驶座。
在计算相似度时,可计算两图像间灰度的相似度,也可计算图像纹理特征的相似度。例如,假定标准环境图像可以提取出G个图像纹理特征,将采集到的环境图像的图像纹理特征与标准环境图像的图像纹理特征相匹配,如果与m个图像纹理特征匹配(m小于等于G),则判定匹配成功,否则判定匹配失败。
在本发明其他实施例中,在确定出佩戴者处于驾驶状态时,上述所有实施例中的控制方法还可包括:
在检测到连接至车载无线网络后,将已连接至HMD的可穿戴设备的设备信息,发送给车载无线网络所属的车载系统。
可穿戴设备并不属于头戴式设备,只是通过wifi、蓝牙等连接至头戴式设备。利用头戴式设备的屏幕显示服务可以显示可穿戴设备采集到的数据。
穿戴设备可包括手表、心率臂带和胸带中的至少一种。而设备信息可包括设备属性(功能),配对信息(是否支持蓝牙,wifi,红外),以及认证信息(<密码>,设备名)等中的至少一种。
车载系统在获取到设备信息后,可搜索上述可穿戴设备(例如心率带),并与搜索到的可穿戴设备建立连接。车载系统在建立连接后,还可将与之建立连接的可穿戴设备的状态信息发送给HMD,以便HMD刷新自己管理的已连接设备的状态。车载系统可使用该心率带数据评估司机疲劳、困倦状态。
令车载系统连接可穿戴设备,除了可避免HMD的佩戴者在驾车时通过HMD的近眼显示设备查看可穿戴设备采集到的数据外,还可减少HMD的能耗。
而出于减少HMD能耗的考虑,在本发明其他实施例中,无论HMD的佩戴者是否处于驾驶状态,都可令与HMD建立连接的可穿戴设备转而与车载系统建立连接。
在本发明其他实施例中,在确定出佩戴者处于驾驶状态时,上述所有实施例中的控制方法还可包括:
在检测到与车载无线网络断开连接时,搜索与头戴式设备建立过连接的可穿戴设备(或者说,与HMD断开连接的可穿戴设备),并与搜索到的可穿戴设备重新建立连接。
也即,在检测到与车载无线网络断开连接后,可主动搜索曾经与HMD连接的可穿戴设备,并与之建立连接,接收其数据。
与之相对应,本发明实施例还提供头戴式设备控制装置。
头戴式设备控制装置可以是安装于头戴式设备中的软件逻辑模块,或者,也可以是独立于头戴式设备之外的控制器,或者,也可以是头戴式设备的处理器,或者,也可以是内置于头戴式设备中除处理器之外的芯片。请参见图8,头戴式设备控制装置800可包括情境数据采集单元1、第一判断单元2、图像采集控制单元3、第二判断单元4和服务管理单元5,其中:
情境数据采集单元1用于,采集情境数据。
上述情境数据可包括运动速度数据v、车载无线网络信号强度数据x、用户日程表数据y和环境噪音强度数据z中的至少一种。
根据前述的记载可知,x(实际信号强度P)可由HMD中的车载无线网络连接模块(例如WiFi模块)或外部设备(例如手机)提供,y可由HMD中的日程表模块提供,z可由HMD中的话筒或外部专用声音采集装置提供,v可由HMD中的GPS模块或加速度传感器采集计算得到,也可直接由车载系统提供。
因此,情境数据采集单元1可从HMD中的车载无线网络连接模块或外部设备处获取x或P,从HMD中的日程表模块处获取y,从HMD中的话筒或外部专用声音采集装置处获取z,从HMD中的GPS模块/加速度传感器/车载系统处获取v。
第一判断单元2用于,使用情境数据判断头戴式设备的佩戴者是否位于行驶的车辆内;
图像采集控制单元3用于,判定佩戴者位于行驶的车辆内时,控制头戴式设备中的图像采集装置采集环境图像;
第二判断单元4用于,使用采集到的环境图像,判断佩戴者所处座位是否是驾驶座;
服务管理单元5用于,判定佩戴者所处座位是驾驶座时,禁用预设服务。其中,预设服务至少包括近眼显示器的屏幕显示服务。
服务管理单元5还可用于判定佩戴者所处座位不是驾驶座时,或者,判定佩戴者未位于行驶的车辆内时,启用或保持上述预设服务。
相关内容请参见本文前述记载,在此不作赘述。
需要说明的是,当头戴式设备控制装置是独立于头戴式设备之外的控制器,或者是内置于头戴式设备中除处理器之外的芯片时,其可向头戴式设备中的处理器发送控制指令,令头戴式设备的处理器停止提供预设服务,从而达到禁用预设服务的目的。
而当头戴式设备控制装置是安装于头戴式设备中的软件逻辑模块,或者,头戴式设备的处理器时,则可直接禁用预设服务。
在本发明其他实施例中,上述头戴式设备控制装置800还可包括:
第一连接单元用于,在检测到连接至车载无线网络后,将已连接至头戴式设备的可穿戴设备的设备信息,发送给车载无线网络所属的车载系统,以便车载系统根据获取到的设备信息进行搜索,并与搜索到的可穿戴设备建立连接;
第二连接单元用于,在检测到与车载无线网络断开连接时,搜索与头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
相关内容请参见本文前述记载,在此不作赘述。
在本发明其他实施例中,上述头戴式设备控制装置800还可包括第一连接单元和第二连接单元,其中:
第一连接单元用于,在检测到连接至车载无线网络后,将已连接至头戴式设备的可穿戴设备的设备信息,发送给车载无线网络所属的车载系统,以便车载系统根据获取到的设备信息进行搜索,并与搜索到的可穿戴设备建立连接;
第二连接单元用于,在检测到与车载无线网络断开连接时,搜索与头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
相关内容请参见本文前述记载,在此不作赘述。
在本发明其他实施例中,上述头戴式设备控制装置800还可包括:
转换单元,用于判定所处座位是驾驶座时,将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
相关内容请参见本文前述记载,在此不作赘述。
在本发明其他实施例中,上述头戴式设备控制装置800还可包括:
推送单元,用于判定佩戴者所处座位是驾驶座时,将屏幕显示服务推送至头戴式设备之外的显示屏上。
相关内容请参见本文前述记载,在此不作赘述。
图9为本发明实施例提供的头戴式设备控制装置800(作为独立于头戴式设备之外的控制器)的硬件结构示意图,其可包括处理器801、存储器802、总线803和通信接口804。处理器801、存储器802、通信接口804通过总线803相互连接;存储器802,用于存放程序。具体地,程序可以包括程序代码,所述程序代码包括计算机操作指令。
存储器802可能包含高速随机存取存储器(random access memory,简称RAM)存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
处理器801可以是通用处理器,包括中央处理器(Central Processing Unit,简称CPU)、网络处理器(Network Processor,简称NP)等;还可以是数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
处理器801执行存储器802所存放的程序,用于实现本发明实施例提供的头戴式设备控制方法,包括:
采集情境数据;情境数据包括运动速度数据、车载无线网络信号强度数据、用户日程表数据和环境噪声强度数据中的至少一种;
使用上述情境数据判断佩戴者是否位于行驶的车辆内;
若判定佩戴者位于行驶的车辆内,则控制头戴式设备中的图像采集装置采集环境图像;
使用采集到的环境图像,判断佩戴者所处座位是否为驾驶座;
若判定佩戴者所处座位是驾驶座,则禁用预设服务,预设服务包括近眼显示器的屏幕显示服务。
此外,上述处理器801亦可用于完成本文方法部分所介绍的头戴式设备控制方法中的其他步骤,以及各步骤的细化,在此不作赘述。
CPU和存储器可集成于同一芯片内,也可为独立的两个器件。
与之对应,本发明实施例还提供一种头戴式设备,其可包括图像采集装置、近眼显示器和上述的头戴式设备控制装置,其中,头戴式设备控制装置800分别与图像采集装置和近眼显示器相连接。
图10示出了头戴式设备的一种具体的结构。
需要说明的是,现有技术中有限制开车的司机使用手机的技术方案,在该方案中,通过速度检测确定司机是否正在驾驶,如果检测到驾驶则通过位于司机座位下的信号发生装置产生控制信号,通过人体,把信号传导给用户手中的手机,控制手机的功能,禁止司机边开车边使用手机;如果速度检测发现汽车停止,则终止对手机的限制允许司机使用手机。
上述现有技术方案必须使用具有人体通信接收装置的特殊手机,并且需要对司机驾驶的汽车进行改造安装信号发生装置才可实现。
而在本发明实施例中,头戴式设备控制装置可以是安装于头戴式设备中的软件逻辑模块,或者,也可以是头戴式设备的处理器,或者,也可以是内置于头戴式设备中除处理器之外的芯片。进一步的,头戴式设备控制装置的情境数据采集单元1可从HMD中的车载无线网络连接模块处获取x或P,从HMD中的日程表模块处获取y,从HMD中的话筒处获取z,从HMD中的GPS模块/加速度传感器处获取v。
也即,从头戴式设备的角度来讲,其可通过分析自身采集的数据,来判断使用者是否在驾驶,如判定在驾驶,则自动禁用预设服务。在此过程中,并不需要通过人体通信,也不需要汽车座位下的某设备发出控制指令,从而头戴设备并不需要使用具有人体通信接收装置,也不需要对汽车进行改造。
参见图11,为本发明提供的头戴式设备控制方法的流程图,其至少可包括如下步骤:
S101、使用采集的状态数据判断头戴式设备的佩戴者所处状态。
其中,上述状态数据包括情境数据和环境图像。而情境数据可包括运动速度数据v、车载无线网络信号强度数据x、用户日程表数据y和环境噪音强度数据z中的至少一种。
而上述状态可包括驾驶状态和非驾驶状态。
S102、在判定所述佩戴者处于驾驶状态时,禁用预设服务。
上述预设服务包括近眼显示器的屏幕显示服务、手工输入服务和投影显示服务中的至少一种。
一般的,HMD上具有触摸板和按钮,佩戴者可通过触摸触摸板,或者,按压/拨动按钮与HMD交互。而触摸、按压、拨动等也可能分散佩戴者注意力。因此,更具体的,手工输入服务可包括触摸板和按钮手工输入。
在本实施例中,在使用状态数据判定出佩戴者处于驾驶状态时,禁用预设服务。以减少对佩戴者注意力的分散,提高驾驶安全性。
在本发明其他实施例中,在确定出佩戴者处于驾驶状态时,上述所有实施例中的控制方法还可包括:
将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
在HMD中可维护一个紧急联系人列表,该列表里的联系人都是预设紧急联系人。
更具体的,上述信息可包括邮件标题和短消息中的至少一种。当接收到来自紧急联系人的邮件、短消息时,可将邮件标题、短消息转换成语音播放给佩戴者。
在本发明其他实施例中,在确定出佩戴者处于驾驶状态时,上述所有实施例中的控制方法还可包括:将屏幕显示服务推送至头戴式设备之外的显示屏上。例如,可将其推送到车载系统的显示屏或者同车乘客的终端上进行显示。
在本发明其他实施例中,请参见图12,上述控制方法还可包括如下步骤:
S103、在判定上述佩戴者不处于驾驶状态时,启用或保持预设服务。
在本发明其他实施例中,请参见图13,上述步骤S101(使用所述状态数据判断所述头戴式设备的佩戴者所处状态)可进一步包括:
S2、使用情境数据判断头戴式设备的佩戴者是否位于行驶的车辆内;
S4、若判定上述佩戴者位于行驶的车辆内,使用上述环境图像判断佩戴者所处座位是否是驾驶座。
需要说明的是,在本实施例中,当上述佩戴者位于行驶的车辆内并且所处座位为驾驶座时,判定上述头戴式设备的佩戴者处于驾驶状态。或者说,驾驶状态包括佩戴者位于行驶的车辆内并且所处座位为驾驶座。
也即,步骤102可进一步包括如下步骤:
S5、若判定所处座位是驾驶座,则禁用预设服务。
在本发明其他实施例中,上述步骤S101(使用所述状态数据判断所述头戴式设备的佩戴者所处状态)还可进一步包括:
在判定上述佩戴者未位于行驶的车辆内或者上述所处座位不为驾驶座时,判定上述佩戴者处于非驾驶状态。
也可以说,非驾驶状态包括上述佩戴者未位于行驶的车辆内或者上述所处座位不为驾驶座。
在本发明其他实施例中,可先采集环境图像和情景数据(S0)。再执行步骤S101或步骤S2(请参见图14)。进一步的,可先采集环境图像,再采集情景数据,再执行步骤S101或步骤S2。反之,也可先采集情景数据,再采集环境图像,再执行步骤S101或步骤S2。
或者,请参见图15,可先采集情景数据(S1),在判定上述佩戴者位于行驶的车辆内后,再采集环境图像(S3)。
在本发明其他实施例中,请参见图16,上述步骤S101还可进一步包括如下步骤:
S2’、使用采集的环境图像,判断佩戴者所处座位是否为驾驶座。
S4’、若判定上述佩戴者所处座位为驾驶座,使用上述情境数据判断上述头戴式设备的佩戴者是否位于行驶的车辆内。
当上述佩戴者位于行驶的车辆内并且所处座位为驾驶座时,判定上述头戴式设备的佩戴者处于驾驶状态。
也即,步骤102可进一步包括如下步骤:
S5’、若判定所处座位是驾驶座,则禁用预设服务。
在本发明其他实施例中,上述步骤S101(使用所述状态数据判断所述头戴式设备的佩戴者所处状态)还可进一步包括:
在判定上述所处座位不为驾驶座或者上述佩戴者未位于行驶的车辆内时,判定上述佩戴者处于非驾驶状态。
在本发明其他实施例中,可先采集环境图像和情景数据(S0’)。再执行步骤S101或步骤S2’(请参见图17)。
进一步的,也可先采集环境图像,再采集情景数据,再执行步骤S101或步骤S2’。反之,也可先采集情景数据,再采集环境图像,再执行步骤S101或步骤S2’。
或者,请参见图18,可先采集环境图像(S1'),在判定上述佩戴者所处座位是驾驶座后,再采集情景数据(S3')。
在本实施例中,可周期性采集环境图像。
对应于图13-18提供的技术方案,步骤S103可进一步包括如下步骤:S6、若判定所处座位不是驾驶座,或者,若判定佩戴者未位于行驶的车辆内,启用或保持预设服务。
特别地,请参见图19,如果执行步骤S2时判定佩戴者未位于行驶的车辆内,那么无需执行步骤S3对应的采集环境图像动作,可直接判定用户处于非驾驶状态,执行步骤S6,启用或保持预设服务。
或者,请参见图20,如果执行步骤S2’时判定佩戴者未处于驾驶座,那么无需执行步骤S3’对应的采集情境数据动作,可直接判定用户处于非驾驶状态,执行步骤S6,启用或保持预设服务。
在本发明其他实施例中,上述使用情境数据判断头戴式设备的佩戴者是否位于行驶的车辆内,以及使用采集到的环境图像判断佩戴者所处座位是否是驾驶座可无先后顺序,并行执行。
当佩戴者位于行驶的车辆内并且所处座位为驾驶座时,判定头戴式设备的佩戴者处于驾驶状态。而当佩戴者未位于行驶的车辆内或者所处座位不是驾驶座时,判定头戴式设备的佩戴者不处于驾驶状态。
综上,在上述多个实施例中,采集环境图像、采集情景数据的顺序是不固定的,可以先采集环境图像和情景数据再进行判断;也可以是先采集第一种数据(环境图像或情景数据),使用第一种数据进行第一个判断(判断头戴式设备的佩戴者是否位于行驶的车辆内或者判断佩戴者所处座位是否是驾驶座)之后,再采集第二种数据(情景数据或环境图像),然后使用第二种数据进行第二个判断(判断佩戴者所处座位是否是驾驶座或者判断头戴式设备的佩戴者是否位于行驶的车辆内);还可以是使用第一种数据进行第一个判断之后,如不进行第二种判断则不采集第二种数据。
下面,将具体介绍如何使用情境数据判断佩戴者是否位于行驶的车辆内。
在本发明其他实施例中,上述所有实施例中方法还可包括如下步骤:
根据情境数据计算得到情境评价值(S7)。
可根据情境评价值判断佩戴者是否位于行驶的车辆内。例如,可根据情境评价值是否大于等于门限值来判断佩戴者是否位于行驶的车辆内。
而上述“使用情境数据判断上述佩戴者是否位于行驶的车辆内”可进一步包括:
判断情境评价值是否大于等于门限值。若情境评价值大于或等于门限值,则判定佩戴者位于行驶的车辆内;而若情境评价值小于门限值,则判定佩戴者未位于行驶的车辆内。
由于前述介绍了多种流程,以图19或20所给出的流程为例,基于情境评价值的流程可参见图21或22。
门限值可设置初始值为“2或3”。并可针对不同的门限值,接收用户的反馈,检验使用该门限值进行判定的正确性,最后选取一个最优的门限值。
在本发明其他实施例中,上述所有实施例中的控制方法还可包括如下步骤:
在满足重启条件时,重新启动头戴式设备中的图像采集装置采集环境图像以及后续的判断操作。
重启条件可包括,佩戴者所处情境发生变化,例如所处情境由位于行驶的车辆内变化为未位于行驶的车辆内(例如,佩戴者离开车辆,或者,车辆由行驶变为停驶),以及,所处情境由未位于行驶的车辆内变化为位于行驶的车辆内(例如佩戴者进入车辆,或者,车辆由停驶变为行驶)中的至少一种。
更具体的,可根据本次计算得出的情境评价值(F1)与上一次计算得出的情境评价值(F0)相比较来判断是否符合重启条件:当F0小于门限值而F1大于等于门限值,或者,F0大于等于门限值而F1小于门限值时,判定佩戴者所处情境发生变化。
基于佩戴者所处情境是否发生变化来重新启动采集环境图像操作,上述控制方法的流程还可进行很多种变化。例如,可如图23至图24所示。
在本发明其他实施例中,上述步骤S7可进一步包括:将情境数据代入情境评价函数,得到情境评价值。
更具体的,情境评价函数可包括信号强度评价函数f1(x),日程表评价函数f2(y),环境噪声评价函数f3(z)和运动速度评价函数f4(v)中的至少一种。其中,x表示信号强度数据,y表示用户日程表数据,z表示环境噪声强度评价值,v表示运动速度数据。
情境评价函数可记为F(x,y,z,v)。
在本发明其他实施例中,上述“将情境数据代入情境评价函数,得到情境评价值”可包括:
将x代入f1(x)得到信号强度评价值,将y代入f2(y)得到日程表评价值,将z代入f3(z)得到环境噪声强度评价值,以及,将v代入f4(v)得到运动速度评价值中的至少一种。
更具体的,F(x,y,z,v)=f1(x)+f2(y)+f3(z)+f4(v)。此时,情境评价值等于信号强度评价值、日程表评价值、环境噪声强度评价值和运动速度评价值之和。
下面将对各函数进行详细介绍。
一,信号强度评价函数:
f1(x)可有多种表达公式,例如,f1(x)=α1x,其中,α1表示信号强度权重,α1>0,x表示检测到的车载无线网络(WiFi或其他无线通信网络)的信号强度。
x可由HMD中的车载无线网络连接模块(例如WiFi模块)或外部设备(例如手机)采集。x值越大,HMD越有可能在车辆内。
更具体的,x=90+P,P表示实际信号强度,P的取值范围为[-90dbm,0dbm],α1=1/90。
x=90+P,以及α1的取值是依据下述假定而得到的:
假定远离车辆时能检测到的最弱的信号强度是-90dbm,而用户在车辆内的信号强度为0dbm,也即,可以检测到的实际信号强度范围是(-90dbm,0dbm)。然后,做归一化处理令x=90+P,α1=1/90,则f1(x)的取值范围是[0,1]。
当然,x=90+P,α1=1/90只是本实施例所举的例子之一,本领域技术人员可根据实际情况进行灵活设置,例如,令α1=1/180,令x=180+P等。
具体内容可参见本文前述记载,在此不作赘述。
二,日程表评价函数
f2(y)可有多种表达公式,例如,
$$f_2(y)=\begin{cases}\alpha_2, & y\cap\Omega\neq\varnothing\\ 0, & y\cap\Omega=\varnothing\end{cases}$$
其中,α2表示日程表权重,α2>0;y包括采集数据时刻的日程表事件集合,Ω表示预设特定事件集合。
y可由HMD中的日程表模块采集提供。
具体内容可参见本文前述记载,在此不作赘述。
三,环境噪声评价函数
f3(z)可有多种表达公式,例如,f3(z)=α3z。
其中,α3表示环境噪声强度权重,α3>0,z表示环境噪声强度。
环境噪声可由HMD中的话筒或外部专用声音采集装置采集。
具体内容可参见本文前述记载,在此不作赘述。
四,运动速度评价
f4(v)可有多种表达公式,例如,
$$f_4(v)=\begin{cases}\beta_1 v+t_1, & 0\le v\le v_0\\ \beta_2 v+t_2, & v>v_0\end{cases}$$
v0表示速度门限,β1表示第一运动速度权重,β2表示第二运动速度权重,t1表示第一速度影响最小值,t2表示第二速度影响最小值,β2≥β1>0,t2≥t1>0。
v可由HMD中的GPS模块或加速度传感器采集计算,也可直接由车载系统提供。
v0是一个速度门限值,超过此门限值,速度越快用户越有可能在行驶的车辆内。
更具体的,v0=30km/h,β1=1/90,t1=0.01,β2=1/60,t2=0.1。假定v的取值范围为[0,120],则f4(v)取值范围为[0.01,2.1]。
当然,本领域技术人员可对v0、β1、β2、t1、t2的取值进行灵活设置,在此不作赘述。
具体内容可参见本文前述记载,在此不作赘述。
下面将介绍如何采集环境图像。
在本发明其他实施例中,上述环境图像可通过如下方式采集:
设置拍照参数并进行拍照,得到环境图像。其中,拍照参数可根据上述v设置,或者,上述拍照参数为预设的标准拍照参数(也即,设置拍照参数为预设的标准拍照参数)。
更具体的,拍照参数可包括曝光时间、光圈口径与镜头焦距的比值的倒数F、感光度和所选对焦点。
如何根据运动速度数据v设置拍照参数可参见本文前述介绍,在此不作赘述。
下面将介绍如何判断佩戴者的所处座位是否为驾驶座。
在本发明其他实施例中,上述所有实施例中的“使用采集的环境图像,判断佩戴者所处座位是否为驾驶座”可包括:
检测采集到的环境图像中是否包含预设标记物;若检测到包含预设标记物,则判定佩戴者所处座位是驾驶座;若检测不到包含预设标记物,则判定佩戴者的所处座位不是驾驶座。
具体内容请参见本文前述记载,在此不作赘述。
或者在本发明其他实施例中,上述所有实施例中的“使用采集的环境图像,判断佩戴者所处座位是否为驾驶座”可包括:
计算采集到的环境图像与标准环境图像之间的相似度;
在相似度大于(或大于等于)预设的相似度阈值时,判断出佩戴者所处座位是驾驶座;否则,判断出佩戴者的所处座位不是驾驶座。
具体内容请参见本文前述记载,在此不作赘述。
在本发明其他实施例中,在确定出佩戴者处于驾驶状态之后,上述所有实施例中的控制方法还可包括:
在检测到连接至车载无线网络后,将已连接至HMD的可穿戴设备的设备信息,发送给车载无线网络所属的车载系统。
车载系统在获取到设备信息后,可搜索上述可穿戴设备,并与搜索到的可穿戴设备建立连接。
此外,车载系统在建立连接后,还可将与之建立连接的可穿戴设备的状态信息发送给HMD,以便HMD刷新自己管理的已连接设备的状态。车载系统可使用心率带等可穿戴设备采集的数据评估司机疲劳、困倦状态。
令车载系统连接可穿戴设备,除了可避免HMD的佩戴者在驾车时通过HMD的近眼显示设备查看可穿戴设备采集到的数据外,还可减少HMD的能耗。
而出于减少HMD能耗的考虑,在本发明其他实施例中,无论HMD的佩戴者是否处于驾驶状态,都可令与HMD建立连接的可穿戴设备转而与车载系统建立连接。
具体内容请参见本文前述记载,在此不作赘述。
在本发明其他实施例中,在确定出佩戴者处于驾驶状态时,上述所有实施例中的控制方法还可包括:
在检测到与车载无线网络断开连接时,搜索与头戴式设备建立过连接的可穿戴设备(或者说,与HMD断开连接的可穿戴设备),并与搜索到的可穿戴设备重新建立连接。
当然,也可在判定佩戴者处于非驾驶状态时,搜索与头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
与之相对应,本发明实施例还提供头戴式设备控制装置。
头戴式设备控制装置可以是安装于头戴式设备中的软件逻辑模块,或者,也可以是独立于头戴式设备之外的控制器,或者,也可以是头戴式设备的处理器,或者,也可以是内置于头戴式设备中除处理器之外的芯片。
请参见图25,头戴式设备控制装置25可包括:
状态判断单元251,用于使用采集的状态数据判断头戴式设备的佩戴者所处状态;
其中,上述状态可包括驾驶状态和非驾驶状态。上述状态数据可包括情境数据和环境图像。而上述情境数据可包括运动速度数据、车载无线网络信号强度数据、用户日程表数据和环境噪音强度数据中的至少一种。
具体内容请参见本文前述记载,在此不作赘述。
服务管理单元252,用于在佩戴者处于驾驶状态时,禁用预设服务,所述预设服务包括近眼显示器的屏幕显示服务。
服务管理单元252还可用于判定上述佩戴者不处于驾驶状态时,启用或保持上述预设服务。
相关内容请参见本文前述记载,在此不作赘述。
需要说明的是,当头戴式设备控制装置是独立于头戴式设备之外的控制器,或者是内置于头戴式设备中除处理器之外的芯片时,其可向头戴式设备中的处理器发送控制指令,令头戴式设备的处理器停止提供预设服务,从而达到禁用预设服务的目的。
而当头戴式设备控制装置是安装于头戴式设备中的软件逻辑模块,或者,头戴式设备的处理器时,则可直接禁用预设服务。
在本发明其他实施例中,上述头戴式设备控制装置25还可包括:
第一连接单元用于,在检测到连接至车载无线网络时,将已连接至头戴式设备的可穿戴设备的设备信息,发送给车载无线网络所属的车载系统,以便车载系统根据获取到的设备信息进行搜索,并与搜索到的可穿戴设备建立连接;
进一步的,第一连接单元可在判定所述佩戴者处于驾驶状态之后,检测到连接至车载无线网络时,将已连接至头戴式设备的可穿戴设备的设备信息,发送给车载无线网络所属的车载系统。
第二连接单元用于,在检测到与车载无线网络断开连接时,搜索与头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
或者,第二连接单元可用于,判定上述佩戴者处于非驾驶状态时,搜索与头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
或者,第二连接单元可用于,在确定出佩戴者处于驾驶状态之后,在检测到与车载无线网络断开连接时,搜索与头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
相关内容请参见本文前述记载,在此不作赘述。
在本发明其他实施例中,上述头戴式设备控制装置25还可包括:
信息转换单元,用于判定上述佩戴者处于驾驶状态时,将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
相关内容请参见本文前述记载,在此不作赘述。
在本发明其他实施例中,上述头戴式设备控制装置25还可包括:
屏幕显示服务推送单元,用于判定上述佩戴者处于驾驶状态时,将屏幕显示服务推送至头戴式设备之外的显示屏上。
相关内容请参见本文前述记载,在此不作赘述。
图26为本发明实施例提供的头戴式设备控制装置25(作为独立于头戴式设备之外的控制器)的硬件结构示意图,其可包括处理器251、存储器252、总线253和通信接口254。处理器251、存储器252、通信接口254通过总线253相互连接;存储器252,用于存放程序。具体地,程序可以包括程序代码,所述程序代码包括计算机操作指令。
存储器252可能包含高速随机存取存储器(random access memory,简称RAM)存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
处理器251可以是通用处理器,包括中央处理器(Central Processing Unit,简称CPU)、网络处理器(Network Processor,简称NP)等;还可以是数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
处理器251执行存储器252所存放的程序,用于实现本发明实施例提供的头戴式设备控制方法,包括:
使用采集的状态数据判断头戴式设备的佩戴者所处状态;
在判定上述佩戴者处于驾驶状态时,禁用预设服务。
其中,上述状态数据包括情境数据和环境图像。而情境数据可包括运动速度数据v、车载无线网络信号强度数据x、用户日程表数据y和环境噪音强度数据z中的至少一种。
而上述状态可包括驾驶状态和非驾驶状态。
其中,预设服务包括近眼显示器的屏幕显示服务、手工输入服务和投影显示服务中的至少一种。
具体内容请参见本文前述记载,在此不赘述。
此外,上述处理器251亦可用于完成本文方法部分所介绍的头戴式设备控制方法中的其他步骤,以及各步骤的细化,在此不作赘述。
例如,在本发明其他实施例中,上述存储器252进一步存放可执行指令,处理器251执行上述可执行指令,可完成如下步骤:
在判定上述佩戴者不处于驾驶状态时,启用或保持上述预设服务。
再例如,在本发明其他实施例中,上述存储器252进一步存放可执行指令,处理器251执行上述可执行指令,可完成如下步骤:
在判定佩戴者处于驾驶状态时,将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
再例如,在本发明其他实施例中,上述存储器252进一步存放可执行指令,处理器251执行上述可执行指令,可完成如下步骤:
在判定上述佩戴者处于驾驶状态时,将屏幕显示服务推送至上述头戴式设备之外的显示屏上。
再例如,在本发明其他实施例中,上述存储器252进一步存放可执行指令,处理器251执行上述可执行指令,可完成如下步骤(对应使用状态数据判断头戴式设备的佩戴者所处状态):
使用情境数据判断头戴式设备的佩戴者是否位于行驶的车辆内;
若判定佩戴者位于行驶的车辆内,使用环境图像判断佩戴者所处座位是否是驾驶座;
当佩戴者位于行驶的车辆内并且所处座位为驾驶座时,判定头戴式设备的佩戴者处于驾驶状态。
CPU和存储器可集成于同一芯片内,也可为独立的两个器件。
与之对应,本发明实施例还提供一种头戴式设备,其可包括图像采集装置、近眼显示器和上述的头戴式设备控制装置,其中,头戴式设备控制装置25分别与图像采集装置和近眼显示器相连接。
图27示出了头戴式设备的一种具体的结构。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。
结合本文中所公开的实施例描述的方法或算法的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本发明。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本发明的精神或范围的情况下,在其它实施例中实现。因此,本发明将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (35)

  1. 一种头戴式设备控制方法,其特征在于,包括:
    采集情境数据,所述情境数据包括运动速度数据、车载无线网络信号强度数据、用户日程表数据和环境噪音强度数据中的至少一种;
    使用所述情境数据判断所述头戴式设备的佩戴者是否位于行驶的车辆内;
    若判定所述佩戴者位于行驶的车辆内,则控制所述头戴式设备中的图像采集装置采集环境图像;
    使用采集到的环境图像,判断所述佩戴者所处座位是否是驾驶座;
    若判定所述佩戴者所处座位是驾驶座,则禁用预设服务,所述预设服务包括近眼显示器的屏幕显示服务。
  2. 如权利要求1所述的方法,其特征在于,还包括:若判定所述所处座位不是驾驶座,或者,若判定所述佩戴者未位于行驶的车辆内,则启用或保持所述预设服务。
  3. 如权利要求2所述的方法,其特征在于,还包括:
    将所述情境数据代入情境评价函数,得到情境评价值;
    所述使用所述情境数据判断所述佩戴者是否位于行驶的车辆内包括:判断所述情境评价值是否大于等于门限值;
    若所述情境评价值大于等于所述门限值,则判定所述佩戴者位于行驶的车辆内;
    若所述情境评价值小于所述门限值,则判定所述佩戴者未位于行驶的车辆内。
  4. 如权利要求3所述的方法,其特征在于,所述情境评价函数记为 F(x,y,z,v),包括信号强度评价函数f1(x),日程表评价函数f2(y),环境噪声评价函数f3(z)和运动速度评价函数f4(v)中的至少一种;所述x表示信号强度数据,所述y表示用户日程表数据,所述z表示环境噪声强度评价值,所述v表示运动速度数据;
    所述将所述情境数据代入情境评价函数,得到所述情境评价值包括:将所述x代入f1(x)得到信号强度评价值,将所述y代入f2(y)得到日程表评价值,将所述z代入f3(z)得到环境噪声强度评价值,以及,将所述v代入f4(v)得到运动速度评价值中的至少一种。
  5. 如权利要求4所述的方法,其特征在于,
    f1(x)=α1x,所述α1表示信号强度权重,α1>0;
    $$f_2(y)=\begin{cases}\alpha_2, & y\cap\Omega\neq\varnothing\\ 0, & y\cap\Omega=\varnothing\end{cases}$$
    所述α2表示日程表权重,并且α2>0;所述y包括采集数据时刻的日程表事件集合,所述Ω表示预设特定事件集合;
    f3(z)=α3z,所述α3表示环境噪声强度权重,α3>0;
    $$f_4(v)=\begin{cases}\beta_1 v+t_1, & 0\le v\le v_0\\ \beta_2 v+t_2, & v>v_0\end{cases}$$
    所述v0表示速度门限,所述β1表示第一运动速度权重,所述β2表示第二运动速度权重,所述t1表示第一速度影响最小值,所述t2表示第二速度影响最小值,β2≥β1>0,t2≥t1>0。
  6. 如权利要求1所述的方法,其特征在于,所述采集环境图像包括:设置拍照参数并进行拍照,得到所述环境图像;所述拍照参数根据所述运动速度数据确定,或者,所述拍照参数为预设的标准拍照参数。
  7. 如权利要求6所述的方法,其特征在于,所述使用采集到的环境图像,判断所述佩戴者所处座位是否是驾驶座包括:
    检测所述环境图像中是否包含预设标记物;
    若检测到包含所述预设标记物,则判定佩戴者所处座位是驾驶座;
    若检测不到包含所述预设标记物,则判定佩戴者所处座位不是驾驶座;
    或者,
    计算所述采集到的环境图像与预设的标准环境图像的相似度;若计算得到的相似度大于等于预设的相似度阈值,则判定佩戴者所处座位是驾驶座;否则,判定佩戴者所处座位不是驾驶座。
  8. 如权利要求6或7所述的方法,其特征在于,所述拍照参数包括曝光时间、光圈口径与镜头焦距的比值的倒数F、感光度和所选对焦点。
  9. 如权利要求1所述的方法,其特征在于,还包括:
    在检测到连接至车载无线网络后,将已连接至所述头戴式设备的可穿戴设备的设备信息,发送给所述车载无线网络所属的车载系统,以便所述车载系统根据获取到的所述设备信息进行搜索,并与搜索到的可穿戴设备建立连接;
    在检测到与所述车载无线网络断开连接时,搜索与所述头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
  10. 如权利要求1所述的方法,其特征在于,所述预设服务还包括:手工输入服务和投影显示服务中的至少一种。
  11. 如权利要求1所述的方法,其特征在于,还包括:
    若判定所述所处座位是驾驶座,则将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
  12. 如权利要求1所述的方法,其特征在于,还包括:
    若判定所述所处座位是驾驶座,则将所述屏幕显示服务推送至所述头戴式设备之外的显示屏上。
  13. 一种头戴式设备控制装置,其特征在于,包括:
    情境数据采集单元,用于采集情境数据,所述情境数据包括运动速度数据、车载无线网络信号强度数据、用户日程表数据和环境噪音强度数据中的至少一种;
    第一判断单元,用于使用所述情境数据判断所述头戴式设备的佩戴者是否位于行驶的车辆内;
    图像采集控制单元,用于判定所述佩戴者位于行驶的车辆内时,控制所述头戴式设备中的图像采集装置采集环境图像;
    第二判断单元,用于使用采集到的环境图像,判断所述佩戴者所处座位是否是驾驶座;
    服务管理单元,用于判定所述佩戴者所处座位是驾驶座时,禁用预设服务,所述预设服务包括近眼显示器的屏幕显示服务。
  14. 如权利要求13所述的装置,其特征在于,还包括:
    第一连接单元,用于在检测到连接至车载无线网络后,将已连接至所述头戴式设备的可穿戴设备的设备信息,发送给所述车载无线网络所属的车载系统,以便所述车载系统根据获取到的所述设备信息进行搜索,并与搜索到的可穿戴设备建立连接;
    第二连接单元,用于在检测到与所述车载无线网络断开连接时,搜索与所述头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
  15. 如权利要求13所述的装置,其特征在于,还包括:转换单元,用于判定所述所处座位是驾驶座时,将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
  16. 如权利要求13所述的装置,其特征在于,还包括:推送单元,用于判定所述所处座位是驾驶座时,将所述屏幕显示服务推送至所述头戴式设备之外的显示屏上。
  17. 一种头戴式设备,其特征在于,包括图像采集装置、近眼显示器和如权利要求14-16任一项所述的头戴式设备控制装置,所述头戴式设备控制装置分别与所述图像采集装置和所述近眼显示器相连接。
  18. 一种头戴式设备控制方法,其特征在于,包括:
    使用采集的状态数据判断所述头戴式设备的佩戴者所处状态,所述状态包括驾驶状态和非驾驶状态;
    在判定所述佩戴者处于驾驶状态时,禁用预设服务,所述预设服务包括近眼显示器的屏幕显示服务、手工输入服务和投影显示服务中的至少一种;
    所述状态数据包括情境数据和环境图像,所述情境数据包括运动速度数据、车载无线网络信号强度数据、用户日程表数据和环境噪音强度数据中的至少一种。
  19. 如权利要求18所述的方法,其特征在于,还包括:在判定所述佩戴者处于非驾驶状态时,启用或保持所述预设服务。
  20. 如权利要求18或19所述的方法,其特征在于,
    所述使用所述状态数据判断所述头戴式设备的佩戴者所处状态包括:
    使用所述情境数据判断所述头戴式设备的佩戴者是否位于行驶的车辆内;
    若判定所述佩戴者位于行驶的车辆内,使用所述环境图像判断所述佩戴者所处座位是否是驾驶座;
    当所述佩戴者位于行驶的车辆内并且所处座位为驾驶座时,判定所述头戴式设备的佩戴者处于驾驶状态。
  21. 如权利要求18或19所述的方法,其特征在于,所述使用所述状态数据判断所述头戴式设备的佩戴者所处状态包括:
    使用所述环境图像判断所述佩戴者所处座位是否是驾驶座;
    若判定所述佩戴者所处座位为驾驶座,使用所述情境数据判断所述头戴式设备的佩戴者是否位于行驶的车辆内;
    当所述佩戴者位于行驶的车辆内并且所处座位为驾驶座时,判定所述头戴式设备的佩戴者处于驾驶状态。
  22. 如权利要求20或21所述的方法,还包括:
    在判定所述所处座位不是驾驶座或者所述佩戴者未位于行驶的车辆内时,判定所述头戴式设备的佩戴者处于非驾驶状态。
  23. 如权利要求20-22任一项所述的方法,其特征在于,
    还包括:
    将所述情境数据代入情境评价函数,得到情境评价值;
    所述使用所述情境数据判断所述佩戴者是否位于行驶的车辆内包括:判断所述情境评价值是否大于等于门限值;
    若所述情境评价值大于等于所述门限值,则判定所述佩戴者位于行驶的车辆内;
    若所述情境评价值小于所述门限值,则判定所述佩戴者未位于行驶的车辆内。
  24. 如权利要求23所述的方法,其特征在于,所述情境评价函数记为F(x,y,z,v),包括信号强度评价函数f1(x),日程表评价函数f2(y),环境噪声评价函数f3(z)和运动速度评价函数f4(v)中的至少一种;所述x表示信号强度数据,所述y表示用户日程表数据,所述z表示环境噪声强度评价值,所述v表示运动速度数据;
    所述将所述情境数据代入情境评价函数,得到所述情境评价值包括:将所述x代入f1(x)得到信号强度评价值,将所述y代入f2(y)得到日程表评价值,将所述z代入f3(z)得到环境噪声强度评价值,以及,将所述v代入f4(v)得到运动速度评价值中的至少一种。
  25. 如权利要求24所述的方法,其特征在于,
    f1(x)=α1x,所述α1表示信号强度权重,α1>0;
    $$f_2(y)=\begin{cases}\alpha_2, & y\cap\Omega\neq\varnothing\\ 0, & y\cap\Omega=\varnothing\end{cases}$$
    所述α2表示日程表权重,并且α2>0;所述y包括采集数据时刻的日程表事件集合,所述Ω表示预设特定事件集合;
    f3(z)=α3z,所述α3表示环境噪声强度权重,α3>0;
    $$f_4(v)=\begin{cases}\beta_1 v+t_1, & 0\le v\le v_0\\ \beta_2 v+t_2, & v>v_0\end{cases}$$
    所述v0表示速度门限,所述β1表示第一运动速度权重,所述β2表示第二运动速度权重,所述t1表示第一速度影响最小值,所述t2表示第二速度影响最小值,β2≥β1>0,t2≥t1>0。
  26. 如权利要求18-25任一项所述的方法,其特征在于,所述环境图像通过如下方式采集:
    设置拍照参数并进行拍照,得到所述环境图像;
    所述拍照参数根据所述运动速度数据确定,或者,所述拍照参数为预设的标准拍照参数。
  27. 如权利要求26所述的方法,其特征在于,所述使用采集到的环境图像,判断所述佩戴者所处座位是否是驾驶座包括:
    检测所述环境图像中是否包含预设标记物;
    若检测到包含所述预设标记物,则判定佩戴者所处座位是驾驶座;
    若检测不到包含所述预设标记物,则判定佩戴者所处座位不是驾驶座;
    或者,
    计算所述采集到的环境图像与预设的标准环境图像的相似度;若计算得到的相似度大于等于预设的相似度阈值,则判定佩戴者所处座位是驾驶座;否则,判定佩戴者所处座位不是驾驶座。
  28. 如权利要求26或27所述的方法,其特征在于,所述拍照参数包括曝光时间、光圈口径与镜头焦距的比值的倒数F、感光度和所选对焦点。
  29. 如权利要求18所述的方法,在判定所述佩戴者处于驾驶状态之后,还包括:
    若检测到所述头戴式设备连接至车载无线网络,则将已连接至所述头戴式设备的可穿戴设备的设备信息,发送给所述车载无线网络所属的车载系统;
    或者,
    判定所述佩戴者处于非驾驶状态时,搜索与所述头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
  30. 如权利要求18所述的方法,其特征在于,还包括:
    在判定所述佩戴者处于驾驶状态时,将接收到的、来自预设紧急联系人的信息,转换为语音信息播放;或者,
    在判定所述佩戴者处于驾驶状态时,将所述屏幕显示服务推送至所述头戴式设备之外的显示屏上。
  31. 一种头戴式设备控制装置,其特征在于,包括:
    状态判断单元,用于使用采集的状态数据判断所述头戴式设备的佩戴者所处状态,所述状态包括驾驶状态和非驾驶状态;所述状态数据包括情境数据和环境图像,所述情境数据包括运动速度数据、车载无线网络信号强度数据、用户日程表数据和环境噪音强度数据中的至少一种;
    服务管理单元,用于在判定所述佩戴者处于驾驶状态时,禁用预设服务,所述预设服务包括近眼显示器的屏幕显示服务、手工输入服务和投影显示服务中的至少一种。
  32. 如权利要求31所述的装置,其特征在于,还包括:
    第一连接单元,用于在判定所述佩戴者处于驾驶状态之后,若检测到所述头戴式设备连接至车载无线网络,则将已连接至所述头戴式设备的可穿戴设备的设备信息,发送给所述车载无线网络所属的车载系统;
    或者,还包括第二连接单元,用于在判定所述佩戴者处于非驾驶状态时,搜索与所述头戴式设备建立过连接的可穿戴设备,并与搜索到的可穿戴设备重新建立连接。
  33. 如权利要求31所述的装置,其特征在于,还包括:信息转换单元,用于在判定所述佩戴者处于驾驶状态时,将接收到的、来自预设紧急联系人的信息,转换为语音信息播放。
  34. 如权利要求31所述的装置,其特征在于,还包括:屏幕显示服务推送单元,用于在判定所述佩戴者处于驾驶状态时,将所述屏幕显示服务推送至所述头戴式设备之外的显示屏上。
  35. 一种头戴式设备,其特征在于,包括图像采集装置、近眼显示器和如权利要求31-34任一项所述的头戴式设备控制装置,所述头戴式设备控制装置分别与所述图像采集装置和所述近眼显示器相连接。
PCT/CN2014/092381 2013-11-28 2014-11-27 头戴式设备控制方法、装置和头戴式设备 WO2015078387A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP14865292.8A EP3035651B1 (en) 2013-11-28 2014-11-27 Head mounted display control method, device and head mounted display
US15/023,526 US9940893B2 (en) 2013-11-28 2014-11-27 Head mounted device control method and apparatus, and head mounted device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201310627752 2013-11-28
CN201310627752.6 2013-11-28
CN201410008649.8A CN104681004B (zh) 2013-11-28 2014-01-08 头戴式设备控制方法、装置和头戴式设备
CN201410008649.8 2014-01-08

Publications (1)

Publication Number Publication Date
WO2015078387A1 true WO2015078387A1 (zh) 2015-06-04

Family

ID=53198381

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/092381 WO2015078387A1 (zh) 2013-11-28 2014-11-27 头戴式设备控制方法、装置和头戴式设备

Country Status (4)

Country Link
US (1) US9940893B2 (zh)
EP (1) EP3035651B1 (zh)
CN (1) CN104681004B (zh)
WO (1) WO2015078387A1 (zh)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016096029A1 (en) * 2014-12-19 2016-06-23 Here Global B.V. A method, an apparatus and a computer program product for positioning
CN105516595A (zh) * 2015-12-23 2016-04-20 小米科技有限责任公司 Photographing method and apparatus
CN109421732B (zh) * 2017-08-16 2021-08-31 深圳如一探索科技有限公司 Device control method and apparatus
CN108234756B (zh) * 2017-12-25 2020-10-30 北京小米松果电子有限公司 Call control method and apparatus, and computer-readable storage medium
CN108614963A (zh) * 2018-04-28 2018-10-02 福建省汽车工业集团云度新能源汽车股份有限公司 Method and system for remote login to an in-vehicle system based on user permissions
US11422764B1 (en) 2018-06-03 2022-08-23 Epic Optix, Inc. Multi-platform integrated display
DE102018216613A1 (de) * 2018-09-27 2020-04-02 Bayerische Motoren Werke Aktiengesellschaft Data glasses with automatic hiding of displayed content for vehicles
US11194176B2 (en) 2019-07-26 2021-12-07 Tectus Corporation Through-body ocular communication devices, networks, and methods of use
US11632346B1 (en) * 2019-09-25 2023-04-18 Amazon Technologies, Inc. System for selective presentation of notifications
US11302121B1 (en) * 2019-12-10 2022-04-12 BlueOwl, LLC Automated tracking of vehicle operation and synchronized gamified interface
US11039279B1 (en) * 2019-12-10 2021-06-15 BlueOwl, LLC Automated tracking of vehicle operation and synchronized media delivery
CN113408310A (zh) * 2020-03-17 2021-09-17 菜鸟智能物流控股有限公司 Data processing method and apparatus, electronic device, and computer-readable storage medium
US20210402981A1 (en) * 2020-06-30 2021-12-30 Micron Technology, Inc. Virtual vehicle interface
US11747891B1 (en) * 2022-07-15 2023-09-05 Google Llc Content output management in a head mounted wearable device
CN115762049A (zh) * 2022-10-18 2023-03-07 湖北星纪时代科技有限公司 Safety control method and apparatus for a smart wearable device, and smart wearable device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020136422A1 (en) * 2001-03-21 2002-09-26 Koninklijke Philips Electronics N.V. Boomless hearing/speaking configuration for sound receiving means
US20040001588A1 (en) * 2002-06-28 2004-01-01 Hairston Tommy Lee Headset cellular telephones
CN1684500A (zh) * 2004-04-14 2005-10-19 奥林巴斯株式会社 Image pickup apparatus
CN102754412A (zh) * 2010-03-05 2012-10-24 高通股份有限公司 Automated message response in a wireless communication system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4309361B2 (ja) 2005-03-14 2009-08-05 パナソニック株式会社 Electronic device control system and control signal transmission apparatus
US8315617B2 (en) 2009-10-31 2012-11-20 Btpatent Llc Controlling mobile device functions
US8442490B2 (en) * 2009-11-04 2013-05-14 Jeffrey T. Haley Modify function of driver's phone during acceleration or braking
US20130147686A1 (en) * 2011-12-12 2013-06-13 John Clavin Connecting Head Mounted Displays To External Displays And Other Communication Networks

Also Published As

Publication number Publication date
EP3035651B1 (en) 2018-09-05
CN104681004B (zh) 2017-09-29
EP3035651A4 (en) 2016-12-07
US9940893B2 (en) 2018-04-10
CN104681004A (zh) 2015-06-03
US20160247480A1 (en) 2016-08-25
EP3035651A1 (en) 2016-06-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 14865292; Country of ref document: EP; Kind code of ref document: A1

WWE Wipo information: entry into national phase
    Ref document number: 2014865292; Country of ref document: EP

WWE Wipo information: entry into national phase
    Ref document number: 15023526; Country of ref document: US

NENP Non-entry into the national phase
    Ref country code: DE